1. 5

    My goal with infrastructure is to forget its existence, both in maintenance and on my DO invoice. Until it’s as cheap as a systemd unit on a $5 VPS, I don’t care about it for small-scale projects.

    I think k8s is far more interesting for its API-based approaches (we’re bringing DCOM back), but I’d rather see the concepts implemented in something much simpler, again for small-scale things.

    1. 4

      imo, what’s really nice about using Kubernetes is

      • everything is an object
      • since you edit objects via an API, everything has an API
      • everything is in its own netns, cgroup

      We could absolutely build an infrastructure platform that does all of the above minus the boilerplate (and maybe even minus containers), but I don’t think that exists yet. These days I’m happy enough running Kubernetes everywhere (even single node) just so I don’t have to deal with netns myself.

      1. 2

        There is systemd which offers all of that minus boilerplate and minus containers.

        1. 2

          minus containers

          plus containers, of course https://man7.org/linux/man-pages/man5/systemd.nspawn.5.html

          1. 3

            Well, if you want, you can use them, but these aren’t required. That is why I said “minus containers”.

            1. 2

              Ah, I misunderstood you then.

          2. 1

            systemd is great but you still need something to deploy those unit files. And you have to control what goes where, which is usually the job of a scheduler.

            For a single machine, systemd over k8s any day. Anything more than a handful, it’s debatable.

            1. 1

              I was planning on writing a multi-node scheduler for systemd in Erlang/Elixir. Maybe one day.

              However, even in a basic form you can manage that: ship everything everywhere, then use socket activation and a load balancer in front of everything to start services as needed.
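
              For instance, a minimal socket-activation pair might look roughly like this (unit names and the port are made up, and the service itself has to accept the passed-in socket):

              # myapp.socket (illustrative)
              [Socket]
              ListenStream=8080

              [Install]
              WantedBy=sockets.target

              # myapp.service, started on the first connection to the socket
              [Service]
              ExecStart=/usr/local/bin/myapp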

            2. 1

              Well, not quite. I don’t want to write an essay here, but here’s one example of why it really doesn’t:

              Let’s say we decide we want to create network namespaces to isolate all our services, and then selectively poke holes between them for services to communicate. Let’s look at how we would solve this in Kubernetes vs systemd.

              In Kubernetes, we could create an object type (CustomResourceDefinition) called FirewallRule. We’d then hook into an HTTP API, which notifies our program of any changes to these objects. On change, some code runs to reconcile reality with the state of the object. Of course, Kubernetes transparently handles the creation of network namespaces, and provides a builtin object type to poke holes between them (including a layer of abstraction on top where programs running on separate machines look like they’re in the same namespace), so in reality we would just use that.
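
              Purely as an illustration (the API group, kind and fields here are hypothetical), an instance of that FirewallRule object might look like:

              # hypothetical FirewallRule custom resource, for illustration only
              apiVersion: example.org/v1
              kind: FirewallRule
              metadata:
                name: allow-web-to-db
              spec:
                source: web
                destination: db
                port: 5432

              A controller watching these objects then reconciles reality against them, exactly as described above.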

              In systemd, we cannot create custom object types. Instead, to create a network namespace, we would wrap our .service with a shell script to start up the network namespace. To poke holes, we might create another .service unit, which spawns a program with some arguments that specify properties (source, destination, etc). We have to be careful to specify that the second unit depends on the first unit starting (otherwise the netns doesn’t exist).

              Let’s say opening and closing a hole in the network is an expensive operation, but modifying the port number is cheap. All we have as input in systemd is a Start and Stop, so we’d have to open and close the hole when we modify the unit file (expensive). In Kubernetes we get a whole diff of {current state, desired state} to work with, so we can choose to just edit the port (cheap). In this way, systemd isn’t really a true declarative abstraction over running your infrastructure, and is more like a shell script where you can specify a list of steps and then run those steps in the background.

              That said, I don’t think containers are the cleanest way of having object-based declarative infrastructure. Maybe something like NixOS is a better way forward long-term (the tooling and adoption are currently… ehhh). But for now, if I have to pick between writing imperative infrastructure and running containers, I’m gonna run containers.

              1. 2

                In systemd, we cannot create custom object types.

                Depends on your definition of “creation of custom types”, because you can create “unit templates” that give you instantiated units you can later depend on; for example, units that create a named netns. Then all you need to do in your service is add a dependency on the given netns, like:

                Requires=netns@foo.service
                After=netns@foo.service
                

                And your service will be started after the network namespace is created.
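
                For completeness, one possible sketch of such a template unit (exact paths and options may differ in a real setup):

                # netns@.service (sketch; adjust the ip path for your distro)
                [Unit]
                Description=Named network namespace %i
                StopWhenUnneeded=true

                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecStart=/usr/sbin/ip netns add %i
                ExecStop=/usr/sbin/ip netns delete %i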

                This also solves the second problem you specified:

                All we have as input in systemd is a Start and Stop, so we’d have to open and close the hole when we modify the unit file (expensive).

                As we only need to modify our application service, not the netns service, we can restart our application without bringing the network down at all. The same goes for a service that would be used for opening and closing ports in the firewall.

                So the same approach is fully possible in systemd without many hiccups. Actually I think it can be made even clearer for ops, as there is no “magical custom thing” like your FirewallRule; everything uses “regular” systemd facilities.

                1. 2

                  I really like instantiated units; they make dealing with, e.g., lots of site-to-site VPN links easy. I could see them used for other things; imagine describing a web service that slots into an application server as a unit like that, and having it automatically enroll in the web server’s routes/reverse proxy/whatever.

            3. 2

              Yeah, I feel the exciting part isn’t being able to build Google-scale container clusterfucks, but something like cPanel on top of clean APIs with a modern approach.

          1. 2

            Tried this on Safari with TouchID (which normally works like a security key…) and it didn’t work :(

            anyone else have any luck?

            1. 3

              It looks like they only support a couple of security key manufacturers, with yubikey being the biggest. I doubt TouchID provides the kind of manufacturer attestation needed for this scheme (but I could be wrong about that).

              1. 1

                Yeah, they say that they only support attestations by Yubikey, HyperFIDO and Thetis FIDO. TouchID probably provides attestation (though I’m not sure), but it just hasn’t been whitelisted by them yet.

                1. 2

                  Apple does have an attestation scheme for TouchID, but it’s not the “standard” one. It’s anonymous and can’t be tracked, which probably isn’t desirable for Cloudflare’s use. Presumably they are misusing this feature so they can block “bad” users, which Apple’s feature doesn’t let them do.

                  Ctrl-F for “Apple Anonymous Attestation” on https://webkit.org/blog/11312/meet-face-id-and-touch-id-for-the-web/

                  1. 1

                    You can’t “block bad users” as is right now. Each attestation key is used in at least 100,000 tokens, so there’s no reasonable way to block a single one of them the way it’s done. Apple’s way, meanwhile, is quite a bit more complicated: it requires a connection to Apple’s servers from your machine and creates a new attestation certificate each time, signed by a “master” Apple certificate on their servers (and it seems to be opt-in?). I’m not entirely sure there’s much difference on the privacy front, besides Apple not having to worry about somebody extracting attestation keys from their machines and spoofing their attestation.

                    1. 2

                      I think 1 in 100k, combined with additional signals like client fingerprinting, IP, etc, is absolutely enough to identify and block a bot. Even in the worst case where you block whole batches of yubikeys, the attacker cost goes up as they buy more keys, but legitimate users just fall back to captchas.

                      1. 1

                        The whole point of this for them was to decrease their CAPTCHA usage. Turning users back to using them is counterproductive for them. 1 in 100k is a tiny amount, and with care, a bot writer can easily blend into a group that size.

                        1. 1

                          Most of that 100k set of users will not be visiting any particular website at a time.

                          If the point of this isn’t to block bad bots, then what is it? Bot writers will have a yubikey-as-a-service API from somebody soon, probably using a rotating set of some dozens of security keys. So it’ll be even easier for bots than captchas are today, if Cloudflare isn’t using the key batch as a signal to block.

            1. 5

              So… apparently, you can trust that Apple Silicon is bug-compatible with Intel processors? Or is this a case where IEEE requires “incorrect” results for floating-point math?

              1. 20

                I’m pretty sure these results are expected based on the floating point standard. They’re mathematically incorrect because of limitations of the standard and the fact that you’re effectively rounding on each step.

                1. 11

                  Still pretty annoying how “incorrect results” gets thrown around without any explanation or qualification. It feels more like a click-bait for (Apple) commenters to dunk on Intel … which is exactly what seems to happen in the comments already:

                  this somehow feels like a deliberate decision on Apple’s part. I mean replicating Intel’s errors on a very different architecture.

                  1. 1

                    I believe Apple (and other recent Arm) FPUs can generate three kinds of incorrect results:

                     • Incorrect results as specified by the rounding modes defined by IEEE 754
                     • Incorrect results as specified by the rounding modes defined by ECMAScript
                     • Incorrect results that are compatible with some x87 / SSE rounding modes
                  2. 3

                    Well, ‘mathematically incorrect’ again suggests these results are incorrect or that what happens is somehow not ‘mathematical’. I think such terminology muddies the waters. These results are not in the least incorrect. They are exactly what a calculation using mathematically well-defined finite precision representations of numbers, as specified in IEEE754, should result in.

                    What’s incorrect is the expectation that such algorithms should converge to the analytical solution.

                    1. 2

                      Please consult Kahan on the original context of these traps. Kahan’s claim is that we must do error analysis if we want to understand the results that we get from IEEE 754 algorithms.

                1. 3

                  YAML is a HORRIBLE format for configuration.

                  This is exactly my experience. The state of the Kubernetes-management ecosystem is awful. At risk of spoiling my upcoming blog post on how to make it less bad… Instead of YAML, I write JSONnet whenever possible (like here). JSONnet is a pure superset of JSON with variables, functions, and other conveniences.

                   Pretty soon I want to remove all the YAML from that repository with some clever scripts, e.g. compiling values.jsonnet -> values.yaml for Helm.
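
                   For anyone unfamiliar, a tiny (made-up) example of what JSONnet buys you over plain YAML/JSON, i.e. variables and functions that still evaluate to ordinary JSON:

                   // values.jsonnet (illustrative names and values)
                   local port = 8080;
                   local name(env) = "myapp-" + env;
                   {
                     app: name("prod"),
                     service: { port: port, targetPort: port },
                   }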

                  1. 3

                    I’m sorry but I’ve been screwed by so many “better than yaml” tools that I just want to remove the entire yaml everything from the equation.

                  1. 22

                    The job of the OS is to schedule which software gets what resources at any given time. Kubernetes does the same thing. You have resources on each one of your nodes, and Kubernetes assigns those resources to the software running on your cluster.

                    ehh, who’s the you here? This is starting from the assumption that I have a lot of nodes, which is only true in the context of me running infrastructure for a corporation; the you here is a corporation.

                    The first difference is the way you define what software to run. In a traditional OS, you have an init system (like systemd) that starts your software.

                       again, again, define traditional. Whose tradition? In what context? In a traditional OS, software starts when you start using it, and then it stops when you stop using it. The idea that everything should be an always-running, fully-managed service is something that’s only traditional in the context of SAAS.

                    The thing that makes me feel cold about all this stuff is that we’re getting further and further away from building software that is designed for normal people to run on their own machines. So many people that run Kubernetes argue that it doesn’t even make sense unless you have people whose job it is to run Kubernetes itself. So it’s taken for granted that people are writing software that they can’t even run themselves. I dunno. All this stuff doesn’t make me excited, it makes me feel like a pawn.

                    1. 12

                      You’re right, you probably wouldn’t use Kubernetes as an individual.

                      I’ll take the bait a little bit though and point out that groups of people are not always corporations. For example, we run Kubernetes at the Open Computing Facility at our university. Humans need each other, and depending on other people doesn’t make you a pawn.

                      1. 8

                           Given the millions and millions spent on marketing, growth hacking, and advertising for the k8s ecosystem, I can say with some certainty we are all pawn-shaped.

                        1. 5

                          totally fair criticism. I think “corporation” in my comment could readily be substituted with “enterprise”, “institution”, “organization” or “collective”. “organization” is probably the most neutral term.

                          Humans need each other, and depending on other people doesn’t make you a pawn.

                          so I think this is where my interpretation is less charitable, and we could even look at my original comment as being vague and not explicitly acknowledging its own specific frame of reference:

                          In a traditional OS, software starts when you start using it, and then it stops when you stop using it.

                             again, whose tradition, and in what context? Here I’m speaking of my tradition as a personal computer user, and the context is at home, for personal use. When thinking about Kubernetes (or container orchestration generally) there’s another context of historical importance: time-sharing. Now, I don’t have qualms with time-sharing, because time-sharing was a necessity at the time. The time-sharing computing environments of the sixties and seventies existed because the ownership of a home computer was unreasonably expensive: time-sharing existed to grant wider access to computing. Neat!

                             Circling back to your comment about dependency not inherently making someone a pawn, let me ask: who is dependent on whom, for what, and why? We might say of time-sharing at a university: a student is dependent on the university for access to computing because computers are too big and expensive for the student to own. Makes sense! The dependent relationship is, in a sense, axiomatic of the technology, and may even describe your usage of Kubernetes. If anything, the university wishes the student wasn’t dependent on them for this because it’s a burden to run.

                          But generally, Kubernetes is a different beast, and the reason there’s so much discussion of Kubernetes here and elsewhere in the tech industry is that Kubernetes is lucrative. Sure, it’s neat and interesting technology, but so is graphics or embedded programming or cybernetics, etc, etc, etc. There are lots of neat and interesting topics in programming that are very rarely discussed here and elsewhere in programming communities.

                          Although computers are getting faster, cheaper, and smaller, the computers owned by the majority of people are performing less and less local computation. Although advances in hardware should be making individuals more independent, the SAAS landscape that begat Kubernetes has only made people less independent. Instead of running computation locally, corporations want to run the computation for you and charge you some form of rent. This landscape of rentier computation that is dominating our industry has created dependent relationships that are not inherently necessary, but are instead instruments of profit-seeking and control. This industry-wide turn towards rentier computation is the root of my angst, and I would say is actually the point of Kubernetes.

                        2. 10

                          we’re getting further and further away from building software that is designed for normal people to run on their own machines

                          This resonates with me a lot. At work, we have some projects that are very easy to run locally and we have some that are much harder. Nearly all the projects that can be run locally get their features implemented more quickly and more reliably. Being able to run locally cuts way down on the feedback loop.

                          1. 2

                            I’m really looking forward to the built-in embed stuff in Go 1.16 for this reason. Yeah, there’s third-party tools that do it, but having it standardized will be great. I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory. The utility of this has been incredible, because I can compile a server into a single .exe file that I can literally PM to a colleague on Slack that they can just run and they have a working dev server with no setup at all. You can also do this with sqlite or other embedded databases if you need local persistence; I’ve done that in the past but I don’t do it in my current gig.
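
                              For anyone who hasn’t seen it yet, the upcoming embed support is roughly this small (the directory name and port here are made up):

                              package main

                              import (
                                  "embed"
                                  "log"
                                  "net/http"
                              )

                              //go:embed static
                              var staticFiles embed.FS // the static/ directory (illustrative) gets baked into the binary

                              func main() {
                                  // Serve the embedded assets; the resulting binary needs no files on disk.
                                  http.Handle("/", http.FileServer(http.FS(staticFiles)))
                                  log.Fatal(http.ListenAndServe(":8080", nil))
                              }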

                            1. 2

                              I write Go servers and one thing I’ve done is implement every storage layer twice: once with a database, and once in process memory.

                                In my experience the overhead of implementing the logic twice does not pay off, since it is very easy to spin up a MySQL or Postgres database, e.g. using Docker. Of course this comes with the disadvantage of having to provide another dependency, but at least the service then runs in an environment similar to production. Usually spinning up a test database is already documented/automated for testing.

                              1. 1

                                That was my first thought, but upon reflection - the test implementation is really just an array of structs, and adds very little overhead at all.

                                1. 1

                                  yeah, very often the implementation is just a mess of map[string]*Book, where there’s one Book for every model type and one map for every index, and then you slap a mutex around the whole thing and call it a day. It falls apart when the data is highly relational. I use the in-mem implementation for unit tests and for making debug binaries. I send debug binaries to non-developer staff. Asking them to install Docker alone would be a non-starter.
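
                                    Something like this, give or take (all names here are illustrative):

                                    package store

                                    import "sync"

                                    // Book is a stand-in for whatever model type the real service has.
                                    type Book struct {
                                        ID, Title string
                                    }

                                    // memBookStore is the in-memory double for the database-backed store.
                                    type memBookStore struct {
                                        mu    sync.Mutex
                                        books map[string]*Book // one map per index in practice
                                    }

                                    func newMemBookStore() *memBookStore {
                                        return &memBookStore{books: map[string]*Book{}}
                                    }

                                    func (s *memBookStore) Put(b *Book) {
                                        s.mu.Lock()
                                        defer s.mu.Unlock()
                                        s.books[b.ID] = b
                                    }

                                    func (s *memBookStore) Get(id string) (*Book, bool) {
                                        s.mu.Lock()
                                        defer s.mu.Unlock()
                                        b, ok := s.books[id]
                                        return b, ok
                                    }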

                          2. 4

                            Suppose that you, an individual, own two machines. You would like a process to execute on either of those machines, but you don’t want to have to manage the detail of which machine is actually performing the computation. At this point, you will need to build something very roughly Kubernetes-shaped.

                            The difficulty isn’t in having a “lot” of nodes, or in running things “on their own machines”; the difficulty is purely in having more than one machine.

                            1. 16

                              you don’t want to have to manage the detail of which machine is actually performing the computation

                              …is not a problem which people with two machines have. They pick one, the other, or both, and skip on all the complexity a system that chooses for you would entail.

                              1. 3

                                 I struggle with this. My reflex is to want that dynamic host management, but the fact of the matter is my home boxes have had fewer pages than my ISP in the past five years. Plain old sysadmin is more than enough in all of my use cases. Docker is still kinda useful to not have to deal with the environment and setup and versions, but like. A lot of difficulty is sidestepped by just not buying into the complexity.

                                I wonder if this also holds for “smaller” professional projects.

                                1. 1

                                  Unfortunately, I think that your approach is reductive. I personally have had situations where I don’t particularly care which of two identical machines performs a workload; one example is when using CD/DVD burners to produce several discs. A common consumer-focused example is having a dual-GPU machine where the two GPUs are configured as one single logical unit; the consumer doesn’t care which GPU handles which frame. Our operating systems must perform similar logic to load-balance processes in SMP configurations.

                                  I think that you might want to consider the difficulty of being Buridan’s ass; this paradox constantly complicates my daily life.

                                  1. 3

                                    When I am faced with a situation in which I don’t particularly care which of two identical machines performs a workload, such as your CD burner example, I pick whichever, or both. Flip a coin, and you get out of the buridan’s ass paradox, if you will. Surely the computer can’t do better than that, if it’s truly the buridan’s ass paradox and both choices are equally good. Dual-GPU systems and multicore CPUs are nice in that they don’t really require changing anything from the user’s perspective. Moving from the good old sysadmin way to kubernetes is very much not like that.

                                    I’m sure there’s very valid use-cases for kubernetes, but not having to flip a coin to decide which of my two identical and equally in-reach computers will burn 20 CDs tonight is surely not worth the tradeoff.

                                    1. 3

                                      To wring one last insight from this line of thought, it’s interesting to note that in the dual-GPU case, a CPU-bound driver chooses which GPU gets which drawing command, based on which GPU is closer to memory which is also driver-managed; while in the typical SMP CPU configuration, one of the CPUs is the zeroth CPU and has the responsibility of booting its siblings. Either way, there’s a delegation of the responsibility of the coin flip. It’s interesting that, despite being set up to manage the consequences of the coin flip, the actual random decision of how to break symmetry and choose a first worker is not part of the system.

                                      And yet, at the same time, both GPU drivers and SMP kernels are relatively large. Even when they do not contain inner compilers and runtimes, they are inherently translating work requests from arbitrary and often untrusted processes into managed low-level actions, and in that translation, they often end up implementing the same approach that Kubernetes takes (and which I said upthread): Kubernetes manages objects which represent running processes. An SMP kernel manages UNIX-style process objects, but in order to support transparent migration between cores, it also has objects for physical memory banks, virtual memory pages, and IPC handles. A GPU driver manages renderbuffers, texturebuffers, and vertexbuffers; but in order to support transparent migration between CPU and GPU memory, it also has objects for GPU programs (shaders), for invoking GPU programs, for fencing GPU memory, and that’s not even getting into hotplugging!

                                      My takeaway here is that there is a minimum level of complexity involved in writing a scheduler which can transparently migrate some of its actions, and that that complexity may well require millions of lines of code in today’s languages.

                                2. 5

                                  I mean, that’s not really an abstract thought-experiment, I do have two machines: my computer and my phone. I’d wager that nearly everyone here could say the same. In reality I have more like seven machines: a desktop, a laptop, a phone, two Raspberry Pi’s, a Switch, and a PS4. Each one of these is a computer far more powerful than the one that took the Apollo astronauts to the moon. The situation you’re talking about has quite literally never been a thing I’ve worried about. The only coordination problem I actually have between these machines is how I manage my ever-growing collection of pictures of my dog.

                                3. 5

                                  My feelings exactly. Kubernetes is for huge groups. Really huge. If you only have one hundred or so staff, I am not convinced you get much benefit.

                                   If you’re happy in a large company, go wild. Enjoy the Kubernetes. It isn’t for me; I’m unsure whether I will ever join a group with more than ten or so again, but it won’t be soon.

                                1. 7

                                  For fairness, we should find some way to include Dream’s perspective.

                                  My perspective on his perspective is that he goes through a lot of handwaving and psychological arguments to explain his situation. The speedrun team’s paper has a basic statistical argument which convinces me that something is unexplained, but I don’t feel like Dream has an explanation. But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                  In a relative rarity for commonly-run games, the Minecraft speedrunning community allows many modifications to clients. It complicates affairs that Dream and many other runners routinely use these community-approved modifications.

                                  1. 5

                                    But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                    This is the argument that always confuses me. At the end of the day, Minecraft is just some code running on someone else’s computer. Recorded behavior of this code is extremely different from what it should be. There are about a billion ways he could have modified the RNG, even live on stream with logfiles to show for it.

                                    1. 1

                                      I like to take a scientific stance when these sorts of controversies arise. When we don’t know how somebody cheated, but strongly suspect that their runs are not legitimate, then we should not immediately pass judgement, but work to find a deeper understanding of both the runner and the game. In the two most infamous cheating controversies in the wider speedrunning community, part of the resolution involved gaining deeper knowledge about how the games in question operated.

                                    2. 3

                                      But without a clear mechanism for how cheating was accomplished

                                      Are you asking for a proof of concept of how to patch a minecraft executable or mod to get lucky like Dream was?

                                      1. 3

                                        Here’s one:

                                        • open the minecraft 1.16.4.jar in your choice of archive program
                                        • go to /data/minecraft/loot_tables/gameplay/piglin_bartering.json
                                         • increase the weight of the ender pearl trade (see the sketch below)
                                         • delete META-INF like in the good old days (it contains a checksum)
                                        • save the archive
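
                                         The entry being bumped in step three looks roughly like this; the field names are real, but the weight shown is illustrative, not the actual value:

                                         {
                                           "type": "minecraft:item",
                                           "name": "minecraft:ender_pearl",
                                           "weight": 100
                                         }

                                         (The real file has more fields and different numbers; this is just the shape of it.)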

                                         Anyone as familiar with Minecraft as Dream would know how to do this.

                                      2. 2

                                        But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                         We have a clear mechanism: he modded his game. And when he was asked for game logs, he deleted them. Just from the odds alone, he is 100.00000000% guilty.

                                        1. 3

                                           As the original paper and video explain, Minecraft’s speedrunning community does not consider modified game clients to be automatically cheating. Rather, the nature of the precise modifications used is what determines cheating.

                                           While Dream did admit to destroying logs, he did also submit supporting files for his run. Examining community verification standards for Minecraft speedruns, it does not seem like he failed to follow community expectations. It is common for speedrunning communities to know about possible high-reliability verification techniques, like input captures, but to not require them. Verification is just as much about social expectations as about technical choices.

                                          From the odds alone, Dream’s runs are probably illegitimate, sure, but we must refuse to be 100% certain, due to Cromwell’s Rule; if we are completely certain, then there’s no point in investigating or learning more. From the paper, the correct probability to take away is 13 nines of certainty, which is a relatively high amount of certainty. And crucially, this is the probability that our understanding of the situation is incomplete, not the probability that he cheated.

                                          1. 4

                                            But you said there’s no clear mechanism for how cheating was accomplished. Changing the probability tables through mods is a fairly clear and simple mechanism isn’t it?

                                      1. 2

                                        https://nikhiljha.com

                                        I recently rewrote the homepage to tie everything I’ve done together via a weak common purpose. It might be too wordy/heavy though, thoughts?

                                        1. 2

                                          The lines are a bit long for me to read comfortably. In post pages I can use readability mode, but not on the index (the boxes on the project page are easier for me to read, for example).

                                           The grey background in links definitely stands out, but the contrast is a bit low, and using white there would probably be easier on the eyes. I checked the accessibility pane in Firefox and it agrees with me: it doesn’t meet WCAG standards.

                                          .. and I totally clicked on the youtube link. :P

                                          1. 1

                                            Thanks! Just out of curiosity, did you view the page with dark theme or light theme? Looks like I need to go back and validate accessibility on both themes.

                                            1. 2

                                              White background, so I’d say it’s the light theme.

                                          2. 2

                                             Not a fan of the grey background for links, specifically on your home page. It looks as if I dragged the mouse by accident and mass-selected all the entries. Also, the footer is not being used; right now it just has “Sample text”.

                                          1. 8

                                            I don’t have much experience with golang, but even then it was helpful to see some common patterns in rust (Result) explained concisely. Thanks for the post!

                                            1. 3

                                              No problem. Do you have any ideas for topics I can cover in this kind of style? Trying to build up a new backlog.

                                              1. 7

                                                I’d like to see an article describing the latest accepted/best third-party crates for Rust for some common tasks - e.g. error handling, JSON, HTTP requests. Some (eyre, reqwest, tokio for errors/HTTP requests/async) were already covered.

                                                E.g. back when I was writing Rust, there were different choices for “everybody should be using this!” error crates, and now eyre is apparently the good one.

                                                1. 5

                                                  +1 to third-party crates overview!

                                                  One of my frustrations of learning Rust over and over (coming from Go, but also I tried learning Rust before I started with Go) is that “just use the standard library” is practically not a thing like it is in Go where 80%+ of usecases are covered (http, json, templates, flags, etc – none are perfect, but all more than good enough to get started).

                                                  Say I’m building a crawler, and I hit up https://crates.io/search?q=http, how do I choose? Having done a bunch of research since then, my instinct says hyper or reqwest (which don’t even show up on the first page!) but I remember running into some crates supporting async, some don’t.

                                                  I’d love a collection of rust crates that are good for 80% of usecases, kinda equivalent to what’s in the Go standard library. When in doubt, use X.

                                                  1. 6

                                                    I’ll definitely work on a “Gopher in Exile” set of crates that about equal the majority usecase of the Go standard library. It’s something I’ve been struggling with too.

                                                    1. 1

                                                      for your usecase: probably reqwest, that’s using hyper internally, supports async and blocking(opt-in via feature flag)

                                                    2. 4

                                                      https://github.com/rust-unofficial/awesome-rust is not a terrible source. The ecosystem gets big enough for different purposes that it’s very difficult to cover exhaustively though, let alone keeping it up to date. Looking at its game engine resources for example, which is where my own work mostly lies, it hits the high points but misses basically all of the interesting secondary stuff.

                                                      Error crates are a great example. IMO it’s a problem space that was either solved long ago with failure, or is never going to be solved, depending on what you want. People keep inventing new things and for medium-sized projects at least I’ve yet to see any that are actually worth the mental overhead and compile times.

                                                      1. 1

                                                        huh, last time I checked everyone went from failure to thiserror/anyhow and I’m using snafu in crates to have optional backtraces on stable, anything new here ?

                                                  1. 3

                                                    Well this is depressing, I was just about to make an app with SwiftUI. The fact that it ships with the OS (even on Mac?) makes it seem like a no-go :(

                                                    1. 2

                                                      The domain donotreply.com is currently for sale, in case anyone wants it…

                                                      1. 1

                                                        (for 15000 USD)

                                                      1. 25

                                                        I’m glad I left the macOS-ecosystem in 2012 for good in favor of Gentoo. Apple as a company is just milking and babysitting their customers, even if they don’t want to.

                                                        I know many professionals that are locked within macOS due to software/habit, and I pity them.

                                                                         I made the switch by replacing each program with an open source one, one after the other. The restrictions mentioned in the article will make this even harder to achieve unless open source developers shell out the $100 per year, which is highly unlikely. It’s all about keeping up the walled garden.

                                                        Apple can screw themselves.

                                                        1. 14

                                                          I would be significantly less productive and make a ton less money if I went /back/ to Linux/BSD on the desktop.

                                                          1. 5

                                                            What is the productivity boost that macOS gives you compared to Linux/BSD?

                                                            1. 10

                                                              A quick list off the top of my head:

                                                              • The ability to use certain closed source software (Adobe, many electron apps built by startups).
                                                              • Alfred (rofi/dmenu/etc are not even close without significant effort to configure them)
                                                              • The “help” button at the top of the screen which allows you to search context menus. (This existed in an older version of Unity but now afaik no longer exists in any modern DE.)
                                                              • Separation of control/command (you can use command+C in terminal instead of control+shift+c or just copying everything that gets highlighted, no need to mentally context switch every time you go between the Terminal and other apps).
                                                                               • nicer looking websites (compare how websites look in a default Ubuntu/Fedora/whatever install vs. macOS; I think it’s fonts, but even after copying all my macOS fonts to Fedora it’s still not the same)
                                                              • tight hardware integration (longer battery life, fingerprint reader to unlock)
                                                              • Integration with iOS (easily send files between my phone and laptop via AirDrop; start reading a lobste.rs article on my phone and finish on my laptop)
                                                              • Finder preview (press spacebar to preview a file quickly)

                                                              Many of the above can be done on Linux, but either require a bunch of manual configuration or are clunky to use even after configured.

                                                              1. 4

                                                                Except maybe that first point, I really wouldn’t call that “a significant productivity boost”. Especially considering I’d have to walk into a vendor lock-in and buy overpriced baubles with weird keyboards etc.

                                                                1. 7

                                                                  You’re right; it’s not one big thing, it’s a bunch of little things that make it more productive for me.

                                                                  1. 1

                                                                    If I believed hard enough that taking some pill would make me more productive, it might very well do so even if it didn’t contain any active substance. I’ve heard this “productivity talk” from Apple users multiple times and never got any reason to believe it’s actually something more than just a placebo effect taking place.

                                                                    It’d be very interesting to see a controlled study on this. We’d define productivity as solving programming tasks, replying to e-mails, writing articles etc and see what the differences really are.

                                                                    Like… OK. Everyone needs a different environment and I can imagine some people actually being more productive within Apple’s ecosystem, but it’s more about personal preferences than anything else. I’d expect all groups (Mac-, Windows-, Linux-with-GNOME-, Linux-with-KDE-, … users) to have roughly the same productivity, with some people being slightly more productive in certain environments, but probably not dramatically (assuming they’re motivated to actually try hard enough – so the study would probably have to be organized as a challenge with some neat prizes).

                                                                    Basically what I’m trying to say is that it comes to reaching some optimal setup and even though my setup isn’t optimal at all, by migrating to macOS I’d gain very little and lose a lot. That’s because I’ve spent quite some time reaching the setup that works at least this well for me. I suppose that might be the case with most power users and some productivity boost is most likely to be expected with people who tried using Windows or Ubuntu in default configuration, didn’t like it and then got a MacBook. But I’m still kind of skeptical about its magnitude.

                                                                  2. 2

                                                                    Maybe also integration with iOS, but the rest is just what one’s used to. OSX and Windows feel clunky and limiting to me because I’m used to Unix, especially wrt cross platform development.

                                                                    It’s all anecdotal.

                                                                  3. 3

                                                                    The hardware/software cohesion is nigh impossible to beat.

                                                                2. 3

                                                                  You would be less productive at the beginning of the transition, yes. But you would eventually develop new workflows and then regain productivity.

                                                                  I used to be 100% on macOS until a few years ago. My last 2 jobs I’ve been 100% on Linux and haven’t had any problems. I can install all of the corporate software on my Linux machine. I also haven’t seen any cuts in my paycheck… still making a ton of money (I think). ^_^’

                                                                  I work on web services and most of our software runs on Linux. I got tired of learning 2 OSes. I personally didn’t find any value in running macOS to run Linux (in containers or via SSH). So I cut out the middleman. I also hated that macOS is Linux-like, but not actually. For example, you might end up learning the wrong nc or sed on macOS. Super annoying when debugging.

                                                                  I do get the appeal of macOS and still recommend it to my family, but as a developer, I value the simplicity of learning 1 set of tools over vanity features. Whenever I have to switch to macOS, my productivity takes a huge hit, but that’s because I’ve learned Linux workflows.

                                                                  1. 2

                                                                    Totally understandable, and I’m not arguing that. There are many people making a really good living working with Macs, and admittedly, Macs are probably the greatest machines for creative works and are superior in terms of color space handling and font rendering, to just name two things.

                                                                    Nevertheless, the price you pay for this advantage will grow further and further. If you only do it for work, that’s fine of course, godspeed to you! But if you look at it long-term, it looks rather bleak.

                                                                  2. 4

                                                                    If the best thing to happen to my computing career was learning Unix and the second best thing was finding Cygwin for Windows (a lifesaver), the worst decision was getting a MacBook at the end of 2019. Most frustrating keyboard and mouse (Magic Mouse) I have ever used in almost 50 years of using keyboards and X years of using mice. Just awful keyboard design, layout, touch & feel, disaster of a touchbar, no universality or standardization with anything but Macs.
                                                                    I use multiple machines at home/work and I want everything to be configured the same everywhere to ease transitions between machines. Linux and Windows, I can configure to be sufficiently similar, but it’s virtually impossible with a MacBook and MacOS.
                                                                    I figured that with 37 years to figure it out and with so many Linux devs using a Mac, Apple would have had to get their act together. Boy, was I wrong. Can’t wait to be done with it and get back to sanity.

                                                                    1. 5

                                                                      Mac hardware 10 years ago was the best on the market, and I loved using it. I am still using an old Apple USB Keyboard because I haven’t found anything matching its quality and feel. Apple changed under Tim Cook, and it will change even further.

                                                                      What they probably don’t realize is that developers might not make the biggest portion of their revenue, but they keep the ecosystem alive. I like to call this fallacy the “fallacy of the gaussian belly”, because they probably only aim their efforts on the consumers (iPhone, iPad, Apple Watch, etc.) and neglect the professional segment because it doesn’t make them as much money.

                                                                      I hope I’m not sounding like an armchair-CEO here, but in my opinion they shouldn’t even penny-squeeze the Mac customers that much. What the developers do in turn for the ecosystem is much more valuable than just mere stockholder-profits and market value.

                                                                                     In the end, I see the problem in public trading and having a bean-counter at the top. The goals shift and the company goes down in the long term. And now you might say “Why can you say that when Apple has just passed 2 trillion in market value?”. Just look at the market data of Apple before 1997. Before its demise under Sculley, Apple was at its most profitable, and just like Cook, Sculley is a bean-counter. This degradation process won’t be sudden and there were more factors at play in 1997, but it will happen in the long term (10 years).

                                                                      1. 1

                                                                        I joined the Apple ecosystem as the owner of a PowerMac G3 B&W that was given to my dad by a friend in 2007. I became a massive fanboy pretty quickly. 13 years later, and I’m embarrassed at how far my ‘sports team’ have fallen. The next 20 years are gonna be a rough ride and I don’t plan to stay for long.

                                                                        1. 1

                                                                          It’s a good call to leave the sinking ship. I’m sure the ARM-Macs will be successful, but they will just be more locked down and not suitable for anyone interested and invested in open source software.

                                                                  1. 1

                                                                    For now, only H.264 is supported in the rkvdec driver. Support for H.264 High-10 profile is currently being discussed by the community. In addition, VP9 and HEVC are planned to be added soon.

                                                                    Wait, does the RK3399 hardware already support both VP9 and HEVC?

                                                                    1. 1
                                                                      1. 1

                                                                        Yeah. The upcoming RK3588 will also have AV1 decoding support (4K 60fps 10bit) acc. to CNX.

                                                                      1. 6

                                                                         It is simple (and cheap) to run your own mail server; they even sell them pre-baked these days, as the author wrote.

                                                                        What is hard and requires time is server administration (security, backups, availability, …) and $vendor black-holing your emails because it’s Friday… That’s not so hard that I’d let someone else read my emails, but YMMV. :)

                                                                        1. 8

                                                                          not so hard that I’d let someone else read my emails

                                                                           Only if your correspondents also host their own mail. Realistically, nearly all of them use gmail, so G gets to read all your email.

                                                                          1. 4

                                                                            I have remarkably few contacts on GMail, so G does not get to read all my email, but you’re going to say that I’m a drop in the ocean. So be it.

                                                                            1. 4

                                                                              you’re going to say that I’m a drop in the ocean. So be it.

                                                                              I don’t know what gave you that impression. I also host my own email. Most of my contacts use gmail. Some don’t. I just don’t think you can assume that anyone isn’t reading your email unless you use pgp or similar.

                                                                              1. 1

                                                                                Hopefully Autocrypt adoption will help.

                                                                                1. 2

                                                                                  This is the first time I’m hearing of Autocrypt. It looks like just a wrapper around PGP encrypted email?

                                                                                  1. 1

                                                                                     This is a practice described by a standard that helps widespread use of PGP by flowing the keys around.

                                                                                     What if every cleartext email you received already had a public PGP key attached to it, and everyone’s mail client had its own key and did the same, sending the key along with every new cleartext mail?

                                                                                     Then you could answer anyone with a PGP-encrypted message, and write new messages to everyone encrypted. That would bring a first level where every communication is encrypted, with a not-so-strong trust model compared to exchanging your keys by whispering every byte of the public key in base64 into someone’s ear, alone in Alaska; but as a first step, you have brought many more people to use PGP.

                                                                                    I think that is the spirit, more info on https://autocrypt.org/ and https://www.invidio.us/watch?v=Jvznib8XJZ8
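
                                                                                     Concretely, the mechanism is just a mail header that carries the sender’s key, roughly like this (address and key data are illustrative and truncated):

                                                                                     Autocrypt: addr=alice@example.org; prefer-encrypt=mutual;
                                                                                      keydata=mQGNBF...

                                                                                     The keydata value is the sender’s base64-encoded public key, so any cleartext mail you receive doubles as a key exchange.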

                                                                                    1. 2

                                                                                      Unless I misunderstand, this still doesn’t encrypt subject lines or recipient addresses.

                                                                                      1. 1

                                                                                        Like you said. There is an ongoing discussion for fixing it for all PGP at once, including Autocrypt as a side effect, but this is a different concern.

                                                                            2. 1

                                                                              Google gets to read those emails, but doesn’t get to read things like password reset emails or account reminders. Google therefore doesn’t know which email addresses I’ve used to give to different services.

                                                                            3. 4

                                                                              Maybe I’m just out of practice, but last time I set up email (last year, postfix and dovecot) the “$vendor black-holing your emails” problem was the whole problem. There were some hard-to-diagnose problems with DKIM, SPF, and other “it’s not your email, it’s your DNS” issues that I could only resolve by sending emails and seeing if they got delivered, and even with those resolved emails that got delivered would often end up in spam folders because people black-holed my TLD, which I couldn’t do anything about. As far as I’m concerned, email has been effectively embraced, extended, and extinguished by the big providers.
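
                                                                           For reference, the DNS pieces in question boil down to a handful of TXT records, roughly like these (selector and policy values are illustrative):

                                                                           ; illustrative values only
                                                                           example.org.                   TXT  "v=spf1 mx -all"
                                                                           mail._domainkey.example.org.   TXT  "v=DKIM1; k=rsa; p=<public key>"
                                                                           _dmarc.example.org.            TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"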

                                                                              1. 4

                                                                                This was my experience when I set up and ran my own email server: everything worked perfectly end to end, success reports at each step … until it came time to the core requirement of “seeing my email in someone’s inbox”. Spam folder. 100% of the time. Sometimes I could convince gmail to allow me by getting in their contact/favorite list, sometimes not.

                                                                                1. 1

                                                                                  I wonder how much this is a domain reputation problem. I’ve hosted my own email for well over a decade and not encountered this at all, but the domain that I use predates gmail and has been sending non-spam email for all that time. Hopefully Google and friends are already trained that it’s a reputable one. I’ve registered a different domain for my mother to use more recently (8 or so years ago) and that she emails a lot of far less technical people than most of my email contacts and has also not reported a problem, but maybe the reputation is shared between the IP and the domain. I do have DKIM set up but I did that fairly recently.

                                                                                  It also probably matters that I’ve received email from gmail, yahoo, hotmail, and so on before I’ve sent any. If a new domain appears and sends an email to a mail server, that’s suspicious. If a new domain appears and replies to emails, that’s less suspicious.

                                                                                  1. 2

Very possible. In my case I’d migrated a domain from a multi-year G-Suite deployment to a self-hosted solution with a clean IP per DNSBLs, SenderScore, Talos, and a handful of others I’ve forgotten about. Heck, I even tried to set up the DNS pieces a month in advance – PTR/MX, add to SPF, etc. – on the off chance some age penalty was happening.
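(For anyone following along, “the DNS pieces” amount to records roughly like these – the domain, selector, and IP below are placeholders – plus a PTR record pointing the IP back at the mail host, which you usually set through the hosting provider.)

```
mail.example.com.             A    203.0.113.25
example.com.                  MX   10 mail.example.com.
example.com.                  TXT  "v=spf1 mx -all"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64 public key>"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```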

                                                                                    I’m sure it’s doable, because people absolutely do it. But at the end of the day the people I cared about emailing got their email through a spiteful oracle that told me everything worked properly while shredding my message. It just wasn’t worth the battle.

                                                                              2. 3

                                                                                That’s not so hard that I’d let someone else read my emails

                                                                                Other than your ISP and anyone they peer with?

                                                                                1. 2

I have no idea how bad this is, to be honest, but server-to-server communication between the major email providers is encrypted these days, right? And if we can’t trust the channel, we can always encrypt the messages themselves, but that leads to other issues unrelated to self-hosting.

Self-hosting stories with titles like “NSA-proof your emails” are probably a little oversold 😏, but I like to think that [not being a US citizen] I gain some privacy by hosting those things in the EU. At least I’m not feeding the giant ad machine, and just that feels nice.

                                                                                  1. 7

                                                                                    I’m a big ‘self-hosting zealot’ so it pains me to say this…

                                                                                    But S2S encryption on mail is opportunistic and unverified.

What I mean by that is: even if you configure your MTA to use TLS and prefer it, it really needs to be able to fall back to plaintext, given the sheer volume of providers whose MTAs aren’t configured for encryption and so can neither receive nor send encrypted mail.

It is also true that no MTA I know of will actually verify the TLS CN field or the CA chain of a remote server.

                                                                                    So, the parent is right, it’s trivially easy to MITM email.

                                                                                    1. 3

                                                                                      So, the parent is right, it’s trivially easy to MITM email.

That is true, but opportunistic, unverified encryption did defeat passive global adversaries and passive MITM. These days an attacker has to go active in order to read mail, which is much harder to do at massive scale without leaving traces than staying passive. I think there is some value in that, post-Snowden.

                                                                                      1. 1

What I’ve done in the past is force TLS for all the major providers. That way a lot of my email can’t be downgraded, even if the long tail can be. MTA-STS is a thing now though, so hopefully deploying that can help too. (I haven’t actually done that yet, so I don’t know how hard it is. I know the Postfix author said implementation would be hard, though.)
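Concretely, this was just a Postfix policy map. A sketch from memory, with the domain list purely illustrative:

```
# /etc/postfix/main.cf
# Opportunistic TLS by default, mandatory TLS for domains listed in the policy map.
smtp_tls_security_level = may
smtp_tls_policy_maps = hash:/etc/postfix/tls_policy

# /etc/postfix/tls_policy  (run "postmap /etc/postfix/tls_policy" after editing)
# "encrypt" requires TLS but doesn't verify the certificate; "secure" would verify it too.
gmail.com       encrypt
googlemail.com  encrypt
outlook.com     encrypt
yahoo.com       encrypt
```

MTA-STS is a different mechanism: the receiving domain publishes a policy, and the sending side has to fetch and honor it.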

                                                                                  2. 1

                                                                                    I get maybe 3-4 important emails a year (ignoring work). The rest is marketing garbage, shipping updates, or other fluff. So while I like the idea of self hosting email, I have exactly zero reason to. Until it’s as simple as signing up for gmail, as cheap as $0, and requires zero server administration time to assure world class deliverability, I will continue to use gmail. And that’s perfectly fine.

                                                                                    1. 7

                                                                                      Yeah, I don’t want self-hosted email to be the hill I die on. The stress/time/energy of maintaining a server can be directed towards more important things, IMO

                                                                                  1. 1

                                                                                    Is Wireguard a good solution if I want to host a web server that only trusted devices can even see?

                                                                                    1. 1

                                                                                      I use it like that, just make sure all your trusted devices have WireGuard.
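The rough shape of it (keys and addresses below are placeholders) is a WireGuard interface on the server with one peer per trusted device, and the web server bound to the tunnel address only:

```
# /etc/wireguard/wg0.conf on the server
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server private key>

# One [Peer] section per trusted device
[Peer]
PublicKey = <laptop public key>
AllowedIPs = 10.8.0.2/32

[Peer]
PublicKey = <phone public key>
AllowedIPs = 10.8.0.3/32
```

Then make the web server listen only on the tunnel address (e.g. `listen 10.8.0.1:443;` in nginx), so it isn’t reachable from the public interface at all.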

                                                                                      1. 1

                                                                                        Is there a guide you used? The best I have so far is @cadey’s

                                                                                    1. 0

                                                                                      It mentions needing Google Play Services (for notifications?). Does anyone know if it works (perhaps with reduced functionality) without Google Play services? Contemplating a Pixel 3a running Graphene

                                                                                      1. 2

                                                                                        Not really what you’re asking but I’m running the latest version (5.600) from F-Droid on GrapheneOS without Google Play services. This one works at least.

                                                                                        I do have the most minimal microg installed for running a Gcam port.

                                                                                        1. 1

                                                                                          Huh? Where do you see that? It’s working fine for me without Google Play Services on my OnePlus 6T running LineageOS! I’m sure the same would be true for a Pixel 3a running Graphene.

                                                                                          1. 1

                                                                                            “So why did we not update the app? It’s a combination of things. A major factor was the API level requirement by Google Play. “

Did I misunderstand? Likely… :) Good to know though, thanks!

                                                                                            1. 3

                                                                                              I think that sentence is referring to a Google Play (store) requirement (last year) that apps bump their targetSdk versions in AndroidManifest.

The main breaking change this resulted in is that you have to add explicit runtime requests for some permissions that used to be granted at install time just by declaring them in AndroidManifest.xml.
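If it helps, the change looks roughly like this in app code. This is a made-up minimal Activity, assuming the androidx activity library; before the targetSdk bump, the permission entry in AndroidManifest.xml alone was enough:

```kotlin
import android.Manifest
import android.os.Bundle
import android.widget.Toast
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class MainActivity : ComponentActivity() {
    // With a modern targetSdk, "dangerous" permissions must be requested at runtime;
    // declaring them in AndroidManifest.xml only makes them requestable.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            val msg = if (granted) "Camera allowed" else "Camera denied"
            Toast.makeText(this, msg, Toast.LENGTH_SHORT).show()
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        requestCamera.launch(Manifest.permission.CAMERA)
    }
}
```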

                                                                                              1. 3

                                                                                                Ah, the “API target level” is an Android app configuration option that determines what Android APIs you can use in your app. Google Play now requires that all apps increase this to a new minimum version. It has nothing to do with Google Play Services.

                                                                                                1. 1

Additionally, when Google decides to make a backwards-incompatible change, they always do so by having the change only take effect when the app’s declared targetSdk version is >= the version in which the change was introduced.

The second half of this is that, a while later, the Google Play store starts rejecting new apps with older targetSdk values, so app devs don’t get to just leave everything on the oldest targetSdk forever.

                                                                                                2. 1

                                                                                                  Ah… Thanks all!

                                                                                            1. 2

                                                                                              Now they just need to release the Ryzen 4000 models, and I may just be ready to move on from my x230…

                                                                                              1. 2

                                                                                                I switched to an x390 and don’t regret it

                                                                                                1. 1

                                                                                                  I just read the specs for that, and am still convinced that every new “modern” laptop now is a step backwards.

My x230 has 2x the memory (16GB), and on the x390 the RAM is soldered down (lol), so you’re stuck with whatever you bought forever. Is the hard drive at least replaceable? I guess I value repairability more than most consumers now, because I’m sick of having to throw away electronics after ~2 years. The x390 looks like just another disposable laptop. (The “17.5hr” battery life is super impressive, though probably inflated.)

                                                                                                  1. 3

                                                                                                    The disk is just a user replaceable M.2 NVMe drive. You can get 16GB of RAM in an X390 as well if you choose the i7 CPU option.

                                                                                                    1. 2

                                                                                                      from what I’ve seen, almost all SSDs are user replaceable (m.2) in modern laptops

                                                                                                      a notable exception is Apple, who uses a proprietary type of drive (because of course they do)

                                                                                                1. 3

The only way to “settle” is to forget about passwords entirely. I can fully control a remote machine using public key cryptography without ever having to deal with dirty passwords. Why can’t I read my webmail or buy stuff from an online shop the same way? It is ridiculous that in the age of public key crypto we are still using passwords.

                                                                                                  1. 3

Do you think that non-technical users can and will use public key crypto? I mean, I guess they already do every time they visit a site with https:// in the URL.

                                                                                                    Is it just that the right tools haven’t been found yet? I was on a call with HYPR a few days ago (disclaimer, we’ve done some work integrating with their solution): https://www.hypr.com/why-hypr/ and it seems pretty sweet, but then we move from securing knowledge to securing devices.

                                                                                                    Something has to hold the private key, after all.

                                                                                                    1. 3

                                                                                                      I doubt they will be able to manage private keys well.

Servers indeed are doing that now with HTTPS, but we expect server admins to be a little better at these things. And they still fail more often than we would like. IIRC, HPKP was deprecated because it was too easy for sysadmins to get wrong, or to have it used against them by malicious actors, rendering their domain semi-permanently inaccessible. Are we going to expect casual users to do better than them?

Casual users may have even messier use cases. Say you have 5 devices that you want to be able to access all of your accounts from. Now you’d have to register all 5 public keys with every service you want secure access to, correctly drop the right key from all of them if you lose or discard a device, and add a new one to all of them if you get a new device.

                                                                                                      1. 2

                                                                                                        Build the protocol into the browser, have it manage your key. Browser vendors can even store an encrypted version of your key on their servers (optionally) to allow you to regain access if you lose it/sync to multiple devices.

Edit: Like BitID, but instead of a Bitcoin private key you use any other type of private key, and it lives in your browser instead of in another app.
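A sketch of the flow I mean, nothing BitID-specific, just a generic challenge-response with whatever keypair the browser holds (Ed25519 here, which the JDK ships since version 15):

```kotlin
import java.security.KeyPairGenerator
import java.security.SecureRandom
import java.security.Signature

fun main() {
    // Browser/device side: the keypair it manages on the user's behalf.
    val keyPair = KeyPairGenerator.getInstance("Ed25519").generateKeyPair()

    // Site side: issue a fresh random challenge for this login attempt.
    val challenge = ByteArray(32).also { SecureRandom().nextBytes(it) }

    // Browser side: prove possession of the private key by signing the challenge.
    val signature = Signature.getInstance("Ed25519").run {
        initSign(keyPair.private)
        update(challenge)
        sign()
    }

    // Site side: verify against the public key registered at signup; no password anywhere.
    val ok = Signature.getInstance("Ed25519").run {
        initVerify(keyPair.public)
        update(challenge)
        verify(signature)
    }
    println("login ok: $ok")
}
```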

                                                                                                        1. 2

You would still have to synchronize the private key between your devices. And even if nowadays your browser can sync itself across devices, it does so through an online account. Secured with a password.

Passwords are going to last, because they are immaterial, so you can have them with you at all times “just” by remembering them. Physical private keys are too complex to manage and too easy to lose, thus locking you out. The last option we have is biometric identification, which would be easier for everyone (nothing to remember, everything with you at all times), but that is a further step into privacy concerns…

                                                                                                          1. 1

Mozilla tried this with Persona (née BrowserID), and it did not take off.

                                                                                                      1. 29

                                                                                                        TL;DR: Organize notes with tags and links instead of folders. Notes should be small & digestible “ideas” instead of lengthy analysis so they can be better linked.
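For illustration (a made-up note), a zettel is usually just an ID, one idea in your own words, tags, and links to related notes:

```
202003271430 Links beat folders for resurfacing ideas
Tags: #zettelkasten #note-taking

A note filed in a folder is only found again by one path; a note linked
from several other notes gets re-encountered from several angles.

See also: [[202002141105 Evergreen notes]], [[201911030815 Spaced repetition]]
```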

                                                                                                        1. 20

I would say there is a more important, not explicitly mentioned overall concept here. You train yourself to keep revisiting your accumulated knowledge, rather than gradually forgetting about the past, every time from a different angle, because you actively try to fit new ideas into an existing body of previous ideas. You reward yourself for doing so (by neatly tucking away a card in a drawer), creating an incentive to keep doing it.

This way you’ll be much more inclined to regularly revisit what you know and keep previous ideas in a semi-active working state. I suspect that the Zettelkasten maps very well onto a mode of functioning that the brain is surprisingly good at (and thus feels good and thus readily induces a state of flow): maintaining relatively quick access to a lot of information by linking ideas together in the form of a web.

                                                                                                          1. 5

                                                                                                            That’s a good summary.

                                                                                                            I’d also emphasize the organically evolving heterarchy (which the Zettelkasten facilitates) as opposed to a pre-defined hierarchy (that is the norm for outliners like Workflow, Dynalist, etc.).

                                                                                                            1. 1

                                                                                                              For anybody coming to this thread late, we are developing an official community zettelkasten here: https://www.zettel.page/

                                                                                                            2. 1

                                                                                                              So basically what I’ve been doing for the past 15 years with a private wiki. Didn’t know it had a name!

                                                                                                              (Although in my case, due to other issues, my extensive notes and organization only bring me up to functionally productive.)

                                                                                                            1. 3

                                                                                                              I’m pretty sure this won’t do anything to prevent cheating at high levels. Now people will just have to run a hypervisor and run their cheats in ring “-1” so to speak.

                                                                                                              1. 1

                                                                                                                It’s so close already, so it’s good that phosh ideas are finally moving upstream!

                                                                                                                Here’s a cherry picked screenshot of what GNOME looked like on phones before: https://twitter.com/jhanikhil/status/1229270316053958657