Threads for gabeio

    1. 3

      I had a pretty bad experience paying for parking as well. I paid for parking with my mobile phone while standing near my car, then left to go buy stuff in a store and eat. When I wanted to increase the time on the meter, I found out that their app uses Bluetooth to talk to the actual meter next to my car, which meant I was deceived into believing that I could add time while away from my car?! On top of that, the app was so poorly put together that I recall not being able to adjust how much time I wanted to put on the meter. Truly horrible. Honestly, at that point the only improvement over a coin meter is that they don’t have to pay someone to empty it.

    2. 8

      Everybody is looking for an alternative to terraform, but it was always there, in Ansible.

      1. 26

        As someone who used Ansible heavily for ~5 years: Ansible is pretty awful for cloud orchestration. The modules were of wildly varying quality, the YAML-based configuration gets old really quickly, and in general Ansible is just clunky. One example of that clunkiness is how, for certain collections, you have to manually install dependencies on the target machine for the tasks to work (the one that comes to mind was Azure, where I had to add an explicit step when setting up the VMs to install specific Python packages, otherwise the collection wouldn’t work).

        It’s a shame. I really wanted to love Ansible, but after that long working with it, I dreaded every time I had to set up a new kind of server.

      2. 23

        My impression from (admittedly light) use of ansible and terraform is that ansible lacks a lot of the state management terraform has. So, for example, when deleting an ASG from a terraform module, terraform is smart enough to delete the ASG. But ansible cannot be, because it has no memory of what changes it applied in previous runs.

        1. 3

          Having had to fight against terraform state on too many occasions, I think that not having one is an advantage in almost all situations. The only case where the state is helpful is deletion, as you mentioned, and I’ve only had a handful of those.

          1. 22

            Drift detection is extremely useful when you have tens of resources that should be identical (or close enough).

            Terraform state doesn’t magically fix it, and sometimes even gets in the way of changes, but without it you can’t even tell whether the actual configuration has drifted from the intended state.
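
            With state, checking for drift is basically one command, assuming a reasonably recent Terraform (exit code 2 means the plan found differences):

            terraform plan -detailed-exitcode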

            1. 1

              I’ve never used these sorts of things at anything more than small scale, but… yes you can? You run ansible with the --check --diff flags and it prints out the changes it would need to make to bring your systems into the state specified in the playbook. I’m probably misunderstanding what you mean.
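
              For reference, the dry run I mean is just this (site.yml being whatever your playbook is called):

              ansible-playbook site.yml --check --diff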

          2. 3

            What kinds of situations cause you to have to fight with Terraform state? I’ve been using it for years and the only recurring headache I used to have with state was when I wanted to restructure my configuration (moving things into modules, etc.) which used to require an obnoxious amount of manual state fiddling.

            But that was before they added the moved block to support refactoring. Since I started using moved, I can’t remember having to manually mess with a statefile even once.
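
            For anyone who hasn’t lived through it, the manual fiddling was a pile of commands along these lines (addresses made up), which is exactly what a moved block now expresses declaratively inside the configuration:

            terraform state mv aws_instance.web module.web.aws_instance.web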

            1. 1

              In my experience, which I hope is uncommon, terraform state rm/pull/push are commands I ended up typing within 1 week of using terraform.

              I’ve had issues with:

              • importing AWS routes and route tables, with nonsensical errors about the format of the IDs
              • importing AWS key pairs
              • deletes actually removing resources but not updating state
              • specific providers being crap and forcing you to manipulate state for trivial changes (the OVH and OpenStack providers want to recreate a VM when you add a network port or an additional storage device)

              Manipulating state when moving resources around is perfectly justified in my opinion, but it stems from using state in the first place. If one chooses to interpose a state concept, as terraform and git do, the implementation should be flawless across the entire ecosystem, and from what I’ve seen terraform’s isn’t.
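
              To be concrete, the kind of state surgery I mean looks like this (addresses and IDs are made up):

              terraform state pull > backup.tfstate
              terraform state rm aws_key_pair.deployer
              terraform import aws_key_pair.deployer deployer-key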

      3. 11

        Data manipulation in Ansible is horrifically painful. Sorry, but these tools just don’t compare.

      4. 3

        So many Ansible collections use Terraform under the hood.

        1. 3

          Which one? I checked digital ocean, aws, azure, gcp, openstack, online, linode and cloudstack.

          All of them used Python libraries like boto3, the Azure SDK, the Google Cloud SDK, … Using terraform sounds counterproductive, as they would have to fork-exec a non-built-in binary when a lot of the cloud providers already provide a Python SDK.

    3. 14

      The one nice thing about “transpiler” is that it combines the two existing words, “compiler” and “translator”, which were previously the accepted names for these sorts of transformations. If all compilers-and-translators courses were replaced with transpilers courses, then maybe it wouldn’t be so controversial. The problem is the insistence that transpilers are somehow neither compilers nor translators.

      1. 30

        In my experience most of the people who use the word “transpiler” are under the impression that “it’s not a compiler unless it emits machine code” which is frustrating.

        1. 6

          Unfortunately, that’s probably because the word compiler originally meant (at the time) compiling human-readable text to machine code, and if you don’t believe me, here are a few definitions copied off of a few well-known search engines:

          a program that converts instructions into a machine-code or lower-level form so that they can be read and executed by a computer.

          (computer science) a program that decodes instructions written in a higher order language and produces an assembly language program

          book definition:

          a computer program that translates an entire set of instructions written in a higher-level symbolic language (such as C) into machine language before the instructions can be executed

          I do agree, though, that the definition has been expanded upon; Wikipedia has a fairly open definition of it:

          In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language).

          1. 11

            Yeah, I understand the word has multiple definitions; I was expressing frustration that people don’t seem to be aware of the definition that I use.

            A definition that specifically excludes other meanings which are in active use is … just not a very good definition?

            1. 5

              A definition that specifically excludes other meanings which are in active use is … just not a very good definition?

              Yes and no, this article is literally someone arguing that in reverse with the word transpiler so … you tell me?

          2. 6

            into a machine-code or lower-level form

            1. 3

              I went down a fairly deep rabbit hole with some talented theoreticians after making a comment like that. Defining what ‘lower level’ means is incredibly hard, and it became clear that any definition we came up with often didn’t give even a partial ordering over the kinds of things that we care about in practice.

            2. 3

              machine-code or lower-level form

              I read that as meaning machine-code or lower-than-machine-code form.

              1. 7

                Natural languages are beautifully ambiguous. But what would you consider even lower-level than machine code?

                1. 5

                  IIUC (not a hardware person) these days processors actually translate back from machine code in order to recover the original program graph, in order to better divvy up work on the hardware itself. So machine code is really an abstraction over what is actually going on under the hood.

                  1. 2

                    I’m not an expert, but I don’t think that’s right. Processors do reorder instructions and split them up into “micro-ops”, but I don’t think it’s possible or desirable to reconstruct the original graph in the CPU.

                2. 2

                  But what would you consider even lower-level than machine code?

                  I would have guessed they left it open on purpose in the event there was something lower in the future. (They just left it open in a different way.)

                  At any rate I was only pointing out the easily accessible definitions do specifically mention machine-code.

    4. 8

      While this certainly fits with my experience, what about people who don’t get joy from programming, don’t want to learn new stuff, don’t find the puzzle fun?

      1. 15

        Maybe they are in the wrong industry; I don’t think anyone is happy at a job that sucks the fun out of life.

        1. 18

          I’m not sure people are, in general, expected to be happy in their job? So long as it pays the bills?

          1. 6

            I find it hard to imagine a professional football player who doesn’t, or at least didn’t for a substantial amount of time in the past, like playing football. I also can’t do the same for an influential physics professor. I’m willing to believe that not all jobs are equal in this sense. I have a burning passion for programming and I still have to push myself hard in order to endure the mental pain of trying to align a hundred stars and solve difficult programming challenges. I can’t imagine how one could motivate oneself to suffer that 8 hours a day without feeling the kind of joy that comes with finding the solution.

            It’s hard to describe this to non-programmers, but I believe I have the right audience here. Programming is a very stressful job. Not stressful like a surgeon or a stock broker who get stressed due to what’s at stake, but stressful because you have to push yourself to your limits to turn your brain into a domain specific problem solving apparatus and find the solution.

            BTW, I know that there are a lot of programming jobs out there which don’t resemble what I’m describing here at all, but I know that there are jobs like this too, and we don’t have a different name for them.

            1. 2

              I have a burning passion for programming and I still have to push myself hard in order to endure the mental pain of trying to align a hundred stars and solve difficult programming challenges.

              There is so much programming out there where you do some boring crud service on some db or where you assemble 4 different json blobs in a different format and pass it to the next microservice or cloud endpoint. That’s not truly exciting or challenging.

              1. 1

                I know that and I respect those jobs and programmers, but as I’ve mentioned some programming jobs require constant puzzle solving and creativity. I think my comment would be more agreeable if I said “compiler engineer” or “game AI engineer” or “database engineer”, but I don’t know of any term that can be used about those jobs collectively. Maybe we need a term like “R&D programmer” or maybe I should just have said “R&D engineer” and decoupled my point from programming per se.

          2. 5

            I think most people strive to be happy in their jobs, but yes, the main factor for having one is to not starve or be homeless.

            1. 8

              I’ve seen clock-in clock-out devs who didn’t give a shit about anything they did. They took no joy nor pride in their work. They were government contractors, and so they did the absolute least possible (and lowest quality) that the gov asked for and would accept, and no more. They didn’t seem to care about what they got personally out of their jobs; they seemed to think it was normal. Drove me nuts, quit the company in 6 months.

              1. 2

                I had the exact same experience with some additional slogging through warehouses (cutting cardboard; I wish I were joking) and testing security hardware while waiting for a security clearance shortly after OPM got hacked (~6 months to get the clearance). Then to finally be surrounded by people warming their chairs, I couldn’t stand it. I understand the need to have stability in your job but pride is also important, at least to me.

        2. 8

          It depends on why you do it. Let’s not forget that programming is a very well paid profession. Maybe you use the good salary to finance the lifestyle you want to have (buy a house/apartment, have kids, maybe expensive hobbies). I can certainly imagine a more fun place to work than my current job, but the pay is very good. Therefore I stay, because it enables my family and me to have the life we want.

          1. 3

            Thanks, those are very interesting points. Indeed, I think there are a lot of reasons to take a job besides fun, and that is very respectable. On the other hand, I would say that people having fun doing it have a better chance of performing well and improving their skills in the long run.

            1. 4

              That is, interestingly, quite controversial in the research, and we have solid data pointing both ways.

              Note also that not having fun does not equate to having your soul sucked out of you.

              Being meh about a job is ok. That is the case for nearly everyone.

              1. 1

                Thanks so much for the reply! It would be so nice if you could point me to some of this research!

    5. 3

      Can someone please explain to me where the execution of arbitrary code is occurring?

      Input text prompt -> model -> output text -> escaping -> web page code
      

      The model is bounded in what functions it uses, i.e. it doesn’t run system(), so it can’t occur in the model step?

      The escaping step could be something simple (plus setting the page charset):

      sed -e 's|&|\&amp;|g' -e 's|<|\&lt;|g' -e 's|>|\&gt;|g' -e 's|$|<br />|'
      

      That leaves only your website code as being the vulnerable step? In which case this has nothing to do with GPT or machine learning models at all? Or am I missing something?

      1. 2

        Can someone please explain to me where the execution of arbitrary code is occurring?

        it’s happening in their demo web app. they allow the model to generate Rails code based on the user’s prompt and then execute it on their backend.

        1. 9

          Much as I enjoy bashing GPT (and I really do), it seems that the problem here is allowing a user to submit arbitrary code and executing it without sandboxing. The fact that there’s a nice UI in front of the massive security hole doesn’t change anything.

        2. 2

          Why would they design it to allow that? EDIT: Looks like the linked article asks the same question. Sorry.

          1. 2

            I don’t know if this is the correct answer, but recently I saw someone showing off ChatGPT doing things for him, like signing up for a service on the internet with his credit card. I believe this requires ChatGPT to execute code based on the user’s input so that it accomplishes the task for them (which sounds similar to this). Admittedly, this would be awesome for personal use, but of course only if you host it yourself, since, as everyone here has mentioned, it is a giant security hole to do such a thing unless there are serious guard rails around what tasks can be accomplished. I do not work in ML, so this could be complete garbage I just spouted.

            1. 2

              So… in theory the bot could also earn them money and direct it to their bank account. Then self-host itself by paying for VPS? Excellent :D

              1. 1

                Can I borrow this?! I love it.

                1. 2

                  I will only allow you to plagiarise this idea if you use a neural network to launder it :)

    6. 17

      If that developer leaves the company this can lead to what I like to call “alien artefacts”: software that works, and might work very well, but is hard to comprehend or change by other developers.

      1. 2

        This. And it gets much worse if said alien artifact breaks … and then you don’t know how it was working let alone why it isn’t. It’s not a great position to be in.

      2. 2

        This is a great term. I inherited a lot of unknown code and systems when I took over the management of the software team (I primarily focused on the web stuff), and now I have projects this year to break down these alien artifacts and either refactor them or, at the very least, write clear documentation on what they do.

      3. 1

        I think this is an interesting subject. I have been a creator and inheritor of many such alien artefacts. I find it much easier to deal with them when I’m given complete authority over them so that I can assimilate the artifact as I understand it. The nightmare scenario is when the creator hasn’t actually left the company but rose in the hierarchy so they outsourced it to you, but they’re still emotionally invested in it so they won’t give you enough authority to assimilate it.

    7. 6

      I have one to add:

      • Email addresses have at least 2 characters to the left of the @.
        Meaning, e@example.com is invalid.

      The first culprit I saw was phpBB, which refused my email address because I included a single letter on the left (the right being my own domain name, so I didn’t want to be redundant). I have since given up and now use full words (generally my first name).

      1. 2

        apparently having a dot is another one: the ai TLD itself has an MX record, so the email could just be a@ai …
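
        you can check that for yourself with dig:

        dig +short ai MX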

    8. 1

      My solution to this problem consists of a few steps:

      1. Either use a password manager that generates passwords from a “master key”, use SSO for everything, or use multiple password managers with encrypted backups on multiple cloud services
      2. Use strong 2FA (multiple PIN-protected YubiKeys + TOTP) for everything
        FYI: YubiKeys support 63-digit alphanumeric “PINs”, so there’s no risk with untrusted people accessing them either.
      3. Backup the primary passwords for [1] and the QR codes for [2] on an encrypted USB drive
      4. Deposit sets, each consisting of a PIN-protected YubiKey [2] and one of the encrypted USB drives [3], in different, trustworthy places.
      5. Always keep one set on your body.

      The only situation in which I could get locked out of all my services is if four different places, some of them hundreds of kilometers apart, all get burned/nuked/SWATted at the same time while I’m swimming (the only situation in which I don’t follow rule 5).

      1. 2

        Yubikeys are waterproof. Unless you swim naked, you could have them with you.

        1. 1

          Oh that’s good to know! Do you know how well they handle the salt in seawater? If they handle that well, and I find an equally-waterproof usb drive, that’d be awesome!

          1. 2

            It has a pretty solid rating of IP68 (https://en.wikipedia.org/wiki/IP_Code)

            • 6: dust-tight. No ingress of dust; complete protection against contact. (Tested with a vacuum applied; test duration of up to 8 hours based on airflow.)
            • 8: immersion, 1 meter (3 ft 3 in) or more depth

            and their press blog (take with a grain of salt) https://www.yubico.com/press-releases/yubikey-survives-ten-weeks-in-a-washing-machine/ claims that it survived a 48 meter dive in saltwater.

            the only thing the salt could do is corrode the contacts on the plug; the rest is encased in plastic (not just housed in a plastic case like 99% of usb storage devices). Just make sure it’s fully dry before plugging it in.

          2. 1

            I did not test salt water myself, only the washing machine and the swimming pool a few times, and I did not notice any problems afterwards.

    9. 6

      One of the things I honestly love about GitHub Actions over most of the others is that you can actually run your own runners on your own machines. Yeah, it might defeat the purpose if you don’t have enough builds to exceed their included minutes, but if you want something well built and you want to self-host the runners for one reason or another, it’s an amazing option now.
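
      Roughly, self-hosting one comes down to unpacking GitHub’s runner package on your machine and registering it (the URL and token below are placeholders), after which jobs can target it with runs-on: self-hosted:

      ./config.sh --url https://github.com/OWNER/REPO --token <REGISTRATION_TOKEN>
      ./run.sh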

      1. 6

        Just for the record, you can do the same with GitLab and Azure DevOps. I’m sure there are others too.

        1. 2

          GitLab is open source, so I’m not shocked; Azure DevOps is an interesting one, although admittedly I have almost never touched Azure. I believe sourcehat has something as well. I’m just pretty happy with how GitHub Actions are structured, and we already use GitHub at work, so it is just nice that they allow you to use your own. I should have clarified that this is mostly in contrast to CircleCI and TravisCI, which is not shocking considering hosted CI is their core product.

          1. 1

            sourcehat

            You mean soucehut :P ? Yeah, I’m 99.99% sure they also have that.

            in contrast to CircleCI and TravisCI

            yes, your comment makes more sense that way haha

            1. 2

              Sourcehut, in fact, does not have self-hosted runners. The architecture as it is right now isn’t really fit to just plop them in. You can however fairly easily run your own instance of just the build service.

              1. 1

                Good to know. Can you plug that build service into, say, a “hosted sourcehut”?

                1. 1

                  Ehhh, somewhat. I’m not particularly happy with it though, you basically need to set up your own push webhooks, and I don’t think there’s anything that would automate that. I want to rework how cross-service automation works in sourcehut, but I haven’t found the time for that yet.

            2. 1

              You mean soucehut :P ?

              Oops, yes I did mean sourcehut. I’ve been jokingly calling it “sr.ht” -> sir hat for too long apparently lol.

        2. 1

          Buildkite is another one that hardly anyone seems to know about. I used it at my last job and was quite happy with it.

    10. 1

      It was also interesting to see the reaction from open source developers to unsolicited pull requests from what looks like a bot (really a bot found the problem and made the PR, but really a human developer at Code Review Doctor did triage the issue before the PR was raised). After creating 69 pull requests the reaction ranged from:

      I wonder if you’d get better reactions if a human made the PR and didn’t say it came from a bot.

      1. 1

        That’s exactly the quote that prompted me to share the article. I think there was recently a case in the Linux kernel community where some university group was submitting (arguably bad) patches that were generated by a tool – it didn’t go well, if I recall correctly. Maybe initial reactions would be better, but long term, if the project finds out, it would lead to a loss of trust.

        1. 1

          It was the University of Minnesota and they got their entire university banned from submitting anything to the Linux kernel.

          The biggest argument against stuff like this that I saw was that the heads of the groups being tested had not consented to participate in the study.

      2. 1

        A flipside of that is that you might expect better analysis from a human if it had been filed under a human’s name. These are clearly mostly auto-generated bug reports, and a number of false positives were filed, despite the triaging (from just spot-checking: 1, 2). So filing them under a bot’s name is maybe more honest to manage expectations.

    11. 1

      Where is the code that used this? The article mentions it’s their Go driver. But the only scylla go driver I can find when I google is https://github.com/scylladb/gocqlx and that doesn’t have any btree library in go.mod/go.sum.

      1. 3

        I think currently there is no code, but when it arrives it will be posted under this issue

      2. 2

        This work was also part of our ongoing partnership with the University of Warsaw.

        They seem to indicate that this isn’t for scylladb but for their work with the University instead.

    12. 10

      I have seen this and unfortunately had to do something similar (the editing on production part). What’s more scary is that you get used to it. It stops feeling like you are doing said haircuts with chainsaws. At first it definitely feels that way, later on … not so much. I even had the unfortunate pleasure of having to actually sometimes go in and manually edit fields in the live production database because there was literally no other way to fix entire sites.

      1. 20

        What’s more scary is that you get used to it.

        For anyone interested in learning more, this concept is called normalization of deviance, and applies in a variety of low-probability, high-consequence activities.

      2. 6

        I did a few fixes like this for a different reason: I didn’t know better. The boys said, “fix this ASAP”, so I ssh’d in, yadda, yadda, yadda, stuff fixed. In my “defence”, I also worked in support at a major hosting company, where ssh’ing to prod, finding out which client was hammering the shared database, and acting on it (restarting the DB server or Apache if it was stuck, blocking some script kiddie’s IP if he or she was trying to “hack” the site, or just suspending the client and letting them deal with the troubleshooting) was just part of the job.

        Since then, I’ve started doing both things in better ways :)

      3. 2

        A place I used to work at had no way to reverse a common accidental click in our admin backend… except by opening a Rails console on production. And this was done frequently, with no checklists, having to guess which fields to edit. Needless to say, the poor guy doing this spent just as much time dealing with the resulting data corruption.

    13. 8

      The post smells like one big Nomad advertisement.

      The line

      Nomad’s batch scheduler is optimized to rank instances rapidly using the power of two choices described in Berkeley’s Sparrow scheduler.

      “scheduler is optimized” is an overstatement. The paper they link to says, I quote:

      …a decentralized, randomized sampling approach provides near-optimal performance while avoiding the throughput and availability limitations of a centralized design.

      Basically it’s a poor man’s load balancer. The Sparrow scheduler chunks each node into slots and schedules workloads into them. As a result, randomized scheduling “provides near-optimal performance” only in one situation – when the scheduled workloads are homogeneous (like batch jobs). Good luck reaching high saturation of your nodes with this kind of scheduler if you have to deal with any of the following factors:

      • the workloads are anything more complex than batch workloads,
      • incremental scheduling is needed,
      • the nodes are heterogeneous (differ significantly from one another),
      • containers are used by your team in a composable manner (employing pods) to achieve high performance through data locality

      It’s near impossible to allocate resources efficiently (with a randomized scheduler) for non-homogeneous workloads without high levels of churn.
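
      For reference, “power of two choices” really is as simple as it sounds: sample two nodes at random and place the work on the less loaded of the two. A toy sketch, assuming a made-up nodes.txt with one “<node> <load>” pair per line:

      # pick two random nodes, then keep the one with the lower load
      shuf -n 2 nodes.txt | sort -k 2 -n | head -n 1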

      But from what I remember about Nomad, their scheduler isn’t the “randomized” one, but one using a system of two queues to schedule workloads. Nomad’s scheduling page confirms it. So the article’s point is moot.

      But let’s leave the advertising tone of the article aside and stick to the engineering side of things. It’s unclear to me why the option of custom k8s scheduler has not been considered at all. The engineering effort needed to implement a custom scheduler is (roughly) an order of magnitude less than:

      • migration of their rendering stack from managed GKE to a new scheduler
      • subsequent support of Nomad with dedicated on-call personnel

      References:

      1. Sparrow: Distributed, Low Latency Scheduling
      2. Configure Multiple Schedulers | Kubernetes
      1. 1

        It’s unclear to me why the option of custom k8s scheduler has not been considered at all.

        They did mention that they at least thought about developing their own, although they don’t go into more detail on whether or not this was completed/implemented, or why it ended up not working (assuming it didn’t work).

        All these issues eventually convinced us to develop our own in-house autoscaler.

        1. 2

          All these issues eventually convinced us to develop our own in-house autoscaler.

          I don’t think so. IIUIC, with the phrase quoted above they refer to Nomad, because:

          • they say it at the end of the “Reasons Behind the Switch” section, concluding their decision, and
          • an “autoscaler” isn’t a custom scheduler… unless that’s the author’s (sloppy) way of naming it
    14. 23

      Always vaguely annoyed when Cloudflare takes over more stuff, but I can understand not wanting to deal with all this bullshit when you’re just one person. If he reads this, thanks for the service :)

      Not all of the interactions were positive, however. One CISO of a US state emailed me and threatened all kinds of legal action claiming that icanhazip.com was involved in a malware infection in his state’s computer systems. I tried repeatedly to explain how the site worked and that the malware authors were calling out to my site and I was powerless to stop it.

      Shades of when CentOS hacked Oklahoma City’s website.

      1. 4

        Always vaguely annoyed when Cloudflare takes over more stuff

        I’m genuinely curious as to why? Cloudflare seems at least so far the least “evil” of large internet companies.

        1. 7

          Cloudflare seems at least so far the least “evil” of large internet companies.

          Still, I would prefer it to be more decentralized. Lots of Internet traffic is already going through CF.

          1. 1

            The protocols that the Internet relies on have an inherent centralising effect on Internet services. While a decentralised Internet would be nice, I don’t know of any good proposals for making this happen while also solving the numerous problems to do with how to share power and control in a decentralised manner while still maintaining efficiency and functionality.

            inb4 blockchain :eyeroll:

        2. 2

          Unless cloudflare turns out to be working closely with no such agency. Then the decentralization crowd will be vindicated and we will still need to solve all the problems cloudflare solves for us.

          1. 3

            Of course, the decentralisation crowd could solve them right now instead of waiting.

            There’s always someone in the peanut gallery who’s vindicated, because the peanut gallery is rich in opinions. For any opinion, there’s someone who holds it, doesn’t volunteer or otherwise act on it, is vindicated on reddit/hn/…/here if something bad happens or can be alleged to happen, and blames the people who did volunteer.

            Blaming those who volunteer is a shameful thing to do IMNSHO.

      2. 3

        It looks like the site (though still managed by the post author) migrated entirely to Cloudflare’s systems in 2020 or so.

    15. 14

      Is anyone else having issues with the new tab style? It seems like the tabs are blending with each other.

      1. 7

        If all your tabs are in containers (due to Temporary Containers especially), they’re all very clearly demarcated by the container color lines :D

      2. 5

        No, honestly it needed a refresh. Mozilla doesn’t seem to change their Firefox design anywhere near as much as Google does with Chrome. Which is nice, but it does eventually get stale. When I first saw it on my personal laptop, which I run the beta on, I loved it. I could not wait for it to get to stable for my other machines.

        It seems like the tabs are blending with each other.

        Why do you need clear definition? The second you hover over the middle (between the icon and the x), the tab gets highlighted anyway, and that’s where I bet most people click even when there is clear definition, so honestly, what is the difference? Seems like a quibble.

        1. 4

          Why do you need clear definition?

          Not OP, but I also hate this. I “need” clear definition so that I can read tab titles properly (otherwise the titles just blend into each other) and, more importantly, so that I “know” where to point my mouse in order to switch to another tab. The way you “need” clear definition on buttons, otherwise they just look like labels. I know there’s a hover animation, but you have to get there in order to hover.

          1. 1

            My initial reaction is that the tab titles are separated well enough thanks to the favicon between them. The screenshots honestly look pretty readable to me. I haven’t tried it for myself yet though, and this is all subjective anyways.

            1. 2

              If the favicon is colourful and/or obvious enough, it’s not a problem. But in this day and age, 8 out of 10 favicons are grayscale/monocolour anonymous icons, half of which are basically just a letter in a circle, so the favicon doesn’t look too different from the text. My eyesight is not the best, given that I haven’t been in the 18-25 target demographic for a while, so it’s not exactly the best mechanism for separation…

      3. 4

        Yes, but… check out the new Add-On “Firefox Color”. It’s a point-and-click themer.

        Admittedly, I don’t understand 2/3 of the choices, and the Firefox UI team seems to focus on completely different parts of the UI than I do, but I got it to make the current tab obviously different from the other tabs while keeping the other tabs high-contrast.

        1. 2

          I’m still tweaking it, but am fairly happy with the Acme-esque theme I’ve made. This is a neat little add on.

          Admittedly, I don’t understand 2/3 of the choices

          This helped.

      4. 2

        Yeah, on macos at least, the non-selected tabs sure could use some kind of demarcation. The selected tab has it, but on the non-selected tabs (especially with more than a few open tabs), it looks pretty messy.

      5. 1

        The tabs feel less like tabs and more like random floating buttons, disconnected from the content.

    16. 2

      I’d be surprised if the background tasks weren’t being run with a high niceness (low priority) on Intel cores too?

      I guess the downside of using full power cores for background jobs is that, even though the scheduler makes them yield time slices as soon as a high priority job shows up, they still heat up your MacBook so it’s already thermally throttling by the time you start your high priority task.

      1. 2

        There seems to be a few benefits to this:

        • rogue background processes are limited to half (if not less than half, because of the “lower power cores”) of the entire CPU’s time (mdworker is a great example; I myself have had that thing max out cores)
        • you will nearly always have 4 cores ready to pick up jobs that the user wants done, vs. having to wait for context switches and other threads to get out of the way
        • as you mentioned, the computer is already running half as hot (or less) as it would have been if all of the cores were doing things
        • potentially, with 4 cores limited to background jobs, that might mean even less time slicing for UI jobs, which will definitely make things faster if the UI is nearly the only thing using “that” core

        For a GUI environment this seems like a really good idea, honestly. You could likely achieve the same feel on an Intel machine if you were able to assign half of the cores to background jobs only, as they have.
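
        On Linux you can fake a crude version of that split today; a minimal sketch (core numbers and the command name are made up):

        nice -n 19 taskset -c 0-3 some_background_indexer &

        It only deprioritises and pins the background job though; it doesn’t reserve the remaining cores for interactive work the way the hardware split does.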

        1. 3

          (mdworker is a great example; I myself have had that thing max out cores)

          Fuck, I hate the entire concept of desktop search so much.

          I have never, ever seen a desktop search system work. I type in a query that should match the content of a recently used document. No results. I type in something that should match its filename. Also no results. I curse the gods and give up and find it manually or grep or something.

          I’ve had exactly this experience with the desktop search systems that come with Windows 10, with Mac OS, with KDE and I can’t remember if Gnome even has one but if it has I haven’t seen it succeed.

          So much fucking electricity wasted and literally zero occasions where the fucking things have ever been useful.

          The only desktop search applications I’ve ever seen work are locate (mlocate) and the simple, slow, reliable one that came with Windows 9x, which didn’t index anything in advance; it just searched the whole disk on demand. But that one was actually useful.
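
          For anyone who hasn’t used it, the whole interface is basically this (assuming mlocate is installed; the file name is made up):

          sudo updatedb
          locate -i invoice.pdf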

    17. 1

      I’m not convinced that allowing password logins is really that bad anyway. Any machine I put on the Internet would have something like blacklistd, fail2ban or sshguard. The trouble with keys is that they’re only as secure as the system you store them on. If I left an ssh key at work to provide access to my home system, a colleague would be able to brute force the passphrase as fast as his machine could try candidates. Hammering an ssh daemon is going to be a lot slower than that, and may not go unnoticed. In the blog author’s case, this is presumably a laptop, so they’d be actively using it while the attack takes place.

      1. 10

        I’m not convinced that allowing password logins is really that bad anyway.

        Most mac laptops (like the author was mentioning) won’t have blacklistd, fail2ban or sshguard. And when they connect to a LAN, they will, in their default configuration, advertise the availability of their ssh service over multicast DNS as soon as they are on the network. This includes potentially hostile wifi hotspots. (Though responsibly operated ones don’t let clients see each other, you’d possibly be surprised by how often you can connect to other machines on the same WLAN.)

        And if a key is only as secure as the system you store them on, the password is only as secure as the system you type it into. You really shouldn’t log in at all from systems you don’t trust. But if you’re going to, store your ssh key on a hardware token so that the untrustworthy system can only attack you while the token is physically there :)
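
        A minimal sketch of that, assuming OpenSSH 8.2+ and a FIDO2-capable token:

        ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk

        The file on disk is then only a handle; the token has to be plugged in (and usually touched) for every authentication.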

        1. 2

          This should really only affect intermediate to advanced Mac users, as sshd is not activated by default.

      2. 3

        Most of my personal machines have a fairly weak password because they’re only supposed to protect from someone physically typing a password into my machine. My laptop isn’t running blacklistd, fail2ban or sshguard. I sometimes bring my laptop to things like coffee shops, universities, friends, work, airports (well I used to…), and other places where I connect it to a LAN with people outside of my immediate family. I would really like it if random people on the LAN couldn’t try to brute force my password remotely.

        1. 1

          protect from someone physically typing a password into my machine

          then you shouldn’t turn sshd on.

          1. 2

            Why not? If it’s configured not to accept password auth, as it should be, it poses no extra risk regardless of my account’s password.
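
            If you want to double-check what the daemon actually enforces (the grep pattern is just for illustration):

            sudo sshd -T | grep -i passwordauthentication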

      3. 0

        If I left an ssh key at work to provide access to my home system

        My initial questions would be: why do that, and if you have a really good reason, why not do it using a hardware key you take away with you?

        1. 1

          Because it is often useful to be able to get to personal e-mail and my own files while at work. For as long as I’ve had always-on-Internet at home I’ve simply got used to doing that. Many things like config files and general notes can be git cloned over ssh so it is simply really convenient.

          I’m sure it’s still more secure than most people’s habit of having their work browser left logged in to their gmail, facebook and whatever accounts. I should probably think about the hardware key idea but there is always a danger of forgetting it.

    18. 2

      A few years ago I decided to see how thoroughly I could overengineer fizzbuzz in JavaScript. The fact I decided to use the sum of two waves means that it doesn’t need any conditionals or modulo. Behold:

      // How many ms between fizzbuzz calls?
      const RATE = 100;
      
      // FizzBuzz Parameters
      const FIZZ_PERIOD = 3;
      const BUZZ_PERIOD = 5;
      
      // Map waves to final output values in range [ 0..3 ]
      const FIZZ_AMPLITUDE = 1;
      const BUZZ_AMPLITUDE = 2;
      
      // Lookup table for results
      const LOOKUP = [ undefined, "Fizz", "Buzz", "FizzBuzz" ];
      
      // Generate a square wave
      const squareWave = period => amplitude => n =>
        amplitude * Math.max( 0, Math.floor( Math.cos( n * 2 * Math.PI / period ) + 0.5 ) );
      
      // Compose the fizzbuzz function out of waves components
      const fizzbuzz = n => LOOKUP[ [
        squareWave( FIZZ_PERIOD )( FIZZ_AMPLITUDE ),
        squareWave( BUZZ_PERIOD )( BUZZ_AMPLITUDE ),
      ].reduce( ( prev, curr ) => curr( n ) + prev, 0 ) ] || n;
      
      // Async sleep to avoid busyloop
      const sleep = async ms => new Promise( resolve => setTimeout( resolve, ms ) );
      
      // Generate an infinite sequence starting from some base
      function* seq( start = 1 ) {
        for ( let i = start; true; ++i ) {
          yield i;
        }
      }
      
      // FizzBuzz Forever!
      async function main() {
        for ( const i of seq() ) {
          console.log( i, fizzbuzz( i ) );
          await sleep( RATE );
        }
      }
      
      // Start
      main();
      
      1. 3

        Isn’t Math.max a conditional?

        1. 4

          I suppose that was cheating slightly, but “no conditionals” wasn’t my original intent. The sum-of-waves method was just intended to be needlessly obtuse. It happens to have this property as a side-effect.

          There are other ways to generate a square wave that don’t have hidden conditionals. This one is nicer and fixes a bug I just noticed in the original (when used with different periods than 3 & 5):

          // Generate a square wave
          const squareWave2 = period => amplitude => n =>
            amplitude * ( Math.cos( n * Math.PI / period ) & 1 );
          

          Also, thinking about it, the || could also be considered a hidden conditional, but it can be eliminated by just not hard-coding the lookup table and making n be the first entry every time.

          1. 4

            With squareWave2 this program gives the wrong answer for i = 64084278. With squareWave it gives the wrong answer for i = 895962678003541 (and maybe lower values of i as well, I did not check).

            1. 3

              Turns out the 2nd one is good for all inputs up to 2^25. After that it would need another wave function that doesn’t rely on floating point. This one should be correct until integer overflow:

              const squareWave3 = period => amplitude => n =>
                amplitude * !( n % period );
              

              Sadly this re-introduces the rather cliché modulo operator, but I don’t think that can be avoided if the promise of “FizzBuzz Forever!” is to be upheld. At least, this one should be good for all values of forever < 2^54. To go any higher than that reliably, I’d need to go through the whole program and suffix all the numbers with n to use BigInt.

    19. 4

      From the summary, it sounds like async is always as good or better than true threads. So when should I use a thread instead of async?

      1. 10

        I think this is a limited view. What I’m seeing here is that threads, an arguably simpler concurrency primitive, can scale quite competitively on modern operating systems.

        Therefore, while you should continue to use async/epoll etc. when there is a true need (e.g. building web facing software like haproxy, nginx, envoy, caddy), many use cases will run competitively using a simpler blocking threads model and it might be worth evaluating whether you’d prefer working in that paradigm.

      2. 7

        Async requires separating I/O-bound tasks from CPU-bound ones (you need to specifically spawn a task on a threadpool if it’s going to hog the CPU). If you can’t reliably isolate the two workloads, you’re going to have I/O latency problems. In that case it’s better to spend real threads on the mixed workload.

        Similarly if you’re dealing with FFI or non-async libraries, you have no guarantee what they’ll do, so it’s best to use a real thread just in case.

        async/await in Rust creates a single state machine with a state for every possible await point. If your code is complex, with lots of on-stack state intermixed with awaits, then this state machine can get large. Recursion requires heap allocation (because the call graph ceases to have compile-time-fixed depth, so the state becomes unbounded). If your code is complex, with lots of state and recursion, a real thread with a real stack may be better.

      3. 3

        If I understand Rust’s async correctly, or rather assuming it follows other languages’ async/concurrency patterns, you use async to improve single-thread/core performance: it allows you to continue if the thread hits something like an I/O wait or a similar non-CPU task. Multi-threading can do this too, but is better when you’re processing for long durations, like a browser.

        CLI tools that are intended to do one thing and exit quickly, like ls for example, may see degraded performance from adding threads but improved performance from adding async. Now, if, as mentioned, you have a browser and you have already maxed out what a single thread can do using async, it’s time to split the work across threads to allow you to use more of the available CPU cores.

        golang explanation of concurrency vs parallelism: https://blog.golang.org/waza-talk

      4. 2

        Performance is only one metric. It is indeed true, to the first approximation, that async is not worse than threads in terms of performance. The most convincing argument for the opposite is that async might have worse latency, as it’s easier to starve tasks, but I haven’t seen this conclusively demonstrated for Rust.

        Another important metric is ergonomics, or “is the programming model sane?”. In this regard, async, at least in Rust, is not a sane programming model. Stack traces do not make sense, you get extra “failure modes” (any await might not return, when the corresponding task is dropped), and there are expressivity restrictions (recursion, dynamic dispatch, and abstraction are doable, but awkward).

        1. 2

          To play the devil’s advocate, the sync, blocking model is also far from perfect. It doesn’t handle cancellation or selection between several sources of events. That is, if you do a blocking read call, there isn’t a nice way to cancel it.

          With async, because you need to redo the entirety of IO anyway, you can actually go and add proper support for cancellation throughout.

      5. 1

        When a syscall/FFI call blocks, or when you might have to crunch a lot of numbers, those are good cases.

    20. 27

      Obligatory please don’t tell anyone how I live, here is my very messy desk:

      OS: Arch Linux

      CPU: Intel i5-6600 @ 3.30 GHz

      RAM: 16 GB DDR4

      WM: i3

      MB: Gigabyte Q170M-D3H

      KB: IBM Model M

      GPU: Nah

      Cat: Orange and White Maine Coon, “Salsa” aka “Salsa T. Cat Esq.”

      Cat treats: Chicken

      Water: Tap

      Coffee: Black

      Whisky: Neat

      1. 11

        I enjoyed this image very, very much. Thank you for your honesty! I particularly enjoyed the pump bottle of vaseline.

        1. 6

          Thanks! I was going to remove it and take another picture but then I thought, well why not just show a slice of everyday life? It’s cold and dry where I live in Canada and my skin needs some lotion so I don’t get the alligator complexion.

          I was thinking a lot about this excellent Calvin and Hobbes comic when I was taking the picture: should I clean up my desk before I take a picture so I appear to be neat and tidy, or just present my life as-is, unfiltered?

      2. 2

        This feels like home. I don’t know if you can actually compare two messes but our areas feel equal in messiness.

      3. 2

        I see the base for the soldering iron. I’m scared to ask where in here the actual iron is.

        1. 1

          Haha, it’s off to the left, on the window sill.

      4. 1

        Fantastic! As well as Arch, I’m a huge Kubrick fan—where did you get your desktop background?

        1. 1

          Awesome, glad you liked it. I’ve had that one for a long time, I did a search on the filename and there is a copy here: https://www.colipera.com/you-deserve-nothing/vector-2001_00359644/

      5. 1

        I often struggle with how messy my desk becomes. My preferred style of note taking to work out a problem is a good pen and pads of paper, so things end up accumulating and I don’t feel like I want people to see my office. Thank you for sharing this picture! I’m right in the middle of reorganizing, or I’d show you how bad mine can get.

      6. 1

        is that a speaker strapped to the bottom of the left monitor? If yes, why?

        1. 1

          It is! It was an accessory that was available with that monitor and from what I recall, a lot of Dell business/professional monitors. Here’s what it looks like off the monitor.