1. 3

    I’ve taken the week off work. This is my first break all year.

    I might look at some personal projects that haven’t been touched for a while, but mostly I don’t want to be sat in front of a computer, or I might as well still be working.

    1. 3

      You deserve it, 2020 has been a lot

    1. 20

      Category 5 is cheaper, but can only support speeds of 100 Mbps. Category 6 is slightly more expensive but you will need it to get full gigabit speeds.

      This isn’t entirely correct: cat5 is only rated for 100Mbps, but cat5e will do 1Gbps just fine and is significantly cheaper than cat6a and more flexible.

      This is a pretty good read on the differences between cat5e/cat6.

      1. 7

        While YMMV, I agree here and offer my experience. My house was built and wired in 1998, just before the changeover from cat 5 to cat 5e. I did an addition in 2003 with cat 5e. I run gigabit on a mix of both of those cable types with no problem. I figure that there are two reasons for this. First, as the Wikipedia article says, most cat 5 cable actually meets cat 5e specifications even though it wasn’t tested that way. Second, with regard to bandwidth, drop length matters at least as much as the cable you use. My longest drop might be 35 meters. My average drop is probably just under 10 meters. At those lengths, it was a good bet to replace my 100Mb/s switches with gigabit switches and cross my fingers/keep the receipts.

        1. 5

          My house was built in the early 90s, probably just after they switched from installing 4-wire copper phone cables to installing Category 3 cables instead. However, these Category 3 cables still support gigabit speeds without issue (I use it every day with our symmetric 300 Mbps Internet connection), despite being stapled to the studs.

          I’m not saying all Cat 3 will do this, just that some cables do indeed meet higher specifications, per above.

          1. 3

            My desktop currently speaks 10GbaseT to my main switch via ~20ft of cat5 (not e). And the other end is a 10GbaseT SFP+ adapter which only claims 30 meters over cat6, vs the 10GbaseT standard 100 meters.

          2. 2

            However, if you plan to use Type 3/4 PoE devices, the thicker wire gauges found in Cat6/6a/7 are recommended.

          1. 9

            Great list of tips!

            cmd+shift+4 pops up a crosshair to take a screenshot of a region.

            And pressing space after cmd+shift+4 lets you screenshot a particular window.

            And since 10.14 (I think) taking a screenshot now gives you a little preview in the bottom-right of the display which delays it writing to a file. If you just want it to write the file and skip the preview, cmd+shift+5 gives you an Options menu where you can disable “Show floating thumbnail”.
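
            These shortcuts also have command-line counterparts via macOS’s `screencapture` tool, handy for scripting. A rough sketch of the equivalents (check `man screencapture` for the full flag list; the file names are just examples):

            ```
            # Interactive region selection, like cmd+shift+4
            screencapture -i region.png

            # Start in window-selection mode, like cmd+shift+4 then space
            screencapture -iW window.png

            # Copy the capture to the clipboard instead of writing a file
            screencapture -ic
            ```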

            1. 11

              Don’t miss the fact that ⇧⌘5 can also do screen recordings, with or without audio. Previously you had to run QuickTime Player and find it in the menu.

              1. 5

                Pressing control along with either of those just copies the image to the clipboard, ready for pasting!

                1. 1

                  The nice thing about that floating thumbnail is that you can drag & drop it like a real file. Sometimes I’m screenshotting just to share with someone in chat, and dragging that thumbnail over means I can send images without ever having them written to disk.

                  1. 2

                    Mentioned in the other comment, but yeah: pressing control along with either hotkey just copies to the clipboard immediately, which I’ve found to be the best path for this use case. Then I’m able to just Cmd + V in the target.

                    1. 2

                      On Catalina (not sure about previous versions), you can also hit cmd + shift + 5 and select the clipboard as the default destination. Then you won’t need to add control for screenshots to go to the clipboard.

                1. 3

                  Hey, author of the post here! Really happy to see it on Lobsters, and I’d be happy to answer any questions and/or comments you have!

                  I encountered this “bug” while working on rewriting my iOS app with the new App and Scene structures introduced during WWDC2020. The project is nearing completion, and I’m really excited about how it’s turning out.

                  Enjoy!

                  1. 12

                    Unfortunately not related to the content, but for me the font choice made the post too difficult to read.

                    1. 3

                      Understandable. I was attempting to make it “retro,” though I’m going to change the font when I rewrite the site (soon) to make it clearer and load faster.

                      1. 2

                        I agree with you. Try using Reader View if your browser supports it. It’s much better.

                        1. 2

                          Pictures/videos also don’t work in Safari 14.

                          1. 1

                            Yeah, they’re in .webm, which for some reason Safari doesn’t support despite the massive size reduction compared to mp4. Going to need to add mp4 versions.

                        2. 3

                          Nice post! Happens to all of us :-)

                          That’s what you get for populating static items in a list. I’m a little confused about the sorting (or whether it works as needed):

                          • Completed tasks are at the bottom. Ongoing tasks are at the top.
                          • Higher priority items are at the top of their category (completed/ongoing).
                           • After the above two points, ordering is ascending, by task name.

                          The above statements sound nice, but:

                           • the UI doesn’t clearly surface important (high-priority) tasks
                           • sorting by name isn’t visible, per the point above. Looking at the videos you provided, sorting appears random (although it may not be)

                          I’m not an Apple user, but I would enjoy having a task list with the following features:

                          • Priority items clearly marked (color/“hotness” or font weight)
                          • Completed tasks with a “greyed out”/“disabled” state (the strikethrough helps)
                          • Sorting based on the timestamp when the item was created/modified/completed
                          1. 1

                            Thank you for the great suggestions!

                            Some clarifications about sorting:

                            • The exclamation marks on the trailing side are supposed to be the main indicator of priority, which I understand might be too small of an indicator.
                            • The “ascending task name sort” is just a fancy way of saying alphabetical order. Because it’s the third priority it may seem a little random, but what it does is sort all tasks of the same priority and the same category (completed/ongoing) in alphabetical order.

                            Feature suggestions:

                            • I love the idea of color/weight indicators for priority! Definitely going to implement that going forward.
                            • Completed tasks are grayed out in addition to the strikethrough in the main app, I’ve just yet to implement it in the rewrite.
                            • The timestamp sort would be an important thing, but a big feature of the app is that tasks get deleted at midnight every day so that would be a really short-term thing. I will consider adding it as an additional sort method, though.
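
                            For what it’s worth, the three rules above collapse into a single composite sort key. A minimal sketch in Python (the field names are made up for illustration, not the app’s actual model):

                            ```python
                            # Hypothetical task records; higher "priority" means more important.
                            tasks = [
                                {"name": "Buy milk", "priority": 1, "completed": True},
                                {"name": "Write report", "priority": 3, "completed": False},
                                {"name": "Answer email", "priority": 3, "completed": False},
                                {"name": "Water plants", "priority": 1, "completed": False},
                            ]

                            # Ongoing before completed, then higher priority first, then name A-Z.
                            ordered = sorted(
                                tasks,
                                key=lambda t: (t["completed"], -t["priority"], t["name"].lower()),
                            )

                            print([t["name"] for t in ordered])
                            # → ['Answer email', 'Write report', 'Water plants', 'Buy milk']
                            ```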
                        1. 5

                          This is quite similar to the approach I take, so it’s good to see others doing similar. I also use the git add -p approach to force myself to review each change so I can make note of the “why” in the commit message - or perhaps omit a hunk to put into a separate commit.

                          I tend to write commit messages for “future me”. Sure, I’ll remember the context of the change for the next few days, perhaps a week or a month, but after that I’m guaranteed to have forgotten exactly why I did a thing, so I try to document the “why”. It’s not uncommon for me to write a commit message that’s two or three paragraphs for a one-line change.

                          1. 2

                            Is there no mode that would share the physical network port but tag all IPMI traffic with a VLAN you configure?

                            1. 6

                               Many HPE servers have a dedicated network port for the iLO card but can also optionally share one of the regular network ports if needed. When in shared mode, you can indeed configure a VLAN tag for the management traffic, which can be different to the VLAN tag normally used by the host operating system.

                              1. 1

                                 Unfortunately, in the same way that chris explained that any compromised host might be able to switch the device’s IPMI mode from dedicated to shared, using a VLAN for segregation can have a similar problem. If the compromised host adds a sub-interface with the tagged VLAN to its networking stack, it can then gain network access to the entire IPMI VLAN.
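
                                 To illustrate how low the bar is: on a typical Linux host this is just a couple of commands (the interface name, VLAN ID, and address below are made-up examples):

                                 ```
                                 # Attach a tagged sub-interface for VLAN 100 to eth0 (requires root)
                                 ip link add link eth0 name eth0.100 type vlan id 100
                                 ip addr add 10.0.100.50/24 dev eth0.100
                                 ip link set eth0.100 up
                                 ```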

                                1. 2

                                  In addition, there are other annoyances with using a shared interface. Because the OS has control of the NIC, it can reset the PHY. If the PHY is interrupted while, for example, you’re connected over Serial over LAN or a virtual KVM, you lose access. If you’re lucky, that’s temporary. If you’re really unlucky, the OS can continually reset the PHY, making IPMI access unusable. A malicious actor could abuse this to lock someone out of remote management.

                                  That can’t happen when you use a dedicated interface for IPMI (other than explicit IPMI commands sent over /dev/ipmi0). Generally switching a BMC from dedicated mode to shared mode requires a BIOS/UEFI configuration change and a server reset.

                                  (Speaking from experience with shared mode and the OS resetting the NIC. The malicious actor is merely a scenario I just dreamt up.)

                                  1. 1

                                    Indeed, although I suspect in many cases these IPMI modules are already accessible from the compromised host over SMBus/SMIC or direct serial interfaces anyway - possibly even with more privileged access than over the network. That’s how iLOs and DRACs can have their network and user/group settings configured from the operating system.

                                    1. 4

                                      The increased risk mostly isn’t to the compromised host’s own IPMI; as you note, that’s more or less under the control of the attacker once they compromise the host (although network access might allow password extraction attacks and so on). The big risk is to all of the other IPMIs on the IPMI VLAN, which would let an attacker compromise their hosts in turn. Even if an attacker doesn’t compromise the hosts, network access to an IPMI often allows all sorts of things you won’t like, such as discovering your IPMI management passwords and accounts (which are probably common across your fleet).

                                      (I’m the author of the linked article.)

                                      1. 3

                                        The L2 feature you are looking for is called a protected port. This should be available on any managed switch, but I’ll link to the cisco documentation:

                                        https://www.cisco.com/en/US/docs/switches/lan/catalyst3850/software/release/3.2_0_se/multibook/configuration_guide/b_consolidated_config_guide_3850_chapter_011101.html

                                        1. 1

                                          In a previous life at a large hosting provider, we used this feature on switch ports that were connected to servers as part of our managed backup service.

                              1. 3

                                This explains why I was taught to do “sync; sync; halt” in the late 80s, and still encouraged to do this in the late 90s at my first sysadmin job.

                                I’ve always wondered when and why this was no longer needed.

                                1. 3

                                  I started a Solaris sysadmin job in 2008 and was being taught “sync; sync; sync; halt” as late as that! My colleagues knew that you’re supposed to type each one out to let buffers flush, rather than all on one line, but it’s interesting that these old habits really do die hard.

                                  (Prior to that job I’d only worked with Linuxen and BSDs.)

                                1. 3

                                  There’s an error in this article in “Enter last argument from previous command(s)”:

                                  $? is the exit code of the last command. !$ is the last argument of the last command. Similarly !! repeats the entire last command (e.g. if you forgot to sudo, you can sudo !!).
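
                                   A quick demonstration in bash (`!$` is history expansion, so it only works interactively; `$_` is the scriptable near-equivalent):

                                   ```shell
                                   # $? is the exit status of the previous command
                                   false || echo "exit status was $?"   # prints: exit status was 1

                                   # $_ holds the last argument of the previous command, like !$
                                   mkdir -p /tmp/example-dir
                                   cd "$_" && pwd                       # prints: /tmp/example-dir
                                   ```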

                                  1. 1

                                    To add to this point: Alt+. (or Esc followed by .) works for me to insert the last argument, not Ctrl+.

                                  1. 0

                                    Does anyone know why uMatrix doesn’t like this site?

                                    1. 2

                                      NextDNS blocks the domain name for me as it’s listed in https://github.com/StevenBlack/hosts.

                                      1. 1

                                        The best way to figure that out is probably to post a screenshot of the dashboard and a link to the site over at the support subreddit for umatrix. I don’t have umatrix installed on any browser on this machine, so I can’t look myself directly. But based on privacybadger’s status display, I’d bet it’s blocking something load bearing from ajax.googleapis.com rather than merely refusing to send a cookie there. (I find privacybadger, after a couple of days on a fresh system, blocks 99% of what I care about with much less fiddling, FWIW.)

                                          1. 2

                                            Huh. Looks like they sell telemetry tools to site owners. Not, at least on the face of it, the kind of stuff that tracks you across different sites (though I didn’t look deeply enough to exclude that) but it’s easy to imagine a telemetry saas landing on a few blocklists.

                                      1. 4

                                        This is an interesting read. I was looking at pyinfra after it was posted here recently.

                                        At $WORK we heavily use Salt and almost everyone is frustrated with it. We have a lot of “spaghetti Jinja” thrown all over YAML which makes most of the states very difficult to read.

                                        On the flipside Ansible is conservative about where you can use Jinja, but your YAML is still semi-templated so there are some weird gotchas.

                                        For my personal things I find Salt works okay, as I can keep it fairly clean with a small setup, but I’m not all that keen on using the master-minion setup. (I don’t use it in minion-only mode because I don’t want to have the repo on every server. I haven’t tried salt-ssh recently.) I tried Ansible briefly but I felt like I was fighting it a lot, or having to do weird twisty things to run a task N times for different inputs.

                                        I sometimes wonder if I should try Puppet again, as it’s been nearly 8 years since I last used it. Having a DSL was sort of nice, if a little frustrating to use at the time, but I’m not that keen on Ruby so extending Facter was unpleasant.

                                        1. 5

                                          This looks interesting (and has some parallels with PostgREST), although I’m curious why you chose to template Go code with .NET instead of writing in one or the other?

                                          1. 6

                                            This does code generation whereas I think PostgREST does it all in memory. Moreover this gives you an entire application you have full control over if you want. Whereas you kind of have to treat PostgREST as its own black box.

                                            Additionally, this supports Go today but should be easier to extend to other languages because it’s based on text templates. I started with Go because I’m most familiar with its ecosystem for web development and I can build something quickly that’s still reasonably production-ready.

                                            I’d like to extend support to other databases over time as well.

                                            I went with F# because I wanted a high-level statically typed language with a mature ecosystem for database interactions and text manipulation. For example, the error messages in the template library (Scriban) are excellent, giving you the exact line number of a misconfigured template.

                                            This sort of thing would be much harder in Go without resorting to a lot of unsafe dynamism with reflection or empty interfaces. Possible, but not as easy.

                                          1. 2
                                            Bugfixes
                                            --------
                                            
                                             * ssh(1): fix IdentitiesOnly=yes to also apply to keys loaded from
                                               a PKCS11Provider; bz#3141
                                            

                                            Well this one is good to see as that used to be pretty annoying, although I’ve now switched to yubikey-agent to not have to deal with the PKCS#11 implementation anymore.

                                            1. 2

                                              What does the yubikey-agent get you that isn’t native to OpenSSH >= 8.2?

                                              It seems like the yubikey-agent stuff was a stopgap for older versions of OpenSSH that didn’t support FIDO out of the box, or maybe I am missing something?

                                              1. 4

                                                It’s absolutely a stopgap, because FIDO support requires OpenSSH >= 8.2 on both sides of the connection. There’ll be a long tail of servers running older OpenSSH, and it’s nice to have a solution for people stuck connecting to them. For example, Ubuntu 18.04 is supported until April 2023, with extended support until April 2028, and it ships OpenSSH 7.6.
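
                                                For anyone wanting to try the native support on 8.2+, generating a FIDO-backed key looks roughly like this (the token must be plugged in; the file name is just an example):

                                                ```
                                                # Creates a key pair whose private half only works with the token present
                                                ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
                                                ```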

                                                1. 5

                                                  Cool, I basically live on OpenBSD current, so I have had this (both ends) for some time now. Would be handy for github though!

                                                  1. 3

                                                    Right, exactly this. I have personal servers running sshd that ships with the OS that aren’t yet on 8.2+, and similar for work.

                                                    My employer gives all employees a YubiKey but our servers run Debian and we don’t backport newer OpenSSH versions, so yubikey-agent allows me to have an easy way to use it without the complicated and slightly flaky PKCS#11 setup.

                                                    Another advantage of yubikey-agent is it allows you to re-plug your YubiKey and it doesn’t break. The stock ssh-agent (combined with OpenSC) generally stops working if the YubiKey is unplugged and it’s fiddly to get it working again.

                                              1. 1

                                                2FA for SSH sounds great, but I can only imagine how cumbersome it must be to use on a daily basis. I’ve recently started permanently locking my SSH ports and only briefly whitelisting them for my IP with a bash script whenever access is needed: https://pawelurbanek.com/ec2-ssh-dynamic-access

                                                1. 2

                                                  If you use a U2F hardware key instead of TOTP it’s not cumbersome at all.

                                                  I use a Yubikey 5 Nano which is always in my laptop and use OpenSSH’s native PKCS#11 support to use the Yubikey as a hardware-backed SSH key. I documented how I did it at https://github.com/jamesog/yubikey-ssh.

                                                  (Yubikeys can also do TOTP if you want to use a regular SSH key.)

                                                  1. 1

                                                   I think I read about being able to reuse an SSH connection after it’s established. Something like connection reuse or multiplexing.

                                                   That would surely make using multiple connections much easier after the first one is established.

                                                    1. 1

                                                      Yes, it’s configured using ControlMaster, ControlPath, and ControlPersist. Once you enable ControlMaster a socket is created which future connections will use. Once that’s enabled if you SSH to a 2FA-enabled server you’ll only have to do that once, as long as the control socket is alive.
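
                                                      As a sketch, a `~/.ssh/config` entry along these lines enables it (the host name and socket path are just examples; the socket directory must already exist):

                                                      ```
                                                      Host bastion.example.com
                                                          ControlMaster auto
                                                          ControlPath ~/.ssh/sockets/%r@%h-%p
                                                          ControlPersist 10m
                                                      ```

                                                      With ControlPersist, the master connection lingers for 10 minutes after the last session closes, so you only need to do the 2FA dance once per idle period.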

                                                  1. 2

                                                    On macOS, should you always be installing OpenSSH with Homebrew to get this and other updates? I don’t normally install OpenSSH, but I do use Homebrew extensively for other things.

                                                    1. 3

                                                       macOS is at OpenSSH 8.1, so it doesn’t support it, and we don’t know yet whether they’ll build it with native support.

                                                      1. 2

                                                        Unfortunately installing from Homebrew doesn’t replace the system-provided ssh-agent, so the agent started automatically by the OS won’t be able to load the ecdsa-sk key type.

                                                        SIP won’t let you modify the /usr/bin/ssh-agent binary nor edit the launchd plist. In theory you could create a new launchd service to run the brew-installed ssh-agent but then you lose Keychain support for the passphrase. Depends if that’s important to you.

                                                        1. 2

                                                          Oh, fascinating. macOS does indeed run a default on-demand ssh-agent, and the socket path is magically passed in the environment of login shells. I did not know this! Kinda surprised that Homebrew would ship its own ssh-add by default when again by default it would talk to the system ssh-agent. I wonder what the backwards compatibility of that protocol is.

                                                          https://opensource.apple.com/source/OpenSSH/OpenSSH-235/openssh/ssh-agent.c.auto.html

                                                          The good news is that if I read this right we’d be able to load the Homebrew FIDO2 middleware into the system agent if they don’t build it.

                                                          The bad news is that Apple squatted on “ssh-add -K” for keychain support, and now that’s a real option for loading resident keys 🤷‍♂️

                                                          1. 1

                                                             They did update the OpenSSH version that originally came with Catalina, from 7.9 to 8.1. So let’s hope that there will be a 10.15.5 with OpenSSH 8.2.

                                                        1. 27

                                                          It’s worth linking to A&A’s (a British ISP) response to this: https://www.aa.net.uk/etc/news/bgp-and-rpki/

                                                          1. 16

                                                            Our (Cloudflare’s) director of networking responded to that on Twitter: https://twitter.com/Jerome_UZ/status/1251511454403969026

                                                            there’s a lot of nonsense in this post. First, blocking our route statically to avoid receiving inquiries from customers is a terrible approach to the problem. Secondly, using the pandemic as an excuse to do nothing, when precisely the Internet needs to be more secure than ever. And finally, saying it’s too complicated when a much larger network than them like GTT is deploying RPKI on their customers sessions as we speak. I’m baffled.

                                                            (And a long heated debate followed that.)

                                                            A&A’s response on the one hand made sense - they might have fewer staff available - but on the other hand RPKI isn’t new and Cloudflare has been pushing carriers towards it for over a year, and route leaks still happen.

                                                            Personally as an A&A customer I was disappointed by their response, and even more so by their GM and the official Twitter account “liking” some very inflammatory remarks (“cloudflare are knobs” was one, I believe). Very unprofessional.

                                                            1. 15

                                                              Hmm… I do appreciate the point that route signing means a court can order routes to be shut down, in a way that wouldn’t have been as easy to enforce without RPKI.

                                                              I think it’s essentially true that this is CloudFlare pushing its own solution, which may not be the best. I admire the strategy of making a grassroots appeal, but I wonder how many people participating in it realize that it’s coming from a corporation which cannot be called a neutral party?

                                                              I very much believe that some form of security enhancement to BGP is necessary, but I worry a lot about a trend I see towards the Internet becoming fragmented by country, and I’m not sure it’s in the best interests of humanity to build a technology that accelerates that trend. I would like to understand more about RPKI, what it implies for those concerns, and what alternatives might be possible. Something this important should be a matter of public debate; it shouldn’t just be decided by one company aggressively pushing its solution.

                                                              1. 4

                                                                This has been my problem with a few other instances of corporate messaging. Cloudflare and Google are giant players that control vast swathes of the internet, and they should be looked at with some suspicion when they pose as simply supporting consumers.

                                                                1. 2

                                                                  Yes. That is correct, trust needs to be earned. During the years I worked on privacy at Google, I liked to remind my colleagues of this. It’s easy to forget it when you’re inside an organization like that, and surrounded by people who share not only your background knowledge but also your biases.

                                                              2. 9

                                                                While the timing might not have been the best, I would overall be on Cloudflare’s side on this. When would the right time to release this be? If Cloudflare had waited another 6-12 months, I would expect them to release a pretty much identical response then as well. And I seriously doubt that their actual actions and their associated risks would actually be different.

                                                                 And as ISPs keep showing over and over, statements like “we do plan to implement RPKI, with caution, but have no ETA yet” all too often mean that nothing will ever happen without efforts like what Cloudflare is doing here.


                                                                Additionally,

                                                                If we simply filtered invalid routes that we get from transit it is too late and the route is blocked. This is marginally better than routing to somewhere else (some attacker) but it still means a black hole in the Internet. So we need our transit providers sending only valid routes, and if they are doing that we suddenly need to do very little.

                                                                 is some really suspicious reasoning to me. I would say that black-hole routing the bogus networks is in every instance significantly, rather than marginally, better than just hoping that someone reports the problem so that they can then resolve it manually.

                                                                Their transit providers should certainly be better at this, but that doesn’t remove any responsibility from the ISPs. Mistakes will always happen, which is why we need defense in depth.

                                                                1. 6

                                                                  Their argument is a bit weak in my personal opinion. The reason in isolation makes sense: We want to uphold network reliability during a time when folks need internet access the most. I don’t think anyone can argue with that; we all want that!

                                                                   However, they use it to excuse not doing anything, when they are actually in a situation where both implementing and not implementing RPKI can reduce network reliability.

                                                                  If you DO NOT implement RPKI, you allow route leaks to continue happening and reduce the reliability of other networks and maybe yours.

                                                                  If you DO implement RPKI, sure there is a risk that something goes wrong during the change/rollout of RPKI and network reliability suffers.

                                                                   So, with all things being equal, I would choose to implement RPKI, because at least with that option I would have greater control over whether or not the network will be reliable. Whereas in the situation of NOT implementing, you’re just subject to everyone else’s misconfigured routers.

                                                                  Disclosure: Current Cloudflare employee/engineer, but opinions are my own, not employers; also not a network engineer, hopefully my comment does not have any glaring ignorance.

                                                                  1. 4

                                                                     Agreed. A&A does have a point regarding Cloudflare’s argumentum in terrorem, especially the name-and-shame “strategy” via their website as well as Twitter. Personally, I think it is a dick move. This is the kind of stuff you get as a result:

                                                                    This website shows that @VodafoneUK are still using a very old routing method called Border Gateway Protocol (BGP). Possible many other ISP’s in the UK are doing the same.

                                                                    1. 1

                                                                      I’m sure the team would be happy to take feedback on better wording.

                                                                      The website is open sourced: https://github.com/cloudflare/isbgpsafeyet.com

                                                                      1. 1

                                                                        The website is open sourced: […]

                                                                        There’s no open source license in sight so no, it is not open sourced. You, like many other people, confuse and/or conflate anything being made available on GitHub with being open source. This is not the case: without an associated license (and please don’t use a viral one - we’ve got enough of that already!), the code posted there doesn’t automatically become public domain. As it stands, we can see the code, and that’s that!

                                                                        1. 7

                                                                          There’s no open source license in sight so no, it is not open sourced.

                                                                          This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed. I’ll raise that internally.

                                                                          You, like many other people confuse and/or conflate anything being made available on GitHub as being open source.

                                                                          You are aggressively assuming malice or stupidity. Please don’t do that. I am quite sure this is just a mistake; nevertheless, I will ask internally.

                                                                          1. 1

                                                                            There’s no open source license in sight so no, it is not open sourced.

                                                                            This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed.

                                                                            I don’t care either way - not everything has to be open source everywhere, e.g. a website. I was merely stating a fact - nothing else.

                                                                            You are aggressively […]

                                                                            Not sure why you would assume that.

                                                                            […] assuming malice or stupidity.

                                                                            Neither - ignorance at most. Again, this is purely a statement of fact - no more, no less. Most people know very little about open source and/or nothing about licenses. Otherwise, GitHub would not have bothered creating https://choosealicense.com/ - which itself doesn’t help the situation much.

                                                                          2. 1

                                                                            It’s true that there’s no license so it’s not technically open-source. That being said I think @jamesog’s overall point is still valid: they do seem to be accepting pull requests, so they may well be happy to take feedback on the wording.

                                                                            Edit: actually, it looks like they list the license as MIT in their package.json. Although given that there’s also a CloudFlare copyright embedded in the index.html, I’m not quite sure what to make of it.

                                                                            1. -1

                                                                              If part of your (dis)service is to publicly name and shame ISPs, then I very much doubt it.

                                                                    2. 2

                                                                      While I think that this is ultimately a shit response, I’d like to see a more well-wrought criticism of the centralized signing authority that they mentioned briefly in this article. I’m trying to find more, but I’m not entirely sure of the best places to look given my relative naïveté about BGP.

                                                                      1. 4

                                                                        So as a short recap, IANA is the top-level organization that oversees the assignment of e.g. IP addresses. IANA then delegates large IP blocks to the five Regional Internet Registries: AFRINIC, APNIC, ARIN, LACNIC, and RIPE NCC. These RIRs then further assign IP blocks to LIRs, which in most cases are the “end users” of those IP blocks.

                                                                        Each of those RIRs maintains an RPKI root certificate. These root certificates are used to issue certificates to LIRs that specify which IPs and ASNs that LIR is allowed to manage routes for. Those LIR certificates are in turn used to sign statements specifying which ASNs are allowed to announce routes for the IPs that the LIR manages.

                                                                        So their stated worry is then that the government in the country in which the RIR is based might order the RIR to revoke a LIR’s RPKI certificate.


                                                                        This might be a valid concern, but if it is actually plausible, wouldn’t that same government already be using the same strategy to get the RIR to just revoke the IP block assignment for the LIR, and then compel the relevant ISPs to black hole route it?

                                                                        And if anything this feels even more likely to happen, and more legally viable, since it could target a specific IP assignment, whereas revoking the RPKI certificate would invalidate the ROAs of all of the LIR’s IP blocks.

                                                                        1. 1

                                                                          Thanks for the explanation! That helps a ton to clear things up for me, and I see how it’s not so much a valid concern.

                                                                      2. 1

                                                                        I get a ‘success’ message using AAISP - did something change?

                                                                        1. 1

                                                                          They are explicitly dropping the Cloudflare route that is being checked.

                                                                      1. 2

                                                                        Nice writeup.

                                                                        I started using Cloud Run after they announced it in alpha, for a couple of toy services that were previously in App Engine.

                                                                        I updated them to use Cloud Build too, so you can avoid that manual gcloud deploy step: https://github.com/jamesog/whatthemac/blob/master/cloudbuild.yaml
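For anyone who hasn’t seen one, a minimal cloudbuild.yaml along those lines might look something like this (the service name, image name, and region here are placeholders, not taken from the linked repo):

```yaml
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/myservice', '.']
  # Push it to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/myservice']
  # Deploy the pushed image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'myservice',
           '--image', 'gcr.io/$PROJECT_ID/myservice',
           '--region', 'us-central1', '--platform', 'managed']
```

With a build trigger pointed at the repo, every push then builds and deploys automatically, replacing the manual gcloud deploy step.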

                                                                        1. 1

                                                                          Nice. I have used Cloud Build before and I think it’s a great idea if the builds are going to take a lot of resources. Personally, I still try to manually test via make docker_run before deploying an image, so building locally works for me. I’m sure, though, that at some point I will migrate to Cloud Build as well.

                                                                        1. 2

                                                                          I was using a WASD v3 105-Key ISO with Cherry MX Clear switches until mid last year.

                                                                          I switched to a Das Keyboard 4 Professional which has Cherry MX Brown switches, which are softer and a bit quieter. I really like this keyboard, in part because of the media controls (they were a bit fiddly on the WASD keyboard, which needs you to hold Fn to use them) and it has a USB hub on the back.

                                                                          1. 1

                                                                            I use a Das 4 Ultimate with browns. I really like it. But sadly, they’ve joined the dark side [0], and so I can’t in good conscience recommend Das Keyboard anymore.

                                                                            1. 2

                                                                              Huh, that’s strange.

                                                                              That said, I deliberately got the 4 Pro instead of anything newer because I don’t have a need for controlling my keyboard from the OS (it should be the other way around!) and I don’t need pretty light patterns.

                                                                          1. 5

                                                                            I have the Dell UltraSharp U2718Q and find it to be totally fine, even with gaming. Wirecutter says it’s their budget 4K screen. Would be nice if it had USB-C but not something I need.

                                                                            1. 1

                                                                              Yeah, I have the 2715Q. I bought two for $435 each in 2018 for work and have been very happy with them for office use. Anything less than UHD is just horrible on the eyes after using retina screens for almost a decade now. I don’t get how anyone stands it (Google doesn’t issue UHD monitors to staff, which I think is pretty nuts).

                                                                              1. 1

                                                                                I have three of these (computing family); they’re great and I’m totally satisfied. However, they don’t meet the refresh rate you mentioned.

                                                                                An older MacBook Pro drives two of them very nicely, except for videoconferencing. All three have a problem waking from sleep, on both OSes. I have to reset a monitor about once a month, which I can now do in less than a minute. Still, I wouldn’t hesitate to recommend them. They’re beautiful and relatively affordable.

                                                                                1. 1

                                                                                  Same display here. I love it. I have the HDMI cable connected to my (personal) Mac mini and a mini-DP cable loose to plug in my work laptop when I want to work.

                                                                                  I’ve heard this year’s model has USB-C.

                                                                                1. 9

                                                                                  But here is where the problem lies, it’s surprisingly rare that I find myself editing only one file at a time.

                                                                                  And that’s why Emacs exists. It even has a best-in-class set of vim keybindings. And it has a wonderful extension language. It’s a Lisp Machine for the 21st century!

                                                                                  1. 15

                                                                                    That just means he needs to learn more Vim. It does indeed support tabs, and splits too, and IMO does it better than most IDEs. You can get a file tree pretty easily too with NERDTree. I have no issues with editing a bunch of files at once in Vim. Features like being able to yank into multiple registers are much more of a help there.

                                                                                    1. 8

                                                                                      I suspect one problem people have with Vim and editing multiple files is that they only know about buffers, which can be a little tricky to work with; I don’t think many people realise it actually has tabs too.

                                                                                      I frequently open multiple files with vim -p file1 file2, which opens each file in a tab, and you can use gt, gT, or <number>gt to navigate between them. There’s also :tabe[dit] in existing sessions to open a file in a new tab.

                                                                                      I generally find this pretty easy to work with and not much harder than doing something like Cmd-[ / Cmd-] as in some IDEs.

                                                                                      1. 3

                                                                                        There is some debate on whether tabs should be used this way in Vim. I used to do it this way and then I installed LustyJuggler and edited all my files without tabs.

                                                                                        But if it works for you, more power to you!

                                                                                        1. 3

                                                                                          As a sort-of an IDE power user, I would argue that editor tabs are a counter-productive feature: https://hadihariri.com/2014/06/24/no-tabs-in-intellij-idea/.

                                                                                          1. 2

                                                                                            That said, while you can work with tabs like this, that’s not entirely the idea behind them. Tabs in Vim are more like “window-split workspaces” where you can keep your windows in whatever order you like. With one buffer in one window per tab you do get a workflow pretty similar to tabs in other editors, but then again you could get a similar workflow with multiple buffers in one window even before Vim got tabs.

                                                                                            I guess tabs fall into the tradition of Vim naming features differently than one would imagine: buffers are what people usually understand as files, windows are what other editors call splits, and tabs are more like workspaces.

                                                                                            1. 4

                                                                                              Oh, to be clear I don’t have one buffer per tab. I do tend to use them more like workspaces as you say. Typically each of my tabs has multiple splits or vsplits and I’ll go back and forth between buffers - but having tabs to flip back and forth between semi-related things can be useful on a smaller screen too.

                                                                                          2. 3

                                                                                            One of the reasons why I love Vim is that I find it a lot easier to edit multiple files at once. I can open them with :vsp and :sp and shift-ctrl-* my way around them very fast, and with NERDTree I can open a directory in any of these windows, locate the file, and there you go: I have them arranged in whatever way I want. It makes it super easy to read multiple files and copy things around. I like my auto-complete simple - I find autocomplete distracting - so I just use ctrl-n, but I’m aware this is a fringe opinion; if you want a more feature-rich autocomplete, YouCompleteMe works pretty well for people who like that. Also, I can open a terminal with :terminal… My Vim usually looks more like this https://i.redd.it/890v8sr4kz211.png - nothing to do with one file at a time.

                                                                                            Does Vim make me super fast compared to everyone else? Probably not; it’s just a text editor. But it’s very malleable, and in many years of coding I haven’t ever seen the need to stop using it. When I need to work with a new language I just install a bunch of new plugins and things get solved.

                                                                                            Nothing against VS Code, but it’s also only super powerful with the right plugins and configuration. There’s more of it out of the box, but without them it would also just be a simple text editor.

                                                                                            1. 2

                                                                                              What’s a good resource to start learning emacs?

                                                                                                1. 1

                                                                                                  I gave emacs a try through mg(8), a small editor that resembles emacs a lot (no Lisp, though) and is part of the default OpenBSD system. It comes with a tutorial file that takes you on a tour of the basics, starting with navigation. It’s like vimtutor, and it’s great!

                                                                                                  It also lets you discover emacs as a text editor, AND NOTHING MORE! Which is refreshing and helped me reconsider its usefulness for editing text 😉

                                                                                              1. 2

                                                                                                FYI, each of the algorithm-specific commands in openssl is superseded by the generic genpkey and pkey commands: https://linux.die.net/man/1/genpkey
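For example, generating and inspecting an RSA key with the generic commands instead of genrsa/rsa (key size and file name here are just illustrative):

```shell
# Old, algorithm-specific way:
#   openssl genrsa -out key.pem 2048
#   openssl rsa -in key.pem -text -noout

# Generic equivalents:
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out key.pem
openssl pkey -in key.pem -text -noout
```

One visible difference: genpkey writes the key in PKCS#8 format, so the PEM header reads “BEGIN PRIVATE KEY” rather than the algorithm-specific “BEGIN RSA PRIVATE KEY”.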