1. 5

    I was intrigued by this:

    According to the author of Kitty, tmux is a bad idea, apparently because it does not care about the arbitrary xterm-protocol extensions Kitty implements. Ostensibly, terminal multiplexing (providing persistence, sharing of sessions over several clients and windows, abstraction over underlying terminals, and so on) is either unwarranted and seen as meddling by a middleman, or should be provided by the terminal emulator itself (Kitty being touted as the innovator here). A remarkable standpoint, to say the least.

    Because this is something that I completely agree with. I have recently switched to abduco from tmux because I want my terminal to handle being a terminal and the only thing that I wanted from tmux was connection persistence. There are a load of ‘features’ in tmux that really annoy me. It does not forward everything to my terminal and implements its own scrollback which means I can’t cat a file, select it, copy it, and paste it into another terminal connected to a different machine (which is something I do far more often than I probably should).

    1. 2

      Same. I never warmed to the “tmux is all you need” approach, because, honestly, it’s just a totally unnecessary interloper in my terminal workflow. I like being able to detach/reattach sessions, but literally everything else about tmux drives me bananas.

      1. 2

        I also love how Kitty pretty easily allows you to extend these features with other programs. Instead of Kitty’s default history, I have it enter neovim (with all of my configurations) so that I can navigate and copy my history the same way that I write my code. I have been using Kitty for a few years and absolutely love it. The only issue I run into on occasion is that SSHing into some servers can mess the terminal up a little.
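        Roughly, the relevant knob here is kitty’s scrollback_pager option in kitty.conf; the nvim flags below are a minimal sketch of such a setup, not my exact configuration:

        ```conf
        # kitty.conf: use nvim instead of the default pager for the scrollback buffer.
        # The trailing "-" makes nvim read the scrollback text from stdin.
        scrollback_pager nvim -u NORC -c 'map q :qa!<CR>' -
        ```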

        1. 2

          Yeah, I do not like how some terminal emulators now are leaving everything to tmux/screen, rather than implementing useful features for management, scrollback, etc themselves. For 99% of my cases, I don’t need tmux in addition to my shell and a good terminal emulator, so idk why I’d want to introduce more complexity.

          kitty honestly works very well for me, and has Unicode and font features that zutty does not seem to consider. Clearly some work needs to be done for conformance to the tests that the author raises, but for my needs, kitty works great for Unicode coverage and rendering.

          1. 1

            Yeah, I do not like how some terminal emulators now are leaving everything to tmux/screen,

            So I think tmux and screen both suck since they don’t pass through to the terminal things like scrollback. Instead of the same mouse wheel or shift+page up, I have to shift gears to C-a [ or whatever it is.

            I actually decided to write my own terminal emulator… and my own attach/detach session thing that goes with it. With my custom pass-through features I can actually use them all the same way. If I attach a full screen thing, the shift pageup/down just pass through to the application, meaning it can nest. Among other things. I kinda wonder why the others don’t lobby for similar xterm extensions or something so they can do this too.

        1. 2

          This just moves the risk from Cloudflare to Cloudflare’s partners. Will they sell query logs?

          1. 2

            I mean, that depends.

            Cloudflare isn’t the only DoH player in the game (https://dnscrypt.info/public-servers/ contains DoH and DNSCrypt supporting servers), and the source to their proxy is released. You could set up a community DoH proxy that proxies to Quad9 and offer it to a bunch of folks. Your queries wouldn’t touch Cloudflare in this case.

          1. 1

            Love this post, can’t wait to use artist-mode for diagrams. I also went through and added pulsing to various movements so that I stop losing my cursor.

            1. 1

              Honestly, this reminds me a lot of older PC-BSD, with the app bundles being reminiscent of PBI. I know that PBIs ended up being a consistent source of problems for PC-BSD users, so hopefully these app bundles don’t go the same way.

              1. 9

                I think the big unmentioned aspect is the fact that it’s a social network. Network effects are real and significant for collaborative systems. It takes time to learn GitLab, sourcehut et al and this imposes switching costs that keep people wherever they are - “source gravity” as a form of “data gravity”. GitHub also pays a growing number of developers rent (I’m on the payroll). But I don’t host anything of a private nature on GitHub, and I’ve worked on GH alternatives because I ultimately see collaborative systems that pay people as the beginnings of a two-sided market that may further commoditize engineering labor and result in a monopsony or oligopsony where everyone works for a small number of platforms that strip workers of their individual qualities in a bid to drive their compensation into the ground. This is a real systemic threat and we need to be wary of things that seem like they could further undermine our labor security.

                1. 7

                  Network effects are real and significant for collaborative systems.

                  The takeaway for me is that since git is distributed, it’s easy to accept contributions from GitHub users via a mirror while keeping the canonical repo somewhere better.

                  Since moving the Fennel programming language off GitHub we’ve gotten a few issue reports and patches thru the GitHub mirror we left up, but honestly the majority of new contributors (granted the sample size here is very small; maybe 3 of 5) have preferred to use SourceHut.
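                  Mechanically this works because GitHub publishes every pull request as a ref, so the canonical repo can pull mirror contributions directly. A sketch (the remote name, URL, and PR number are placeholders):

                  ```shell
                  # Add the GitHub mirror as an extra remote on the canonical repo.
                  git remote add github https://github.com/example/mirror.git

                  # GitHub exposes each pull request at refs/pull/<N>/head, so a PR sent
                  # to the mirror can be fetched into a local branch and merged as usual.
                  git fetch github refs/pull/42/head:pr-42
                  git merge pr-42
                  ```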

                  1. 5

                    If we’re talking company or project context, I am not buying the social network thing.

                    I know so many people who don’t use stars, don’t follow people, don’t fiddle with their “profile page” - they just enjoy it, or at least use it, to host git repos, participate in discussions in issues, and maybe use the wiki. That’s what you do with every other code hosting platform as well. I use it just like our local GitLab at work. 0% like a social network.

                    The only thing that is really nice is that you don’t need a 67th account somewhere; you can just participate.

                    1. 2

                      That’s fine that you don’t use it that way, but unless you don’t use open source code at work, you are still significantly influenced by the network effects from those who are. The fact is, a lot of software only becomes popular due to its discoverability, which the GH network can significantly amplify. Software that becomes popular is often the software that receives more contribution over time, and the fact that it became popular significantly increases the chances that you, someone who does not discover it in this way, will discover it nevertheless indirectly through someone who did.

                      1. 2

                        I still disagree. I don’t discover stuff on GitHub; I discover stuff in package repositories, via word of mouth, or in blog posts. Now you could argue that SOMEONE down the line will discover it on GitHub, and that is true, but it worked fine before GitHub with SourceForge and Freshmeat and whatnot. So I concede your point that it must be discovered to be used, but the “social network” is only one method of discoverability, and if a project is hosted somewhere other than GitHub, it doesn’t mean it’s not discoverable.

                        1. 4

                          Wow, you and I have very different experiences. When my coworkers are looking for software, libraries, whatever, their search often starts and ends with Github. If it’s not at least mirrored on Github, they won’t find it at all.

                          1. 3

                            Sourceforge, Freshmeat, the campfire in a cave where a group huddled around, the lump of mold on a fruit, etc… are all information networks with propagation proportional to the userbase. You can start your own, and new ones will replace the previous hubs eventually, but there are significant trade-offs that will be made. The trade-offs may appeal to some subset that may grow or shrink over time, but to reject any network at all will result in the end of the family tree for any organism that requires others to reproduce. Such isolation surely brings some people peace, but for an article posted on the internet of all places to claim that we don’t need (big network) seems a bit ironic to me.

                    1. 1

                      I’ve decided the reason I’ve never mastered rebase is that, working with small teams on production and OSS projects, there is little value in tidy history. For projects involving really large teams this is completely different.

                      1. 2

                        I disagree.

                        Even if there are only 2-3 developers, being able to see the flow of changes going into mainline as discrete feature oriented chunks is immensely helpful when you’re going back through history trying to figure out exactly where something went wrong.

                        1. 2

                          Honestly, I find a lot of use in a tidy history in all projects. Any time that I need to use git log -S, git bisect, git revert, it makes it way easier if the history is tidier. I do use these tools in my personal projects, because they’re v powerful for finding bugs.
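                          Concretely, these are the kinds of invocations that a feature-per-commit history makes precise (the search string, tag, and script name below are placeholders):

                          ```shell
                          # Which commit added or removed this identifier?
                          git log -S 'connection_timeout' --oneline

                          # Binary-search a clean history for a regression; 'run-tests.sh'
                          # stands in for whatever check exposes the bug.
                          git bisect start HEAD v1.0
                          git bisect run ./run-tests.sh

                          # Back out one self-contained feature commit.
                          git revert <commit>
                          ```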

                        1. 7

                          The fact that GitHub PR workflows do not support a git diff between force pushes is so mind-bending to me. Their core staff who contribute to git core use rebase regularly.

                          That GitLab does retain diffs between force pushes is nice to have, but it comes with a long-term performance trade-off.
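                          For what it’s worth, plain git can produce this comparison locally with range-diff, as long as the old tip is still reachable; the branch names below are placeholders:

                          ```shell
                          # The branch's previous position survives in the reflog as feature@{1}.
                          old=$(git rev-parse 'feature@{1}')

                          # Show, commit by commit, how the rebased branch differs from the old one.
                          git range-diff main "$old" feature
                          ```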

                          1. 4

                            They’ve half-assed this feature now. On the message where it says you force pushed, if you click the text “force pushed,” it will give you a diff.

                            However, multiple force pushes will be collapsed into one message, and clicking on that link will only show you the latest force push diff.

                          1. 1

                            I admittedly don’t give a shit about R, but this is a very interesting part to me:

                            However, the Apple silicon platform uses a different application binary interface (ABI) which GFortran does not support, yet.

                            Does this mean that the ABI for core Apple libs is different? That seems expected if you’re switching to a whole new arch. Or do they mean that something like the calling convention is different? I’m super interested in the differences here.

                            1. 1

                              I have no expertise on the platform, but I did find in some Apple docs a reference to the C++ ABI now matching that of iOS: https://developer.apple.com/documentation/xcode/writing_arm64_code_for_apple_platforms#//apple_ref/doc/uid/TP40009020-SW1 (which itself makes reference to developer.arm.com, so changing ABI is likely not a decision made by Apple alone).

                              1. 9

                                Most of those look pretty much like the 64-bit Arm PCS. I presume that Apple is using the same ABI for AArch64 macOS as iOS. The main way that I’m aware that this differs from the official one is in handling of variadic arguments. Apple’s variadic ABI is based on an older version of the Arm one, where all variadic arguments were passed on the stack. This is exactly the right thing to do for two reasons:

                                • Most variadic functions are thin wrappers around a version that takes a va_list, so anything other than passing them on the stack requires the caller to put them into registers and then the callee to spill them to the stack. This is much easier if the caller just sticks them on the stack in the first place.
                                • If all variadic arguments are contiguous on the stack, the generated code for va_arg is simpler. So much simpler that, in more complex implementations, va_start is often compiled to something that writes all of the arguments that are in registers into the stack.

                                As an added bonus, if you have CHERI, MTE, or Asan, you can trivially catch callees going past the last argument. This is exactly how variadics worked on the PDP-11 and i386, because all arguments were passed on the stack. In K&R C, you didn’t actually have variadics as a language feature, you just took the address of the last formal argument and kept walking up the stack.
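                                The thin-wrapper shape from the first bullet, sketched in C (log_msg is a made-up name; the pattern is the same one printf/vprintf use):

                                ```c
                                #include <stdarg.h>
                                #include <stdio.h>

                                /* The worker takes a va_list, so any front end can call it. */
                                int log_msg_v(const char *fmt, va_list ap)
                                {
                                    return vprintf(fmt, ap);
                                }

                                /* The variadic entry point is a thin wrapper: it only packages its
                                 * arguments into a va_list and delegates.  With a stack-based
                                 * variadic ABI, the arguments are already laid out where va_arg
                                 * will read them. */
                                int log_msg(const char *fmt, ...)
                                {
                                    va_list ap;
                                    va_start(ap, fmt);
                                    int n = log_msg_v(fmt, ap);
                                    va_end(ap);
                                    return n;
                                }
                                ```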

                                The down side is that now your variadic and non-variadic calling conventions are different if your non-variadic convention passes any arguments in registers. That shouldn’t matter, because it’s undefined behaviour in C to call a function with the wrong calling convention. It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, ...) function pointers and didn’t bother casting them to the correct type before invoking them. They worked because on most mainstream architectures the variadic and non-variadic conventions happened to be the same for functions that take up to four integer-or-pointer arguments.
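                                The mismatch described above, in miniature (the names are illustrative, not the actual Perl/Python code):

                                ```c
                                /* A two-argument, non-variadic function... */
                                int add2(int a, int b) { return a + b; }

                                /* ...stored in a table typed with a variadic signature. */
                                typedef int (*vararg_fn)(int, ...);

                                /* Calling f(1, 2) directly through vararg_fn is undefined
                                 * behaviour: with a stack-based variadic convention the caller
                                 * pushes 1 and 2 on the stack while add2 expects registers. */
                                int call_wrong(vararg_fn f) { return f(1, 2); }

                                /* The portable fix: cast back to the function's real type first. */
                                int call_right(vararg_fn f) { return ((int (*)(int, int))f)(1, 2); }
                                ```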

                                I am still sad that Arm made the (commercially correct) decision not to force people to fix their horrible code for AArch64.

                                I believe that the new Apple chips also support Arm’s pointer signing extension and so there are a bunch of features in the ABI related to that, which probably aren’t in GCC yet.

                                1. 1

                                  It did matter in practice because (when AArch64 was released, at least, possibly fixed now) some high-impact bits of software (the Perl and Python interpreters, at least) used a table of something like int(*)(int, …) function pointers and didn’t bother casting them to the correct type before invoking them.

                                  I think you just explained for me why Apple’s ObjC recently started demanding explicit casts of IMP (something like id(*)(id, SEL, …), which I’m aware you already know but readers may not).

                                  1. 1

                                    I don’t think that should be a new thing. Back in the PowerPC days, there were a bunch of corner cases (particularly around things that involved floating-point arguments) where that cast was important. On 32-bit x86, if you called a function using the IMP type signature but it returned a float or double then it would leave the x87 floating point stack in an unbalanced state and lead to a difficult-to-debug crash later on.

                                    On Apple AArch64, however, you’re right that it’s a much bigger impact: all arguments other than self and _cmd will be corrupted if you call a method using the IMP signature.

                                    One of the breaking changes I’d like to make to Objective-C is adding a custom calling convention to IMP so that C functions that you want to use as IMPs have to be declared with __attribute__((objc_method)) or similar. It would take a few years of that being a compiler warning before code is migrated but once it’s done you have the freedom to make the Objective-C calling convention diverge from the C ones.

                            1. 16

                              Having recently introduced a “please explain to me how a | is used in a bash shell” question in my interviews, I am surprised by how many people with claimed “DevOps” knowledge can’t answer that elementary question given examples and time to think it out (granted, on a ~60 sample size).

                              Oh, this is a gem! It will go right next to “why stack the memory area has the same name as stack the data structure” into the pile of most effective interview questions.
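                              For readers outside the field, the expected answer is short: | connects the standard output of the left command to the standard input of the right one, with both running concurrently. For example:

                              ```shell
                              # "Show processes, then keep only the lines mentioning some_name":
                              ps aux | grep some_name

                              # Pipelines compose: sort the lines, then count the distinct ones.
                              printf 'b\na\nb\n' | sort | uniq -c
                              ```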

                              1. 12

                                Do these questions even work? Seriously. I remember interviewing someone who didn’t have the best grasp of Linux, shell, etc., but he knew the tools that were needed for the DevOps role and he got the job done; knowing things like what a shell pipeline is doesn’t factor in for me.

                                In terms of the article itself, like I said above, people know AWS and know how to be productive with the services and frameworks for AWS. That alone is a figure hard to quantify. Sure, I could save money bringing all the servers back internally or using cheaper datacenters, but I worked at a company that operated that way. You end up doing a lot of busy work chucking bad drives, making tickets for the infrastructure group, and waiting for the UNIX Admin group to add more storage to your server. With AWS I can reasonably assume I can spin up as many c5.12xlarge machines as I want, whenever I want, with whatever extras I want. It costs roughly an eighth of a million a year. I see an eighth of a million that cuts out a lot of busy work I don’t care about doing and an eighth of a million that simplifies finding people to do the remaining work I don’t care about doing. The author says money wasted; I see it as money spent so I don’t have to care, and not caring is something I like; hell, it isn’t even my money.

                                1. 4

                                  I remember interviewing someone who didn’t have the best grasp of Linux, shell, etc., but he knew the tools that were needed for the DevOps role and he got the job done

                                  I have to admit, I’ve never interviewed devops, only engineers. And in my experience, it’s more important for an engineer to dig into fundamental processes that he’s working with, and not just to know ready-made recipes to “get the job done”.

                                  1. 7

                                    I agree completely with this statement, and I think this is exactly what the article mentions as one of the lock-in steps. That the person can “get the job done” because “they know the tools” is exactly the issue - the person picked up the vendor-specific tools and is efficient with them. But in my experience, when shit hits the fan, the blind copy-pasting of shell commands starts, because the person doesn’t understand the pipe properly.

                                    Now, I don’t mean that the commenter above you is wrong. You may still be saving money in the long run. I’m just saying that it also definitely increases that vendor lock-in.

                                  2. 3

                                    I feel like saving your company, of whatever scale, $15,000 a year per big server is worthwhile, as long as it doesn’t end up changing your working hours. I know that where I work, if I found a way to introduce massive savings, I would be rewarded for it. Shame SIP infrastructure is so streamlined already…

                                    1. 2

                                      It is optimized for accuracy, not recall. This question may have some positive correlation with good DevOps. It may just have a positive correlation with years of experience and, hence, good DevOps. Hard to quantify.

                                    2. 2

                                      Too bad the author didn’t specify how many is “many”. I would expect some of the interviewees not answering because of interview stress, misunderstanding the question etc.

                                      1. 25

                                        This is not an answer in vogue, but I don’t want ops people who get too stressed to be able to explain shell pipelines.

                                        1. 12

                                          In my experience, a lot of people that get stressed during interviews don’t have any stress problems when on the job.

                                          1. 6

                                            Indeed. I once interviewed an engineer who was completely falling apart with stress. I was their first interview, and I could tell within minutes they had no chance whatsoever of answering my question. So I pivoted the interview to discuss how they were feeling, and why they were having trouble getting into the problem. We ended up abandoning my technical question entirely and chatting for the rest of the interview.

                                            Later, in hiring review, the other interviewers said the candidate nailed every question. Strong hire ratings across the board. Had I pressed on with my own question instead of spending my hour helping them de-stress and get comfortable, we likely never would have hired one of the best I’ve ever worked with.

                                          2. 7

                                            I quite disagree with this, perhaps because I’m the type of person that gets very stressed out by interviews. What you’re saying makes sense if we assume that all stressors are uniform for all people, but that doesn’t really match reality at all.

                                            For me, social situations (and interviews count as social situations) are incredibly, sometimes cripplingly stressful. At worst, I’ve had panic attacks during interviews. However, throughout my entire ops career I’ve worked oncall shifts, and had incidents with millions of dollars on the line, and those are not anywhere near the same. I can handle myself very well during incidents because it’s entirely a different type of stressor.

                                            1. 4

                                              Same in my company. All engineering is on-call for a financial system, and it’s very hard to hire someone who gets stressed out during the interview when this person would have to respond to incidents with billions in transit.

                                              1. 4

                                                Yep. I have a concern that in our push to improve interviewing we are overcorrecting.

                                            2. 5

                                              I’m helping my company interview some people in that area. We have a small automated list of questions (around 10 to 12) that we send to candidates that apply, so nobody loses time with things that we’ve agreed interviewees should know.

                                              Less than 10% manage to answer questions like “Which command can NOT show the content of a file?” (given a list of grep/cat/emacs/less/ls/vim).

                                              When candidates pass this test, we interview them, and less than 5% can answer questions like the author mentions, at least in a

                                              1. 3

                                                Kinda unrelated to the article; it was just an anecdote to say “there’s a load of people that can’t really use a classic server and need more modern IaaS to operate”.

                                                For the sake of defending my practices though, I did give people 5 minutes to think about the formulation and gave examples via text of how one would use it (e.g. ps aux | grep some_name). I think the amount of people that couldn’t answer was ~2/5. As in, I don’t think I did it in an assholish “I want you to fail” way.

                                                It’s basically just a way to figure out if people are comfortable~ish working with a terminal, or at least that they used one more than a few times in their lives.

                                                1. 5

                                                  On the other hand, I can operate a “classic server”, but struggle with k8s and, to some degree, even with AWS. Although I’m sure I can learn, I simply never bothered to do so as I never had a reason or interest. I suppose it’s the same with many who were raised on AWS: they simply never had a reason to learn about pipes.

                                                  1. 1

                                                    I didn’t imply malpractice, rather statistical error. That said, anywhere close to 2/5 in the conditions you described… that’s way higher than what I would expect. I didn’t hire any DevOps people recently though, so maybe I’m just unaware how bad things have gotten.

                                                  2. 1

                                                    This is always true for interviews, but this is a measurement error that would be present for any possible interview question.

                                                    1. 1

                                                      Yeah that was my point.

                                                1. 1

                                                  tbh I used to have a big ole prompt and an rprompt. When I switched back fulltime to ksh, I took some time to do the same as OP, but ended up with this prompt: ~|11:26:49|0$. Just the basename of cwd, time and the return code of the last command. I’ve never had occasion to need a more up-to-date time.

                                                  I’m oncall a lot and do ops which results in needing to copy/paste into slack, irc, tickets, etc. and honestly an rprompt was too disruptive there and a multiline prompt to me just looks like too much line noise.
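                                                  A sketch of how a prompt like that can be wired up in ksh93, which re-expands PS1 before every prompt; this is an approximation, not necessarily the exact line:

                                                  ```shell
                                                  # basename-of-cwd | current time | exit status of the last command,
                                                  # e.g.: ~|11:26:49|0$
                                                  PS1='$(basename "$PWD")|$(date +%H:%M:%S)|$?$ '
                                                  ```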

                                                  1. 2

                                                    Very cool!

                                                    Might be worth noting that this won’t work at all on non-Linux systems, though (many don’t have /proc; on some UNIX systems, files in /proc are not plain text; many don’t have /sys). Would you be interested in advice on how to structure it to be more multi-platform friendly? Or do you want to keep it Linux-specific?
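                                                    As a taste of the kind of structure that helps: branch on the platform and fall back to sysctl where /proc is absent. A sketch (the exact sysctl names vary across the BSDs, so treat them as assumptions):

                                                    ```shell
                                                    # Branch on the platform instead of assuming /proc exists.
                                                    case "$(uname -s)" in
                                                      Linux)
                                                        mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo) ;;
                                                      Darwin)
                                                        mem_kb=$(( $(sysctl -n hw.memsize) / 1024 )) ;;
                                                      *BSD)
                                                        mem_kb=$(( $(sysctl -n hw.physmem) / 1024 )) ;;
                                                    esac
                                                    echo "total memory: ${mem_kb} kB"
                                                    ```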

                                                    1. 1

                                                      Thank you.

                                                      I plan on supporting BSDs as well for sure! Otherwise nix (as in *nix) would be a misleading title. I mostly know what I need to do to get them supported, I just haven’t gotten around to implementing it yet. I’m juggling multiple projects right now, along with various real-life things.

                                                      If there are any other *nix operating systems that people use and would like support for, I’d be more than happy to add it.

                                                      And yeah, advice would be great! :)

                                                    1. 12

                                                      I’d be really hesitant to link a Slack thread in a commit message. Even permissions aside, Slack limits the history access on some plans so this link may soon become invalid.

                                                      1. 4

                                                        I prefer analysis and notes go in a bug tracker for a variety of reasons (e.g., you can amend it later if URLs change or to make it clear the analysis was somehow incorrect) – but, yes, whether in bug notes or in the commit message itself, it definitely feels better to paraphrase, summarise, or even just copy and paste the salient details rather than just link to a discussion in some other potentially ephemeral mechanism or something volatile like a gist.

                                                        1. 1

                                                          I mostly agree here. But I do think that links can belong in a commit message, like when referencing a design document. Design docs are usually a snapshot of a feature’s design, with context on the motivations for decisions that would be too long to type out in a commit message, as well as discussions.

                                                          1. 1

                                                            Agreed, IMHO details and discussion should all be in the bug tracker - not in Slack or in commit messages. The commit message should have a short “what does this change do” explanation and a reference to the bug tracker where more details can be found if needed. I don’t agree with the fashion, popular on the Internet, of putting an entire essay in the commit message.

                                                            1. 16

                                                              My 12-year-old codebase is on its fourth bug tracker. None of the links in commit messages still work, but the repository kept history when it got converted to git, so 12-year-old commit messages still carry useful context.

                                                              1. 2

                                                                As other comments mention, commit history has a longer lifetime than bug trackers. Of course, you can port your tickets when you change trackers, but will you?

                                                                1. 2

                                                                  Yes, of course. Not migrating your hard-earned knowledge would be an incredible waste of time and money, especially in a commercial setting.

                                                                  1. 2

                                                                    Commits are for non-commercial settings as well, and sometimes migration can’t be done, e.g. when you rehome but don’t move a GitHub repository (if, for example, the original access controls were vested in single humans who then apparently vanished).

                                                                    Keeping content in an issue tracker is nice, but it’s always worth duplicating knowledge into the git commit.

                                                                    Catastrophic data loss can always happen, even in commercial settings.

                                                            2. 1

                                                              Interesting point. It’s a trade-off between providing context easily and future proofing. I was assuming a paid plan, which has no historical limits. I don’t think free slack is a good fit for this, because of the memory hole.

                                                              1. 2

                                                                I’m fine with linking to a fully fledged bug tracker we know we’ll keep using (yeah…), but something like Slack feels far too flimsy to me. It’s not clear where the context begins and where it ends, as the discussion often evolves and drifts away to other related topics. A chat platform just isn’t a fit here, in my opinion.

                                                            1. -1

                                                              Despite it being a lot simpler than vim, most people (including myself) use vim.

                                                              1. 8

                                                                I like how the commands are listed at the bottom. I always use it on servers.

                                                                1. 6

                                                                  I disagree with the notion that most people use vim. Even among sysadmins I know, few of them prefer vim.

                                                                  1. 4

                                                                    Being a programmer that works with sysadmins on a daily basis, I can confirm that almost none of them use vim.

                                                                    EDIT: Not that that’s a problem; I couldn’t care less. Just an interesting tidbit.

                                                                    1. 2

                                                                      In your experience what editor is the most used?

                                                                      1. 6

                                                                        notepad.exe

                                                                        :(

                                                                        1. 1

                                                                          Don’t take it the wrong way, but perhaps you need to widen your connections. The vast majority of sysadmins/devops folks I know are vim enthusiasts.

                                                                          1. 1

                                                                            Huh, that’s very different from my experience. I definitely wouldn’t say that the sysadmins that I’ve worked with are vim enthusiasts, but they all use vi since they can trust that it’ll be installed on a system. Many have had familiarity with ed as well.

                                                                            1. 1

                                                                              I would say a bit under half know how to use vi/vim. Most probably know the very basics (insert mode, :wq), but in general I don’t know many people who use vim as their editor of choice. There’s a couple emacs guys, but the most common editor I see used on Linux servers is Nano. Most of my current and former coworkers are <30, which might have something to do with it.

                                                                        1. 1

                                                                          Hey, is there much of a difference between this strategy and the much older strategy of test fixtures? It sounds like a different name for a very old concept.

                                                                          Here’s a wiki article about test fixtures: https://en.wikipedia.org/wiki/Test_fixture#Software

                                                                          Here’s an example of test fixtures at work in pytest, a popular python testing framework: https://docs.pytest.org/en/latest/fixture.html#fixture

                                                                          1. 2

                                                                            I was a wreck for pretty much all of April and May, unable to work on tech projects and work. Thankfully, my boss was very understanding and didn’t penalize me for my change in performance and offered me plenty of sick time off when I needed it.

                                                                            I’m slowly getting back into tech hobby projects now that I feel a lot better, cases are down, and my country has arrived at what it considers its new normal.

                                                                            1. 3

                                                                              These structured shells are, in my humble opinion, more or less useless. If your script/idea is large/complex enough that it would benefit from using nushell instead of bash/posix sh, then you may as well write it in a “real” language like python or ruby. On the other hand, for interactive usage I personally only write short snippets of sh code at a time, so I wouldn’t really benefit from using a shell with structured data.

                                                                              1. 6

                                                                                I definitely disagree here, just from all of the cases I’ve seen in my career of shell-script developers having to deal with unexpected spaces in output, and the cognitive overhead of dealing with spaces and the problems they cause. Some of it comes naturally to folks, but people rarely work in a vacuum, so anyone working with your shell scripts on your team likely does have to do some extra thinking to account for spaces (or figure out where they came from when debugging).
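
                                                                                The classic failure mode can be reproduced in a couple of lines of POSIX sh (the filename here is made up for illustration):

```shell
# A value containing a space: unquoted expansion undergoes word splitting.
file="my notes.txt"

set -- $file            # unquoted: expands to TWO arguments, "my" and "notes.txt"
echo "unquoted: $# args"

set -- "$file"          # quoted: stays ONE argument, "my notes.txt"
echo "quoted: $# args"
```

                                                                                A structured shell sidesteps this entirely, because values are passed along the pipeline as values rather than re-parsed text.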

                                                                                I had the luxury of using PowerShell at an old job, and honestly I still miss it. As an above post mentions, it’s very difficult for something like that to exist on UNIX-likes, since the real benefits of PowerShell are being able to use WMI objects. Honestly, I think nu suffers from the inability to boil the ocean in this regard.

                                                                                1. 4

                                                                                  I agree with you on this, in general. However, I’ve seen people who were awk wizards and could crank out an amazing one-liner very quickly to do something super useful, but one-off. Structured data might make that kind of shell mastery a little easier to develop. Or not, I’m not sure, just thinking aloud.
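
                                                                                  The kind of one-off one-liner being described might look like this hedged sketch (summing the size column of `ls -l` output; the field positions are an assumption based on common `ls` implementations):

```shell
# Sum the sizes of regular files in the current directory.
# ls -l marks regular files with a leading "-"; size is field 5.
ls -l | awk '/^-/ { total += $5 } END { print total+0, "bytes" }'
```

                                                                                  In a structured shell the same query would operate on a size column directly, with no field counting required.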

                                                                                  1. 2

                                                                                    If your script/idea is large/complex enough that it would benefit by using nushell instead of bash/posix sh, then you may as well write it in a “real” language like python or ruby.

                                                                                    ‘Interactive language, or language with good data structures?’ is a real tradeoff when you have only Posixshell and Python/Ruby to choose from. Posixshell is good for interactive usage; Python and Ruby have expressive, predictable, and easy-to-manipulate data structures, namely lists and dicts. But Nushell can make this a false tradeoff, because there is no reason a language could not be interactive-friendly (shell-like) and use structured data. At that point ‘shell or script’ becomes a question of ‘how complex is the problem?’, instead of ‘how hard does the language make it?’

                                                                                    The gains lie wherever solutions are only hard to write at the command line because of Posixshell’s incidental complexity. Those solutions we could dash off at the command line if only we had a better language. Nushell wants to make that possible.

                                                                                    When your shell language has both interactivity and a powerful universal data structure, you’ll be able to solve so many more things ad-hoc in your shell, without needing to bail out to a scripting language.

                                                                                  1. 11

                                                                                    Other commenters have said much of what I was thinking reading this page.

                                                                                    However, I do agree with this page about how consolidation under Cloudflare is kind of a scary prospect. Really, from a business perspective, they’ve made their offerings very attractive, and have done an excellent job marketing. But yes, all that consolidation is very alarming from a privacy perspective. Cloudflare does a good job promoting privacy around the internet, but like many companies, how the data flowing into Cloudflare is used internally is a black box.

                                                                                    The other point that I wish were levied at more than just Cloudflare is the one about the VPN market. Cloudflare has adopted the tactics of the popular VPN providers that advertise services to folks not as familiar with tech, selling them what’s almost privacy snakeoil. I’d like to see that point echoed far and wide, about all VPN providers, so that their potential customers actually understand what they’re buying.

                                                                                    1. 27

                                                                                      It’s worth linking to A&A’s (a British ISP) response to this: https://www.aa.net.uk/etc/news/bgp-and-rpki/

                                                                                      1. 16

                                                                                        Our (Cloudflare’s) director of networking responded to that on Twitter: https://twitter.com/Jerome_UZ/status/1251511454403969026

                                                                                        there’s a lot of nonsense in this post. First, blocking our route statically to avoid receiving inquiries from customers is a terrible approach to the problem. Secondly, using the pandemic as an excuse to do nothing, when precisely the Internet needs to be more secure than ever. And finally, saying it’s too complicated when a much larger network than them like GTT is deploying RPKI on their customers sessions as we speak. I’m baffled.

                                                                                        (And a long heated debate followed that.)

                                                                                        A&A’s response on the one hand made sense - they might have fewer staff available - but on the other hand RPKI isn’t new and Cloudflare has been pushing carriers towards it for over a year, and route leaks still happen.

                                                                                        Personally as an A&A customer I was disappointed by their response, and even more so by their GM and the official Twitter account “liking” some very inflammatory remarks (“cloudflare are knobs” was one, I believe). Very unprofessional.

                                                                                        1. 15

                                                                                          Hmm… I do appreciate the point that route signing means a court can order routes to be shut down, in a way that wouldn’t have been as easy to enforce without RPKI.

                                                                                          I think it’s essentially true that this is CloudFlare pushing its own solution, which may not be the best. I admire the strategy of making a grassroots appeal, but I wonder how many people participating in it realize that it’s coming from a corporation which cannot be called a neutral party?

                                                                                          I very much believe that some form of security enhancement to BGP is necessary, but I worry a lot about a trend I see towards the Internet becoming fragmented by country, and I’m not sure it’s in the best interests of humanity to build a technology that accelerates that trend. I would like to understand more about RPKI, what it implies for those concerns, and what alternatives might be possible. Something this important should be a matter of public debate; it shouldn’t just be decided by one company aggressively pushing its solution.

                                                                                          1. 4

                                                                                            This has been my problem with a few other instances of corporate messaging. Cloudflare and Google are giant players that control vast swathes of the internet, and they should be looked at with some suspicion when they pose as simply supporting consumers.

                                                                                            1. 2

                                                                                              Yes. That is correct, trust needs to be earned. During the years I worked on privacy at Google, I liked to remind my colleagues of this. It’s easy to forget it when you’re inside an organization like that, and surrounded by people who share not only your background knowledge but also your biases.

                                                                                          2. 9

                                                                                            While the timing might not have been the best, I would overall be on Cloudflare’s side on this. When would the right time to release this be? If Cloudflare had waited another 6-12 months, I would expect them to release a pretty much identical response then as well. And I seriously doubt that their actual actions and their associated risks would actually be different.

                                                                                            And as ISPs keep showing over and over, statements like “we do plan to implement RPKI, with caution, but have no ETA yet” all too often mean that nothing will ever happen without efforts like what Cloudflare is doing here.


                                                                                            Additionally,

                                                                                            If we simply filtered invalid routes that we get from transit it is too late and the route is blocked. This is marginally better than routing to somewhere else (some attacker) but it still means a black hole in the Internet. So we need our transit providers sending only valid routes, and if they are doing that we suddenly need to do very little.

                                                                                            is some really suspicious reasoning to me. I would say that black-hole routing the bogus networks is in every instance significantly, rather than marginally, better than just hoping that someone reports the leak so that it can be resolved manually.

                                                                                            Their transit providers should certainly be better at this, but that doesn’t remove any responsibility from the ISPs. Mistakes will always happen, which is why we need defense in depth.

                                                                                            1. 6

                                                                                              Their argument is a bit weak in my personal opinion. The reason in isolation makes sense: We want to uphold network reliability during a time when folks need internet access the most. I don’t think anyone can argue with that; we all want that!

                                                                                              However they use it to excuse not doing anything, where they are actually in a situation where not implementing RPKI and implementing RPKI can both reduce network reliability.

                                                                                              If you DO NOT implement RPKI, you allow route leaks to continue happening and reduce the reliability of other networks and maybe yours.

                                                                                              If you DO implement RPKI, sure there is a risk that something goes wrong during the change/rollout of RPKI and network reliability suffers.

                                                                                              So, with all things being equal, I would choose to implement RPKI, because at least with that option I would have greater control over whether or not the network will be reliable. Whereas in the situation of NOT implementing, you’re just subject to everyone else’s misconfigured routers.

                                                                                              Disclosure: Current Cloudflare employee/engineer, but opinions are my own, not my employer’s; also not a network engineer, so hopefully my comment does not have any glaring ignorance.

                                                                                              1. 4

                                                                                                Agreed. A&A does have a point regarding Cloudflare’s argumentum in terrorem, especially the name-and-shame “strategy” via their website as well as Twitter. Personally, I think it is a dick move. This is the kind of stuff you get as a result:

                                                                                                This website shows that @VodafoneUK are still using a very old routing method called Border Gateway Protocol (BGP). Possible many other ISP’s in the UK are doing the same.

                                                                                                1. 1

                                                                                                  I’m sure the team would be happy to take feedback on better wording.

                                                                                                  The website is open sourced: https://github.com/cloudflare/isbgpsafeyet.com

                                                                                                  1. 1

                                                                                                    The website is open sourced: […]

                                                                                                    There’s no open source license in sight so no, it is not open sourced. You, like many other people confuse and/or conflate anything being made available on GitHub as being open source. This is not the case - without an associated license (and please don’t use a viral one - we’ve got enough of that already!), the code posted there doesn’t automatically become public domain. As it stands, we can see the code, and that’s that!

                                                                                                    1. 7

                                                                                                      There’s no open source license in sight so no, it is not open sourced.

                                                                                                      This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed. I’ll raise that internally.

                                                                                                      You, like many other people confuse and/or conflate anything being made available on GitHub as being open source.

                                                                                                      You are aggressively assuming malice or stupidity. Please don’t do that. I am quite sure this is just a mistake nevertheless I will ask internally.

                                                                                                      1. 1

                                                                                                        There’s no open source license in sight so no, it is not open sourced.

                                                                                                        This is probably a genuine mistake. We never make projects open until they’ve been vetted and appropriately licensed.

                                                                                                        I don’t care either way - not everything has to be open source everywhere, i.e. a website. I was merely stating a fact - nothing else.

                                                                                                        You are aggressively […]

                                                                                                        Not sure why you would assume that.

                                                                                                        […] assuming malice or stupidity.

                                                                                                        Neither - ignorance at most. Again, this is purely a statement of fact - no more, no less. Most people know very little about open source and/or nothing about licenses. Otherwise, GitHub would not have bothered creating https://choosealicense.com/ - which itself doesn’t help the situation much.

                                                                                                      2. 1

                                                                                                        It’s true that there’s no license so it’s not technically open-source. That being said I think @jamesog’s overall point is still valid: they do seem to be accepting pull requests, so they may well be happy to take feedback on the wording.

                                                                                                        Edit: actually, it looks like they list the license as MIT in their package.json. Although given that there’s also a CloudFlare copyright embedded in the index.html, I’m not quite sure what to make of it.

                                                                                                        1. -1

                                                                                                          If part of your (dis)service is to publically name and shame ISPs, then I very much doubt it.

                                                                                                2. 2

                                                                                                  While I think that this is ultimately a shit response, I’d like to see a more well-wrought criticism of the centralized signing authority that they mentioned briefly in this article. I’m trying to find more, but I’m not entirely sure of the best places to look given my relative naïvete about BGP.

                                                                                                  1. 4

                                                                                                    So as a short recap, IANA is the top-level organization that oversees the assignment of e.g. IP addresses. IANA then delegates large IP blocks to the five Regional Internet Registries: AFRINIC, APNIC, ARIN, LACNIC, and RIPE NCC. These RIRs then further assign IP blocks to LIRs, which in most cases are the “end users” of those IP blocks.

                                                                                                    Each of those RIRs maintain an RPKI root certificate. These root certificates are then used to issue certificates to LIRs that specify which IPs and ASNs that LIR is allowed to manage routes for. Those LIR certificates are then used to sign statements that specify which ASNs are allowed to announce routes for the IPs that the LIR manages.

                                                                                                    So their stated worry is then that the government in the country in which the RIR is based might order the RIR to revoke a LIR’s RPKI certificate.


                                                                                                    This might be a valid concern, but if it is actually plausible, wouldn’t that same government already be using the same strategy to get the RIR to just revoke the IP block assignment for the LIR, and then compel the relevant ISPs to black hole route it?

                                                                                                    And if anything this feels even more likely to happen, and more legally viable, since it could target a specific IP assignment, whereas revoking the RPKI certificate would make the RoAs of all of the LIR’s IP blocks invalid.

                                                                                                    1. 1

                                                                                                      Thanks for the explanation! That helps a ton to clear things up for me, and I see how it’s not so much a valid concern.

                                                                                                  2. 1

                                                                                                    I get a ‘success’ message using AAISP - did something change?

                                                                                                    1. 1

                                                                                                      They are explicitly dropping the Cloudflare route that is being checked.

                                                                                                  1. 14

                                                                                                    “when the tests all pass, you’re done”

                                                                                                    Every TDD advocate I have ever met has repeated this verbatim, with the same hollow-eyed conviction.

                                                                                                    Citation strongly required. If it were something being repeated verbatim, it should be all over the internet. But when I search for the phrase in quotes I get… versions of this blog post.

                                                                                                    In my experience the first thing TDD teaches is that testing is a process. Every time you want to make a change you write a test first. Where’s the “done” in that?!

                                                                                                    Forget “every”, and forget “verbatim”, I’d like to see one person advocating for something resembling this thesis.

                                                                                                    1. 9

                                                                                                      I don’t know a good way to cite this well, but I have had the same experience as the author with regards to TDD-minded folks. I also want to stress that I’m in a camp that likes having thorough unit tests, but disagrees with TDD for more or less the reasons that the article specified.

                                                                                                      1. 1

                                                                                                        Thank you. I definitely appreciate that experience can vary. As long as you’re writing automated tests the details are much less important.

                                                                                                    1. 1

                                                                                                      This is honestly an underrated post. I think the author does a great job cutting to the core of language wars, the root of those conversations. I’m also slightly biased, in that I agree very much with how they pick up and play with languages: they like to see if a new language’s quirks or oddities jibe with how they like to program (taken from the discussed Zig post).