Threads for arp242

  1.  

    In zsh you can also use setopt no_flow_control to disable ^S and ^Q. The difference with stty is that it only applies to ZLE (the zsh line editor) rather than everything, which may be better or worse, depending on what you want.
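
    For anyone who wants to compare the two approaches, a minimal sketch (the usual ~/.zshrc and stty incantations):

        # ~/.zshrc: let ZLE claim ^S/^Q instead of treating them as flow control
        setopt no_flow_control

        # Terminal-driver equivalent: disables ^S/^Q flow control for everything
        # running in this terminal, not just the zsh line editor
        stty -ixon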

    1. 5

      Leaving aside the technical reasons, FreeBSD “feels like home”. I’ve just purchased a refurbished X250 and plan to switch back to FreeBSD from Ubuntu as my daily driver for everything other than streaming media and gaming.

      Which is odd, because I grew up with Linux in the 1990s, and only discovered FreeBSD in ~2014.

      1. 2

        Arguably the Linux of 2022 is unrecognisable from the Linux of the 1990s due to the insane amount of software churn, whereas FreeBSD hasn’t changed that much. When the BSDs adopt something new, they try to make it fit in with the rest of the system and make sure it’s of a solid enough design that it doesn’t have to be replaced in a few years. And the rate of change is much slower, too.

        1.  

          On the other hand, in the (late) 1990s and 2000s running Linux or BSD on your desktop was more or less the same experience: loads of stuff wouldn’t work “because $X only supports Windows”.

          Today things have expanded a little bit with “$X supports Windows, macOS, and Linux”. Of course, lack of support for $X is often not FreeBSD’s fault, but I find it’s a good pragmatic reason to prefer Linux over any of the BSD systems, especially for daily desktop use. Not that I have a particularly great love for the “Linux ecosystem”, but “Just Works™” (most of the time) counts for a lot as I’m too old to be dealing with that kind of stuff.

          1.  

            That’s why I switched back to Linux from (Net)BSD, too. Not enough time for ceaseless yak shaving and working around deficiencies.

      1. 1

        5GB seems very low! I wonder if they’re worried specifically about the size of storage or if it’s a proxy for general workload. Or maybe all the bigger projects are not things they want; people storing porn or whatever.

        1. 8

          It seems very low, but it’s still quite a lot, assuming you’re using repositories for source code. I’m currently using ~2.2GB of storage for my git repos (on my own server, mind you), 1.7GB of which is my brother’s website with huge photos. The other ~500MB are ~170 repositories of various sizes (including a mirror of Guix, accounting for about half of that). My repositories, even those with a long history and a fair number of commits, clock in below 100MB, but most of them are even smaller than that. For reference, Gitea, an 8+ year old project with over 13k commits and 140 tags, is only about 250MB. You’d need to have ~20 Gitea-scale (or ~15 Guix-scale) projects to exceed the 5GB storage limit.

          You can fit a lot of source code into 5GB, so that quota seems quite reasonable to me.

          1. 5

            I am a little amused by their progressive size steps, starting at 45,000 GB and working their way (rapidly) downward. I have to wonder what proportion of their free tier users are consuming >45,000 GB of space!

            1. 3

              Yeah, the step sizes are a bit baffling indeed, and I’d love to see some numbers: how many repos/accounts are affected by each, and so on. I’m not entirely sure I understand why those steps are necessary, seeing as the delay between 45GB and 5GB is only 3 weeks, and the biggest drop (45,000GB -> 7,500GB) takes only a single day. Might as well start at 7,500GB then, or, since repos are only going to be locked and ample preparation time was given, just flag-day the 5GB limit.

              They could just notify people now, and introduce the 5GB limit on October 19th. That’d give people more time than when they start getting in-app notifications on October 22nd. But maybe I misunderstood when they’ll start notifying people.

              1. 4

                I mean, I can understand the gradual roll-out; first to see if it works at all, and second to give their infrastructure and customer support people both a chance to ramp up slowly. It’d be cool to see numbers though; I would have divided it up such that 10 repos get affected by the first change, 100 on the second, 1000 on the third, and so on, but there are other valid ways of doing it they may want instead.

            2. 3

              The storage counts build artifacts as well, not just the git repo.

              1. 1

                To be specific:

                Storage types that add to the total namespace storage are:

                • Git repository
                • Git LFS
                • Artifacts
                • Container registry
                • Package registry
                • Dependency proxy
                • Wiki
                • Snippets

                https://docs.gitlab.com/ee/user/usage_quotas.html#namespace-storage-limit
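
                If you want to see which of those is eating your quota, the API exposes per-project storage statistics; something like this (endpoint and field names from memory, so check the docs linked above):

                # Needs an API token with read_api scope; <project-id> is the numeric ID.
                curl -s -H "PRIVATE-TOKEN: $GITLAB_TOKEN" \
                    "https://gitlab.com/api/v4/projects/<project-id>?statistics=true" | jq '.statistics'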

              2. 1

                You can fit a lot of source code into 5GB, so that quota seems quite reasonable to me.

                My ~/code directory is currently 872M. This includes projects going back almost 20 years (some of which aren’t even on GitHub but were on Sourceforge, Google Code, or BitBucket back in the day), projects I worked on but haven’t pushed (yet), a bunch of generated/cached data files that aren’t in git, etc. Excluding the stuff that’s not in git it’s probably at least 100-200M less, but I didn’t bother to really check. There are 398 directories/projects in total.

                This doesn’t include binary uploads though; for example all of uni‘s git history is ~16M (it’s comparatively large as it includes a unicode database), but there are also ten releases, each with 10 binaries of ~1.5M, so that’s ~150M extra.

                Still, I have a lot more projects than most, and thus far I probably would have had enough room within the 5G limit. As you mentioned, even large projects tend to be on the order of several hundred M.

                I feel it’s regrettable that the only way to upgrade is $19/month; I pay $5/month to FastMail and get 30G of storage with that. I guess it makes business sense to focus on the business customers.

                1. 1

                  It seems very low, but it’s still quite a lot

                  Just don’t work on Linux (or Chromium or Firefox)

                  1. 3

                    How does GitLab account for forks? I believe GitHub internally uses a content-addressable store and so multiple clones of mostly-identical repos use the same storage. If one person pushes Chromium then they’re adding a lot of data, but then every subsequent person who pushes the same repo just adds a reference count. This is why they don’t care too much about large popular repositories: the cost is amortised over the users who have forks. It’s only large private or unpopular repositories that have a large per-user cost to the platform.

                    1. 1

                      Neither of those are developed on GitLab to begin with, and I see no point in having a personal mirror there, either.

                      1. 1

                        There are a few reasons. First, you may want to share a branch with other developers. This can be because it is a work in progress (and thus not suitable for submission yet). Pushing a branch which is a compilation of multiple separate patch series can allow other users/developers to test your work more easily. You may want to run third-party CI (Azure, Travis (although I suppose that is less common now), etc.) against your branch.

                    2. 0

                      1.7Gb of which is my brother’s website with huge photos

                      Do not store large binary files in Git.

                  1. 19

                    As somebody who has worked with Gitlab on a day-to-day basis for the last 3 years: this really shows that Gitlab is getting desperate.

                    Since before the IPO, I have thought that the way they run their product lineup, favoring breadth over depth, is an unsustainable approach. Even after the IPO, I still see them trying to expand their product lineup toward crap they have no talent strategy to back: MLOps, Monitoring, Remote Development environments, Code Search… while core product offerings such as Merge Requests, CI, Runner, and Gitaly barely get staffed to fix critical bugs.

                    As Github has ramped up feature delivery in the last few years, I think they will very soon (if not already) surpass Gitlab in core feature depth. I have strong doubts about Gitlab as a company.

                    1. 3

                      Since before the IPO, I have thought that the way they run their product lineup, favoring breadth over depth, is an unsustainable approach

                      Pretty much this right here is why I’ve been so resistant to give SourceHut a real go, even though the product matches more closely what I want in terms of usage. I can’t imagine how it won’t all come crashing down under the weight of all that breadth, and the numerous side projects (and presumably consulting) that they engage in to keep it afloat. (Yes, I recognize that giving them my money will help here…)

                      I mean, if GitLab can’t do it, how can a team of 3?

                      1. 7

                        I mean, if GitLab can’t do it, how can a team of 3?

                        In recent months, I have been trying to figure out why the hell Gitlab takes so long to address some of the obvious issues in their backlog that affect 30-40 paying enterprise customers. I spent the time to filter through their backlog and watch their PMs’ update videos on the Unfiltered channel.

                        Side note: one thing Gitlab gets right is a super-transparent approach to running the company. Thanks to this, you can just dive in and get this information yourself.

                        Turns out, the reason the feature keeps getting delayed is… the entire team that owns it has 3 engineers + 1 shared(?) PM + 1 shared(?) UX designer working full time on it. They have one huge backlog of high-priority ‘new’ features to further expand their product breadth, plus critical bug fixes: pretty much enough work to drown a team of this size for another year.

                        They will never be able to get to the feature request I have been watching because it would only have an impact on big enterprise customers. Gitlab’s product development pipeline is not optimized for this audience; they are trying to optimize for the startup of 5 people that is (1) smaller than Gitlab itself and (2) requires something quick to bootstrap everything. No feature depth is required for that target audience.

                        I think such a product strategy made sense pre-IPO: they spent 3 years hinting at an IPO via different media channels while trying to make their product offering cover a massive range of things. This makes the company look very nice in the eyes of non-technical investors: the growth potential is infinite. But a result of this is that their core product offerings rot away:

                        • Merge Request UI got slow, with a terrible bug that would leave the mergeability check spinning forever until the user manually refreshes the page.

                        • CI Merge Trains were way ahead of everything else, but got no love because… they directed the talent who worked on them to lead new product lines. Now Github is coming out with a better feature offering.

                        • Gitlab CI YAML was so rich in features; all it needed was a saner YAML syntax. They did… nothing there. Github Actions came out with proper YAML syntax versioning and a much better plugin ecosystem. BuildKite and CircleCI are also getting a lot better.

                        Here are the things I think Gitlab should never have gotten themselves into producing:

                        • Kubernetes buzz: they used to advertise support for Serverless and WAF, both of which are deprecated today. A waste of effort.

                        • Package registry: they underestimated the serious amount of talent required to build something to compete with Jfrog’s Artifactory and ‘s package registry solution. Their product has very little competitive edge except for “it’s already on Gitlab and we have Gitlab so let’s just use it”.

                        • The things they are trying to get into today:

                          • MLOps: it does not work, don’t make it a thing
                          • Semantic code search: trying to compete with SourceGraph using the search indexing engine that SourceGraph is maintaining?
                          • Remote Dev Containers: competing with Github’s Codespaces, GitPod, JetBrains’ Space, and Replit. If I understand this correctly, Gitlab is starting from behind with 2-3 times less headcount dedicated to this than any of the players in the space.
                          • Security Scanning: this one is actually a sound product strategy, but instead of building things in-house, they should have aimed to enable better integration with 3rd-party solutions.
                        1. 7

                          I mean, if GitLab can’t do it, how can a team of 3?

                          Some of it has got to be the inefficiencies innate in a large organization, or having people who are not enthusiastic about just making a good product. Other parts are the inefficiencies in having engineering time doled out by those who don’t share the priorities of those doing the work.

                          There are many things a small company can do, and do well, that a large company can’t hope to do better. Inertia is a factor in engineering organizations.

                          1. 1

                            Yeah, as an employee of a many-hundred-person company it doesn’t surprise me at all that a team of 3 can outmaneuver a VC-backed corporation. Sometimes when I compare what I can do within the structures that are enforced at work vs what I can do in my free time I feel like I’m a hundred times more productive on my own doing what I want to do. I can think of a few factors:

                            • allowing myself the room to do something right the first time VS cutting corners to hit a management-enforced deadline and claiming we’ll go back later to clean it up but never actually doing it
                            • decisions about what to do next are made by people who use the software every day and possess a deep knowledge of how it works and why
                            • not being affected by political maneuverings of managers trying to advance their own career (obviously larger OSS orgs can have plenty of political drama, but usually this doesn’t manifest until you reach a certain size)
                            • not having to ever use Jira
                          2. 5

                            Worth noting that SourceHut publishes yearly finance reports. In particular, the 2021 report mentions:

                            We have three full-time staff paid under the standard compensation model, and about $700/mo in recurring expenses. Currently we make about $9200/mo from the platform, putting our monthly profit at about $1000, without factoring in consulting revenue. Thus, the platform provides for a sustainable business independently of our other lines of business.

                            1. 3

                              I should have chosen a better word than “afloat.” They charge money to use the service, presumably enough to keep things paid.

                              However, my concern is more with keeping the service actually running given how much it does, and it scaling. There was this a few years ago, that made me wonder if this is the scaling strategy, or if there’s another plan. The site has a large number of services and a very small team. That’s generally a risky bus factor, but I am probably being over cautious.

                              1. 5

                                Bro, they have $1000/month in profit. They’re good. Nothing could possibly go wrong with that massive warchest in reserve. /s

                                For real: one decent lawsuit would bankrupt them.

                                1. 2

                                  What would you sue them for? Daring to compete with our dear overlord Microsoft?

                                  1. 1

                                    Literally anything, make up your own lawsuit. It just needs a sheen of merit. You don’t need to win, just run down the clock so they can’t afford to defend themselves.

                                    Lack of accessibility under ADA. DMCA or copyright violations. Some OSS licensing or patent garbage.

                                    1. 3

                                      Drew has moved to the Netherlands, and it seems the Sourcehut company registration moved with him, so these kinds of US-style trivial lawsuits with huge bills are probably less of an issue.

                                      I get your point, but what other options are there if you’re just $some_guy looking to make a small tech business? Starting any business is always a matter of risk.

                                  2. 1

                                    one decent lawsuit

                                    which would force Dr. Evil to out himself.

                              2. 2

                                I can’t imagine how it won’t all come crashing down under the weight of all that breadth

                                Yeah, but the difference here is that sourcehut is fully open source. If it crashes and burns (and I hope it doesn’t) you can just host it yourself. That’s one of the reasons I switched to sourcehut from Gitlab a while ago, despite being quite happy with Gitlab.

                                I mean, if GitLab can’t do it, how can a team of 3?

                                I don’t mean any disrespect to the folks at Gitlab, but a small team actually gives me confidence. Per Kelly’s rules for the Lockheed Skunk Works:

                                1. The number of people having any connection with the project must be restricted in an almost vicious manner. Use a small number of good people (10% to 25% compared to the so-called normal systems).
                                1. 2

                                  That’s one of the reasons I switched to sourcehut from Gitlab a while ago, despite being quite happy with Gitlab.

                                    You mean like GitLab? I’m sure neither SourceHut nor GitLab is trivial to host, but yeah, both are possible, at least.

                                  1. 3

                                    “GitLab’s open core is published under an MIT open source license. The rest is source-available.”

                                    Not all of GitLab is possible to self-host, at least not at the moment.

                                    1. 1

                                      Fair.

                                2. 1

                                  Looks like gitlab is confident they can, (just) if they charge.

                              1. 5

                                Visiting fair Dublin City, looking forward to a proper Guinness and some live music.

                                Also want to get vpsadminos setup on a VM in the homelab so I can evaluate it. Seems to tick my boxes for hypervisor as a SmartOS replacement. ZFS, NixOS, netboot.

                                1. 3

                                  I always really liked the casual music in Irish pubs; just some people playing music sitting in a regular booth or whatnot. There’s a good vibe to it which beats a “real” concert, and certainly beats playing pop or EDM at a million-and-one dB.

                                  1. 3

                                    Hope you enjoy Dublin, we’ve some great pubs around; just be sure to have a COVID vaccine pass. If you haven’t been before, I’d recommend avoiding areas like Temple Bar, as the pubs there are overpriced and mostly targeted to tourists (there’s a few exceptions, but it’s a good rule of thumb).

                                    Why’re you migrating from SmartOS? I was actually considering a move to it for a system in my homelab. Right now it’s running FreeBSD with manually managed jails / bhyve VMs, which is fine, but I’d like to move to an OS that actually aims to be used primarily as a virtualization host.

                                    1. 2

                                      Thanks, yeah you do. This is my third time visiting; the first time we didn’t leave Temple Bar, the second time I explored a bit more and discovered more of the city. We’ve done a mixture this time, and ended up sampling Irish whiskeys in Palace Bar last night, which was most enjoyable. Peaty Irish whiskey isn’t anywhere near as explosive to the palate as peaty scotch, but it’s still an enjoyable drink.

                                      I’m mostly looking to switch from SmartOS because I don’t work with it day to day anymore, rather than because it’s deficient somehow. It’s definitely easy to administer and does the job, I love the fact it’s simple to recover from boot media failure (flash a new stick), upgrade the base OS (update the stick) and it’s based on ZFS of course. I’m a little less enamoured by managing the VMs, my current method is a ruby library I borrowed from a friend to generate the XML to create VMs. I’ve never managed to find anything better, which given things like terraform exist makes me sad. Not upset enough to invest time in fixing the problem space though, which then makes me sadder.

                                      I like Nixpkgs/NixOS from playing with it and would like more exposure to it, which basing the homelab on it will give me for sure. The homelab is a bit unloved as well; things have been fairly static with it for 3-4 years now, and heading into the winter I’m itching to spend some time working out kinks with it, making sure it’s available on Tailscale properly, and sorting out monitoring/service management. VPSAdminOS appears to tick the boxes of being like SmartOS, but based on Nix/Linux, so my now-usual day-to-day tooling works there easily.

                                      1. 2

                                        We’ve done a mixture this time, and ended up sampling Irish whiskeys in Palace Bar last night, which was most enjoyable. Peaty Irish whiskey isn’t anywhere near as explosive to the palate as peaty scotch, but it’s still an enjoyable drink.

                                        If you’re into whiskeys, then the Dingle whiskey bar is worth a visit if you have the time. They have an extensive selection of world whiskeys there, and the staff are usually happy to recommend. Kennedy’s is my go-to for lunch and a pint on the rare occasion I’m in town these days. Reasonable selection of pub food (their lamb burger is my recommendation; I haven’t had any of their vegetarian options since pre-covid, so I can’t offer any guidance there), and a well rounded choice of beers.

                                        I’m mostly looking to switch from SmartOS because I don’t work with it day to day anymore, rather than because it’s deficient somehow. It’s definitely easy to administer and does the job, I love the fact it’s simple to recover from boot media failure (flash a new stick), upgrade the base OS (update the stick) and it’s based on ZFS of course.

                                        The simplicity of upgrades, and decoupling of the base OS from the VM storage is what attracted me to it. I was also considering Alpine which can function similarly (although with a bit more manual work), but I’ve never been a huge fan of the usual tools for working with KVM (virt-manager / virsh). If the tooling is equally awkward on the SmartOS side, then that levels the playing field a little bit. Maybe I’ll just put a weekend or four into writing my own tooling for KVM and pad out the CV, haha.

                                        I’ll be sure to check out VPSAdminOS; I haven’t really used Nix much but it’s been on my radar for a while.

                                    2. 3

                                      I learned an important lesson from one of the organisers at the conference I attended: Guinness does not travel well. It needs to be moved carefully and then left to settle and after that the pumps and the pouring make a noticeable difference. There’s a huge difference in quality of the Guinness from one pub to the next. I was told to go to a place by the river that doesn’t seem to exist anymore for the best Guinness in the city and so I decided to try it (not really believing that there was much of a difference) and it really was true. The most surprising thing to me was that the bar on the top of the Guinness museum (which, by the way, is fantastic) served mediocre Guinness. I then made the mistake of drinking Guinness again when I got back home. It really doesn’t survive crossing the Irish Sea and being poured like ale.

                                      Where is the best place for Guinness in Dublin now?

                                      1. 1

                                        As someone visiting from the UK, anywhere in Dublin is the best place for a Guinness. Auld Dubliner and Palace Bar both served an excellent Guinness to us last night, there’s probably cheaper places out there if you wander a bit further out of the tourist area.

                                        Guinness definitely doesn’t make it to the UK in anywhere near as enjoyable a state. Whilst I do enjoy a Guinness from time to time at home, I basically visit Dublin (and Belfast to be fair) for a proper Guinness.

                                        1. 1

                                          Guinness definitely doesn’t make it to the UK in anywhere near as enjoyable a state

                                      Getting technical for a second, it makes it there fine. The trouble is that it’s not nearly as popular there as it is here. That means the kegs aren’t as fresh and the lines aren’t cleaned nearly as often; that, combined with the fact that it oxidizes quickly, is what leads to the poorer quality.

                                    1. 8

                                      TL;DR: AOL wanted to stop third-party clients from accessing the AOL Instant Messenger network. It did so by intentionally exploiting a buffer overflow in the official AOL client to perform remote code execution. Third party clients presumably were programmed more safely and/or didn’t want to open themselves up intentionally to RCE and thus would fail this “authentication”.

                                      1. 3

                                        Software in the 90s/early 00s was such a wild west cowboy show.

                                        1. 9

                                          Whereas nowadays we have to download and run proprietary binary blobs in order to view Web content that we’ve already paid for. But only on approved browsers and OSs, of course. Which come with ads and spyware baked in.

                                          Things are so much better in the 2020s …

                                          1. 7

                                            It’s a platform almost entirely built on open standards with a few limited exceptions, sandboxed by default, doesn’t suffer from “buffer overflow of the week” (they still exist, but not as much as in the 90s), and is cross-platform and runs on pretty much anything. Yes, I’d call that an improvement over some Windows-only .exe file with a known buffer overflow as a “feature”.

                                            If you’re annoyed with widevine DRM: yes, it’s annoying. But most of the web functions perfectly fine without it – only Netflix, Spotify, and similar services don’t. It’s not ideal but also not that big of a deal.

                                            1. 1

                                              Yes, I’d call that an improvement over some Windows-only .exe file with a known buffer overflow as a “feature”.

                                              How do you know that there aren’t similar horrors lurking inside that proprietary blob?

                                        2. 1

                                          Note that it relied on jumping to a “call esp” instruction which is not normally generated in executable code; the byte sequence happened to be in a bitmap resource. I’m curious whether this exploit would work on Vista and up - once the CPU/OS implements NX, that bitmap would not normally be executable, so the process would crash before executing the payload. It’s possible to mark any section of a PE as executable, but this would not be done by default for a .rsrc section, so they’d need to specially mark executables to be vulnerable.

                                        1. 11

                                          Unless I’m profoundly misunderstanding something, the only thing this allows is blocking view-source: URLs with the local Chrome URLBlocklist setting.

                                          What it doesn’t do is allow websites to block people from using “view source”: just the person in control of a particular Chrome installation (i.e. the school) can use it.

                                          Is it still a good idea to do this? Probably not. But it’s a lot less dramatic than some of the comments here seem to be implying.
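
                                          For context, this is just the regular enterprise policy mechanism rather than anything a website can control. On a managed Linux install it would look roughly like the below (path and blocklist pattern are from memory, so treat it as a sketch and check the policy docs):

                                          /etc/opt/chrome/policies/managed/block-view-source.json:

                                          {
                                              "URLBlocklist": ["view-source:*"]
                                          }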

                                          1. 8

                                            Flying back to the Netherlands tomorrow (from Indonesia). These kinds of long flights are already … fun; not looking forward to spending all this time with a mask too. But if I stay here any longer I’ll go properly crazy, so I will have to suck it up.

                                            I will miss my cat :-( Actually, that’s a big reason I stayed here as long as I did, which sounds kinda silly but I’ve gotten quite attached to her.

                                            Looking forward to being back home though. Found a place to stay in the countryside (well, as “countryside” as you get in NL) for two weeks and have to see where to go after that. Maybe stay in NL or maybe back to Ireland.

                                            1. 3

                                              How did you like your time in Indonesia? I’m in India, and I’d really like to visit Indonesia once they open up again.

                                              1. 3

                                                For visiting? It’s nice. For living? Not for me. I could never get used to the climate, the cultural/language barrier is fairly significant, and as a white person you’ll always be seen as a foreigner by a significant chunk of people, and you’ll never truly be a “local”.

                                              2. 2

                                                oh, sad to read that you’ll have to depart from your companion cat :(

                                                is there no chance to bring her back with you to the Netherlands? i’ve seen people flying with pets on a transatlantic flight a couple of weeks ago

                                                1. 7

                                                  It’s possible but very difficult. First, you need all the vaccinations, tests, quarantine, etc. Her vaccinations just expired so I need to get that renewed and wait 3 months, then have her tested (there are no EU-approved labs for the tests in Indonesia, so that might be hard). It’s going to take months and months with no guarantee of success. I happen to know some people at the local vet (my ex-girlfriend works there) and asked them, and they didn’t really know either.

                                                  Then you arrive in NL: then what? I will need to find a place which allows pets, which is significantly harder (and more expensive!). I can’t just rent some cheap small apartment somewhere. Cats don’t deal well with change; on my own I have time to find something else (maybe in NL, maybe somewhere else), but with a cat I really need something permanent pretty much from the get-go.

                                                  In short, it’s all going to cost a lot of money that I don’t really have, will be stressful in ways I don’t think I can deal with right now, and will seriously limit things in ways that would be hard to accommodate in my current situation.

                                                  1. 1

                                                    An interesting look into the difficulties moving internationally with pets, thank you for writing this down!

                                                    1. 2

                                                      It’s definitely doable, but just hard. When I lived in New Zealand my American girlfriend there moved her dog from the states to NZ. This was before we met, but from what I gathered it was a long and expensive process. Dogs are probably easier, since they’re less skittish than cats.

                                              1. 13

                                                mostly because /etc/passwd is world-readable by design for some arcane reason that I can’t find on Google

                                                I think it’s so you can map uids to usernames. And read all the other fun stuff like their real name, phone number, office location, etc.
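
                                                  To make the mapping concrete, a couple of commands (assuming a glibc system with getent; the awk line reads the flat file directly):

                                                  # Which username owns UID 1000? getent also consults other NSS
                                                  # sources (LDAP and so on), not just /etc/passwd:
                                                  getent passwd 1000 | cut -d: -f1

                                                  # Same lookup straight from the world-readable file, plus the GECOS
                                                  # field with the "fun stuff" (real name, office, phone):
                                                  awk -F: '$3 == 1000 { print $1, $5 }' /etc/passwd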

                                                1. 5

                                                  Exactly this. There are obviously other options: Yellow Pages, Kerberos, Active Directory, etc, etc, etc. But this is the “I don’t need anything more” directory.

                                                  1. 7

                                                    Very early versions of Unix also had /etc/uids for this purpose, which was just a username↔UID mapping (e.g. dmr:7), but even early Research Unix systems tended to just read /etc/passwd. Some tools still printed “can’t read /etc/uids” as an error, but in fact read /etc/passwd.

                                                    I guess it was just more useful to also get the group ID and other information from one file, and keeping two files in-sync is annoying.

                                                    1. 1

                                                      And not only (username, uid), but also the “full user name”; not sure any tools besides finger used this much. There’s also “home dir”, used for shell tilde expansion, as in ~joe from ~jim’s account. That is all probably early to mid-1980s era stuff.

                                                1. 22

                                                  It’s really interesting to see where they diverge from mainline Lua since they based it on PUC 5.1, which is frankly pretty awful. (LuaJIT also uses PUC 5.1 as a baseline but fixes its most egregious faults, sadly in some ways which are off by default.) For context: when Lua 5.1 came out, Java was at version 5.

                                                  • they fix the issue of resuming coroutines across pcall, which is an enormous problem in 5.1
                                                  • they haven’t backported the ability to iterate with metatables, which is my #1 complaint in 5.1 by a long shot, but they admit that their solution isn’t great and that they might add this later. hope they do!
                                                  • they didn’t backport goto (good!)
                                                  • backported UTF8 and bitwise ops
                                                  • … aaaaaand they somehow decided to remove tail-call optimization

                                                  Honestly that last point just completely kills whatever interest I may have had in this. What the hell.

                                                  1. 8

                                                    … aaaaaand they somehow decided to remove tail-call optimization

                                                    I would have to guess that the aggressive optimizations made to the bytecode / interpreter made it difficult to support generalized TCO (between methods). I don’t see why they can’t rewrite self calls into a loop but maybe it got dropped to avoid explaining that self-calls are fine, but generalized is not?

                                                    Update: Oh, they actually explain it, which I missed:

                                                    Tail calls are not supported to simplify implementation, make debugging/stack traces more predictable and allow deep validation of caller identity for security

                                                    1. 5

                                                      they didn’t backport goto (good!)

                                                      The biggest reason I use goto in Lua is as a “pseudo-continue”:

                                                      for [..] do
                                                          if [..] then
                                                              goto continue
                                                          end

                                                          -- rest of the loop body
                                                          ::continue::
                                                      end
                                                      

                                                      I tried doing without it, but nesting all of the loop contents inside a for, if, if (and similar things) is just awkward.

                                                      1. 3

                                                        IMO code becomes a lot easier to read when you can make certain assumptions about its structure, like the fact that it doesn’t jump around randomly and the fact that the return value always can be found in tail positions. But by supporting early returns Lua has already lost this battle, so the argument isn’t as strong as it is in more expression-oriented languages.

                                                        1. 6

                                                          I find it a lot easier if I can read things from top to bottom without keeping too much in my head. Early return and “early continue” really help with that. Obviously there is no “one right way” because different people read things differently, but I suspect this applies to a significant chunk of devs.

                                                          1. 2

                                                            I’ve noticed that people tend to have surprisingly different opinions on this one. There’s definitely a camp of purely functional programmers who prefer visually nested expressions.

                                                            https://www.teamten.com/lawrence/programming/avoid-continue.html is an interesting bit of writing here which argues against continue, but for early return, from imperative programming perspective.

                                                            1. 2

                                                              From the link:

                                                              In English, the word continue means “go on”.

                                                              It’s funny, because the English word “continue” also requires context. “Continue what, exactly?” In programming, the context is “the top of the closest enclosing loop.” The context is actually probably less clear in most English uses, because English is so often ambiguous!

                                                              (I do theoretically like next as a replacement for continue, but it has the same problem in English, so it’s kind of no different…)

                                                      2. 3

                                                        From my very limited experience with Lua, the biggest problem with the version LuaJIT used was that it couldn’t represent 64-bit integers (which was really important for the kinds of things I wanted to do, which required handling things like 64-bit inode numbers) but with 5.3 you had 64-bit integer and 64-bit float types (I think you can turn off floats for builds in environments that don’t support them?). The biggest advantage of Lua for embedding is that it’s tiny - around 300 KiB for a complete version and tuneable to something smaller if you want a subset. I don’t see any binary size numbers on the Luau page, so I guess it’s aiming for a different use case.

                                                        1. 2

                                                          In LuaJIT you can use any C type via ffi.cdef - including int64_t. Handling unsigned 64-bit integers in LuaJIT was an important feature to me in the pre-5.3 days.

                                                      1. 14

                                                        Site, paper

                                                        This write-up isn’t very concrete; the issue is that this:

                                                        if accessLevel != "user‮ ⁦// Check if admin⁩ ⁦" {
                                                            fmt.Println("You are an admin.")
                                                        }
                                                        

                                                        Seems alright, however, there are some sneaksy control characters in there:

                                                        if accessLevel != "user<RLO><LRI>// Check if admin<PDI><LRI>" {
                                                            fmt.Println("You are an admin.")
                                                        }
                                                        

                                                        Which are:

                                                        • RLO (U+202e) - Render text as right-to-left.
                                                        • LRI (U+2066) - Isolate this text as left-to-right until PDI or newline.
                                                        • PDI (U+2069) - End last LRI, RLI, or FSI.

                                                        This is very difficult to see; Vim shows the RLO as <202e>, but it doesn’t show the other two, although it behaves weird when editing and you can see it if you move the cursor and press e.g. g8 or have the codepoint in the statusline. This probably depends on your terminal too.

                                                        So essentially it renders two blocks of left-to-right text in right-to-left order:

                                                        if accessLevel != "user[// Check if admin][" {]
                                                        

                                                        There are more complex things you can do as well; this was adapted from one of their examples. It seems GitHub presents you with a warning to protect you.

                                                        I’ve always hated these RTL/LTR override things as a user and a developer. Copy one accidentally and everything will behave weirdly and it’s really hard to understand what’s going on; the only reason I know is because I know about Unicode, but most people don’t, and even if you know about Unicode it’s a pain to deal with as you can’t “just delete” or “see” them clearly in pretty much any text input.
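
                                                        As a quick-and-dirty defence you can at least scan a tree for these code points; a rough sketch, assuming GNU grep with PCRE support and UTF-8 files:

                                                        # Flag the bidi controls used in these tricks: U+202A-U+202E
                                                        # (LRE/RLE/PDF/LRO/RLO) and U+2066-U+2069 (LRI/RLI/FSI/PDI).
                                                        grep -rPIn '[\x{202A}-\x{202E}\x{2066}-\x{2069}]' .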

                                                        1. 2

                                                          That actually seems great. Does anybody see any drawback (besides the overhead of starting a subshell) with using this tip?

                                                          1. 9

                                                            Forks are slow so starting a subshell is not an insignificant cost. It also makes it impossible to return values besides an exit status back from a function.

                                                            Zsh has “private” variables which are lexically scoped. ksh93 also switched to lexical scoping instead of dynamic scoping but note that in ksh, you need to use function name { syntax instead of name() { to get local variables.
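
                                                            To make the trade-off concrete, a small sketch of the two styles (plain bash/zsh; the function names are made up):

                                                            # Subshell body: variables, cd, and traps all stay local, and the
                                                            # EXIT trap handles cleanup even on early exit, but it costs a
                                                            # fork and can only hand data back via stdout or the exit status.
                                                            with_tmpdir() (
                                                                tmp=$(mktemp -d)
                                                                trap 'rm -rf "$tmp"' EXIT
                                                                cd "$tmp" || exit 1
                                                                # ... do work here ...
                                                            )

                                                            # Regular function with dynamically scoped locals (bash/zsh; as
                                                            # noted above, ksh93 wants `function name {` plus typeset): no
                                                            # fork, and it can set variables for the caller, but cleanup is
                                                            # manual.
                                                            with_tmpdir_fast() {
                                                                local tmp
                                                                tmp=$(mktemp -d)
                                                                # ... do work here ...
                                                                rm -rf "$tmp"
                                                            }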

                                                            1. 9

                                                              Also, in zsh you can just use always to solve the problem in the article:

                                                              {
                                                                   foo
                                                              } always {
                                                                   cleanup stuff
                                                              }
                                                              
                                                              1. 3

                                                                Every time I learn a new thing about zsh, I’m struck by how practical the feature is and how amazing it is that I didn’t know about said feature the past dozen times I really, really needed it. I looked around the internet for documentation of this, and I found:

                                                            2. 2

                                                              A guy on the orange site timed subshell functions to take roughly twice as long.
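
                                                              Easy enough to get a rough number for your own machine (bash/zsh; the loop count is arbitrary and the numbers will obviously vary):

                                                              f_proc() { :; }
                                                              f_sub()  ( : )

                                                              # Compare plain function calls against forking a subshell per call:
                                                              time ( for i in {1..5000}; do f_proc; done )
                                                              time ( for i in {1..5000}; do f_sub;  done )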

                                                            1. 7

                                                              I cannot find a clearly stated reason why this approach was rejected. I read through that PR, and the communication from the OpenSSL people throughout has been, ehm, concise.

                                                              Nothing is posted in the PR at all except a link to this mailing list message, which simply states “PR#8797 will not be merged and compatibility with the APIs proposed in that PR is a non-goal.” That’s all I can find other than the message from a year and a half ago expressing concerns about API stability. A valid concern as such, but it’s not entirely clear to me this actually is an unstable API. And even if it was: wouldn’t starting from an existing tested implementation be better?

                                                              In short: it seems NIH strikes again.

                                                              1. 23

                                                                I stopped signing stuff because I just couldn’t deal with gpg any more. At some point it broke for mysterious reasons I couldn’t figure out, and I just gave up. I’ve been wanting a better signing scheme for a long time.

                                                                For anyone wanting to try it out:

                                                                ~/.config/git/config:

                                                                [user]
                                                                signingKey = ~/.ssh/id_ed25519
                                                                
                                                                [gpg]
                                                                format = ssh
                                                                

                                                                Commit:

                                                                % git commit -am 'Sign me!' --gpg-sign
                                                                
                                                                % git log --format=raw
                                                                

                                                                commit 74d2eb36642937c31b096419fe882259572e42e3
                                                                tree 7a6a8614e03d217dea76f28edbc6652666932df8
                                                                parent 8c8db6f1bd0ec29cfffc1cf0d0f91f637e8fbd26
                                                                author Martin Tournoij <martin@arp242.net> 1635225059 +0800
                                                                committer Martin Tournoij <martin@arp242.net> 1635225059 +0800
                                                                gpgsig -----BEGIN SSH SIGNATURE-----
                                                                 U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAg6w5WB1nhvFYmOIc/hxLj2dkuME
                                                                 4oQcQrLs1oQsRdZ68AAAADZ2l0AAAAAAAAAAZzaGE1MTIAAABTAAAAC3NzaC1lZDI1NTE5
                                                                 AAAAQDPTXV5wPb0Yzt0VaVpk5/83TKw5MklAb0DkQkVT99Ib+MwaTIirb1kG1m54akzfn+
                                                                 Bb3vV9YYRjjCHnie5ziwU=
                                                                 -----END SSH SIGNATURE-----

                                                                Sign me!
                                                                

                                                                The gpg in a lot of the settings and flags is somewhat odd since you’re not using gpg at all, but I can see how it makes sense to group it there.

                                                                It doesn’t really integrate well with GitHub, but that’s hardly surprising given that this feature is about five hours old 🙃
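
                                                                The verification side needs one extra bit of setup that I left out: an allowed-signers file mapping an identity to a key (the config key is gpg.ssh.allowedSignersFile; the file location here is just my choice):

                                                                % echo "martin@arp242.net $(cut -d' ' -f1,2 ~/.ssh/id_ed25519.pub)" >> ~/.config/git/allowed_signers
                                                                % git config --global gpg.ssh.allowedSignersFile ~/.config/git/allowed_signers

                                                                % git verify-commit HEAD        # or: git log --show-signature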

                                                                1. 2

                                                                  You may find signify of interest. See this post from @tedu for elaboration.

                                                                  1. 2

                                                                    There’s also minisign, which has a few improvements. But can you use it with git (or email for that matter)? Because last time I checked you couldn’t. Nothing standing in the way in principle, but the tooling/integration just isn’t there.

                                                                      1. 1

                                                                        Ah nice, but it’s a bit too hacky for my taste to be honest 😅 Also somewhat hard and non-obvious to verify for other people (philosophical question: “if something is securely cryptographically signed but no one can verify it, then is it signed at all?”)

                                                                  2. 1

                                                                    Does it only support ed25519? I am still on rsa. Btw, your formatting is screwed due to markdown.

                                                                    1. 2

                                                                      Presumably it supports all key types; it just calls external ssh binaries like with gpg. There’s no direct gpg or ssh integration in git (as in, it doesn’t link against libgpg or libssh) and leaves everything up to the external tools. It’s just that I have an ed25519 key.

                                                                      Looks like I forgot to indent some lines; can’t fix the formatting now as it’s too late to edit 🤷

                                                                    1. 5

                                                                      This is not wholly unreasonable, but there are some downsides too, and I don’t see it as so clear-cut.

                                                                      The “refactor” and “change” phases (or hats) aren’t always completely unrelated. Let’s say you want to add feature A and the current design makes this hard or impossible; a fairly common motivation for refactoring, especially in non-public modules. Okidoki, so let’s refactor things to accommodate feature A! The problem with having a completely clean separation between the “refactor” and “implement feature A” stages is that you don’t necessarily know if your refactor actually makes sense unless you also implement feature A, especially if A is somewhat complex. You may refactor too little, or too much. Only by actually implementing feature A can you truly understand the problem at a code level, and only by actually implementing it can you truly know if the API you thought of actually works well.

                                                                      So you kind of need to do both at the same time, at least some of the time – depending on the complexity of the API, complexity of Feature A, your familiarity with the code and problem, team size, frequency of changes, etc. Sometimes it makes sense to then later split out your branch a bit (I’ve done so many times), other times … I don’t know; not so much, where the amount of required time and effort is disproportional to the (potential) benefits.

                                                                      I also think there’s a lot of overhead in this approach in general. His example with different “refactor” and “change” hats and switching between them seems a lot of kerfuffle, and I don’t really see the benefit for simple things like “rename a badly named variable” or “add a guard clause”, especially if it’s in the actual code you’re changing.

                                                                      Another thing this article doesn’t cover: make sure you actually have a reason to refactor things. “I don’t like the way this is written” is a good reason for your hobby project, but not for professionally developed production software. At least half of the refactors I’ve seen are basically pointless rewriting of working code but in a different way because “you ought to do X, Y, Z” without any actual objective reasons. These kinds of refactors are not just a waste of time, but can actually make the code worse by accidentally introducing regressions, increasing churn for the rest of the team (“wait, what happened here? How does this work now?”), etc.

                                                                      1. 2

                                                                        The problem with having a completely clean separation between the “refactor” and “implement feature A” stages is that you don’t necessarily know if your refactor actually makes sense unless you also implement feature A, especially if A is somewhat complex.

                                                                        I’ve also had this experience. In some cases it’s really best to just do the refactor and implementation in one go, but I’ve found that if it’s at all possible, it’s often better to do it incrementally with interactive rebase, where you go back and forth between the clean refactoring branch (or commit) and the feature implementation branch. This certainly eases the code review process and if you ever find a bug, it can be very helpful for a git bisect as well, as the two changes are more self-contained.
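
                                                                        A rough sketch of the kind of workflow I mean, with made-up branch and commit names:

                                                                        # Keep the refactor and the behaviour change as separate commits:
                                                                        git checkout -b feature-a
                                                                        git commit -m "Refactor: extract the foo lookup"   # no behaviour change
                                                                        git commit -m "Add feature A"                      # the actual change

                                                                        # Reorder or squash fixups between the two as understanding improves:
                                                                        git rebase -i main

                                                                        # And if a regression shows up later, the split pays off:
                                                                        git bisect start HEAD main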

                                                                        At least half of the refactors I’ve seen are basically pointless rewriting of working code but in a different way because “you ought to do X, Y, Z” without any actual objective reasons.

                                                                        Ugh, don’t get me started…

                                                                        1. 1

                                                                          Yeah, I’ve done the “write a massive branch with all sorts of things and then split that up into more bite-sized branches/patches”-thing many times.

                                                                          Often that’s worth it, even if just to get a clearer idea of the changes for yourself. Sometimes, not so much. It’s a very different workflow than proposed in the article, where you’re supposed to stop, put on the “refactor hat”, make the refactor, commit that, go back to your “change hat”, etc. If that works for you: great. But I would rarely do it like that.

                                                                        2. 1

                                                                          Conversely… I have been doing battle with a C module that, when I started, was 4k lines long, with massive cyclomatic complexity, huge functions, a massive number of global variables, massive fanout, no encapsulation, no unit tests, no integration tests, no asserts, and poor comments, written by 11 people… all with an attitude of “I don’t have time or a good reason to clean this up now”.

                                                                          That may have been survivable if it was a leaf node… but it sits on top of three different versions of a vast black box library that drives two different hardware devices.

                                                                          Unsurprisingly I tend to get a bit grumpy.

                                                                          1. 2

                                                                            Assuming that’s referring to the last paragraph, it all depends on the code and motivations for the refactor of course. In general I think refactors should have a clear goal in mind:

                                                                            • “It will be hard to add feature X without it”
                                                                            • “This code is a frequent source of bugs”
                                                                            • “This code is hard to understand, and this is a problem”
                                                                            • “This code is hard to modify, and we need to modify it frequently”

                                                                            Some subjectivity is involved of course, especially with the third and fourth items. But it’s still a fairly straightforward calculation; something like: value = (time_saved - time_on_refactor) * risk.
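
                                                                            A rough sketch of that calculation, reading risk as “the chance the refactor lands without collateral damage” (my interpretation; the numbers are made up):

                                                                              # Back-of-the-envelope "is this refactor worth it?" estimate.
                                                                              # "Risk" is read here as the probability that the refactor lands
                                                                              # without regressions or churn for the rest of the team.
                                                                              def refactor_value(time_saved, time_on_refactor, p_goes_smoothly):
                                                                                  return (time_saved - time_on_refactor) * p_goes_smoothly

                                                                              # Code you modify weekly: real time saved over a year, cheap refactor.
                                                                              print(refactor_value(time_saved=40, time_on_refactor=16, p_goes_smoothly=0.9))  # ~21.6

                                                                              # "Ugly but finished" code nobody touches: nothing to save, only cost.
                                                                              print(refactor_value(time_saved=2, time_on_refactor=24, p_goes_smoothly=0.7))  # negative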

                                                                            I’ve seen people refactor code that was admittedly less-than-great, but had nonetheless been running in production for years without major problems, was feature-complete, and didn’t cost the team any time to maintain. Someone needs to fix a minor bug, add a minor feature, or just use the module, and they end up spending hours, days, or even weeks refactoring the damn thing for basically zero benefit. It does carry a risk of regressions though, and other people who were used to the code will no longer understand the new version, etc. In the most extreme cases I’ve seen people refactor code that was actually perfectly fine, but that just didn’t fit some notion of how things “ought to be”, or “didn’t look nice”, or whatnot.

                                                                            If you need to modify it weekly then it’s a different story; this code is active and “alive” and it pays to invest in it.

                                                                        1. 5

                                                                          Yes, you are reading this right: it is scraping debug output to find the pkg-config search path.

                                                                          I could barely read the thing at all; it looks like gibberish.

                                                                          1. 2

                                                                            It’s just some m4 mixed in with shell scripting mixed in with awk. What’s so hard about that? 🙃

                                                                          1. 3

                                                                            Maybe instead of ranting, first investigate the history behind it, since this used to work. There must have been a good reason this was changed (perhaps working around systems which are broken in some way).

                                                                            1. 7

                                                                              This is the thread that prompted the change. Basically, some installations use non-standard paths and ncurses needs a way to detect this.

                                                                              Looks like Thomas just couldn’t figure out how to get this information from pkg-config/pkgconf cleanly: “I don’t see a way to ask pkg-config what its built-in search order happens to be”, so he fell back to this rather ugly method. I doubt he was happy with it in the first place. A mistake on his part for not finding the correct way of doing things, but these things happen to the best of us.
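
                                                                              For what it’s worth, I believe both pkg-config and pkgconf ship a pkg-config.pc whose pc_path variable is exactly the built-in search path, so something along these lines should do the trick (a sketch, not what ncurses ended up doing, and I haven’t checked how far back this works):

                                                                                import subprocess

                                                                                # Ask pkg-config about itself: the pc_path variable of its own .pc file
                                                                                # is the built-in search path, no --debug scraping required.
                                                                                result = subprocess.run(
                                                                                    ["pkg-config", "--variable", "pc_path", "pkg-config"],
                                                                                    capture_output=True, text=True, check=True,
                                                                                )
                                                                                print(result.stdout.strip().split(":"))
                                                                                # e.g. ['/usr/lib/x86_64-linux-gnu/pkgconfig', '/usr/lib/pkgconfig', '/usr/share/pkgconfig']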

                                                                              His reply to this rant made me laugh. Paragraphs of text wasted, when three lines would have been enough.

                                                                            1. 10

                                                                              Speaking as someone who works with an EBCDIC platform: the default US EBCDIC codepage (CCSID 37, at least on IBM i) is capable of representing accents.
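
                                                                              You can check this from any machine with Python, which ships a cp037 codec (corresponding to CCSID 37, as far as I know): accented Latin-1 characters round-trip just fine.

                                                                                # CCSID 37 (EBCDIC US/Canada) covers the full Latin-1 repertoire,
                                                                                # accents included; Python exposes it as the "cp037" codec.
                                                                                name = "Hélène Noël"
                                                                                encoded = name.encode("cp037")
                                                                                assert encoded.decode("cp037") == name
                                                                                print(encoded.hex())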

                                                                              1. 7

                                                                                According to the bank’s lawyer, accents weren’t possible in 1995, when the system was put into service, and they “have been added since”. In 1995 it was “technically impossible to use accents”.

                                                                                A quick search, however, reveals that e.g. codepage 01047 was published in 1991, so I’m calling bullshit. They also store names in all-caps by the way, and I’m fairly sure you could do at least lower-case letters well before that. They also pull the “it’s based on punch cards!” argument. Technically correct, but so is ASCII, and by extension UTF-8. We still have the delete control character in that weird position because delete meant “punch all the holes”.
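
                                                                                (You can see that paper-tape heritage directly: DEL sits at 0x7F because that’s every hole punched, which is how you obliterated a mistyped character on tape.)

                                                                                  # ASCII DEL is 0x7F: all seven bits set, i.e. "punch all the holes".
                                                                                  print(format(ord("\x7f"), "07b"))  # -> 1111111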

                                                                                I suspect they just migrated from a previous IBM system and that this is the real reason.

                                                                                From the article:

                                                                                Look, I’m not a lawyer (sorry mum!) so I’ve no idea whether this sort of ruling has any impact outside of this specific case.

                                                                                It sets a precedent in Belgium, not in other countries.

                                                                                1. 7

                                                                                  In the mainframe/legacy world, it’s totally plausible they couldn’t update their systems to use the new codepage between ’91 and ‘95. It’s not so much about updating a table that stores the customer’s name.

                                                                                  There are probably hundreds of tables that store the customer’s name, and thousands of data feeds and interfaces that would need to be updated: first to be aware of codepages in the first place, then to support the extended character sets.

                                                                                  And yeah, 100% guaranteed they migrated from an older system. That’s what mainframes do.

                                                                                  1. 5

                                                                                    Precedents de jure do not exist in EU law; it’s a civil-law system, not a common-law one. Not even another court in Belgium is bound by this ruling. Precedents do exist de facto though, in that courts are encouraged to look at other rulings, which means this ruling can reasonably be used to interpret the GDPR in another country as well.

                                                                                    See: https://academic.oup.com/icon/article/12/3/832/763797

                                                                                    It’s a legal corner case though, and in my view every customer would need to sue (IANAL, etc.), so it may be feasible to just eat the cost.

                                                                                    1. 1

                                                                                      Maybe “precedent” isn’t exactly the right word, but previous rulings are usually considered AFAIK; there are a bunch of references to previous ones in this ruling as well. And while not legally binding in the same way as in e.g. the US, this ruling does empower Belgian consumers in similar cases to some degree.

                                                                                      1. 6

                                                                                        Here’s a bit of precedent from Finland: the Parliamentary Ombudsman found in 2018 that the population register using ISO-8859-1 violated the Sami people’s rights because not all Sami names can be expressed in that encoding and it would have been possible to use Unicode. https://www.oikeusasiamies.fi/en/web/guest/-/vaestorekisterikeskus-laiminloi-saamelaisten-oikeudet

                                                                                1. 4

                                                                                  If you live in a dormitory or if you get internet access via wifi from your neighbor or via tethering to your phone, you may not be able to perform the necessary modifications to your router’s configuration even if you knew how.

                                                                                  Another use case is things like carrier-grade NAT, where you share an IP address with multiple people. I can play with my router settings all I want, but there’s no way I can expose any service to the internet. Workarounds exist, but they’re a bit annoying at times.
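
                                                                                  As a rough illustration, you can often tell you’re behind CGNAT because the address on the router’s WAN side falls inside the RFC 6598 shared range (100.64.0.0/10) and doesn’t match your public address. A small Python sketch (the WAN address is a placeholder, and api.ipify.org is just one of many “what’s my IP” services):

                                                                                    import ipaddress
                                                                                    import urllib.request

                                                                                    # RFC 6598 reserves 100.64.0.0/10 for carrier-grade NAT.
                                                                                    CGNAT_RANGE = ipaddress.ip_network("100.64.0.0/10")

                                                                                    # Placeholder: in practice you'd read this off the router's status page.
                                                                                    wan_ip = ipaddress.ip_address("100.72.13.5")

                                                                                    public_ip = ipaddress.ip_address(
                                                                                        urllib.request.urlopen("https://api.ipify.org").read().decode().strip()
                                                                                    )

                                                                                    if wan_ip in CGNAT_RANGE or wan_ip != public_ip:
                                                                                        print("Probably behind CGNAT; forwarding ports on the router won't expose anything.")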

                                                                                  Note that http://greenhouse.io already exists. If this takes off you may find yourself on the receiving end of some unpleasant letters. Just FYI.