1.  

    To some extent I agree, but too much code in a constructor is also quite often an indicator that the code should be refactored. The convenience the RAII principle provides also helps avoid coding and resource-management mistakes/bugs. In C++, exceptions are meant for exceptional circumstances, and they are quite expensive in binary size; at least that was my experience when I used to work on some bigger applications. On the other hand, I have been writing a lot of Python code in the last few years, and there exceptions are the de facto way to handle errors. And it’s very much not a problem to use exceptions for non-exceptional use cases.

    In C++ I would rather argue that the approach of acquiring resources during initialization makes sense because it reduces the need for checks that could lead to wrong code paths, especially during destruction. When applied correctly, the RAII principle makes your program more robust. I recommend reading the Exceptional C++ books by Herb Sutter; they are still among the best sources on writing C++ code that handles exceptions properly.
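    To make that concrete, here is a minimal sketch of the RAII idea (the File class is my own illustration, not from any particular codebase): the constructor acquires the resource and throws on failure, so a fully constructed object always owns a valid handle, and the destructor releases it unconditionally – no state checks are needed anywhere in between.

    ```cpp
    #include <cstdio>
    #include <stdexcept>

    // Hypothetical RAII wrapper: acquire in the constructor, release in
    // the destructor. A fully constructed File always owns a valid handle.
    class File {
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {
            if (!f_) throw std::runtime_error("open failed");  // no half-open state
        }
        ~File() { std::fclose(f_); }            // runs even during stack unwinding
        File(const File&) = delete;             // owning type: forbid copies
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    private:
        std::FILE* f_;
    };
    ```

    If construction throws, no object exists and the destructor never runs, so there is nothing to double-close – which is exactly the robustness argument above.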

    Complexity in constructors and destructors is something to avoid, which, as I understand it, is your main criticism of the principle.

    1.  

      it’s very much not a problem to use exceptions for non exceptional use cases.

      Agreed. Many dynamic programming problems are a good fit for backtracking with longjmp/“exceptions” which is how you’d write it in assembly anyway.

      Lately (well, sometime in the last couple decades) I’m thinking “exceptions” aren’t a good use for exceptional situations either. That is to say, we’re using exceptions exactly backwards.

      One of the best examples I’ve come up with is the out-of-disk-space error. When you’re out of disk space, many programs will give up or crash – but consider the ergonomics of a CAD or video editing tool that, upon save, starts moving a multi-gigabyte asset out to disk over what’s perhaps several minutes and then – oops! – the partition is full.

      If we unwind, the temporary file needs to be cleaned up (deleted) and the user informed so they can make room somehow – none of these tasks is straightforward – and every part of the code opening files to save this large asset needs to be in on it, which is often tricky if the file saving was done by a different team than the UI team (who handles the exception, and thus deletes the files).

      But what if we use a handler? We call the handler from deep in the file-saving code and tell it “we’re out of disk space”. We can easily offer a callback to “retry”, and a list of any temporary files created so far so that they can be deleted (or moved onto another disk, or whatever). This could be provided by the UI team, but if not, the operating system could have a crude “abort retry fail” dialog. We live in a multitasking world these days, so that’s probably good enough for a lot of enterprise applications – the user goes and moves those files around and then resumes the save (by returning from the handler).
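      A sketch of what such a handler interface might look like (all names here are invented for illustration): the deep saving code reports “disk full” to a registered callback together with the temp files written so far, and only resumes – or gives up – based on what the handler returns.

      ```cpp
      #include <functional>
      #include <string>
      #include <vector>

      // What the handler may ask the saver to do after a "disk full" report.
      enum class SaveAction { Retry, Abort };

      // The handler sees the temp files written so far and can free space
      // (modeled here as growing `capacity`), then asks the saver to retry.
      using DiskFullHandler = std::function<SaveAction(
          const std::vector<std::string>& temp_files, int& capacity)>;

      // Deep in the saving code: on a simulated "out of space", call the
      // handler instead of unwinding, and resume when it says Retry.
      bool save_asset(const std::vector<int>& chunks, int capacity,
                      const DiskFullHandler& on_disk_full) {
          std::vector<std::string> temp_files{"asset.part"};
          int used = 0;
          for (int chunk : chunks) {
              while (used + chunk > capacity) {          // would overflow the disk
                  if (on_disk_full(temp_files, capacity) == SaveAction::Abort)
                      return false;                      // caller deletes temp_files
              }
              used += chunk;                             // write proceeds
          }
          return true;
      }
      ```

      A handler that returns Retry without actually freeing any space would loop forever here; a real interface would need to guard against that.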

    1. 14

      What’s the point though? If you don’t want to use it, why bother? Why do you care what “Linux users seem to be so obsessed” with?

      The world would be a lot better without this kind of hate/phobic trash. Doesn’t matter if it’s desktops or dick, nobody should care about what another person likes if it’s not hurting anyone else.

      1. 13

        The post’s main points are 1. not knowing why people use lightweight Linux distros, 2. imagining that the only reason is performance, and then 3. dismissing that argument based on the author’s own laptop. It sounds like the author didn’t read about the distros, talk to the users, or otherwise extend the slightest bit of charity to the thousands of people who have organized dozens of projects over the last 20 years to build and use these distros – maybe they have their own experiences and do things for reasons, even if he can’t guess at them and wouldn’t be motivated by them. It’s frustrating because when I realize I’m ignorant of a group that’s made totally different decisions about a familiar topic, I see an opportunity to gain a deeper appreciation by incorporating a new perspective. It’s not just that this article fails an ideological Turing test; it’s that it seems oblivious to the possibility of learning from others.

        (And: what does the author think the word “laborious” means? Nothing in the article touches on anything like toil, industriousness, or painstaking care.)

        1. 4

          On top of that, the author points out his hardware is below average. Several of us had a Core 2 Duo with 4GB of RAM as our best machine. Those 1GB apps certainly add up on that. When it broke, the backup I had available was a Celeron I bartered for. Web apps tax the crap out of that. I don’t even try playing good games on it. The money that could’ve gone to a good PC had to go to savings for car repair or replacement. I’m not sure what percentage of us are using older systems due to financial hardship. There’s quite a few of us out there, though.

          There are also people that try to hold onto old systems as long as possible to maximize ROI and reduce waste. I’m in that category, too. It’s why I had my Core 2 Duo for a long time. It ran great unless an app or the OS truly wasted resources. “Lightweight” apps ran even better. We try to repurpose or donate these machines when they don’t meet our needs. Maybe a student would appreciate building up some skills on a lightweight Linux rather than having no PC at all. Some school with little money might also find it useful at least some of the time.

      1. 10

        Would you tell the community so that other devs don’t have to go through the same problem?

        1. Switch to a bank that does invoice factoring: They pay roughly 85% of the invoice value when you invoice, so you need to raise your rates by 15%.
        2. …then you can offer a 15% discount to any company who would prepay.
        3. Consider subcontracting instead of referrals: It makes it easier for you to get the revenue you need for step #1, it makes it easier to engage a lawyer if it comes to it.

        I am asking what you would do in my situation, since in general controversy in business is always a bad idea, even if you are the injured party. … The client is a startup that is about to do an IPO. … Some of their investors have a relationship with a very well known and established business magazine. I am not sure they are aware of this.

        I would find out which company is helping them do the IPO and talk to them. I might find a commercial debt collection agency which is willing to take my case, then contact as many trade journalists and investors as I can to explain that they’re really raising money to prepare for a lawsuit against creditors.

        Then I might call the PR office these articles are coming from, and say that I want to make sure this wasn’t all some “honest mistake” on their part. Obviously they need to pay the full invoice immediately plus any expenses I incurred in this process, but it’ll also be good if they compensate me for any stress I had to endure.

        If you need help with this, reach out to me privately.

        Have you used a debt collection agency?

        Yes. In general, they’re worth about as much as you pay them, but apart from the case above I wouldn’t bother with them, since it doesn’t sound like this company is unaware of its debt to you; it is being wilful.

        It doesn’t sound useful in this case, but if I’m really worried someone is going to go bankrupt I get on DeBN. I might do that anyway since it’s easy. If they file for bankruptcy, I simply prepare a proof of claim and deliver it to the clerk. Only then do I engage a commercial debt collection agency. If they’re quiet and there’s no bankruptcy, I’ll keep adding interest until it becomes worth my time (to hire a lawyer).

        Have you filed a legal complaint against a company before?

        I’ve served notice to companies, and that’s generally all it takes. Companies keep the registered agent information up-to-date in a way they neglect for the accounts payable department.

        Suing for unpaid debt in the USA is actually very easy:

        1. Send a registered letter to the company’s registered agent informing them of their debt and that you will attempt to recover it if they do not respond in 30 days. They have a right to deny the debt, and they have a right to request an alternate way to resolve it. A lawyer should not charge you more than $1k to write such a letter. They may incur $2-5k in additional costs responding to replies, and of course: talking to you.
        2. File the complaint with the court. If you file federally it will cost $400. If you are a US corporation, then your costs will vary based on the state you choose as “home”. It will be less controversial to file in the defendant’s state, but it may be cheaper to file in another.
        3. Serve them the complaint. A routine serve should cost between $50-$75.
        4. You will receive a court date. You and your lawyer will need to show up. A lawyer may cost a few thousand dollars to show up and prepare (you will want them to prepare).
        5. If they don’t show up (for example because you served the complaint by singing telegram) you can file a request for default judgement. This will cost another $400 for the filing, but you won’t need as expensive a lawyer to show up.

        Between travel, filing, and so on: $15k sounds extremely reasonable for your situation. I’ve never had to finish a court case I’ve started – I have however been sued and won, so I still have some idea what the entire process looks like.

        Learning how to talk to lawyers and use them effectively is a very useful skill. It can easily mean the difference between spending $20k and $200k. A key thing is that they cannot tell you what to do. They can give you your options and point you to the most immediate one, but they won’t tell you to go nuclear early, and because you’re outside the US this might be your best move: You will have to tell them that you want to do it.

        Giving them a sob story is also a good way to spend a lot of money. If you just say “here’s my invoice, I want to collect it” it’ll be much cheaper than if you tell them everything you told us.

        1. 1

          THANKS! I will reach you via private message.

        1. 2

          I’ve never heard of this joke, but if you want to find a cycle in a linked list, the canonical way to do it is to use two pointers and have one walk one step at a time, while the other walks two. If they are ever equal after they take a step, there’s a cycle. Using signals is – I think it’s safe to say – way over the top. (Unless that’s the joke.)
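          For reference, the two-pointer check sketched in code (the node type is invented for illustration):

          ```cpp
          // Floyd's "tortoise and hare": walk one pointer by one step and the
          // other by two; on a cyclic list they must eventually meet.
          struct Node { Node* next; };

          bool has_cycle(const Node* head) {
              const Node* slow = head;
              const Node* fast = head;
              while (fast != nullptr && fast->next != nullptr) {
                  slow = slow->next;              // one step
                  fast = fast->next->next;        // two steps
                  if (slow == fast) return true;  // equal after a step: cycle
              }
              return false;                       // fast fell off the end: no cycle
          }
          ```

          It runs in O(n) time and O(1) space, which is why it is the canonical answer.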

          1. 8

            That is the joke. The way I heard it was: keep free()ing the nodes, and if there’s a crash (due to double-free) you found a cycle.

            1. 2

              I’ve heard of the two-pointer approach (with pointer variables typically given the names “tortoise” and “hare”), but I really like the double-free approach.

              1. 1

                Of course, free(x) could be a no-op if you have a garbage-collecting C (like Zeta-C).

              1. 3

                Huh. If it’s fine to create your own “mediation pipe” to satisfy the API, and then splice(in, pipe); splice(pipe, out) … why doesn’t the kernel support that itself?
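                Roughly what that mediation looks like (Linux-only; copy_via_pipe is an invented name, and error handling is minimal):

                ```cpp
                #include <fcntl.h>
                #include <unistd.h>

                // splice() requires one end of the transfer to be a pipe, so to
                // move data file-to-file we route it through a pipe we create
                // ourselves: splice(in, pipe) then splice(pipe, out).
                ssize_t copy_via_pipe(int in_fd, int out_fd, size_t len) {
                    int p[2];
                    if (pipe(p) < 0) return -1;
                    ssize_t total = 0;
                    while (static_cast<size_t>(total) < len) {
                        // splice(in, pipe): pull the next chunk into the pipe.
                        ssize_t n = splice(in_fd, nullptr, p[1], nullptr,
                                           len - total, SPLICE_F_MOVE);
                        if (n < 0) { total = -1; break; }
                        if (n == 0) break;                  // EOF on input
                        // splice(pipe, out): drain that chunk into the output.
                        ssize_t left = n;
                        while (left > 0) {
                            ssize_t m = splice(p[0], nullptr, out_fd, nullptr,
                                               left, SPLICE_F_MOVE);
                            if (m <= 0) { total = -1; break; }
                            left -= m;
                            total += m;
                        }
                        if (total < 0) break;
                    }
                    close(p[0]);
                    close(p[1]);
                    return total;
                }
                ```

                The data still stays in the kernel, but the extra pipe is pure bookkeeping – which is presumably why one might expect the kernel to do it internally.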

                1. 1

                  Maybe it’s because errors need to be reported on the pipe instead of on in or out?

                1. 1

                  There is no way to measure the number of times a billboard on the side of a highway has been looked at.

                  Companies like Verizon Precision Marketing do exactly this: They monitor how many people walk or drive by (facing) a given billboard using radio data from mobile phones.

                  It is not possible to estimate the percentage of people who glanced at a magazine ad and subsequently bought the product.

                  Of course it is. You try a new publisher for three months and look at the sales uptick.

                  You can estimate, sure, but it is an inexact science.

                  Digital is in the same boat: An “impression” can be caused by your cat.

                  I think we all know this though.

                  The amount of personally identifiable information companies have about their customers is absolutely perverse.

                  It’s also of very low precision. So much so that a well-targeted ad actually makes the news. If you don’t use an ad blocker, you can see the topics Oracle/BlueKai are monitoring at http://bluekai.com/registry and while there are some aggressive persona-based approaches to the data, they’re only good in the statistical sense.

                  Publishers will call [blocking ads] unethical…

                  I know one gaming review site that sees 50% of their users with ad blockers. Their strategy? Double the ads. More clickbait. Reduce the quality year on year. But what can we do?

                  It’s a dark path, but it’s difficult to get anything else out there.

                  It is the publisher’s responsibility to develop a business model that is sustainable and ethical.

                  Says the man who uses Google and Twitter trackers on his own content.

                  Small publishers lack the ability to do this, so the “end game” is an “Internet” with Google, Facebook/Instagram, and Twitter. That’s it. I don’t want that Internet. I think it’s a horrible place.

                  Something else to consider: Chrome, Firefox, node.js, and literally everything you think you like is funded by advertising either directly, or by “speakers” and “contributors” who work for companies that do advertising.

                  Advertising has a long tail – the current state of digital marketing is a long way from sponsorship – but unless we too make suggestions on how publishers can get paid, they’ll just keep circling the drain as long as they can.

                  1. 0

                    The personally identifiable information Facebook and Google have about me is certainly not low precision. Just because they don’t target me doesn’t mean that they don’t have the data. Facebook not allowing highly targeted ads anymore doesn’t mean that the data they could use to make those highly targeted ads has gone away.

                    I have a lot more confidence than you do, I guess, that people would subscribe to things if they had good options to subscribe to. People said for years that they’d stop torrenting if they could watch good movies on a paid subscription service, or listen to music in decent quality with a subscription, and people said they were the exception if they were even telling the truth. But with the popularity of Spotify and Netflix and such, it’s proven to be true. I don’t know anyone that torrents most of the stuff they watch and listen to, and those that do still torrent things are downloading TV shows that aren’t on New Zealand Netflix, music that is blocked on New Zealand Spotify, etc. – old BBC TV shows that haven’t been on in years and aren’t even available on DVD. Not out of laziness or greed, but because it’s literally impossible to get them any other way.

                    Now sure, Spotify does have a free advertising-supported mode, but nobody wants frequent repetitive advertising while they’re listening to music so they pay the subscription fee. And Netflix is subscription-only. People are fine with subscribing to things. People would subscribe to news media again if they produced news media of decent quality. I’d subscribe to the newspaper again if half the pages weren’t advertisements and it stopped getting thinner every year. And I’d definitely subscribe to news websites if they weren’t all free, ignoring for a moment that the best quality news website in NZ by far is the Government-operated RadioNZ.co.nz, the rest essentially being tabloid junk.

                    As for ‘literally everything you think you like is funded by advertising either directly [or indirectly]’, I certainly would not say I like Firefox, Chrome or node.js. There are non-advertisement-funded alternatives to most things, and if there aren’t, there could be. There’s no real reason that Firefox couldn’t be free but with a subscription if you want to have your tabs/settings/bookmarks/etc. synced between devices, except that it’s competing with browsers for which that isn’t true. But if they weren’t competing with ‘make everything we offer free so that competing with us is nigh-impossible’ Google it could become a viable model. Drew DeVault’s sr.ht plans to have a subscription model instead of the VC funding model of GitHub. Free software was completely fine for my purposes when it was all developed by volunteers because they enjoyed doing it. If anything it was better before the corporatisation of the Linux world.

                    So I rather heftily reject the claim that everything I like is funded by advertising. In fact, the things funded by advertising tend to be crap designed to appeal to the lowest common denominator because numbers are more important than anything else in advertising.

                    In theory the poor benefit from ad-funded services because they use them for free while richer people pay for the things being advertised, but in practice let’s be honest: Coke doesn’t advertise so it will appeal to rich educated people.

                    1. 2

                      Netflix absolutely advertises, and would not exist in its current form without previous successes with advertising.

                      So I rather heftily reject the claim that everything I like is funded by advertising.

                      It is difficult to pull apart your argument. I think at the beginning you’re talking about hypersegmentation, and then later I believe you’re confusing sponsorship with advertising. You agree that Firefox and GitHub are funded by advertising – but do not believe they need to be, and yet:

                      the things funded by advertising tend to be crap designed to appeal to the lowest common denominator because numbers are more important than anything else in advertising.

                      I don’t understand how to approach this: What “numbers” are you talking about?

                      I suspect we’re talking past each other.

                      Coke doesn’t advertise so it will appeal to rich educated people.

                      Coca-Cola spends almost $4bn a year on marketing and advertising, and that includes literally everything they do from television to coupons.

                      1. 0

                        Netflix absolutely advertises, and would not exist in its current form without previous successes with advertising.

                        I’ve never seen an ad for Netflix anywhere. They’re popular because they are of high quality, have a broad range of programmes, and are cheap and don’t have ads.

                        It is difficult to pull apart your argument. I think at the beginning you’re talking about hypersegmentation, and then later I believe you’re confusing sponsorship with advertising. You agree that Firefox and GitHub are funded by advertising – but do not believe they need to be, and yet:

                        What? What are you even talking about? I’m talking about advertising. Firefox and GitHub are funded by advertising and VC respectively, and neither are necessary.

                        I don’t understand how to approach this: What “numbers” are you talking about?

                        The number of people looking at the advertisements is all that advertisers care about.

                        Coca-Cola spends almost $4bn a year on marketing and advertising, and that includes literally everything they do from television to coupons.

                        To appeal to the kind of people that drink coke i.e. poor and uneducated people.

                        1. 2

                          To appeal to the kind of people that drink coke i.e. poor and uneducated people.

                          I live in an area (Mid-South) where people’s economic class varies incredibly. I can tell you poor people buy way less Coca-Cola than everyone else in my area, since its strong brand and deals with retailers keep its prices super high. They usually drink stuff like Sam’s Cola, Big K, cheap juices, and so on, with Coca-Cola more sparingly. It’s working-class and up that drink Coca-Cola, because its advertising and existing customer base pulled them into a substance that’s highly addictive. Then many are basically junkies. Drugs work on all classes. Most that I see drink it are well-educated, too; they like its taste and mental high, and were introduced to it by parents or friends, like the lower-class people.

                          So, that claim is just bullshit through and through in this area – probably most areas, as anyone studying marketing, especially of addictive products, knows they target the emotional rather than the rational aspects of the brain. Most people aren’t doing careful analysis of what they drink. They buy it for irrational reasons, then continue drinking it for irrational reasons. But they like the product for those reasons. And advertisers play on those thinking patterns and/or impulses.

                          1. 2

                            Netflix absolutely advertises, and would not exist in its current form without previous successes with advertising.

                            I’ve never seen an ad for Netflix anywhere.

                            https://www.mediapost.com/publications/article/313753/netflix-spent-more-than-1b-on-advertising.html

                            Or did you actually believe that because you have never seen an ad that it somehow means that Netflix doesn’t advertise?

                            They’re popular because they are of high quality, have a broad range of programmes, and are cheap and don’t have ads.

                            My understanding is that they broke into the Blockbuster “monopoly” with a combination of a shorter supply chain and aggressive advertising.

                            Why exactly do you think they would have been in a position to launch their “high quality broad range of programmes” online service against a number of incumbents without advertising?

                            Or are you imagining some fantasy world where there is no advertising and everyone is on equal footing because of that?

                            If so, I’m not interested in that conversation because it’s pointless: We’ve had advertising longer than we’ve had the printing press.

                            The number of people looking at the advertisements is all that advertisers care about.

                            That’s not true.

                            Advertisers represent a wide range of interests from brand awareness to some action (direct response, sales uplift, etc). Incentive marketers are often interested more in the response than in their own brand (since the respondent is unlikely to recognise it). Content marketers are interested in shifting discussion points and trending (especially in news). And so on.

                            To appeal to the kind of people that drink coke i.e. poor and uneducated people.

                            Why exactly do you think Coke doesn’t market to any demographic except “poor and uneducated people?”

                            It is difficult to pull apart your argument.

                            What? What are you even talking about?

                            I’m trying to understand your blathering and you’re not making it easy.

                            Can you restate your point more carefully and succinctly? It’s all over the place.

                            1. 0

                              Or did you actually believe that because you have never seen an ad that it somehow means that Netflix doesn’t advertise?

                              If Netflix advertised in New Zealand I would be fairly likely to have seen it advertising in New Zealand. New Zealand’s media landscape is pretty small: there aren’t a huge number of TV channels or newspapers, there aren’t a lot of different places to advertise, and I live in one of our biggest cities. They might advertise, but even if they do, it’s small enough that I haven’t noticed, and I absolutely still keep to my claim that their popularity here is due to quality and price, not advertising. They didn’t need advertising to become popular. You never do if your product is any good.

                              Or are you imagining some fantasy world where there is no advertising and everyone is on equal footing because of that?

                              That’s a much more ideal world than the one we’re living in now. Certainly advertising should be made illegal; it’s psychologically manipulative. Advertising tobacco is illegal pretty much everywhere, alcohol advertising is pretty restricted, and advertising medicine is illegal everywhere that isn’t the US or NZ as far as I’m aware. It would be great to extend that to a blanket ban on advertising.

                              If so, I’m not interested in that conversation because it’s pointless: We’ve had advertising longer than we’ve had the printing press.

                              Something being around for a long time doesn’t make it good or immune to being banned today. We had lead in petrol for a long time, we’ve had murder for a long time. It was fully expected and condoned for soldiers in war to rape and pillage wherever they went, until we decided as a society that wasn’t okay.

                              Why exactly do you think Coke doesn’t market to any demographic except “poor and uneducated people?”

                              I think that’s pretty obvious. They obviously aren’t going to say so, but it’s their entire brand.

                              I’m trying to understand your blathering and you’re not making it easy.

                              This forum is meant to be polite, so I’ll try to be polite in saying this. I’m not blathering, nor is my point ‘all over the place’. Your lack of understanding is more indicative of you than it is of me.

                              My point was extremely clear: the world doesn’t need advertising, and open source doesn’t need advertisement funding or corporate funding. You claimed, wrongly, that ‘everything you think you like is funded by advertising either directly or indirectly’. That’s simply wrong, on the face of it, for obvious reasons. Some are, a lot aren’t, clearly and obviously it’s not the case that literally all of them are. Of those that are, none of them need to be funded by advertising. That’s a pretty clear and simple point.

                              1. 3

                                If Netflix advertised in New Zealand I would be fairly likely to have seen it

                                Here’s a billboard in New Zealand paid for by Netflix. You’re welcome.

                                I absolutely still keep to my claim that their popularity here is due to quality and price, not advertising.

                                Netflix wouldn’t exist without advertising full stop, let alone in New Zealand, so I disagree, but why are you arguing about this?

                                Why exactly do you think Coke doesn’t market to any demographic except “poor and uneducated people?”

                                I think that’s pretty obvious. They obviously aren’t going to say so, but it’s their entire brand.

                                It’s not obvious to me (who has worked with Coca-Cola’s marketing team in the past), or the first three links on a Google search for Coca-Cola’s target market.

                                Coca-Cola spends around 10% of its revenue on marketing. With almost $565m spent on marketing in the US alone, I find it very difficult to believe their only target market is “poor and uneducated people”.

                                To put that in perspective, Google spends around $350m in the US, and I don’t know anyone who believes that only “poor and uneducated people” use Google in the US…

                                If you have an interesting point, you should get to it: It’s certainly not obvious that Coke only market to “poor and uneducated people”, and more to the point: I don’t even believe that it’s true.

                                the world doesn’t need advertising, and open source doesn’t need advertisement funding or corporate funding.

                                Great.

                                How do we get there from here?

                                That can be an interesting discussion. Pretending we’re not currently dependent on advertising to produce good products isn’t productive.

                                You claimed, wrongly, that ‘everything you think you like is funded by advertising either directly or indirectly’. That’s simply wrong

                                It may be morally wrong, but it’s not incorrect.

                                I’m happy to talk about the former with you, but you’re not equipped to discuss the latter.

                                This forum is meant to be polite, so I’ll try to be polite in saying this. I’m not blathering, nor is my point ‘all over the place’. Your lack of understanding is more indicative of you than it is of me.

                                After being here three days, do you think you should be telling people what this forum is “meant to be”?

                                Are you furthermore calling me stupid for not understanding what your point is?

                                If so, you can go back to reddit.

                                1. -1

                                  being here three days

                                  How long someone has been commenting is not the same as how long someone has been here. Not interested in continuing a conversation with someone so rude.

                                2. 0

                                  I think that’s pretty obvious. They obviously aren’t going to say so, but it’s their entire brand.

                                  Why learn anything when you already know everything?

                      1. 10

                        Russ Cox wrote:

                        At Bell Labs, Rob switched acme and sam from black and white to color in the development version of Plan 9, called Brazil, in the late fall of 1997. I used Brazil on my laptop as my day-to-day work environment, but I was not a developer. I remember writing Rob an email saying how much I enjoyed having color but that it would be nice to have options to set the color scheme. He wrote a polite but firm response back explaining his position. He had worked with a graphic designer to choose a visually pleasing color palette. He said he believed strongly that it was important for the author of a system to get details like this right instead of defaulting on that responsibility by making every user make the choice instead. He said that if the users revolted he’d find a new set of colors, but that options wouldn’t happen.

                        It was really a marvelous email, polite yet firm and a crystal clear explanation of his philosophy. Over the years I have from time to time spent hours trying to find a copy of that email. It is lost.

                        1. 16

                          This is a good example of having a fundamentalist position to the point of absurdity. The idea that there will be a correct colour scheme for a text editor is an amazing mix of arrogance and over-simplification.

                          1. 12

                            And ignores the fact that not everyone has “correct” vision and color perception.

                            1. 2

                              And not every display is created equal, either. Nor every physical environment. (I switch themes when I use my laptop outdoors, because my usual low-contrast theme is illegible there.)

                              1. 1

                                That seems silly.

                                I go outside sometimes too, but I adjust the contrast and colour temperature of the entire system, since I do more than edit text.

                                1. 2

                                  Oh, that makes sense. When I take my old ThinkPad outside, I don’t have internet access, so, it’s highly unlikely that I will be doing anything other than editing text. :) In recent history, this has only happened when I’m a passenger on a long drive. There are a couple of toy C programs I putz around with to pass the time.

                            2. 2

                              The idea that there will be a correct colour scheme for a text editor is an amazing mix of arrogance

                              Why?

                              I have my own opinions on the matter, but it seems to me that optimising a colour scheme for a set of requirements (contrast, colour blindness, long exposure time, etc) is probably possible.

                              I think what most people have, though, is a brand preference for a set of colours, which is something else.

                              and over-simplification.

                              The depth of the response may be lost to us, but surely it is irresponsible to assume it was done without thought, given that here is an accomplished programmer’s report that the response was, in fact, quite thoughtful?

                              1. 2

                                I have my own opinions on the matter, but it seems to me that optimising a colour scheme for a set of requirements (contrast, colour blindness, long exposure time, etc) is probably possible.

                                There are many considerations that go into people’s selections of color schemes that intrinsically vary, including physical environments (e.g., home vs. office), time of day (e.g., daylight vs. evening light), and simple personal preference. To insist that there be no option to change the colors – on principle – is to tell everybody who might care about these considerations that they’re flat out wrong and that the author knows that before having talked with them. As if that’s not arrogant enough, it’s even more arrogant (and illogical) to further claim that even if one were wrong about the specific colors chosen, they’re still right about the broader point that there’s only one appropriate color scheme for an editor.

                                The bit about author’s responsibility is a red herring. I, too, believe that it’s important for authors to choose an appropriate set of configuration parameters. I also believe there’s nothing wrong with users wanting different values.

                                1. 1

                                  I think supporting multiple physical environments (gamma/contrast) and time of day (temperature) doesn’t beg for multiple colour schemes: that’s just lazy engineering. This is obviously a job for the display manager or monitor setup.

                                  Being left with “simple personal preference” isn’t satisfying; people can have a “simple personal preference” about nearly everything: flat earth, the metric system, fish on Friday, and so on. Some editors support more preferences than others.

                                  In terms of something wrong with “users wanting different values”, one thing I particularly dislike about preferences is sitting down at another person’s workstation and being unable to help them quickly (i.e. with minimal mental load on myself) because they have configured damn near everything that can be configured.

                                  1. 3

                                    I think supporting multiple physical environments (gamma/contrast) and time of day (temperature) doesn’t beg for multiple colour schemes: that’s just lazy engineering. This is obviously a job for the display manager or monitor setup.

                                    I don’t think that’s obvious at all.

                                    Being left with “simple personal preference” isn’t satisfying; people can have a “simple personal preference” about nearly everything: flat earth, the metric system, fish on Friday, and so on.

                                    “Flat earth” is a scientific model, the purpose of which is to make predictions, and that model makes no useful predictions that aren’t made more accurately by other models. If you were building software that depended on a model of the earth, I think it would be fair to leave out a “flat earth” model because it’s objectively less useful. Meal choices are indeed a personal preference – and if you were building software for people to record their meals, I’d recommend against supporting only one possible food. Few people would find it reasonable to tell somebody what single food they must eat for all meals. Choice of unit system has properties of both; there are tradeoffs with different systems, and most software provides an option to switch between them.

                                    My main point is that to provide an option is to allow for the possibility that you might be wrong and enable users to adjust as they need to. To refuse the option (on principle) is to assert that anybody who wants it to work differently is wrong by construction.

                                    It seems like you’re taking the conjecture (that there is only one optimal color scheme) as an axiom and, faced with a data point like a person claiming to prefer a different color scheme, conclude that the person is irrational (akin to a flat earth believer). That seems backwards to me.

                                    In terms of something wrong with “users wanting different values”, one thing I particularly dislike about preferences is sitting down at another person’s workstation and being unable to help them quickly (i.e. with minimal mental load on myself) because they have configured damn near everything that can be configured.

                                    I agree with how annoying this is, but I would not even consider insisting that people use no customizations for the tiny fraction of time I spend in their environments.

                                    1. 1

                                      It seems like you’re taking the conjecture (that there is only one optimal color scheme) as an axiom

                                      I’m humouring it, sure.

                                      Here’s a smart guy who has convinced another smart guy – the exact conversation lost, but the impression remained. I’m giving it the benefit of the doubt because that’s how we ourselves begin to be convinced of strange and unusual ideas.

                                      I’m still making my own opinion here:

                                      It seems possible to have a colour scheme optimised for certain things.

                                      It might not be possible to optimise for every thing, and it’s certainly not possible to optimise for everything once you’ve permitted “personal preference” to be one of those things; as an extreme example, people had personal preference not to sit next to black people on the bus – so I think it’s absolutely foolish to admit “personal preference” so quickly.

                                      most software provides an option to switch between them.

                                      Feature parity is often a useful goal, but I don’t see how it’s relevant. The feature is either generally useful or specifically popular, and the argument is clearly about the former.

                                      That seems backwards to me.

                                      That’s why thinking about it is useful.

                                      I think you can start from either position: That choice is good or choice is bad. It’s almost certainly not that simplistic, but I see no good reason to start at the end you are starting from, and several easy reasons not to.

                                    2. 2

                                      Being left with “simple personal preference” isn’t satisfying;

                                      Their personal preference might be based on something like “I’m colorblind” or “I have a sensory integration disorder” or “I need a high-contrast theme because I have extremely poor vision.” Sometimes preferences are out of necessity.

                                      1. 1

                                        I don’t agree that those things are “simple personal preference”.

                                        I touched briefly on why, but reading back it might not be clear:

                                        Accessibility is actually probably something that can be optimised for – that is to say, a colour scheme can be optimised for colourblindness, contrast needs, integration disorders, and so on.

                                        However even if personal needs remain unaccommodated, I’m still not sure every application being written needs to reinvent the wheel to add this kind of configuration. Notwithstanding the risk/reward questions (e.g. how many people have these kinds of problems, really), it still seems like it would be smarter engineering to get your display manager/windowing environment to do it, not to mention more convenient for users.

                                        So: Still not convinced.

                            1. 12

                              Model M reporting in.

                              I’d like to find something new, but – and for reasons I haven’t investigated or discovered – even newer versions of the keyboard, where they claim to use the same switches, don’t feel the same. I wonder if it’s like leather shoes and I’ve simply developed a preference for the worn-in feeling.

                              The rest of the time, I use whatever keyboard is on my laptop (I have a MacBook Air 11” that is great for programming but aging, and a MacBook 12” with the shitty disgusting butterfly keyboard).

                              1. 3

                                Another Model M user here—I use Model Ms on everything, including the Mac laptop at work (which rarely moves off the desk). It’s kind of funny to see the cable with two adapters, one to convert from DIN to PS/2, and then from PS/2 to USB.

                                I also have a stash of Model Ms at home that I’ve collected over the years, but frankly, the ones I use have yet to wear out, so I’m probably set for life.

                                1. 2

                                  Why switch? :) You have the mother of all mechanical keyboards! :)

                                1. 17

                                  I’ve heard the “binary logs are evil!!!” mantra chanted against systemd so many times that it wasn’t funny anymore. It’s a terrible argument. With so many big players putting their logs into databases, and with the popularity of the ELK stack, it is pretty clear that storing logs in a non-plaintext format works. Way back in 2015, I wrote two blog posts about the topic.

                                  The gist of it is that binary logs can be awesome, if put to good use. That journald is not the best tool is another matter, but journald being buggy doesn’t mean binary logs are bad. It just means that journald is possibly not the most carefully engineered thing out there. There are many things to criticize about systemd and the journal, and they both have their fair share of issues, but binary storage of logs is not one of them.

                                  1. 10

                                    Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?

                                    The journald/systemd people don’t act like they have any clue what’s going on in the real world: people can’t use the tools they used to, and these tools evidently suck. Plain text sucked less, so what’s the plan to get anything better?

                                    1. 8

                                      I don’t think that’s entirely reasonable. It’s converting a complaint about principle (“don’t do binary logs”) into a complaint about practice, and that makes a big difference. If journald is a bad implementation of an ok idea, that requires very different steps to fix than if it’s a fundamentally bad idea.

                                      What you’re describing makes sense for people on the systemd project to say (“woah, people hate our binary logs, maybe we should work on them”[0]), but not for the rest of us trying to understand things.

                                      [0] I fear they’re not saying that, as they seem somewhat impervious to feedback

                                      1. 2

                                        I feel like @geocar is against binary logs as a source format, but not as an intermediate or analytics format. Even if your application uses structured logging, it can still be stored in a text file, for example as JSON, at the source. It can be converted to a binary log later in the chain, for example on a centralized logging server, using ELK, SQL, MongoDB, Splunk or whatever. The benefit is that you keep a lot of flexibility at the source (in terms of supporting multiple formats depending on the source application) and are still able to go back to the plain text log if you encounter a problem.
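                                        Even at the source, “structured” needn’t mean binary. A minimal sketch of what that could look like (the `log_event` helper and the field names are hypothetical, purely to illustrate JSON-lines logging to a plain text file):

```python
import json
import time

def log_event(path, level, message, **fields):
    """Append one structured log record as a single JSON line.

    The file stays greppable plain text, but every field is
    machine-parseable for later ingestion into ELK, SQL, etc.
    """
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example record:
log_event("app.log", "error", "disk full", device="/dev/sda1", free_bytes=0)
```

                                        Each line stays readable at the source, yet every field survives intact for whatever binary store sits further down the chain.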

                                        1. 4

                                          I’m not even against binary logs “as a source format.”

                                          Firstly: I recognise that “complaints about binary logs” is directed at journald and isn’t the same thing as complaints about logs in some non-text format.

                                          I think getting systemd in deep forced sysadmins to retool on top of journald, and that hurt a lot for very little gain (if there was any gain at all; for most workflows I suspect there wasn’t). This has almost certainly put people off of binary logs, and has almost certainly got people complaining about binary logs.

                                          To that end: I don’t think those feelings around binary logs are misplaced.

                                          Some humility is [going to be] required when trying to win people over with binary logs, but appropriating the term “binary logs” to include tools the sysadmin chooses is like pulling the rug out from under somebody, and that’s not helping.

                                          1. 2

                                            Thank you very much for clarifying. I agree that forcing sysadmin “to retool on top of journald” hurts.

                                        2. 2

                                          No, it’s recognising that when enough people are complaining about “the wrong thing”, telling them it’s the wrong thing doesn’t help them. It just causes them to dig in.

                                          What’s the right thing?

                                          I think that’s the point of the bug…

                                        3. 1

                                          Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?

                                          As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.

                                          and these tools evidently suck

                                          For a lot of use cases, they do not suck. For many, they are a vast improvement over text logs.

                                          what’s the plan to get anything better?

                                          Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.

                                          1. 9

                                            As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.

                                            People complain about things that hurt, and between Windows and journald it should not be a surprise that “binary logs” is getting the flak. journald has a lot of outreach work to do if they want to fix it.

                                            For a lot of use cases, [the tools] do not suck. For many, they are a vast improvement over text logs.

                                            And yet when programmers make mistakes implementing them, sysadmins are left cleaning up after them.

                                            Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.

                                            Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge. This allows people to get a lot of the advantages of binary logs with few disadvantages (and given how cheap disk is, the price is basically zero).

                                            Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.

                                            These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.

                                            Are we really to interpret this as “refuse to install any software that doesn’t follow this rule”?

                                            I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.

                                            1. 4

                                              Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.

                                              What do you mean by a “transparent structuring layer”?

                                              1. 2

                                                Something to structure the plain text logs into some tagged format (like JSON or protocol buffers).

                                                Splunk e.g. lets users create a bunch of regular expressions to create these tags.
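                                                As a rough illustration of that kind of structuring layer (the regex, field names, and sample log line below are hypothetical, not Splunk’s actual extraction syntax):

```python
import json
import re

# Hypothetical extraction rule: pull tagged fields out of a
# free-form, syslog-style sshd line.
PATTERN = re.compile(
    r"(?P<host>\S+) sshd\[(?P<pid>\d+)\]: "
    r"(?P<result>Accepted|Failed) password for (?P<user>\S+)"
)

def structure(line):
    """Turn a plain text log line into a tagged JSON record, or None."""
    m = PATTERN.search(line)
    return json.dumps(m.groupdict()) if m else None

print(structure("web1 sshd[4242]: Accepted password for alice from 10.0.0.5"))
```

                                                The plain text line remains the golden source; the tagged record is derived from it and can be regenerated or re-tagged later if the rules change.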

                                                1. 2

                                                  Got it now. Thanks for clarifying!

                                              2. 0

                                                Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.

                                                For some values of “can do”, yes. Most traditional text logs are terrible to work with (see my linked blog posts, not going to repeat them here, again). Besides, as long as your journal files aren’t corrupt (which happens less and less often these days, I’m told), you can just use journalctl to dump the entire thing, and grep in the logs, just like you grep in text files. Or filter them first, or dump in JSON and use jq, and so on. Plenty of options there.

                                                Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.

                                                Clearly our experience differs. Most syslog-ng PE customers (and customers of related products) made binary logs (either PE’s LogStore, or an SQL database) their golden source of knowledge. A lot of startups - and bigger businesses - outsourced their logging to services like loggly, which are a black box like binary logs.

                                                These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.

                                                These are directions to sysadmins too. The majority of daemons support logging to files, or use a logging framework where you can set them up to log directly to a central collector, or to a database directly. For a huge list of applications, bypassing syslog has been there since day one. Apache, Nginx, pretty much any Java application can all do this, just to name a few things. There are some notable exceptions such as postfix which will always use syslog, but there are ways around that too.

                                                You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.

                                                I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.

                                                With the journal, you have journalctl, which is quite well documented.

                                                1. 2

                                                  Clearly our experience differs. Most syslog-ng PE customers…

                                                  Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?

                                                  outsourced their logging to services like loggly, which are a black box like binary logs.

                                                  I would be surprised to find that most people that use loggly don’t keep any local syslog files.

                                                  What exactly are you arguing here?

                                                  Plenty of options there.

                                                  And?

                                                  You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.

                                                  Right, and the goal is to get people using journald, right?

                                                  If journald doesn’t want to be used, what is its reason for existing?

                                                  1. 0

                                                    Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?

                                                    Yes.

                                                    I would be surprised to find that most people that use loggly don’t keep any local syslog files.

                                                    Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate. In the past… six or so years, all loggly (& similar) users I worked with, never looked at their text logs, if they had any to begin with.

                                                    Right, and the goal is to get people using journald, right?

                                                    For systemd developers, perhaps. I’m not one of them. I don’t mind the journal, because it’s been working fine for my needs. The goal is to show that you can bypass it, if you don’t trust it. That you can get to a state where your logs are processed and stored efficiently, in a way that is easy to work with - easier than plain text files. Without using the journal. But with it, it may be slightly easier to get there, because you can skip the whole getting around it dance for those applications that insist on using syslog or stdout for logging.

                                                    1. 2

                                                      Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?

                                                      Yes.

                                                      I think you’re completely wrong.

                                                      There are a lot of Debian/RHEL/Ubuntu/*BSD (let alone Windows) machines out there, and they’re definitely not using syslog-ng by default…

                                                      Debian publishes install information: syslog-ng versus rsyslogd. It’s no contest.

                                                      A big bank I’m working with has zero: all rsyslogd or Windows.

                                                      Also, the world is moving to journald…

                                                      So, why exactly do you believe this?

                                                      In the past… six or so years, all loggly (& similar) users I worked with, never looked at their text logs, if they had any to begin with.

                                                      Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate.

                                                      Okay, but why do you think this contradicts what I say?

                                                      You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.

                                                      The goal is to show that you can bypass it, if you don’t trust it.

                                                      Ah well, this is a very different topic than what I’m replying to.

                                                      I can obviously bypass it by not using it.

                                                      I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.

                                                      1. 1

                                                        I think you’re completely wrong.

                                                        I think I know better how many syslog-ng PE customers there are out there (FTR, I work at BalaBit, who make syslog-ng). It has a significant market share. Significant enough to be profitable (and growing), in an already crowded market.

                                                        A big bank I’m working with has zero: all rsyslogd or Windows.

                                                        …and we have big banks who run syslog-ng PE exclusively, and plenty of other customers, big and small.

                                                        Also, the world is moving to journald…

                                                        …and syslog-ng plays nicely with it, as does rsyslog. They nicely extend each other.

                                                        You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.

                                                        I think we’re misunderstanding each other… What I consider the golden source may be very different from what you consider. For me, the golden source is what people use when they work with the logs. It may or may not be the original source of it.

                                                        I don’t care much about the original source (unless it is also what people query), because that’s just a technical detail. I don’t care much how logs get from one point to another (though I prefer protocols that can represent structured data better than the syslog protocol). I care about how logs are stored, and how they are queried. Everything else is there to serve this end goal.

                                                        Thus, if an application writes its logs to a text file, which I then process and ship to a data warehouse, I consider that to be binary logs, because that’s how it will ultimately end up. Since this warehouse is the interface, the original source can be safely discarded once it has shipped. As such, I can’t consider those the golden source.

                                                        If we restricted “binary logs” to stuff that originated as binary from the application, then we should not consider the Journal to use binary logs either, because most of its sources (stdout and syslog) are text-based. If the Journal uses binary logs, then anything that stores logs as binary data should be treated the same. Therefore, everything that ends up in a database, ultimately makes use of binary logs. Even if their original form, or the transports they arrived there, were text.

                                                        (Transport and storage are two very different things, by the way.)

                                                        I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.

                                                        I never said they are. All I said is that storing logs in binary is not inherently evil, linked to blog posts where I explain pretty much the same thing, and give examples for how binary storage of logs can improve one’s life. (Ok, I also asserted that syslog and stdout are terrible interfaces for logs, and I maintain that. This has nothing to do with text vs binary though - it is about free-form text being awful to work with; see the linked blog posts for a few examples why.)

                                                        1. 1

                                                          I think I know better how many syslog-ng PE customers there are out there

                                                          Or we just have different definitions of significant.

                                                          Significant enough to be profitable (and growing), in an already crowded market.

                                                          Look, I have an advertising business that makes enough money to be profitable, and is growing, but I’m not going to say I have a “significant” market share of the digital advertising business.

                                                          But whatever.

                                                          All I said is that storing logs in binary is not inherently evil

                                                          And I didn’t disagree with that.

                                                          If you try and re-read my comments knowing that, maybe it’ll be more clear what I’m actually pointing to.

                                                          At this point, we’re just talking past each other, and there’s no point in that.

                                          2. 2

                                            Thanks for linking to the blog posts, they were most informative.

                                          1. 8

                                            a.out binaries are smaller than elf binaries, so let’s statically link everything into one big executable ala busybox.

                                            Similarly, a modular kernel with all its modules takes up more total space than a single kernel with everything built in. So don’t even bother implementing modules. Linux 1.2 was the last great Linux before they ruined everything with modules.

                                            64-bit code takes up more space than 32-bit, so let’s build for 32-bit instruction sets. Who has more than 4GB of addressable memory anyway?

                                            Optimized code usually takes up more space, often a lot more when inlining is enabled. Let’s build everything with -Os so we can fit more binaries on our floppies.

                                            Icons are really superfluous anyway, but maybe we’ll want X11R5 or some other GUI on a second floppy. (I’d say X11R6 but all those extensions take up too much space). Make sure to use an 8-bit storage format with a common palette – 24-bit or 32-bit formats are wasteful.

                                             (I lament the bloated nature of the modern OS as much as the next Oregon Trail Generation hacker, but really – is “fits on a 1.7MB floppy” really the right metric? Surely we can at least afford to buy 2.88MB drives now?)

                                            1. 5

                                              64-bit code takes up more space than 32-bit, so let’s build for 32-bit instruction sets. Who has more than 4GB of addressable memory anyway?

                                              Most programs don’t need more than 4GB of addressable memory, and those that do, know it. Knuth flamed about this some, but while you can use X32 to get the big registers and little memory, it’s not very popular because people don’t care much about making fast things.

                                              I lament the bloated nature of the modern OS as much as the next Oregon Trail Generation hacker, but really – is “fits on a 1.7MB floppy really the right metric? Surely we can at least afford to buy 2.88MB drives now?

                                               No, but smaller is better. If you can fit inside L1, you’ll see around a 1000x speed increase simply because you’re not waiting for memory (or, with clever programming, you can stream it).

                                               There was a time when people built GUI workstations in 128KB. How fast would that be today?

                                              1. 2

                                                Most programs don’t need more than 4GB of addressable memory, and those that do, know it.

                                                 All the integer overflows with values under 64 bits suggest otherwise. I know most programmers aren’t doing checks on every operation either. I prefer 64-bit partly to cut down on them. Ideally, I’d have an automated tool to convert programs to use it by default where performance or whatever allowed.

                                                “No, but smaller is better. If you can fit inside L1, you’ll find around 1000x speed increase simply because you’re not waiting for memory”

                                                 Dave Long and I agreed on 32-64KB in a bootstrapping discussion for that very reason. Making that the maximum for apps kept them in the fastest cache even on lots of older hardware. Another was targeting initial loaders to tiny, cheap ROM (especially to save space for updates). Those were the only memory metrics we could find that really mattered in the general case. The rest were highly situation-specific.

                                                1. 1

                                                  Most programs don’t need more than 4GB of addressable memory, and those that do, know it.

                                                  All the integer overflows with values under 64-bits suggests otherwise.

                                                  How?

                                                  I know most programmers aren’t doing checks on every operation either. I prefer 64-bit partly to cut down on them. Ideally, I’d have an automated tool to convert programs to use it by default where performance or whatever allowed.

                                                  What does that mean?

                                                  1. 0

                                                    My point isn’t about the addressable memory: it’s about even being able to represent a number. Programs are usually designed with the assumption that the arithmetic they do will work like real-world arithmetic on integers. In machine arithmetic, incrementing a number past a certain value will lead to an overflow. That can cause apps to misbehave. Another variation is a number coming from storage with many bits going into one with fewer bits, which the caller didn’t know had fewer bits. That caused the Ariane 5 explosion.

                                                    Overflows happen more often with 8- and 16-bit fields since their range is so small. They can happen to 32-bit values in long-running systems or those with math pushing numbers up fast. They either won’t happen or will take a lot longer with 64-bit values. I doubt most programmers are looking for overflows throughout their 32-bit applications. So, I’d rather just default to 64-bit for a bit of extra safety margin. That’s all I was saying.
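
                                                    The wraparound described above is easy to demonstrate. A minimal sketch in Python (whose integers never overflow, so the fixed word size is simulated by masking, the way the hardware truncates a register):

```python
# Simulate fixed-width unsigned arithmetic by masking to the word size.
# Python ints never overflow, so the mask stands in for a CPU register.
MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def add_u32(a, b):
    """Unsigned 32-bit addition with wraparound, as the hardware does it."""
    return (a + b) & MASK32

def add_u64(a, b):
    """The same operation on a 64-bit word."""
    return (a + b) & MASK64

# A 32-bit counter at its limit silently wraps to zero...
print(add_u32(0xFFFFFFFF, 1))  # 0
# ...while a 64-bit word still has an enormous amount of headroom.
print(add_u64(0xFFFFFFFF, 1))  # 4294967296
```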

                                              2. 1

                                                Linux 1.2 was the last great Linux before they ruined everything with modules.

                                                https://twitter.com/1990sLinuxUser :P

                                                1. 2

                                                  Why has systemd deprecated support for /usr on a different filesystem!!

                                                  That issue bit me last month! I moved my /usr because it was too large, and the damned system couldn’t even boot into an init=/bin/sh shell! It dropped me into an initrd shell. I had to boot off a live CD to fix it. (If the initrd shell should have been sufficient, pardon me. I tried, but lvm wasn’t working.)

                                              1. 3

                                                I feel like APL took terseness too far. Every code snippet looks like somebody was playing “code golf”. It may be great for demoing the language, but won’t it be a nightmare for real code?

                                                And I could technically achieve (nearly) the same thing in Lisp by giving my functions and variables names like “ῴ”, but it’s easier to understand when it’s spelled out like “solution-matrix”. Just because everything can be abbreviated to a single symbol doesn’t mean it should be.

                                                Also, as neat as these purely algorithmic problems are, what does real life code look like in APL? What’s an HTTP request look like? How would I parse a JSON blob?

                                                1. 2

                                                  won’t it be a nightmare for real code?

                                                  No. Not generally.

                                                  In fact usually the opposite.

                                                  Iverson won a Turing Award on this very subject, and I recommend you read his “Notation as a Tool of Thought” paper for more on it.

                                                  I programmed in Common Lisp for about a decade, but these days I do a fair amount of programming in q/k (an APL-ish language that uses ASCII characters), and having good array support is a massive improvement in my code size and how quickly I can deliver solutions. One of the applications I work on has a dozen or so developers on it at the moment.

                                                  What’s an HTTP request look like? How would I parse a JSON blob?

                                                  Pretty similar to other languages: we just use libraries or built-ins like everyone else.

                                                  To do an HTTP GET in q I write:

                                                  .Q.hg`:https://domain/url
                                                  

                                                  And to parse JSON I say:

                                                  .j.k text
                                                  

                                                  If you want to see what a parser looks like, I can point you at an example, but you will find it unsatisfying as a beginner since you will lack the ability to read it at this point.

                                                  1. 4
                                                  1. 1

                                                    Note especially that Safari doesn’t (and won’t) support pointer events and Chrome has flip-flopped on the idea a couple times.

                                                    1. 3

                                                      A super common example for “clean code” is that instead of doing p * t to add sales tax, you write price * sales_tax. I pulled up an ecommerce suite and looked at how they did it. They track an array of the tax markups, one for each applicable tax, and bundle that along with the cart.

                                                      btw, “clean code” (I hate that name, I prefer “intention revealing code”) is much more than nicely named variables: it’s well-partitioned code at multiple levels. Short methods with meaningful names, divided into cohesive classes, divided into modules, etc., with cohesion at each level and each unit operating at a single level of abstraction.

                                                      I took a look at the tax.php code and, while I’m no PHP developer, that’s not at all how I’d do it, because that code isn’t OO; it’s just code that happens to have method names. We can argue about OO vs. FP, but if I were to write this as OO code, I wouldn’t have code like this (in fact I’d hardly have if statements at all):

                                                      if ($calculate != 'P' && $calculate != 'F') {
                                                      	$amount += $tax_rate['amount'];
                                                      } elseif ($tax_rate['type'] == $calculate) {
                                                      	$amount += $tax_rate['amount'];
                                                      }
                                                      

                                                      That’s a sign that the author doesn’t understand OOP, or didn’t want to do it.
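
                                                      A hypothetical sketch of the alternative (in Python rather than PHP, with invented class and method names, so treat it as illustrative only): push the type check into the markup object itself, and the call site stops branching on type codes entirely:

```python
# Hypothetical sketch (invented names): each tax markup decides its own
# contribution, so the caller never branches on a type code.
class TaxMarkup:
    def __init__(self, kind, amount):
        self.kind = kind      # 'P' (percentage) or 'F' (fixed), as in the quoted PHP
        self.amount = amount

    def contribution(self, mode):
        # A markup contributes when no specific mode was requested,
        # or when it is exactly the kind being requested.
        applies = mode not in ('P', 'F') or self.kind == mode
        return self.amount if applies else 0

class Cart:
    def __init__(self, markups):
        self.markups = markups

    def tax(self, mode=None):
        # No if statements here: just ask each markup what it contributes.
        return sum(m.contribution(mode) for m in self.markups)

cart = Cart([TaxMarkup('P', 2.0), TaxMarkup('F', 1.5)])
print(cart.tax())     # 3.5  (everything applies)
print(cart.tax('P'))  # 2.0  (only the percentage markups)
```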

                                                      1. 2

                                                        “clean code”; I hate that name, I prefer “intention revealing code”

                                                        “Clean code” is Uncle Bob’s branded term at this point.

                                                        I wouldn’t have code like this…

                                                        Oh you think that’s bad. Look at how it’s called e.g. royal_mail.php

                                                        This is a good way to make mistakes. “Clean” or not, it’s still rubbish.

                                                        1. 1

                                                          “clean code”; I hate that name, I prefer “intention revealing code”

                                                          “Clean code” is Uncle Bob’s branded term at this point.

                                                          Yeah, and I never liked his brand, even before some of his recent remarks.

                                                          Oh you think that’s bad. Look at how it’s called e.g. royal_mail.php This is a good way to make mistakes. “Clean” or not, it’s still rubbish.

                                                          What’s so dismaying is that regardless, this stuff works, and that’s the feedback that most developers get: if it works, it must be OK. When they find out it’s not, i.e., find a bug, they just figure that’s the way it is.

                                                          1. 2

                                                            it works, it must be OK. … bug … [is] the way it is.

                                                            Agreed completely, but it’s not just delusional programmers: I quote three months, I hit three months; if this jackass quotes three months and runs over by six, weirdly everyone is still happy, because software has this terrible reputation. Seriously. Have you tried turning it off and on?

                                                            Our society accepts bugs, so we’re stuck with this until that changes.

                                                      1. 3

                                                        The WSL is very good. The console is a bit shit but screen/tmux fixes that enough for me.

                                                        For IRC I use emacs+erc.

                                                        1. 8

                                                          Someone who controls your network will simply drop the DNSKEY/DS records, so DNSSEC would not have provided any protection for “MyEtherwallet”. People who have already visited it were (hypothetically) protected by TLS, and people who hadn’t, would have received bogus records anyway.

                                                          So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait? HPKP is here now.

                                                          Furthermore, “DNSSEC wasn’t easy to implement” is a massive understatement.

                                                          1. 7

                                                            No, they can’t drop your DS records, because those reside in the parent TLD. They would have to also hack your domain registrar to do that.

                                                            1. 1

                                                              That’s completely wrong.

                                                              If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.

                                                              1. 1

                                                                If someone controls your network, they don’t need to hack anyone else: They can feed you whatever they want.

                                                                That’s only true of non-DNSSEC-signed records. DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone. In practice, this means that the root zone signs the public keys of authoritative top-level domains, and TLDs then sign the public keys of owners of regular domain names. These keys can then be used to sign any arbitrary DNS record. So, if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.
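
                                                                That delegation can be sketched with a toy model in Python (the zone data and key strings are fake, and real DNSSEC also involves RRSIG signatures, which this sketch omits): a DS record is essentially a digest of the child zone’s key, published by the parent, so a resolver that trusts only the root key can check each link of the chain:

```python
import hashlib

def ds_digest(dnskey: bytes) -> str:
    """Toy DS record: a digest of a child zone's public key."""
    return hashlib.sha256(dnskey).hexdigest()

# Fake zone data: each zone has a key, and a parent publishes a DS record
# (a digest) for every child it delegates to.
root_key = b"root-public-key"
com_key = b"com-public-key"
example_key = b"example.com-public-key"

zones = {
    ".":           {"key": root_key,    "ds": {"com": ds_digest(com_key)}},
    "com":         {"key": com_key,     "ds": {"example.com": ds_digest(example_key)}},
    "example.com": {"key": example_key, "ds": {}},
}

def validate_chain(path, trusted_root_key):
    """Walk root -> TLD -> domain, checking each key against its parent's DS."""
    key = trusted_root_key
    for parent, child in zip(path, path[1:]):
        child_key = zones[child]["key"]
        # The parent vouches for the child's key via the published digest.
        if zones[parent]["key"] != key or zones[parent]["ds"].get(child) != ds_digest(child_key):
            return False
        key = child_key
    return True

print(validate_chain([".", "com", "example.com"], root_key))  # True
```

If an attacker swaps in their own key for example.com without also controlling the DS record in com, the digest check fails and validation refuses the answer.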

                                                                DNSCurve isn’t a bad idea; I think lookup privacy is a good thing, and I would much rather trust Google or Cloudflare than my local ISP for unsigned domain names. That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware. It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.

                                                                Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                1. 1

                                                                  DNSSEC is a PKI that allows one to cryptographically delegate authority over a zone

                                                                  Which an attacker guarantees you’ll never see.

                                                                  This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.

                                                                  if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.

                                                                  If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.

                                                                  Even if you have a validating-capable resolver, and you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766, then you might never learn that there are keys to look for under cloudflare.com.

                                                                  Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.

                                                                  And what about .ae? Or other roots?

                                                                  DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.

                                                                  That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.

                                                                  What are you talking about?

                                                                  It’s also really unhelpful when people claim that DNSCurve is some sort of alternative to DNSSEC.

                                                                  It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security, and doesn’t have a deployment plan except “do it”.

                                                                  IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?

                                                                  Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                  Have you read it?

                                                                  It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.

                                                                  Kaminsky is as brain-damaged as those IPv6 nutters, waiting for some magic moment for over twenty years that simply never came – and the only way his “critique” would have any value at all is if it were printed on bog roll.

                                                                  For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.

                                                                  1. 3

                                                                    This isn’t a hypothetical attack: Your computer asks your ISP’s nameservers, and it strips out all the DNSSEC records. Unless your computer expects those records, it won’t ever be able to tell you anything is wrong.

                                                                    A resolver can refuse to perform DNSSEC validation or even strip out records, but a local resolver can detect this and even work around it.

                                                                    if you have a validating local resolver, it can use the public key of the root zone to cryptographically validate the chain of trust from ICANN down to the authoritative nameserver for a domain.

                                                                    If you don’t, and for some reason use a “validating local resolver” on another machine, you have nothing.

                                                                    What do you mean by using a validating local resolver on another machine? It’s local, there is no other machine.

                                                                    If you are saying that most clients rely on their router (or whatever) to do DNSSEC validation, then yes, that router can perform a MITM attack. It’s still more secure than trusting every single upstream DNS resolver, but we need to move to local validation. The caching layer provided by DNS is a byproduct of the limited computing resources of 1985.

                                                                    Even if you have a validating-capable resolver, and you never see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 then you might never learn that you might find keys for cloudflare.com.

                                                                    It sounds like you are describing a broken resolver.

                                                                    Even if you do have a validating-capable resolver, and you see that com has DS record 30909 8 2 E2D3C916F6DEEAC73294E8268FB5885044A833FC5459588F4A9184CF C41A5766 you still can’t visit google.com safely.

                                                                    I believe the local resolver would just ask com for a DS record for google.com and receive either a DS record or an NSEC record. If it doesn’t receive one of those two records, then you are correct: you can’t visit google.com safely. It’s no different than an HTTPS downgrade attack.

                                                                    And what about .ae? Or other roots?

                                                                    1. We can deploy DNSSEC incrementally.
                                                                    2. ~96% of all domains are registered on a TLD that supports DNSSEC. The stats would probably be even better if they were based on traffic instead of total domains.
                                                                    3. All registrars are required to support DNSSEC.

                                                                    If we can get people to stop claiming that “DNSSEC does nothing for security” and make use of the cool stuff you can do with DNSSEC, then the market will force the last 10% of ccTLDs to adopt it.

                                                                    DNSSEC supporters are happy enough to ignore the problem of deploying DNSSEC, like it’s somehow someone else’s problem.

                                                                    I personally am working very, very hard on addressing every pain point there is. There are a lot of moving pieces and the standards left some holes until recently. I believe captive portals and VPN domains are thorny issues, but these issues can be addressed in an incremental fashion.

                                                                    It doesn’t help when people make erroneous claims about DNSSEC based on an incorrect understanding of DNS, DNSSEC, DNSCurve, and decentralized naming systems.

                                                                    That being said, it doesn’t fix the massive problem of computers trusting the DNS cache of a 10-year-old router controlled by malware.

                                                                    What are you talking about?

                                                                    DNSCurve relies on trusting the DNS resolver above you. For most people that is a 10-year-old router which has never gotten a security update. Best-case scenario is someone switching to Google DNS or Cloudflare - but with proper encryption, no upstream resolver would be capable of performing MITM attacks.

                                                                    It’s annoying that DNSSEC “supporters” hand-wave the fact that DNSSEC has no security

                                                                    I have patiently responded to every single claim you have made about DNSSEC’s security model. Please refrain from repeating this claim until you have figured out how a MITM attacker can force a local validating resolver to accept forged DS or NSEC records.

                                                                    IPv6 is at 23% deployment. After more than twenty years. DNSSEC is something like 0.5% of the dot-com. After more than twenty years (although admittedly they completely changed what DNSSEC was several times in that time). DNSSEC isn’t a real thing. It’s not even a spec for a real thing. How can I possibly take it seriously?

                                                                    So was IPv6 until ~6 years ago - now there is exponential growth. DNSSEC is at a similar tipping point: the basic security model was worked out a long time ago, but there were plenty of sharp corners until recently (large key sizes, NSEC3, etc). If we can stop people from claiming that the security model is broken then Cloudflare and other big providers will pour money into taking business away from the HTTPS certificate authorities.

                                                                    It’s also a necessity for decentralized DNS, which gives us an environment where we can implement everything without having to wait for legacy infrastructure to catch up.

                                                                    Seriously, go read Dan Kaminsky’s critique of the DNSCurve proposal.

                                                                    It’s bonkers. It admits DNSSEC is a moving target that hasn’t yet been implemented “in all its glory” and puts this future fantasy version of DNSSEC that has been fully deployed and had all operating systems, routers and applications rewritten, against DNSCurve.

                                                                    The post is mainly useful for explaining how DNSSEC and DNSCurve relate to one another. While the grand vision is the eventual goal, there are incremental benefits and huge gains can be had by simply making the application DNSSEC aware. For example, browsers are already switching to doing DNS resolution themselves, so the work required isn’t much more involved than that of upgrading to TLS.

                                                                    For what it’s worth: I think DNSCurve solves a problem I don’t have, but it attracts no ire from me.

                                                                    Then why hate on DNSSEC but evangelize DNSCurve? You can happily ignore DNSSEC as an end user or even as a system admin. If you care about security, well, that’s a different story.

                                                                2. 0

                                                                  If they hack your primary nameserver and keep the zone signed, then maybe, as long as you’re running the primary. But as for the original claim that “they can drop your DNSKEYs and DS”: no. They can drop the DNSKEY, but the DS resides in the parent zone, and as long as it’s there, resolvers will look for DNSSEC-validated responses, which they won’t get.

                                                                  1. 1

                                                                    If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.

                                                                    If someone controls the network of a website, they don’t need to interfere with the nameservers. They can simply MITM the traffic. Since they can request a TLS certificate from anyone who does HTTP or mail validation, DNSSEC still offers nothing. This is true whether they control the network by broadcasting “invalid” BGP routes, or whether they attack the physical infrastructure.

                                                                    Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.

                                                                    1. 2

                                                                      If someone controls your network, whenever you request a DNSSEC “protected” domain, you will never know because the attacker can drop whatever records they want. DNSSEC clearly offers nothing.

                                                                      I don’t know what you mean by this. For starters, are you assuming “your network” includes all the nameservers? Let’s assume so. If DNSSEC is enabled, they can’t alter any of the DNS responses, because they will break DNSSEC validation for aware resolvers. Sure, they can drop queries, but what does that buy them other than a DoS? They can’t stand up a fake site.

                                                                      Why are you defending this snake oil? “Hack[ing] your primary nameserver” is a pointless strawman that nobody cares about: “your primary nameserver” is likely controlled by Amazon or someone else competent. Your webserver is controlled by you, who lack the experience to identify the (complex) services at risk and properly secure them.

                                                                      Clearly you haven’t been paying attention. The entire chain of events that started my posts about this was a BGP hijack that was used to impersonate Route53 nameservers by hijacking Amazon IP space, which were the nameservers that MyEtherWallet was using. From there they stood up fake nameservers which directed victims to a fake MyEtherWallet site. That’s exactly what happened; why don’t you go tell the people who had their wallets drained not to worry about it because it is all a pointless strawman.

                                                              2. 2

                                                                So DNSSEC could, in an ideal setting, provide a benefit similar to HPKP, but why wait?

                                                                Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding. It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year and non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                HPKP is here now.

                                                                Sadly, HPKP has been deprecated by Chrome. But, FWIW, these standards existed long before HPKP.

                                                                1. 2

                                                                  It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                  The current system (CAs) is human-meaningful, secure, and decentralized/federated. It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs. But even as implemented, that’s better than human-meaningful, secure, and a single point of failure (DNSSEC).

                                                                  So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year and non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI.

                                                                  You can use x509 certificates from Let’s Encrypt to secure any IP connection. What’s the problem?

                                                                  1. 1

                                                                    It’s not perfect, but there are ways to improve the last point, so that we have more control over badly behaving CAs.

                                                                    For non-decentralized naming systems, the (abstract) DNSSEC chain of trust looks (roughly) like this:

                                                                    Government -> ICANN -> Registrar -> DNS Provider -> Local Validating Resolver -> Browser
                                                                    

                                                                    HTTPS certificate authorities “validate” control over a domain by checking DNS records (either TXT or via an email). Their chain of trust looks like this:

                                                                    Government -> ICANN -> Registrar -> DNS Provider -> ~650 CAs [1] -> Browser
                                                                    

                                                                    The best way to exercise more control over them is to cut them out of the trust chain entirely. Or switch to a decentralized naming system … which also relies on DNS (and thus DNSSEC) for compatibility reasons:

                                                                    Blockchain -> Lightclient w/ DNSSEC auto-signer -> Browser
                                                                    

                                                                    But even as implemented, that’s better than human meaningful, secure, and a single point of failure (DNSSEC).

                                                                    In terms of the security model, DNS is still a single point of failure. If you don’t like managing PKI you can always outsource it to someone … just like you do with HTTPS certificates.

                                                                    1. 1

                                                                      If I want to compromise you, attacking your DNS resolver doesn’t mean I’ve also attacked PayPal’s CA even if they used their DNS resolver to verify ownership of paypal.com

                                                                      1. 1

                                                                        My point is that one can trick one of the ~650 CAs into generating an X509 certificate by hacking their upstream DNS client or performing a MitM attack. This would be pretty easy for any large network operator to pull off.

                                                                  2. 1

                                                                    Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                    What exactly are you referring to: DNSCurve?

                                                                    DNSSEC doesn’t offer anything like this.

                                                                    It’s just that sysadmins didn’t want the headache of key management, so everyone engaged in bikeshedding.

                                                                    Paul Vixie, June 1995: “This sounds simple but it has deep reaching consequences in both the protocol and the implementation – which is why it’s taken more than a year to choose a security model and design a solution. We expect it to be another year before DNSSEC is in wide use on the leading edge, and at least a year after that before its use is commonplace on the Internet”

                                                                    Paul Vixie, November 2002: “We are still doing basic research on what kind of data model will work for DNS security. After three or four times of saying NOW we’ve got it THIS TIME for sure there’s finally some humility in the picture … Wonder if THIS’ll work? … It’s impossible to know how many more flag days we’ll have before it’s safe to burn ROMs … It sure isn’t plain old SIG+KEY, and it sure isn’t DS as currently specified. When will it be? We don’t know… There is no installed base. We’re starting from scratch.”

                                                                    It didn’t help that a few (well intentioned!) security researchers threw shade on DNSSEC for not solving Zooko’s triangle or offering encryption for DNS lookups.

                                                                    Or the fact DNSSEC creates DDOS opportunities, introduced lots of bugs in the already buggy BIND, and still offers no real security.

                                                                    No thanks.

                                                                    So now we are stuck with Mozilla funding Let’s Encrypt to the tune of $2 million/year

                                                                    DNSSEC has received millions of US tax dollars and offers nothing, while Let’s Encrypt actually provides some transport security. Hrm…

                                                                    non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                    I don’t see how DNSSEC even begins to solve these problems.

                                                                    FWIW: Almost everything is HTTPS anyway.

                                                                    Sadly, HPKP has been deprecated by Chrome

                                                                    It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.

                                                                    1. 1

                                                                      Doing PKI at the DNS level means that we can leverage it for every network and application protocol … including public key pinning. It would also enable us to do cool stuff like encrypt at both the network and application layers.

                                                                      What exactly are you referring to: DNSCurve?

                                                                      No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol. Just as DANE can be used to distribute the TLS keys for an HTTPS server, SSHFP records can be used to publish the public keys for a given SSH server. AWS, for example, could just publish SSHFP records when they provision a new instance and you would have end-to-end verification for your SSH connection. No need for Amazon to partner with Let’s Encrypt or force SSH clients to switch to X509 certificates.
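To make the record shape concrete, here’s a small sketch of how SSHFP RDATA is built: the fingerprint is just a hash of the server’s raw host-key blob (algorithm 4 = Ed25519, fingerprint type 2 = SHA-256, per the SSHFP registries). The key blob below is a made-up placeholder, not a real key:

```python
import hashlib

def sshfp_rdata(pubkey_blob: bytes, algorithm: int = 4, fp_type: int = 2) -> str:
    """Build the textual RDATA for an SSHFP record.

    The fingerprint is the hash of the raw public-key blob, i.e. the
    base64-decoded second field of a host key line.
    """
    digest = hashlib.sha256(pubkey_blob).hexdigest()
    return f"{algorithm} {fp_type} {digest}"

# Hypothetical host-key blob, just to show the shape of the record:
blob = b"example-ed25519-host-key-blob"
print("example.com. IN SSHFP", sshfp_rdata(blob))
```

With a record like this published in a signed zone, an SSH client that trusts DNSSEC (e.g. OpenSSH with `VerifyHostKeyDNS yes`) can check the host key against DNS instead of prompting the user.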

                                                                      Since DNSSEC makes it simple to publish arbitrary public keys for a domain, you can use something like TCPCrypt to encrypt connections at the transport level. Transport level encryption reduces information leakage (SNI headers for HTTPS, what application you are using, network level “domain” fronting, etc) and mitigates flaws in any application layer encryption.

                                                                      WRT your Paul Vixie quotes: they are 16 years old. I’ve tried really hard to find showstopper issues, but when you dig into criticisms of DNSSEC they boil down to complaints about DNS, problems that have already been fixed, or gripes about the complexity of managing PKI.

                                                                      Or the fact DNSSEC creates DDOS opportunities

                                                                      DNS reflection attacks are a thing because there are tens of thousands of public DNS resolvers willing to answer DNS requests from anyone. The worst offenders here are ANY requests, which return all records associated with a domain. The public key and signature used to verify a DNS response do not incur that much overhead 1 2.

                                                                      The response from DNS providers hasn’t been to rip out DNSSEC, but to rate limit requests that produce large responses. More fundamental changes include switching to TCP, ingress filtering of spoofed UDP packets, supporting edns_client_subnet, and shutting down public DNS servers.

                                                                      introduced lots of bugs in the already buggy BIND

                                                                      Please do not blame DNSSEC for BIND being a buggy POS.

                                                                      and still offers no real security.

                                                                      DNSSEC prevents a wide range of attacks. Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security? What about removing trust in the ~650 CAs capable of producing an HTTPS certificate? Wouldn’t you like to live in a world where TCP, SSH, email, IRC, etc. can take advantage of PKI instead of opportunistic crypto?

                                                                      non-HTTPS applications are forced to replicate all of the infrastructure required for a PKI. Which, in practice, means that it’s either non-existent (SSH) or barely functioning (GPG).

                                                                      I don’t see how DNSSEC even begins to solve these problems.

                                                                      Publish a DNS record with the public key for the encryption protocol you would like to use (see: SSHFP, DANE, PGP).
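As a concrete illustration of the DANE case, a TLSA record (RFC 6698) is just three parameters plus a hash of the certificate (or its public key). The DER bytes below are a placeholder, not a real certificate:

```python
import hashlib

def tlsa_rdata(cert_der: bytes, usage: int = 3, selector: int = 0, mtype: int = 1) -> str:
    """Textual RDATA for a TLSA record (RFC 6698).

    usage 3 = DANE-EE (pin this end-entity cert), selector 0 = full
    certificate, matching type 1 = SHA-256 of the selected data.
    """
    digest = hashlib.sha256(cert_der).hexdigest()
    return f"{usage} {selector} {mtype} {digest}"

# Hypothetical DER-encoded certificate, just to show the record shape:
der = b"example-der-encoded-certificate"
print("_443._tcp.example.com. IN TLSA", tlsa_rdata(der))
```

A DANE-aware client resolves `_443._tcp.<domain>`, validates the DNSSEC chain, and then requires the presented certificate to match the digest.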

                                                                      It is sad. Firefox and others still support it, and HSTS + Certificate Transparency is probably good enough anyway.

                                                                      As a decentralized domain name nerd, I strongly disagree. We need a standard way for naming systems to declare the public keys for their services. Seriously, we have to sign TOR domains with HTTPS certificates from DigiCert because the browser doesn’t support DANE.

                                                                      1. 1

                                                                        No, using DNSSEC to bootstrap the public keys for … any cryptographic protocol.

                                                                        This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: Browsers don’t do DANE because it’d piss people off.

                                                                        Are you seriously arguing that removing trust in every DNS server between yourself and the registrar doesn’t materially improve security?

                                                                        Yes.

                                                                        Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.

                                                                        As a decentralized domain name nerd, I strongly disagree.

                                                                        Well, you’re wrong. Decentralising trust just creates multiple single points of failure unless you’re willing to wait for consensus, in which case you might as well use HSTS + Certificate Transparency (and your favourite mirror).

                                                                        1. 1

                                                                          This is some fantasy version of DNSSEC that doesn’t exist yet and likely never will: Browsers don’t do DANE because it’d piss people off.

                                                                          It would only piss off people who think DNSSEC is a bad thing. Chrome actually implemented it but it was removed due to lack of critical mass. I’m thinking of pitching Cloudflare on pushing for DANE.

                                                                          Until you know that you’re supposed to be seeing a DS/DNSKEY chain, every recursive resolver (and every stub resolver) gains nothing, and risks tricking people into thinking they have some security because they installed something called DNSSEC.

                                                                          If the parent zone is signed and has a DS key for the child zone, then your local resolver would know that the child zone is supposed to be signed.
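The DS linkage is mechanical: the parent publishes a digest over the child’s DNSKEY, so a validator can distinguish “this zone is unsigned” from “someone stripped the signatures”. A rough sketch of the RFC 4034 §5.1.4 digest computation, with a placeholder DNSKEY RDATA and made-up key tag:

```python
import hashlib

def name_to_wire(name: str) -> bytes:
    # Canonical wire format: lowercase labels, length-prefixed, root byte.
    out = b""
    for label in name.rstrip(".").split("."):
        lab = label.lower().encode()
        out += bytes([len(lab)]) + lab
    return out + b"\x00"

def ds_digest(owner: str, dnskey_rdata: bytes) -> str:
    # RFC 4034 §5.1.4: digest = hash(canonical owner name | DNSKEY RDATA)
    return hashlib.sha256(name_to_wire(owner) + dnskey_rdata).hexdigest().upper()

# 12345 = hypothetical key tag, 13 = ECDSAP256SHA256, 2 = SHA-256 digest type
print("example.com. IN DS 12345 13 2", ds_digest("example.com.", b"placeholder-dnskey-rdata"))
```

If the parent carries such a DS record, a validating resolver treats an unsigned answer for the child zone as bogus rather than as merely insecure.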

                                                                          As a decentralized domain name nerd, I strongly disagree.

                                                                          Well, you’re wrong.

                                                                          No, I’m not. This was a major issue with Namecoin: we had to MITM every HTTPS connection to check the certificate against the blockchain records then replace it with a local certificate. There was no uniform way of making this work: the hack required tweaking for every OS and application and prevented users from selecting their own SOCKS5 proxy. The entire team agreed that DANE was the only way forward and we even got DigiCert to ensure that they used DANE when minting their .onion certs.

                                                                          Decentralising trust just creates multiple single-points of failure

                                                                          Um, what?

                                                                          unless you’re willing to wait for consensus

                                                                          Consensus from the Blockchain?

                                                                1. 4

                                                                  “Protocol-level faculties that let you read, write, and hash specific chunks out of the middle of large files without making you transfer the whole large file? does SSH have that? Nope.” - sftp does have that actually. SFTP is basically just a protocol that forwards file descriptors.

                                                                  Two things that always bothered me about OpenSSH:

                                                                  The naming of id_rsa and id_rsa.pub means tab completion can cause you to accidentally send your secret key. I would have called the private key id_rsa.priv.

                                                                  It would be neat if it had more ways to support machine to machine workflows. I use ssh to forward unix sockets to do secure cluster networking, force commands are ok, but they are not the easiest to use.

                                                                  1. 1

                                                                    You don’t need the id_rsa.pub file. In fact, I delete mine.

                                                                    If you’re using files, you can use:

                                                                    ssh-keygen -y -f ~/.ssh/id_rsa
                                                                    

                                                                    If you use ssh-agent you can use:

                                                                    ssh-add -L
                                                                    

                                                                    If you’re using a smartcard you can use:

                                                                    pkcs15-tool --read-ssh-key 69 # or whatever your key number is
                                                                    

                                                                    and so on…

                                                                  1. 6

                                                                    Are you confident that every single user of your systems is going to out-of-band verify that that is the correct host key?

                                                                    If your production infrastructure has not solved this problem already, you should fix your infrastructure. There are multiple ways.

                                                                    1. Use OpenSSH with an internal CA
                                                                    2. Automate collection of server public ssh fingerprints and deployment of known_hosts files to all systems and clients (we do it via LDAP + other glue)
                                                                    3. Utilize a third party tool that can do this for you (e.g., krypt.co)

                                                                    Your users should never see the message “the authenticity of (host) cannot be established”
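Option 2 can be lighter-weight than it sounds: render a known_hosts file from whatever inventory you already trust (LDAP, a CMDB, cloud-provider metadata) and push it out with your config management. A minimal sketch, where the hostnames and keys are hypothetical:

```python
def render_known_hosts(inventory: dict) -> str:
    """Render a known_hosts file from {hostname: (keytype, base64key)}.

    The inventory is assumed to come from a trusted source (e.g. LDAP),
    so clients never have to answer a trust-on-first-use prompt.
    """
    lines = []
    for host, (keytype, key) in sorted(inventory.items()):
        lines.append(f"{host} {keytype} {key}")
    return "\n".join(lines) + "\n"

# Hypothetical hosts and (truncated) keys:
inventory = {
    "db1.internal": ("ssh-ed25519", "AAAAC3NzaC1lZDI1NTE5AAAAIExampleKey1"),
    "web1.internal": ("ssh-ed25519", "AAAAC3NzaC1lZDI1NTE5AAAAIExampleKey2"),
}
print(render_known_hosts(inventory), end="")
```

Ship the result as a global known_hosts (e.g. via OpenSSH’s `GlobalKnownHostsFile`) and combine it with `StrictHostKeyChecking yes` so unknown hosts are hard failures rather than prompts.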

                                                                    1. 4

                                                                      Makes me wonder how Oxy actually authenticates hosts. The author hates on TOFU but mentions no alternatives AFAICS, not even those available in OpenSSH?

                                                                      1. 3

                                                                        It only authenticates keys, and it makes key management YOUR problem. See https://github.com/oxy-secure/oxy/blob/master/protocol.txt for more details.

                                                                        I.e. you have to copy keys from the server to the client before the client can connect (and possibly the other way, from the client to the server, depending on where you generate them).

                                                                        1. 1

                                                                          Key management is already your problem.

                                                                          ssh’s default simply lets you pretend that it isn’t.

                                                                          1. 2

                                                                            Very true. I didn’t mean to imply otherwise.

                                                                    1. 1

                                                                      kdb allows nested columns, so:

                                                                      q)select email by team_id,name from users lj teams
                                                                      team_id name | email                                                      
                                                                      -------------| -----------------------------------------------------------
                                                                      1       Citus| `craig@citusdata.com`farina@citusdata.com                  
                                                                      2       ACME | `jennifer@acmecorp.com`tom@acmecorp.com`peyton@acmecorp.com
                                                                      

                                                                      of course, being king of timeseries means it has lots of good date/time types (most with syntax) so regular operators are extended over them:

                                                                      q)select email from users where created_at > 2018.06.22 - 7
                                                                      email                
                                                                      ---------------------
                                                                      craig@citusdata.com  
                                                                      jennifer@acmecorp.com
                                                                      tom@acmecorp.com     
                                                                      
                                                                      q)select count i by created_at.week from users
                                                                      week      | x
                                                                      ----------| -
                                                                      2018.06.04| 2
                                                                      2018.06.11| 3
                                                                      

                                                                      JSON is often found in lieu of proper data structures and good design, but if you’ve got a data cleaning exercise, you can dip into q as needed:

                                                                      q)select email,(.j.k each location_data)@'`state from users
                                                                      email                 x   
                                                                      --------------------------
                                                                      craig@citusdata.com   "AL"
                                                                      farina@citusdata.com  ()  
                                                                      jennifer@acmecorp.com "CA"
                                                                      tom@acmecorp.com      ()  
                                                                      peyton@acmecorp.com   ()  
                                                                      

                                                                      This has other great advantages, for example being able to build your application on top of the database cuts out a massive source of latency.

                                                                      1. 4

                                                                        Most of the large resolver services such as Google, Quad9, OpenDNS and Cloudflare are all DNSSEC enabled.

                                                                        OpenDNS does not support DNSSEC at this time.

                                                                        1. 1

                                                                          Whoah. I could have sworn they did.

                                                                          1. 6

                                                                            OpenDNS does support DNSCurve, a protocol that is faster, simpler, offers real incremental benefits (unlike DNSSEC, which is all-or-nothing), and is easier to deploy than DNSSEC.

                                                                            DNSCurve however is unlikely to be implemented by ICANN for political reasons.

                                                                            1. 12

                                                                              I’m getting really sick of people talking about DNSCurve as if it was an alternative to DNSSEC, the two do completely different things. DNSCurve secures the traffic between the authoritative nameserver and a DNSCurve enabled resolver (the only one I know of being OpenDNS). DNSSEC authenticates the validity of the DNS responses themselves.

                                                                              1. 2

                                                                                You’re right. DNSCurve protects all of your DNS traffic from tampering between you and the resolver, and the resolver can implement its own security infrastructure for detecting and protecting against tampering and cache poisoning, while DNSSEC validates the DNS traffic of far less than 1% of all domains on the internet AND requires each client to not use a caching resolver if they want to be able to trust the results.

                                                                                So DNSCurve has a real world impact on protecting users, and DNSSEC is still vaporware.

                                                                                1. 0

                                                                                  You’re right. DNSCurve protects all of your DNS traffic from tampering between you and the resolver, and the resolver can implement its own security infrastructure for detecting and protecting against tampering and cache poisoning.

                                                                                  The only way to reliably detect against tampering with DNS records is for the owner of the domain to sign them cryptographically.

                                                                                  while DNSSEC validates the DNS traffic of far less than 1% of all domains on the internet

                                                                                  IPv6 had pretty low adoption too, until things started to get bad enough.

                                                                                  AND requires each client to not use a caching resolver if they want to be able to trust the results.

                                                                                  They don’t have to use a caching resolver; that’s the whole point of the owner of the domain signing the DNS records - you can verify it for yourself! A client can also query the root name servers directly; it would increase load on the authoritative nameservers and has a different privacy profile … but there is nothing wrong with that.

                                                                                  1. 2
                                                                                    • DNSCrypt for privacy
                                                                                    • TLS for validation (and of course the other security benefits that come with)

                                                                                    That’s the stack everyone should be using. It works and is reliable; duplicate certificates forged by shady CAs are a thing of the past with CAA, certificate transparency, and the ability to do stapling. Pushing validation down to DNS is the wrong approach.
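For reference, CAA itself is just another DNS record: it restricts which CAs may issue for a domain (RFC 8659). A tiny sketch of its presentation form, with a hypothetical domain:

```python
def caa_record(domain: str, tag: str, value: str, flags: int = 0) -> str:
    """Presentation form of a CAA record (RFC 8659).

    tag "issue" names a CA permitted to issue certificates for the domain;
    flags 0 means the property is non-critical.
    """
    return f'{domain}. IN CAA {flags} {tag} "{value}"'

print(caa_record("example.com", "issue", "letsencrypt.org"))
```

Note the irony that CAA, like the validation DNSSEC proponents advocate, also depends on the integrity of DNS answers.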

                                                                                    1. 0
                                                                                      • DNSCrypt for privacy

                                                                                      DNSCrypt is dead. TLS adopted DJB’s curves and some IETF working groups wrote a few standards which can pass through firewalls.

                                                                                      • TLS for validation (and of course the other security benefits that come with)

                                                                                      TLS does not validate that a record came from a domain name; it only validates that the response came from a third-party resolver.

                                                                                      That’s the stack everyone should be using. It works and is reliable; duplicate certificates forged by shady CAs is a thing of the past with CAA, certificate transparency, and the ability to do stapling.

                                                                                      We still have to sign .onion domains using DigiCert’s certificate. Certificate transparency doesn’t protect against domains which aren’t using “high assurance” certificates. Nor does this protect any other protocol outside of TLS.

                                                                                      Pushing down the validation to DNS is the wrong approach.

                                                                                      Why are you so against cryptographic verification of DNS records?

                                                                              2. 2

                                                                                DJB threw a bunch of shade on DNSSEC when he announced DNSCurve, but he was (at best) misguided 1.

                                                                                DNSCurve however is unlikely to be implemented by ICANN for political reasons.

                                                                                DNSCurve doesn’t have anything for ICANN to implement as there is no signing of DNS records. It will validate that a response came from a specific DNS cache, but not that the records were produced by the owner of the domain.

                                                                                He’s right that we need privacy for DNS lookups, and the adults in the room created DNS over TLS.

                                                                              3. 0

                                                                                OpenDNS doesn’t support DNSSEC, and prevents you from doing the validation yourself even if you wanted to, by stripping required records before forwarding a response to you. 1

                                                                                Their business model used to rely on NXDOMAIN hijacking, which DNSSEC prevents. They stopped doing that a while ago, but I just checked and they are still stripping out DNSSEC records 🤯!

                                                                                I really wish I hadn’t gotten sick, I was going to help work on a standard for DNS filtering. At any rate, these are bad actors in the DNS ecosystem.