Threads for quicksilver03

  1. 8

    For professional work I use a refurbished ThinkPad T450 with 16 GiB RAM running Linux (openSUSE Tumbleweed, but I’ll soon switch to openSUSE Leap). It works flawlessly for operations, but also for Rust / Go / Erlang development! (Although for Rust it is somewhat slow, but not too much…)

    Why such an old laptop? None of the “newer” ones actually meet the following simple requirements:

    • have proper SATA disk support; I want to be able to just remove my SSD from one laptop and put it into another one and be up-and-running within 30 minutes! (this is the third laptop I’ve moved my SSD through while barely touching my Linux /etc configuration;) (sure, I only get ~500 MiB/s I/O with SATA, whereas with a proper NVMe drive I would get ~2-3 GiB/s, but guess what, even ~500 MiB/s is good enough…)

    • have a proper Ethernet RJ45 connector; I can’t imagine why on earth a sysadmin (or even a developer) would choose a laptop without a proper network connector… (see the next point about dongles;)

    • HDMI (or DisplayPort); again, I can’t imagine why one wouldn’t want a proper monitor plug?

    • no “dongle madness” – I don’t want to carry with me “dongles”, “adapters”, “docking stations”, etc.; I want proper USB, HDMI/DisplayPort, Ethernet connectors in my laptop!

    • memory should be self-serviceable, not be soldered on! (as should the disk, and other important components…)

    • if possible, it shouldn’t cost me a kidney!

    • (at this point battery life, display quality, Intel vs AMD, become moot, as by the middle of my list, most “business” laptops fail to meet the requirements…)

    What would I be replacing it with? A ThinkPad T460 or T470, as the T480 and newer seem to lack proper SATA support… (Or perhaps a Tuxedo laptop, as they ship in Europe without increasing the price too much… I also like the Framework laptop idea, but at the moment it’s a bit pricey…)

    1. 4

      have a proper Ethernet RJ45 connector; I can’t imagine why on earth a sysadmin (or even a developer) would choose a laptop without a proper network connector…

      Colleagues at work have heard me say that a laptop without an RJ45 connector is an expensive tablet with a built-in keyboard.

      1. 1

        I can count the number of times I’ve used the network jack on a laptop (or wished I could) in the past few years on zero hands. If you’re a sysadmin, sure that’s one thing, but I can’t imagine going anywhere where I have Ethernet access but not wifi. Even if I do I can just use an adapter or something.

      2. 1

        Nothing stops you from moving an M.2 drive between laptops. It’s even easier in many cases to move it into a desktop, since you don’t need to fuss with cables.

        1. 1

          Well no… M.2 is not the same as SATA in practical terms…

          The M.2 format supports both SATA and NVMe. However, older laptops support only M.2 SATA, and lately you can’t easily find M.2 SATA drives; when you do, the prices are quite high… (I know because I had to buy a few for some HP t620 thin clients.)

          Then, with SATA, people already have USB-to-SATA adapters or enclosures lying around, and if not, any decent general store has one on the shelf. Plus, a desktop has 6 or more SATA ports but only a handful of M.2 connectors.

          (Not to mention how easily one can damage an M.2 SSD with all the circuitry in the open, as opposed to the encased one with SATA.)

          Thus perhaps M.2 is a good alternative to soldered-on SSDs in laptops and other consumer devices, but I don’t think it is a good choice for professional equipment. (Although NVMe does provide increased bandwidth compared to SATA.)

          1. 1

            It’ll be all NVMe before long, already well headed that way. I have a nice USB-C to M.2 NVMe enclosure that I bought when I had to pull data off of a dead laptop; it’s ludicrously fast and also quite small.

      1. 13

        I think until you’ve been in the loop of a security reporting/response group, it’s hard to understand the signal/noise ratio of such things. Even before you get to the kinds of edge-case “is it a security issue or not” reports like the one described here, there’s often just tons and tons of low-effort stuff – many reports come from people who have only minimal understanding and are relying on (bug-prone and false-positive-prone) automated scanners in hopes of a bug bounty payday.

        1. 9

          It’s even harder for a library. I agree that the bug in question is not a security vulnerability in libcurl, but it might be a security vulnerability in a program that is using libcurl.

          1. 7

            I worked at an e-commerce company for a few years and tried to “do the right thing” and put a security@(website).com e-mail on our contact page for white hats who wanted to report any issues found. Within hours we were getting a constant stream of support emails and other correspondence which had nothing to do with security. It was quickly removed.

            1. 5

              I deliberately don’t add DMARC on my domains, because beg bounty scanners flag it as an issue, and I can immediately block anyone who mentions lack of DMARC to me.

              1. 2

                Don’t get me started on people who report the “urgent” “critical” security vulnerability of “my scanner didn’t like your domain’s DMARC/SPF”.
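                For the curious, the entire “research” behind such a report usually amounts to two DNS lookups; a rough sketch (example.com is just a placeholder domain):

                  dig +short TXT example.com          # SPF, if any, lives in a TXT record on the domain itself
                  dig +short TXT _dmarc.example.com   # DMARC policy, if any, lives at the _dmarc subdomain

                If either query comes back empty, the scanner declares a “critical vulnerability”.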

                1. 1

                  uh that’s an interesting approach - I know of providers who don’t use SPF and DMARC for technical reasons (and use a giant IP whitelist)

                  I did something similar for some time: set up a mail server and block every incoming login attempt, because I don’t have incoming email on that host at all

                2. 3

                  Yeah, people using it as a support channel is a thing, too.

                  Also the incredible amount of spam, but you can’t really spamfilter a security inbox that heavily or you might miss something legit.

                3. 1

                  I had the same experience when the parent company of the one I worked for decided to institute a HackerOne bug bounty. The sheer number of useless reports made it difficult to properly address the actual vulnerabilities; IIRC we got about 1 actual vulnerability per 20 reports.

                  I’m not sure if that’s indicative of the whole public bug bounty industry, but after that experience I no longer want to run one and especially I do not want to have anything to do with HackerOne.

                  1. 2

                    It’s pretty common to get that kind of spam; it doesn’t have much to do with H1 itself - they just make the bounties easier to discover. You can actually pay them to screen things for you so you get a much more reasonable list. I was happy with that result.

                1. 3

                  I’m still not sure how I feel about Silverblue as the future, especially as a developer’s OS. I’ve never really used it in anger though. I wonder if there will ever be an upgrade path from Workstation to Silverblue?

                  1. 4

                    Silverblue sounds OK to me; it feels like how I’m used to macOS being updated. I’m more sceptical about Toolbox.

                    I don’t mind (much) deploying containers, but I categorically don’t want to use them locally on my machine for dev environments. Perhaps I’m misunderstanding how Toolbox would work, but my experience using Docker (on a Mac) has effectively inoculated me against the idea of containers for dev environments. We use that at work and I’m finding it frustrating.

                    Nix has a better story for dev environments IMHO, and thankfully Nix seems to be gaining some traction at work. We are starting to see shell.nix files checked into projects, which means less need to run stuff in Docker.

                    1. 4

                      Toolbox uses container technology, but it is not like Docker. Toolbox containers pretty much act like independent hosts - think of them as VMs, but with a shared kernel and shared resources, and with /home and such automatically shared so you don’t have to muck around with copying files to and from the container.
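
                      If it helps to make that concrete, the day-to-day flow is roughly this (a minimal sketch; the exact flags and default image depend on the Fedora release):

                        toolbox create          # build a default Fedora toolbox container
                        toolbox enter           # shell into it; $HOME, networking, etc. are shared with the host
                        sudo dnf install gcc    # packages land inside the container, not on the host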

                    2. 3

                      Silverblue is, to me, a solution looking very hard for a problem to solve. I don’t see what actual issues warrant this shift, nor what improvement Silverblue is on “regular” Fedora.

                      Also, the vast majority of containers I see are based on Debian or Ubuntu, and I’m using Fedora exactly because I don’t want to run any of those on my own machines.

                      1. 3

                        An alternative understanding, that I’m trying to be open to, is that Silverblue is a solution for a lot of problems that people who aren’t using Linux yet have. Or, at least, for problems that people who are trying to use Linux in some specific environments/specific deployment cases have.

                        I played with Silverblue back when it was first introduced and I got pretty much the same impression. It makes a lot of things I need (e.g. managing local changes) harder to do, in order to solve a bunch of problems I don’t have in the first place, or which are entirely self-inflicted by the distro. The very description that the author uses – “a image based OS model, similar to what people had gotten used to on their phones” – raises the obvious question of why I would want my desktop to work like the security nightmare, unreviewable, CVE-exhibition trashfire that my phone is.

                        On the other hand, a good chunk of today’s computer industry either uses this exact deployment model (Apple) or would really, really like to use it (Microsoft, were it not for those pestering users). Maybe, if it’s not what “the people” want, it’s at least something that makes development easier. I’m not sure. Lots of things people seem to love about it (e.g. easy rollbacks) are things I can’t even remember the last time I needed on a desktop/laptop, maybe I’m just not the target audience here.

                        1. 2

                          Silverblue’s ability to rollback seems like a solid improvement.

                          1. 2

                            It’s not the same mechanism, but the way I understand it, the end result is not much different from SUSE’s Snapper, which takes a root filesystem snapshot before each update. You lose the immutability but keep the rollback. There are similar wrappers for Fedora too.
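
                            For reference, the Snapper flow on openSUSE looks roughly like this (a sketch; the snapshot number is whatever snapper list shows for the last known-good state):

                              sudo snapper list           # list snapshots, including the pre/post pairs created around updates
                              sudo snapper rollback 42    # 42 is a placeholder: create a new default subvolume from that snapshot
                              sudo reboot                 # boot into the rolled-back root filesystem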

                      1. 2

                        I think netdata (federated or in cloud mode) might be worth a look. Or glances: https://nicolargo.github.io/glances/

                        https://linoxide.com/install-use-netdata-monitoring-tool-linux/

                        https://tech.davidfield.co.uk/netdata-monitoring-your-servers/

                        Netdata should be light on resources, and both should be somewhat simpler / more ready out of the box than Grafana+Prometheus.

                        But I’m interested to see what you land on; I’m not aware of a “perfect” opinionated, zero-setup monitoring system.

                        I second the recommendation for UptimeRobot for… monitoring (external-service) uptime.
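
                        If you just want to kick the tires, both are quick to try; a sketch of the install commands (check each project’s docs for the current recommended method before piping curl into bash):

                          bash <(curl -Ss https://my-netdata.io/kickstart.sh)   # Netdata's one-line installer, per their docs
                          pip install --user glances                            # Glances is a Python package
                          glances -w                                            # optionally serve the web UI instead of the curses one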

                        1. 3

                          Glances - which I’ve never heard of until now - looks a whole lot like what I was looking for. I will definitely try setting it up. Thanks for the tip.

                          1. 2

                            Netdata isn’t particularly light on resources: it takes between 100 MB and 150 MB of RSS on each of my various VPSes and dedicated servers, according to htop. However, I believe it can be tuned to use less memory, for example by collecting fewer metrics.
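
                            A sketch of the kind of tuning I mean, in /etc/netdata/netdata.conf; treat the exact option names as illustrative, since they vary a bit between Netdata versions:

                              [global]
                                  update every = 5    # collect every 5 seconds instead of every second
                                  history = 1800      # keep roughly 30 minutes of metrics in RAM instead of hours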

                            1. 1

                              Do you have examples of agents that collect a similar amount of data using fewer resources?

                              1. 2

                                At work we use Telegraf with Prometheus and Grafana; Telegraf takes between 30 MB and 45 MB of RSS in htop. However, with this kind of solution you have to configure and maintain a central collector (Prometheus), whereas each Netdata instance on my own infrastructure is completely autonomous.

                                The tradeoff is that I use more resources per monitored node to protect against failures of a central collector, but this was a personal choice which I’m ready to reverse if I can find something like a SaaS collector where someone else is in charge of availability.

                                1. 1

                                  Thanks for the two data points - I would probably have guessed the relationship would be the reverse. But maybe I’m mixing up Netdata and a different, lightweight collector… although I can’t think of which one that would be.

                          1. 23

                            I used to think this was the case until I realized that Google funds Firefox through noblesse oblige, and so all the teeth-gnashing over “Google owns the Internet” is still true whether you use Chrome directly or whether you use Firefox. The only real meaningful competition in browsers is from Apple (God help us.) Yes, Apple takes money from Google too, but they don’t rely on Google for their existence.

                            I am using Safari now, which is… okay. The extension ecosystem is much less robust but I have survived. I’m also considering Brave, but Chromium browsers just gulp down the battery in Mac OS so I’m not totally convinced there yet.

                            Mozilla’s recent political advocacy has also made it difficult for me to continue using Firefox.

                            1. 19

                              I used to think this was the case until I realized that Google funds Firefox through noblesse oblige, and so all the teeth-gnashing over “Google owns the Internet” is still true whether you use Chrome directly or whether you use Firefox.

                              I’m not sure the premise is true. Google probably wants to have a practical monopoly that does not count as a legal monopoly. This isn’t an angelic motive, but it isn’t noblesse oblige either.

                              More importantly, the conclusion doesn’t follow–at least not 100%. Money has a way of giving you control over people, but it can be imprecise, indirect, or cumbersome. I believe what Google and Firefox have is a contract to share revenue with Firefox for Google searches done through Firefox’s url bar. If Google says “make X, Y and Z decisions about the web or you’ll lose this deal”, that is the kind of statement antitrust regulators find fascinating. Since recent years have seen increased interest in antitrust, Google might not feel that they can do that.

                              1. 9

                                Yes, I agree. It’s still bad that most of Mozilla’s funding comes from Google, but it matters that Mozilla is structured with its intellectual property owned by a non-profit. That doesn’t solve all problems, but it creates enough independence that, for example, Firefox is significantly ahead of Chrome on cookie-blocking functionality - which very much hits Google’s most important revenue stream.

                                1. 4

                                  Google never has to say “make X, Y and Z decisions about the web or you’ll lose this deal,” with or without the threat of antitrust regulation. People have a way of figuring out what they have to do to keep their job.

                                2. 17

                                  I’m tired of the Pocket suggested stories. They have a certain schtick to them that’s hard to pin down precisely but usually amounts to excessively leftist, pseudo-intellectual clickbait: “meat is the privilege of the west and needs to stop.”

                                  I know you can turn them off.

                                  I’m arguing defaults matter, and defaults that serve to distract with intellectual junk is not great. At least it isn’t misinformation, but that’s not saying much.

                                  Moving back to Chrome this year because of that, along with some perf issues I run into more than I’d like. It’s a shame, I wanted to stop supporting Google, but the W3C has succeeded in creating a standard so complex that millions of dollars are necessary to adequately fund the development of a performant browser.

                                  1. 2

                                    Moving back to Chrome this year because of that, along with some perf issues I run into more than I’d like. It’s a shame, I wanted to stop supporting Google, but the W3C has succeeded in creating a standard so complex that millions of dollars are necessary to adequately fund the development of a performant browser.

                                    In case you haven’t heard of it, this might be worth checking out: https://ungoogled-software.github.io/

                                    1. 1

                                      Except as of a few days ago Google is cutting off access to certain APIs like Sync that Chromium was using.

                                      1. 1

                                        Straight out of the Android playbook

                                  2. 4

                                    Mozilla’s recent political advocacy has also made it difficult for me to continue using Firefox.

                                    Can you elaborate on this? I use FF but have never delved into their politics.

                                    1. 16

                                      My top of mind example: https://blog.mozilla.org/blog/2021/01/08/we-need-more-than-deplatforming/

                                      Also: https://blog.mozilla.org/blog/2020/07/13/sustainability-needs-culture-change-introducing-environmental-champions/ https://blog.mozilla.org/blog/2020/06/24/immigrants-remain-core-to-the-u-s-strength/ https://blog.mozilla.org/blog/2020/06/24/were-proud-to-join-stophateforprofit/

                                      I’m not trying to turn this into debating specifically what is said in these posts but many are just pure politics, which I’m not interested in supporting by telling people to use Firefox. My web browser doesn’t need to talk about ‘culture change’ or systemic racism. Firefox also pushes some of these posts to the new tab page, by default, so it’s not like you can just ignore their blog.

                                      1. 6

                                          I’m starting to be afraid that being against censorship is enough to get you ‘more than de-platformed’.

                                          1. 10

                                            Really? I feel like every prescription in that post seems reasonable; increase transparency, make the algorithm prioritize factual information over misinformation, research the impact of social media on people and society. How could anyone disagree with those points?

                                            1. 17

                                              You’re right, how could anyone disagree with the most holy of holies, ‘fact checkers’?

                                              Here’s a great fact check: https://www.politifact.com/factchecks/2021/jan/06/ted-cruz/ted-cruzs-misleading-statement-people-who-believe-/

                                              The ‘fact check’ is a bunch of irrelevant information about how bad Ted Cruz and his opinions are, before we get to the meat of the ‘fact check’ which is, unbelievably, “yes, what he said is true, but there was also other stuff he didn’t say that we think is more important than what he did!”

                                              Regardless of your opinion on whether this was a ‘valid’ fact check or not, I don’t want my web browser trying to pop up clippy bubbles when I visit a site saying “This has been officially declared by the Fact Checkers™ as wrongthink, are you sure you’re allowed to read it?” I also don’t want my web browser’s marketer advocating for deplatforming (“we need more than deplatforming” suggests that deplatforming should still be part of the ‘open’ internet). That’s all.

                                              1. 15

                                                a bunch of irrelevant information about how bad Ted Cruz and his opinions are

                                                I don’t see that anywhere. It’s entirely topical and just some context about what Cruz was talking about.

                                                the meat of the ‘fact check’ which is, unbelievably, “yes, what he said is true, but there was also other stuff he didn’t say that we think is more important than what he did!”

                                                That’s not what it says at all. Anyone can cherry-pick or interpret things in such a way that makes their statement “factual”. This is how homeopaths can “truthfully” point at studies which show an effect in favour of homeopathy. But any fact check worth its salt will also look at the overwhelming majority of studies that very clearly demonstrate that homeopathy is no better than a placebo, and therefore doesn’t work (plus, will point out that the proposed mechanisms of homeopathy are extremely unlikely to work in the first place, given that they violate many established laws of physics).

                                                The “39% of Americans … 31% of independents … 17% of Democrats believe the election was rigged” is clearly not supported by any evidence, and only by a tenuous interpretation of a very limited set of data. This is a classic case of cherry-picking.

                                                I hardly ever read politifact, but if this is really the worst fact-check you can find then it seems they’re not so bad.

                                                1. 7

                                                  This article has a few more examples of bad fact checks:

                                                  https://greenwald.substack.com/p/instagram-is-using-false-fact-checking

                                                2. 7

                                                  Media fact-checkers are known to be biased.

                                                  [Media Matters lobby] had to make us think that we needed a third party to step in and tell us what to think and sort through the information … The fake news effort, the fact-checking, which is usually fake fact-checking, meaning it’s not a genuine effort, is a propaganda effort … We’ve seen it explode as we come into the 2020 election, for much the same reason, whereby, the social media companies, third parties, academic institutions and NewsGuard … they insert themselves. But of course, they’re all backed by certain money and special interests. They’re no more in a position to fact-check than an ordinary person walking on the street … — Sharyl Attkisson on Media Bias, Analysis by Dr. Joseph Mercola

                                                  Below is a list of known rebuttals of some “fact-checkers”.

                                                  Politifact

                                                  • “I wanted to show that these fact-checkers just lie, and they usually go unchecked because most people don’t have the money, don’t have the time, and don’t have the platform to go after them — and I have all three” — Candace Owens Challenges Fact-Checker, And Wins

                                                  Full fact (fullfact.org)

                                                  Snopes

                                                  Associated Press (AP)

                                                  • “Fact-checking was devised to be a trusted way to separate fact from fiction. In reality, many journalists use the label “fact-checking” as a cover for promoting their own biases. A case in point is an Associated Press (AP) piece headlined “AP FACT-CHECK: Trump’s inaccurate boasts on China travel ban,” which was published on March 26, 2020 and carried by many news outlets.” — Propaganda masquerading as fact-checking

                                                  Politico

                                                  1. 4

                                                    I’m interested in learning about the content management systems that these fact checker websites use to effectively manage large amounts of content with large groups of staff. Do you have any links about that?

                                                    1. 3

                                                      The real error is to imply that “fact checkers” are functionally different from any other source of news/journalism/opinion. All such sources are a collection of humans. All humans have bias. Many such collections of humans have people that are blind to their own bias, or suffer a delusion of objectivity.

                                                      Therefore the existence of some rebuttals to a minuscule number of these “fact checks” (between 0 and 1% of all “fact checks”) should not come as a surprise to anyone. Especially when the rebuttals are published by other news/journalism/opinion sources that are at least as biased and partisan as the fact checkers they’re rebutting.

                                                      1. 1

                                                        The real error is to imply that “fact checkers” are functionally different from any other source of news/journalism/opinion.

                                                        Indeed they aren’t that different. Fact-checkers inherit whatever bias is already present in mainstream media, which is itself well documented, as the investigative journalist Sharyl Attkisson explored in her two books:

                                                        • The Smear exposes and focuses on the multi-billion dollar industry of political and corporate operatives that control the news and our info, and how they do it.
                                                        • Slanted looks at how the operatives moved on to censor info online (and why), and has chapters dissecting the devolution of NYT and CNN, recommendations where to get off narrative news, and a comprehensive list of media mistakes.
                                                3. 5

                                                  After reading that blog post last week I switched away from Firefox. It will lead to the inevitable politicization of a web browser where the truthfulness of many topics is filtered through a very left-wing, progressive lens.

                                                  1. 23

                                                    I feel like “the election wasn’t stolen” isn’t a left- or right-wing opinion. It’s just the truth.

                                                    1. 15

                                                      To be fair, I feel like the whole idea of the existence of an objective reality is a left-wing opinion right now in the US.

                                                      1. 5

                                                        There are many instances of objective reality which left-wing opinion deems problematic. It would be unwise to point them out on a public forum.

                                                        1. 8

                                                          I feel like you have set up a dilemma for yourself. In another thread, you complain that we are headed towards a situation where Lobsters will no longer be a reasonable venue for exploring inconvenient truths. However, in this thread, you insinuate that Lobsters already has become unreasonable, as an excuse for avoiding giving examples of such truths. Which truths are being silenced by Lobsters?

                                                          Which truths are being silenced by Mozilla? Keep in mind that the main issue under contention in their blog post is whether a privately-owned platform is obligated to repeat the claims of a politician, particularly when those claims would undermine democratic processes which elect people to that politician’s office; here, there were no truths being silenced, which makes the claim of impending censorship sound like a slippery slope.

                                                          1. 4

                                                            Yeah but none that are currently fomenting a coup in a major world power.

                                                      2. 16

                                                        But… Mozilla has been inherently political the whole way. The entire Free Software movement is incredibly political. Privacy is political. Why is “social media should be more transparent and try to reduce the spread of blatant misinformation” where you draw the line?

                                                        1. 5

                                                          That’s not where I draw the line. We appear to be heading towards a Motte and Bailey fallacy where recent events in the US will be used as justification to clamp down on other views and opinions that left-wing progressives don’t approve of (see some of the comments on this page about ‘fact checkers’)

                                                          1. 7

                                                            In this case though, the “views and opinions that left-wing progressives don’t approve of” are the ideas of white supremacy and the belief that the election was rigged. Should those not be “clamped down” on? (I mean, it’s important to be able to discuss whether the election was rigged, but not when it’s just a president who doesn’t want to accept a loss and has literally no credible evidence of any kind.)

                                                            1. 2

                                                              I mentioned the Motte and Bailey fallacy being used and you bring up ‘white supremacy’ in your response! ‘White Supremacy’ is the default Motte used by the progressive left, the Bailey being a clampdown on much more contentious issues. It’s this power to clamp down on the more contentious issues that I object to.

                                                              1. 6

                                                                So protest clamp downs on things you don’t want to see clamp downs on, and don’t protest clamp downs on things you feel should be clamped down on? We must be able to discuss and address real issues, such as the spread of misinformation and discrimination/supremacy.

                                                                But that’s not even super relevant to the article in question. Mozilla isn’t even calling for censoring anyone. It’s calling for a higher degree of transparency (which none of us should object to) and for the algorithm to prioritize factual information over misinformation (which everyone ought to agree with in principle, though we can criticize specific ways to achieve it).

                                                                1. 4

                                                                  We are talking past each other in a very unproductive way.

                                                                  The issue I have is with what you describe as “…and for the algorithm to prioritize factual information over misinformation”

                                                                  Can you not see the problem when the definition of ‘factual information’ is in the hands of a small group of corporations from the West Coast of America? Do you think that the ‘facts’ related to certain hot-button issues will be politically neutral?

                                                                  It’s this bias that i object to.

                                                                  This American cultural colonialism.

                                                                  1. 3

                                                                    Can you not see the problem when the definition of ‘factual information’ is in the hands of a small group of corporations from the West Coast of America?

                                                                    ReclaimTheNet recently published a very good article on this topic

                                                                    https://reclaimthenet.org/former-aclu-head-ira-glasser-explains-why-you-cant-ban-hate-speech/

                                                                    1. 3

                                                                      That’s an excellent article. Thank you for posting it.

                                                                      1. 3

                                                                        You’re welcome. You might be interested in my public notes on the larger topic, published here.

                                                        2. 3

                                                          Out of interest, to which browser did you switch?

                                                    2. 2

                                                    If possible, try Vivaldi: being based on Chromium, it will be the easiest to switch to; for example, you can install Chromium extensions in Vivaldi. I’m not sure about their OS X support (which seems to be your use case) though, so YMMV.

                                                    1. 14

                                                      This is how I git, as a self-admitted near-idiot:

                                                      • Never branch, always on m*ster.

                                                      • Commit mainly from IntelliJ’s GUI.

                                                      • Push either from IntelliJ or command line, can go either way.

                                                      • On the server, git pull.

                                                      • If there’s any trouble, mv project project.`date +%s` and re-clone.
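
                                                      The last bullet, spelled out (a sketch; $REPO_URL stands in for wherever the project actually lives):

                                                        mv project "project.$(date +%s)"   # park the broken checkout under a timestamped name
                                                        git clone "$REPO_URL" project      # and start over with a fresh clone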

                                                      1. 8

                                                        In my opinion people tend to pay too much attention to CLI commands and steps. As long as one understands what branches and commits are, it becomes immensely easier to handle git and potential problems.

                                                        1. 1

                                                          This is what I refer to as the “xkcd git workflow”: https://xkcd.com/1597/

                                                          1. 1

                                                            I feel like even people more used to git resort to the last bullet point every now and then, I know I have :P

                                                            1. 3

                                                              https://sethrobertson.github.io/GitFixUm/fixup.html is a fantastic resource for fixing mistakes, which helps demystify git. It’s a ‘choose your own adventure’ guide where you decide what state you want to end up at and a few other facts, and it tells you what to do.

                                                              1. 1

                                                                First step

                                                                Strongly consider taking a backup of your current working directory and .git to avoid any possibility of losing data […]

                                                                Hehe, off to a good start. This basically sums it up though, that copy of a directory is a safety net in case any other steps go wrong.

                                                              2. 1

                                                                I admit I used it a lot at my university, because they didn’t teach us how git works and I didn’t take the time to learn it on my own.

                                                                Now, when my local branch is a mess, if I have no local changes to keep and if I know for sure that my branch is in a clean state on the remote repository, I just do:

                                                                git reset --hard origin/my-branch

                                                                As the years pass, it appears to me that you don’t end up with these “fak I have to re-clone my repo” or “fak I don’t know how to fix this conflict” problems if you are meticulous about what you commit and where.

                                                                It takes a bit more time upfront to make commits that you are proud of, but in the end it makes it very easy to understand what you did days/weeks/months ago (and it will save your ass when you have to find when a regression/bug was introduced; see the bisect sketch below).

                                                                TL;DR: git flow + self-explanatory commits = <3
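
                                                                And when a regression does slip through, those clean commits are exactly what makes git bisect pleasant. A minimal sketch (the known-good tag and the test script are placeholders):

                                                                  git bisect start
                                                                  git bisect bad HEAD             # the current commit exhibits the bug
                                                                  git bisect good v1.4.0          # placeholder: last release known to work
                                                                  git bisect run ./run-tests.sh   # hypothetical script that exits non-zero when the bug is present
                                                                  git bisect reset                # return to where you started once the culprit is reported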

                                                                1. 1

                                                                  Oh man! I did this two weeks ago. I had folders numbered 1-n, and in each one I had the same project cloned but in a messed-up state. Granted, it was a new technology stack for me: Node.js, to be precise.

                                                              1. 1

                                                                I use WordPress because I have established a relatively sane workflow around it, even for minimal sites such as my home page https://www.sebastianopilla.com/ , though on the rare occasions I want to write something I do it on https://www.datafaber.com/ .

                                                                1. 2

                                                                  I like the look of comments at “datafaber”. Very clean, no icons, no noise.

                                                                1. 6

                                                                  I haven’t used it myself, but https://www.opalstack.com/ was founded by one former employee of WebFaction (source: https://www.opalstack.com/2019/03/13/its-time-to-switch-to-an-independently-owned-hosting-company/ ) and from a cursory look at their pages they should match your requirements.

                                                                  1. 1

                                                                    Wow! Thanks for sharing. I had not heard about OpalStack, I think I am going to give it a try.

                                                                  1. 1

                                                                    A couple of years ago I wrote a book on cloud hosting which has never sold very well and hasn’t actually gotten me a job, but it has made the process much easier (when the interviewer notices it) and has generated many interesting opportunities.

                                                                    Since it was a book, I didn’t really code much, but I think that the research, writing and editing were comparable in effort and difficulty to a coding project.

                                                                    1. 10

                                                                      At ${job}-2 I gave this a lot of thought and wrote a detailed guide on our conventions at the time: A Proper Server Naming Scheme. Essentially, the hardware would get a permanent/unique name for its lifecycle, and then CNAMES were added with more conventional structured names and convenience names. One detail I liked was using the UN/LOCODE codes instead of IATA airport codes for more specific geographic information.

                                                                      These days, I do a lot more work with dynamic/ephemeral hosts where it doesn’t come up as much; but, we still do have some static hosts and have settled on:

                                                                      <role>-<id>.<project>.<datacenter>.<provider>.<tld>
                                                                      

                                                                      …which ends up looking something like this for a Jenkins worker, for example:

                                                                      worker-0428a29567cb818a7.build.us-west-2.aws.example.com
                                                                      

                                                                      That said, being 100% in the cloud changes the situation a bit compared to pointing at bare metal in your own data centers.

                                                                      1. 2

                                                                        That’s an excellent naming scheme, which I’ve been using for a few years now when there are fewer than 100 physical servers. I’ve found that above that number I’m more likely to use ephemeral hosts, where I don’t really care what the naming scheme is because I very rarely need to connect to them.

                                                                        1. 1

                                                                          And then hostname -s maps perfectly to this, and you can use hostname worker-0428a29567cb818a7.build.us-west-2.aws.example.com with confidence.

                                                                          I might copy-cat you…

                                                                          This also works in the case of a VM migration, which means a network name change (unless you use loopback IPv6 everywhere, in which case IPs might follow your VMs).

                                                                          Even bare metal in your datacenters can make use of this scheme I guess…

                                                                        1. 3

                                                                          If you’re putting binary files into git you’re doing it wrong. One could argue about small files, but compiled code/executables, photos or “gifs for the readme” are definitely misplaced in a git repository.

                                                                          1. 12

                                                                            I do find that having image files in a resources/ directory for something like a website is often simpler than separating the two. Even then, making sure that images are compressed and generally not bloating the repo size / git history is essential.
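
                                                                            In practice that mostly comes down to two habits, sketched below (optipng is just one example of a lossless optimizer; any equivalent works):

                                                                              optipng -o5 resources/*.png   # losslessly shrink PNGs before committing them
                                                                              git count-objects -vH         # keep an eye on how big the object store has grown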

                                                                            1. 18

                                                                              I do find that having image files in a resources/ directory for something like a website is often simpler than separating the two.

                                                                              Yeah, that is exactly the use case here. Mercurial (and git) aren’t designed for handling large binary files, but if you’re checking in static assets/resources that rarely change it still tends to work fine. This repo was fine on Bitbucket for many years, and is working fine on an hgweb instance I’ve spun up in the meantime.

                                                                              I specifically asked about limits because if it’s just the size of the repo being a technical problem for their infrastructure, I can understand. But they would not specify any limits, and just reiterated several times that Mercurial wasn’t designed for this. So I don’t know which of these was the actual problem:

                                                                              1. The repo is so taxing on their infrastructure it’s causing issues for other users.
                                                                              2. The repo is so large it’s costing more to store than some portion of the $100/year account price can cover.
                                                                              3. They are morally opposed to me using Mercurial in a way that it wasn’t designed for (but which still works fine in practice).

                                                                              Cases 1 and 2 are understandable. Setting some kind of limit would prevent those problems (you can still choose to “look the other way” for certain repos, or if it’s only code that’s being stored). Case 3 is something no limit would solve.

                                                                              1. 3

                                                                                If you want to store large files and you want to pay an amount proportional to the file sizes, perhaps AWS S3 or Backblaze B2 would be more appropriate than a code hosting website? I don’t mean to be obtuse, but the site is literally called source hut. Playing rules lawyer on it reads like saying “Am I under arrest? So I’m free to go? Am I under arrest? So I’m free to go?” to a police officer.

                                                                                1. 5

                                                                                  B2 or S3 would make things more complicated than necessary for this simple repo. I’ve spun up a $5/month Linode to run hgweb and it’s been working great. I’m all set.

                                                                            2. 6

                                                                              This case was hg, but the same limitations are present. Hg has a special extension for supporting this:

                                                                              https://www.mercurial-scm.org/wiki/LargefilesExtension

                                                                              And it’s considered “a feature of last resort”. It’s not designed to deal with these use-cases.

                                                                              LFS support requires dedicated engineering and operations efforts, which SourceHut has planned, but is not ready yet.
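
                                                                                For completeness, enabling the extension is only a few lines; a sketch based on the wiki page above (big-asset.bin is just a stand-in filename):

                                                                                  # in ~/.hgrc or the repository's .hg/hgrc
                                                                                  [extensions]
                                                                                  largefiles =

                                                                                  # then mark the big assets explicitly when adding them
                                                                                  hg add --large big-asset.bin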

                                                                              1. 5

                                                                                I have a repository with mostly PNG files. Each PNG file is source code; a chunk of data inside each PNG file is machine-readable code for the graph visually encoded in that PNG’s pixels. What would you have me do?

                                                                                I suspect that you would rather see my repository as a tree of text files. While this would be just as machine-readable, it would be less person-readable, and a motivating goal for this project is to have source files be visually readable in the way that they currently are, if not more so.

                                                                                  git would not support binary files if its authors did not think that binary-file support was useful; that is the kind of people they are and the kind of attitude they have towards software design.

                                                                                With all that said, I know how git works, and I deliberately attempt to avoid checking in PNGs which I think that I will have to change in a later revision. It would be quite nice if git were able to bridge this gap itself, and allow me to check in plaintext files which are automatically presented as PNGs, but this is not what git was designed to do, and we all can imagine the Makefile which I’d end up writing instead.
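
                                                                                  The closest git itself comes is its clean/smudge filter mechanism, which can run arbitrary converters when files are staged and checked out. A sketch of the idea; the png-extract and png-render scripts are hypothetical and would have to do the actual conversion:

                                                                                    # .gitattributes
                                                                                    *.png filter=poset

                                                                                    # per-repository git config; both converter commands are hypothetical
                                                                                    git config filter.poset.clean  png-extract    # PNG -> plaintext when staging
                                                                                    git config filter.poset.smudge png-render     # plaintext -> PNG on checkout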

                                                                                1. 1

                                                                                  I like the project, but pardon my ignorance - aren’t the PNG files still binary assets produced by the “real” source code, which is the textual expression parsed to generate both the embedded bitstring and the dot graph? If they’re machine readable, that places them in the same category as compiled object files.

                                                                                  1. 3

                                                                                    The real source code is non-textual; it is the diagram (WP, nLab) which is being given as a poset (WP, nLab). To achieve optimal space usage, each poset is stored as a single integer which codes for the adjacency matrix. However, this compressed format is completely unreadable. There are several layers around it, but each layer is meant to do one thing and add a minimum of overhead; JSON (in the future, BSON or Capn) for versioning and framing, and PNG for display and transport. There isn’t really source code; there’s just a couple Python and Monte scripts that I use to do data entry, and I want them eventually automated away in favor of API-driven development.

                                                                                    For example, the raw integer for this “big” poset is (at the time of writing) 11905710401280198804461645206862582864032733280538002552643783587742343463875542982826632679979531781130345962690055869140174557805164079451664493830119908249546448900393600362536375098236826502527472287219502587641866446344027189639396008435614121342172595257280100349850262710460607552082781379116891641029966906257269941782203148347435446319452110650150437819888183568953801710556668517927269819049826069754639635218001519121790080070299124681381391073905663214918834228377170513865681335718039072014942925734763447177695704726505508232677565207907808847361088533519190628768503935101450078436440078883570667613621377399190615990138641789867825632738232993306524474475686731263045976640892172841112492236837826524936991273493174493252277794719194724624788800854540425157965678492179958293592443502481921718293759598648627823849117026007852748145536301969541329010559576556167345793274146464743707377623052614506411610303673538441500857028082327094252838525283361694107747501060452083296779071329108952096981932329154808658134461352836962965680782547027111676034212381463001532108035024267617377788040931430694669554305150416269935699250945296649497910288856160812977577782420875349655110824367467382338222637344309284881261936350479660159974669827300003335652340304220699450056411068025062209368014080962770221004626200169073615123558458480350116668115018680372480286949148129488817476018620025866304409104277550106790930739825843129557280931640581742580657243659197320774352481739310337300453334832766294683618032459315377206656069384474626488794123815830298230349250261308484422476802951799392281959397902761456273759806713157666108792675886634397141328888098305747354465103699243937608547404520480305831393405718705181942963222123463560268031790155109126115213866048693391516959219000560878337219324622230146226960346469769371525338127604307953786112516810509019551617885907067412613823285538493443834790453576561810785102306389953804151473860800342221969666874213156376831068606096772785272984102609049257833898258081466729520326827598704376424140779421965233471588921765110820238036094910936640446304632443760482611408445010230964335747094869968021425396439555206085281953007985784739643408074475440039274314217788647485602069097474262381690379456154426900896918268563062231294937080146199930562645748389040251871291840481739518244706752426504146889097315360662429293711705265772337748378759001582638301784557163848933046038798381667545043026975297902178839764134784634179453671000024868722179355800776002690855305662785522771116635997791339179517016284742206819482196944663461005128697584753594559406283638837841370287286682993990297923202976404261911087739188860505577427942276773287168600954693735964671046522557013031834557159173262849132567983767216098382093390056878765856939614383049277441.

                                                                                    1. 1

                                                                                      Ah, okay, I see. Makes sense, thank you for explaining!

                                                                                2. 4

                                                                                    I’ve seen this argument quite a number of times, and almost always without a coherent explanation of why it is wrong. What’s the rationale behind this argument?

                                                                                  1. 4

                                                                                    Shameless plug, I contributed heavily to this help topic back when I was the PM for Microsoft’s Git server: https://docs.microsoft.com/en-us/azure/devops/repos/git/manage-large-files?view=azure-devops

                                                                                    FWIW I disagree with the comment up-thread which says that GIFs for READMEs don’t belong. If you’re going to check in a logo or meme or whatever, that’s perfectly fine. Just don’t do 1000 of them and don’t churn it every week.
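
                                                                                      And if a repo really does need to churn binaries, Git LFS is the usual escape hatch; the setup is small (a sketch, assuming the git-lfs client is installed, with demo.gif as a stand-in file):

                                                                                        git lfs install                 # once per machine
                                                                                        git lfs track "*.gif"           # records the pattern in .gitattributes
                                                                                        git add .gitattributes demo.gif
                                                                                        git commit -m "Move GIFs to LFS"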

                                                                                    1. 2

                                                                                        I think a big part is also “are my tools there for me or am I a slave to my tools?”

                                                                                      If I have a website and most content is under version control, it’s annoying and complicated to have (big) assets outside. Most people simply want one repo with everything inside, and it’s mostly additive, often once per week - it simply doesn’t matter if it’s the wrong tool.

                                                                                1. 1

                                                                                  Writing the business plan for the infrastructure/DevOps/cloud consulting service which I plan to launch before the end of the year, hopefully.

                                                                                  1. 5

                                                                                    A stupid question from somebody that doesn’t really know anything about PKI: is it a problem that they make certificates for 30% of domains? Would it be a problem if they made certificates for 100% of domains?

                                                                                    1. 18

                                                                                      I kind of wish that Mozilla, Apache, maybe Microsoft etc would offer similar services with a compatible API so we could spread the load a bit and diversify. Wishful thinking though

                                                                                      1. 3

                                                                                        I haven’t tried their services, but https://www.buypass.com/ssl/resources/acme-free-ssl claims to offer free SSL certificates by using the same ACME protocol as Let’s Encrypt. It might be worth checking it out, if only for the sake of diversity as you say.
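
                                                                                          Because it’s plain ACME, in principle existing clients only need to be pointed at a different directory URL; something like this with certbot (I haven’t tested it, and the exact endpoint should be taken from Buypass’s own docs):

                                                                                            certbot certonly --standalone \
                                                                                              --server https://api.buypass.com/acme/directory \
                                                                                              -d example.com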

                                                                                      2. 6

                                                                                          The danger that jumps out at me is that getting your root CA trusted by browsers is a slow and expensive process. If LE gained a monopoly and tried to cash in, getting a competitor up and running would take a non-trivial amount of effort.

                                                                                        That said I think there are factors that work against that as well:

                                                                                        • big companies need the ability to purchase certificates with >90 day lifespan because enterprise
                                                                                        • other major players offer similar products — Amazon’s free certificates are even lower friction (in the AWS ecosystem ;))
                                                                                        • governments have anti-monopoly counterweights built into the system in the form of their own trusted CAs that could be used

                                                                                        Overall I don’t think anyone can argue that LE hasn’t dramatically improved the landscape of TLS CAs.

                                                                                      1. 1

                                                                                        Oh no, not again.

It’s clearly time Exim was sent to the great mailer-daemon in the sky. How many RCE CVEs in the last two years? Too many.

Sadly there don’t seem to be any open-source SMTP servers written in memory-safe languages around. Unless I’ve missed one?

                                                                                        1. 1

                                                                                          Fortunately there are SMTP servers with a proper design that greatly reduce the severity of the effects of memory corruption.

                                                                                          1. 2

I disagree that those 2 examples are good solutions: Postfix’s configuration is even less readable than Exim’s, and OpenSMTPD is really under-documented and looks like much more trouble to run on Linux than it’s worth. Exim is still the least bad of the bunch.

                                                                                            1. 1

                                                                                              All of them should be run in a container (or jail/chroot) if not a VM (QubesOS). Furthermore, we need to get rid of root.
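To illustrate the general idea (this is not Exim-specific advice), a hypothetical systemd drop-in can strip most of the privileges an MTA doesn’t strictly need; a sketch, assuming the daemon only has to bind port 25, switch to its own unprivileged users, and write to its spool directory:

    # /etc/systemd/system/mta.service.d/hardening.conf  (hypothetical unit name)
    [Service]
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    ReadWritePaths=/var/spool/mta
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE CAP_SETUID CAP_SETGID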

Edit: oh jebus. It looks like ASN.1 parsing strikes again. The most profitable back door in the history of computers.

                                                                                          1. 0

Too little, too late. Surpassed by Ansible.

                                                                                            1. 2

                                                                                              Ansible, the security nightmare of allowing automation to SSH to machines, copy over a python blob to a temp directory, and execute it as sudo/root.

                                                                                              Yes, people should really use that instead. /s

                                                                                              1. 2

It is also unbelievably slow at what it does, despite applying all the documented optimizations and using Mitogen. The very existence of Mitogen points to Ansible’s design being fundamentally wrong.

                                                                                                1. 1

                                                                                                  Also: Ansible, the tool that uses YAML for semi-declarative semi-programming automation. I kinda like it but feel dirty for doing so.

                                                                                                  I kinda like (in a less kinky way than I like Ansible) that Chef is at least using a real programming language.

                                                                                                  And I like that they wrote Habitat in Rust. But that’s also a bit kinky.

                                                                                                  1. 2

                                                                                                    I kinda like (in a less kinky way than I like Ansible) that Chef is at least using a real programming language.

How much Chef have you written? Every Chef best-practice guide says that dropping into raw Ruby is a major anti-pattern.

                                                                                                    I don’t mean that as an attack, I’m honestly curious.

                                                                                                    1. 1

How much Chef have you written? Every Chef best-practice guide says that dropping into raw Ruby is a major anti-pattern.

                                                                                                      None at all :) Good to know, that wasn’t obvious.

                                                                                                2. 1

                                                                                                  I don’t think that’s accurate at all. I also don’t think Chef is doing this just to compete with Ansible.

I don’t have a source for the market breakdown, but I’m guessing it’s a safe bet that Ansible is the market leader. With that said, though, I think there’s still room in the market for other entrants. SaltStack has been making pretty big inroads, for example.

                                                                                                  1. 1

@steveno is right; you’re oversimplifying and also not taking a number of important factors into account, chief among them being installed base. There is a HUGE amount of Chef code running out there in production.

                                                                                                    Also, Ansible is amazing but it is not well suited for every task, especially configuration management challenges that embody a fair bit of implementation complexity.

                                                                                                  1. 5

I use WordPress for https://www.datafaber.com because by now I know it well enough to have built a decent semi-automated workflow for updating it and backing it up. I have 4 other sites running on it, and at a former company we ran a high-traffic site where WordPress served ~90% of the pages.

                                                                                                    1. 3

                                                                                                      PHP 7 has made a world of difference in terms of performance when it comes to WordPress.

I get about 2,000 visitors (uniques?) on any given day. I’m using a low-end shared hosting platform that sets me back about 1 USD a month. I’ve been self-hosting ever since Blogspot went down for a week.

                                                                                                      1. 2

Yes, PHP 7 made running WordPress robustly much simpler and cheaper.

                                                                                                    1. 1

I’ve been self-hosting for a few years now, using Exim, Dovecot (replicated on 2 different servers at 2 different providers), Rspamd, and Rainloop for the few times when I only have a web browser available.

I have SPF, DKIM and DMARC records, I was lucky enough to inherit relatively clean IPs from the providers in question (Hetzner and OVH), and I’ve never had any trouble removing those IPs from the 2 or 3 blacklists the previous owners had managed to get them onto.
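For anyone wondering what those three records look like in practice, here’s a rough sketch with made-up hostnames, selector and policy (the real values obviously depend on your keys and servers):

    ; SPF: only the MX and the named host may send mail for the domain
    example.com.                  IN TXT "v=spf1 mx a:mail.example.com -all"

    ; DKIM: public key published under the selector used when signing
    mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

    ; DMARC: ask receivers to quarantine failures and send aggregate reports
    _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"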

                                                                                                      For the moment, both Gmail and Hotmail accept my emails (that is, no bounces, no deliveries in spam folders and no email disappearing into thin air).

I do not plan to move to a hosted provider for my email; the only maintenance I perform is keeping the above packages up to date and watching the logs for alerts when something goes wrong.

                                                                                                      If I had to do it again, I would use OpenSMTPD instead of Exim, only because reading the configuration file seems way easier.

                                                                                                      1. 5

I’m not sure about the Heroku part. I recommend a small instance on Prgmr.com. If you are reading this, then they can handle running Lobsters. They always do. ;)

                                                                                                        1. 5

I’ll ruin the joke in favor of clarity, and to explicitly thank the folks at prgmr. Not only is Lobsters hosted on Prgmr.com, but @alynpost, the owner of prgmr, is also a sysop here.

                                                                                                          Thanks for all you guys do - community-wise and tech-wise!

                                                                                                          1. 6

Thank you both. I’ll close the loop by saying that lobste.rs is running as a Xen DomU with 2 vCPUs, 8GiB RAM, and a 50GiB disk. Since beginning to host the site last year we’ve added a 2nd vCPU to deal with contention between the MariaDB work queue and the Ruby / Unicorn work queue. We’ve doubled the memory from 4GiB as traffic and utilization have demanded. The disk is DRBD and replicates to a secondary RAID10 on another physical host in the same rack.

                                                                                                            We’re under 50% disk capacity, and sites with less traffic can certainly be tuned to run on machines with less than 8GiB RAM. We use memory in part as cache to improve responsiveness.

All that said, at least one lobste.rs user is a Heroku engineer, @apg. As @355E3B reports, the codebase can be deployed to Heroku. I do not know what instance size you’ll need, but other folks probably do.

                                                                                                            1. 4

                                                                                                              Just curious, but why DRBD and not MariaDB replication? DRBD is very fragile and difficult to get right in my experience.

                                                                                                              1. 5

DRBD works at the block-device level, and so integrates with our cluster management software, Ganeti. That lets us fail over or migrate instances between physical hosts more or less regardless of what applications those instances are running. It solves a general problem of moving instances between physical hosts for us.
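To make that concrete, moving a DRBD-backed instance between the two physical hosts is a one-liner from the Ganeti command line; a sketch with a hypothetical instance name (check the flags against your Ganeti version):

    # live-migrate the instance to its DRBD secondary (planned maintenance)
    gnt-instance migrate lobsters.example.org

    # or fail it over if the primary node has already gone down
    gnt-instance failover lobsters.example.org

    # afterwards, confirm which node is now primary
    gnt-instance list -o name,pnode,snodes,status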

                                                                                                                I’ll give database replication a closer look if and when the compute resources required to run the site exceed what we can get with a single physical host.

                                                                                                                Some of this answer you can chalk up to path dependency. However, I have not found DRBD to be fragile or difficult to get right. It does what it says on the tin for us.

                                                                                                                Your book looks interesting, btw. Congratulations on publishing it.

                                                                                                            2. 1

Oh it’s fine haha. Yeah, let’s go ahead and thank everyone hosting, admining, coding, and moderating the site for their time. I appreciate it a lot. :)

                                                                                                          1. 3

                                                                                                            @feoh why the heck is your blog grey on white? I’d love to read this but even after I increase the text size twice it’s still hard on my eyes.

                                                                                                            Contrast Rebellion - to hell with unreadable, low-contrast texts!

                                                                                                            1. 2

                                                                                                              Please take another look and see what you think of the new theme I installed. It’s the only theme in the default Wordpress arsenal that cites high contrast and accessibility.

I couldn’t figure out how to adjust the text color in and of itself. Sorry, I’m not a web dev :)

                                                                                                              1. 2

As a reader, I thank you very much for taking the remarks into account.

I really enjoyed the article. I’m still a junior in sysadmin/ops, and I hope I will learn as much as you have!

                                                                                                                1. 1

                                                                                                                  Welcome to the fold! It’s an incredible career path and I love my job to bits and am regularly excited to get up and go to work in the morning :)

                                                                                                                2. 2

                                                                                                                  This text is much more readable. The layout of the site has lost a bit of ‘style’ and your header graphic is the same as the article graphic now which looks like a bug, but if you’re going for accessibility this is a bit better.

                                                                                                                  I guess digging through Wordpress theme CSS is not much fun, but your original theme just with a tweaked font colour would have been fine too ;)
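To illustrate, the tweak I have in mind is only a couple of lines of CSS, dropped into a child theme or the “Additional CSS” box where the hosting plan allows it; the selector and colour below are guesses rather than anything taken from your actual theme:

    /* darken the post text for contrast; selector and value are illustrative */
    .entry-content,
    .entry-content p {
        color: #1a1a1a;
    }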

(And nothing against WordPress here - I use it when I have certain kinds of projects that need to be deployed very fast and with certain kinds of user constraints.)

                                                                                                                  1. 1

                                                                                                                    Digging through the CSS isn’t an option for me. I’m a System Development Engineer with Amazon Web Services. I mean, I know enough CSS to set a background and maybe change some spacing in HTML, but I haven’t the foggiest about how to dig in and modify a particular CSS attribute in Wordpress.

I’ll play with the theme more - I’d bet dollars to donuts that there’s a way to get the header graphic for my blog back - but accessibility is super important to me, so if I can’t manage it in the time I have available, then that’s a price I’m happy to pay.

                                                                                                                    Thanks again for the report.

                                                                                                                    1. 1

                                                                                                                      Ah. Interesting. In point of fact I CAN’T modify the CSS myself. To do that I’d need to go from paying wordpress.com $100 a year to $200 a year. Not gonna happen :)

                                                                                                                      1. 1

I have a Dreamhost account which I use for their free unlimited WordPress hosting, because it’s generally zero hassle and is a ‘proper’ full WordPress install. Happy to host your WP there if it’s any use, with a couple of caveats.

                                                                                                                  2. 1

                                                                                                                    I’m partially blind so I’m super sensitive to this. Thanks for letting me know, I will choose a different theme post haste.

If you can manage to refrain from taking the usual dump on WordPress (it’s what I use and like - please deal appropriately :)), do you have any suggestions for higher-contrast themes you like? Or even other WordPress blogs you find more readable?

                                                                                                                    1. 2

                                                                                                                      I like almost all of Anders Noren’s themes: http://www.andersnoren.se/teman/

                                                                                                                      The code quality is better than the average WordPress theme, and every one of those looks clean and readable (to me at least).

                                                                                                                      1. 1

                                                                                                                        I’m gonna confess to using wordpress.com so I pretty much only use themes they provide by default, but thanks for the pointer. If I get time and if I can install random themes I’ll definitely look into it!

                                                                                                                  1. 1

                                                                                                                    Between DNS-based blocklists, anti-spam filters and general inbox overload, email is a very fragile medium for communicating anything, let alone authentication credentials.

There’s absolutely zero guarantee that an email will be delivered at all: Gmail and Office 365, to cite just a couple of the big email providers, sometimes drop incoming email without any notification to the sender.

                                                                                                                    Also, there’s absolutely zero guarantee that an email will be delivered quickly enough for this scheme to work.