1. 32

    I don’t see why this progress bar should be obnoxiously put at the top of the page. It’s cool if you wanna do a donation drive, but don’t push it in the face of everybody who comes here. Honestly, at first I thought this was a bar for site expenses. Then I realised it’s to ‘adopt’ an emoji.

    1. 7

      Lobsters isn’t a daily visit for most readers, probably not even for most users. They can’t join in if nothing is visible for it, and the bar has an id for adblocking if you prefer not to see it.

      1. 22

        Personally I check this site quite regularly on my mobile device… which doesn’t have an ad-blocker.

        1. 13

          That sounds awful. If you’re an android user, normal uBlock Origin works on Firefox for Android just like it does on desktop. :)

          1. 3

            Or use Block This!, which blocks ads in all apps.

            1. 3

              Oh, that’s a cool little tool. Using a local VPN to intercept DNS is a neat trick. Unfortunately it doesn’t help in this case, because it blocks requests to domains rather than hiding elements on a page via CSS selectors.

              That does make me want to actually figure out a VPN back home for my phone and set up a Pi-hole, though.
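
              For anyone who does have uBlock Origin, its cosmetic filters handle exactly this element-hiding case. A one-line rule in “My filters” like the following would hide the bar (the element id here is hypothetical; check the real one with the element inspector):

              ```
              ! Hide the donation progress bar on lobste.rs (id is a placeholder)
              lobste.rs###donation-bar
              ```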

            2. 2

              Ohh! Good to know, thanks.

            3. 2

              Firefox 57+ has an integrated adblocker nowadays, on both desktop and mobile; plus, there’s also Brave.

            4. 27

              It is still annoying that I need to set up my adblocker to fix lobste.rs. So much for all the rant articles about bad UX/UI on here.

              1. 11

                Maybe one could just add a dismiss button or something like that? I don’t find it that annoying, but I guess it would be a pretty simple solution.

                1. 1

                  I concur; either a client-side cookie or a session variable.

                  1. 1

                    Well, yeah… that’s how you could implement it, and I guess that would be the cleanest and simplest way?
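
                    As a rough sketch of the cookie approach (the element id, button class, and cookie name here are made-up placeholders, not Lobsters’ actual markup):

                    ```javascript
                    // Pure helper: has the user already dismissed the bar?
                    // Kept separate from the DOM so it is testable outside a browser.
                    function hasDismissed(cookieString) {
                      return cookieString
                        .split(';')
                        .map(function (part) { return part.trim(); })
                        .indexOf('hideDonationBar=1') !== -1;
                    }

                    // Browser-only wiring, guarded so the helper above works anywhere.
                    if (typeof document !== 'undefined') {
                      var bar = document.getElementById('donation-bar');
                      if (bar) {
                        if (hasDismissed(document.cookie)) {
                          bar.style.display = 'none'; // previously dismissed
                        } else {
                          var button = bar.querySelector('.dismiss');
                          if (button) {
                            button.addEventListener('click', function () {
                              // Remember the choice for 30 days, then hide the bar.
                              document.cookie =
                                'hideDonationBar=1; max-age=' + 30 * 24 * 60 * 60 + '; path=/';
                              bar.style.display = 'none';
                            });
                          }
                        }
                      }
                    }
                    ```

                    A session variable server-side would work just as well; the cookie version simply avoids any backend change.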

                2. 2

                  It’d be great to see data about that! Personally I visit daily or at least 3 times a week. Lack of clutter and noise is one of the biggest advantages of Lobsters. And specifically, I looked at the link, and I have no idea who this Unicode organization is, or their charitable performance, or even if they need the money. I’d imagine they are mostly funded by the rich tech megacorps?

                  1. 1

                    [citation needed] ;-)

                  2. 3

                    Adopting an emoji isn’t the end goal: the money goes to Unicode, which is a non-profit organization that’s very important to the Internet.

                    1. 5

                      If this bar actually significantly annoys you, I’m surprised you haven’t literally died from browsing the rest of the internet.

                    1. 1

                      If Google implemented a “previous versions” feature for emails I think this would eliminate the problems addressed here (although, there are other problems with AMP).

                      1. 1

                        Interesting presentation; nice to see how it’s configured in an enterprise setting. On my project list is configuring an L2TP/IPsec VPN on my Digital Ocean VM for occasional use from Arch Linux and Android (Windows and Mac would be a bonus). Has anyone had luck getting this working on OpenBSD?

                        1. 1

                          Yes. I’ve got that working with my Android phone as a client.

                          1. 1

                            Did you go with a PSK or certs?

                        1. 25

                          I think ads are the worst way to support any organization, even one I rate as highly as Mozilla. People, however, are reluctant to donate otherwise, so we get to suffer all the negative sides of ads.

                          I just donated to Mozilla with https://donate.mozilla.org, please consider doing the same if you think ads/sponsored stories are the wrong path for Firefox.

                          1. 14

                            Mozilla has more than enough money to accomplish their core task. I think it’s the same problem as with Wikimedia; if you give them more money, they’re just going to find increasingly irrelevant things to spend it on. Both organizations could benefit tremendously from a huge reduction in bureaucracy, not just more money.

                            1. 9

                              I’ve definitely seen this with Wikimedia, as someone who was heavily involved with it in the early years (now I still edit, but have pulled back from meta/organizational involvement). The people running it are reasonably good and I can certainly imagine it having had worse stewardship. They have been careful not to break any of the core things that make it work. But they do, yeah, basically have more money than they know what to do with. Yet there is an organizational impulse to always get more money and launch more initiatives, just because they can (it’s a high-traffic “valuable” internet property).

                              The annual fundraising campaign is even a bit dishonest, strongly implying that they’re raising this money to keep the lights on, when doing that is a small part of the total budget. I think the overall issue is that all these organizations are now run by the same NGO/nonprofit management types who are not that different from the people who work in the C-suites at corporations. Universities are going in this direction too, as faculty senates have been weakened in favor of the same kinds of professional administrators. You can get a better administration or a worse one, but barring some real outliers, like organizations still run by their idiosyncratic founders, you’re getting basically the same class of people in most cases.

                            2. 21

                              So Mozilla does something bad, and as a result I am supposed to give it money?? Sorry, that doesn’t make any sense to me. If they need my money, they should convince me to donate willingly. What you are describing is a form of extortion.

                              I donate every month to various organizations: the EFF, ACLU, Wikipedia, OpenBSD, etc. So far Mozilla has never managed to convince me to give them my money. On the contrary, why would I give money to a dysfunctional, bureaucratic organization that doesn’t seem to have a clear and focused agenda?

                              1. 9

                                They may be a dysfunctional, bureaucratic organisation without a focused agenda (I wouldn’t know, as I don’t work for it), which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change, and can you get that same thing elsewhere more cost-effectively?

                                If I really want to get to a destination, I will take a run-down bus if that is the only transport going there. And if you don’t care about the destination, then transport options don’t matter.

                                1. 17

                                  They may be a dysfunctional, bureaucratic organisation without a focused agenda (I wouldn’t know, as I don’t work for it), which would surely make them less effective, but shouldn’t the question instead be how effective they are? Is what they produce a useful, positive change, and can you get that same thing elsewhere more cost-effectively?

                                  I am frequently in touch with Mozilla, and while I sometimes feel like I’m tilting at windmills, other parts of the org move very quickly and are highly cost-effective. For example, they do a lot of very efficient training for community members, like the open leadership training and the Mozilla Tech Speakers. They run MDN, a prime resource for web development and documentation. Mozilla Research has a strong reputation.

                                  Firefox itself is under constant, active development. MozFest is one of the best conferences you can go to in this world if you want to talk tech and social subjects.

                                  I still find their developer relations very lacking, which is probably the part most visible to us, but hey, it’s only one aspect.

                                  1. 9

                                    The fact that Mozilla is going to spend money on community activities and conferences is why I don’t donate to them. The only activity I and 99% of people care about is Firefox. All I want is a good web browser. I don’t really care about the other stuff.

                                    Maybe if they focused on what they’re good at, their hundreds of millions of dollars of revenue would be sufficient and they wouldn’t have to start selling “sponsored stories”.

                                    1. 18

                                      The only activity I and 99% of people care about is Firefox.

                                      This is a very easy statement to throw around. It’s very hard to back up.

                                      Also, what’s the point of having a FOSS organisation if they don’t share their learnings? This whole field is fresh and we have maintainers hurting left and right, but people complain when organisations do more than just code.

                                      1. 6

                                        To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could wean them off ad revenue over time.

                                        Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.

                                        1. 8

                                          To have a competitive web browser we can trust, plus exemplary software in a number of categories. Mozilla could’ve been building trustworthy versions of useful products like SpiderOak, VPN services, and so on. Any revenue from business licensing could wean them off ad revenue over time.

                                          In my opinion, the point of FOSS is sharing, and I’m pretty radical in that this involves approaches and practices. I agree that everything you list is important; I don’t agree that it should be the sole focus. Also, Mozilla trainings are incredibly good; I have actually suggested at some point that they sell them :D.

                                          Instead, they waste money on lots of BS. Also, they could do what I say plus community work. It’s not either-or. I support both.

                                          BS is very much in the eye of the beholder. I also haven’t said that they couldn’t do what you describe.

                                          Also, be aware that they often collaborate with other foundations and bring knowledge and connections into the deal, not everything is funded from the money MozCorp has or from donations.

                                          1. 1

                                            “Also, Mozilla trainings are incredibly good; I have actually suggested at some point that they sell them :D.”

                                            Well, there’s a good idea! :)

                                        2. 3

                                          That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                          It’s unfortunate, but advertisers have so thoroughly ruined their reputation that I simply will not use ad supported services any more.

                                          I feel like Mozilla is so focused on making money for itself that it’s lost sight of what’s best for their users.

                                          1. 2

                                            That’s a false dichotomy because there are other ways to make money in the software industry that don’t involve selling users to advertisers.

                                            Ummm… sorry? The post you are replying to doesn’t speak about money at all, but about what people care about.

                                            Yes, advertising and Mozilla is an interesting debate, and it’s also not like Mozilla is only doing advertisement. But flat-out criticism of the kind “Mozilla is making X amount of money” or “Mozilla supports things I don’t like” is not it.

                                          2. 3

                                            This is a very easy statement to throw around. It’s very hard to back up.

                                            Would you care to back up the opposite: that over 1% of Mozilla’s userbase supports the random crap Mozilla does? That’s over a million people.

                                            I think my statement is extremely likely a priori.

                                            1. 1

                                              I’d venture to guess most of them barely know what Firefox is beyond how they do stuff on the Internet. They want it to load quickly, let them use their favorite sites, do that quickly, and not toast their computer with malware. If on a mobile or tablet, maybe add not using too much battery. Those probably represent most people on Firefox, along with most of its revenue. Some chunk of them will also want specific plugins to stay on Firefox, but I don’t have data on their ratio.

                                              If my “probably” is correct, then what you say is probably true too.

                                          3. 5

                                            This is a valid point of view, just shedding a bit of light on why Mozilla does all this “other stuff”.

                                            Mozilla’s mission statement is to “fight for the health of the internet”, notably this is not quite the same mission statement as “make Firefox a kickass browser”. Happily, these two missions are extremely closely aligned (thus the substantial investment that went into making Quantum). Firefox provides revenue, buys Mozilla a seat at the standards table, allows Mozilla to weigh in on policy and legislation and has great brand recognition.

                                            But while developing Firefox is hugely beneficial to the health of the web, it isn’t enough. Legislation, proprietary technologies, corporations and entities of all shapes and sizes are fighting to push the web in different directions, some more beneficial to users than others. So Mozilla needs to wield the influence granted to it by Firefox to try and steer the direction of the web to a better place for all of us. That means weighing in on policy, outreach, education, experimentation, and yes, developing technology.

                                            So I get that a lot of people don’t care about Mozilla’s mission statement, and just want a kickass browser. There’s nothing wrong with that. But keep in mind that from Mozilla’s point of view, Firefox is a means to an end, not the end itself.

                                            1. 1

                                              I don’t think Mozilla does a good job at any of that other stuff. The only thing they really seem able to do well (until some clueless PR or marketing exec fucks it up) is browser tech. I donate to the EFF because they actually seem able to effect the goals you stated and don’t get distracted with random things they don’t know how to do.

                                      2. 3

                                        What if, and bear with me here, what they did ISN’T bad? What if instead they are actually making a choice that will make Firefox more attractive to new users?

                                      3. 9

                                        The upside is that at least Mozilla is trying to make privacy-respecting ads instead of simply opening the floodgates.

                                        1. 2

                                          For now…

                                      1. 11

                                        you will be banned. No warnings.

                                        Pretty heavy-handed, IMHO.

                                        1. 6

                                          That is extremely harsh. Mistakes happen; I would suggest two strikes, assuming the deficiency in the original posting is rectified immediately.

                                          1. 6

                                            Given how often people spam bad job postings, and given how basic that information is, I think it is reasonable.

                                            If you put up a sloppy posting, you are wasting the time of, and being disrespectful to, all the people who have to parse through it. If you are unwilling to proof it for those four things, you should be banned.

                                            1. 5

                                              I’m pretty tired of recruiter shenanigans, but this got upvoted strongly, so I’m rewriting it to say “it will be deleted.” We can reserve banning for repeat offenders, and hopefully it won’t come up.

                                              1. -1

                                                I think part of the problem is your desire to run this site with an iron fist, and that’s really not needed. We’re all mature and have been vetted through the invite process; there is no need to be the über admin. Instead this site needs something akin to a caretaker.

                                                1. 6

                                                  I think part of the problem is your desire to run this site with an iron fist, and that’s really not needed.

                                                  I think this is being unfair to @pushcx. The only “iron fist” thing he did in the past month was ban a person calling for a race war, which IMO was long-overdue. Everything else was quality of life adjustments like merging dupes and removing off topic content, and I for one am glad they’re done.

                                                  We’re all mature and have been vetted through the invite process, there is no need to be the über admin.

                                                  The invite process isn’t really a vetting process. It means you either 1) knew a person who already had an account, or 2) went on the IRC channel and demonstrated that you know tech stuff (which is how I got an invite). It doesn’t filter for maturity or decency.

                                              2. 3

                                                I agree.

                                                I’ve occasionally floated semi-public postings over the years when I was not ready to divulge the company name until the req was actually public and in all cases salary was listed as “competitive”.

                                                I’d say that “competitive” really means “negotiable”, which can be off-putting and doesn’t bracket the range. It was only for junior hires that I had a solid range rather than a floor.

                                                1. 3

                                                  I think the company name is vital because it allows people to look up lots more information.

                                                  For salary range, “competitive” is used by everyone but non-profits, so it doesn’t mean anything. And all salaries are negotiable, so again, that doesn’t convey any information.

                                                  1. 1

                                                    There are plenty of places you can post those listings and far fewer places where you can escape them. It’s nice for this group to be one of the latter.

                                                1. 2

                                                  Thanks to everyone who left a comment, this is amazing! I decided to start with DokuWiki and see how that goes.

                                                  1. 3

                                                    Based on some recommendations here I read The Phoenix Project by Gene Kim. I really enjoyed reading it.

                                                    1. 3

                                                      It would probably be Country Driving: A Chinese Road Trip by Peter Hessler.

                                                      1. 5

                                                        Peter Hessler

                                                        Doesn’t look like it’s about OpenBSD though.

                                                        1. 4

                                                          Nothing about the great firewall?

                                                      1. 10

                                                        I have had a personal MediaWiki/[mysql|mariadb] setup going since 2005. I honestly don’t know how I would live without it. Pretty much every little side project, OS install, ROM build, woodworking project, recipe, brewing protocol, cocktail recipe, knitting pattern, and discography listen-through has been documented there. I doubt anyone other than me could possibly navigate it, but that seems only natural. The only headaches through the years have been the half dozen badly-broken upgrade nightmares I have encountered. At least twice I have had to restart and restore from backup via some very one-off Perl scripts. Still, it beats the shelves of notebooks that came before.

                                                        1. 4

                                                          Yeah this is making me feel old, but I have had a personal wiki since late 2004… I wrote it myself in Python and I still use it every day. I just checked and there are 2,737 pages, which is a rate of 1 page created every 1.73 days. Of course I edit pages more than creating them.

                                                          All my work notes, side project ideas and notes, notes on videos and books that I’ve finished, that I want to check out, tax info, etc. goes there.

                                                          I’m a bit of an information hoarder but it doesn’t seem to be a bad thing on balance.

                                                          1. 3

                                                            I think Saturday morning cartoons taught me that knowledge is power, so I don’t see hoarding as a problem, especially if it’s all organised. Can you shed some light on the details/architecture of your solution?

                                                            1. 3

                                                              Sure, I think it was a really helpful project, although as mentioned it’s old.

                                                              The code started out as a fork of a 50 or 100 line CGI script I found on the web. It was a toy wiki backed by the file system. Back in 2004, I ran it on shared hosting with a shared SSL certificate! I had never done any web programming before – so this was my introduction to the web!

                                                              It has evolved a lot since then, and right now it’s about 7,200 lines of Python, which includes a Python web framework I wrote. It’s now backed by SQLite, but not in a particularly nice way. It has a tiny bit of JavaScript that I also wrote myself. It is now a WSGI program, which even runs under a web server I wrote (not necessarily a good idea; I’ve used it as a testbed for some experiments in web infrastructure).

                                                              I try to iterate on what I need personally. It never seems like enough, but I like using my own code, even if the UI is crappy. I think that low latency is more important than bells and whistles on the UI – and I consciously write it for low latency, which is not hard at all if you’re not using any third party libraries. It should be 50ms or less for every page load – if it’s not, there’s a bug somewhere.

                                                              I even wrote my own markup syntax – back in 2004 that was perhaps somewhat reasonable. Now I wish it was markdown, but I didn’t even start using markdown heavily until a year or two ago.

                                                              I most often use it as a bookmark manager. I have a browser bookmark like this:

                                                              javascript:void(location.href='https://my-wiki-name.org/jot?url='+encodeURIComponent(location.href)+'&title='+encodeURIComponent(document.title)+'&selection='+encodeURIComponent(window.getSelection()))
                                                              

                                                              That opens up a form with a section for notes, and then I can type notes and append them to a given wiki page.

                                                              The wiki is very messy, but I try to refactor it from time to time. Hyperlinks are exactly the right abstraction for notes IMO. Some people use flat text files in vim or emacs, but I think hyperlinks are essential. When you have thousands of pages, that’s the only reasonable organization mechanism for notes. (I think emacs org-mode might have hyperlinks, but I’ve never used it.)

                                                              It is honestly a great memory augmentation device. I often look back at old projects and I’m shocked at how much I forgot and how much I’ve learned since then! (And most people think I have a good memory in general.)


                                                              Another note: I have a cron job that syncs the sqlite database and checks plain text into an hg repo every night. So I have a coarse history by day. I don’t have a history of every edit, and I don’t think that’s necessary anyway.
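
                                                              For the curious, that nightly sync is the sort of thing a single crontab line can do; the paths and time below are made up for illustration, not my actual setup:

                                                              ```
                                                              # 03:00 nightly: dump the wiki DB to plain text, then commit it to Mercurial
                                                              0 3 * * * sqlite3 ~/wiki/wiki.db .dump > ~/wiki-backup/dump.sql && hg -R ~/wiki-backup addremove -q && hg -R ~/wiki-backup commit -q -m "nightly sync"
                                                              ```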

                                                          2. 1

                                                            Awesome! That’s basically what I’m going for, a catch all for every little morsel of information. Do you host your own setup?

                                                            1. 1

                                                              Yep. It originally ran on whatever desktop I had, but now lives on a dedicated little box I use for one-off stuff too custom or weird for FreeNAS plugins.

                                                          1. 6

                                                            I did this once. After some research I ended up using DokuWiki. It worked pretty well, I stuck with it for a few years. I liked how the database was just the filesystem, my backups were just tarballs of the data directory.

                                                            My main gripe, and the reason why I’ve moved over to google keep + workflowy, is that it was difficult to edit pages while using my phone.

                                                            1. 2

                                                              That’s a good point about editing it on mobile, that might be a sore spot.

                                                              1. 2

                                                                I had the same problem. I got pretty annoyed by the editor in my iPhone’s Safari, so I recently started a side project to sync Markdown on Dropbox into DokuWiki (https://github.com/milanaleksic/notesforlife). There are many very good mobile Markdown editors that can store into Dropbox… Now I’m happy, since DokuWiki backup is trivial and it has search… everything I need.

                                                              1. 3

                                                                I use Gollum, which uses git for its database; it’s the same wiki GitHub uses.

                                                                I use it, or used it. It supports Markdown editing, page creation/editing in the wiki, and file upload (well, I’m 90% sure).

                                                                1. 1

                                                                  Any particular reason for not using it anymore?

                                                                1. 10

                                                                  This topic touches a nerve for me, after iOS 7 made the iPhone 4 so laggy that I felt I had no alternative but to recycle it.

                                                                  Apple could probably silence the critics with an iOS notification to tell users the state of their battery when the CPU starts to throttle due to battery issues. Though I would not place a bet on Apple doing that.

                                                                  Fingers crossed that Microsoft’s alleged Andromeda device, and the Librem 5 device from Purism, can inject some fresh ideas into the market. The current mobile market needs a good shake-up.

                                                                  1. 12

                                                                    Fingers crossed that Microsoft’s alleged Andromeda device, and the Librem 5 device from Purism, can inject some fresh ideas into the market. The current mobile market needs a good shake-up.

                                                                    I doubt people are willing to trust Microsoft again after they backstabbed Windows Phone 8 users by denying them WM10 upgrades, then sunsetted the WM10 users too, in a long line of backstabs of Windows Mobile users; and I severely doubt the Librem 5 will do much better than the FreeRunner, let alone the N900, did. Unfortunate, but it’s based on precedent.

                                                                    I think MS and others are waiting for the theoretical future form factor that will obsolete or at least put a dent into smartphone sales; the existing market is too entrenched, but a new one is fertile. The problem is guessing what’s going to actually take off.

                                                                    1. 1

                                                                      Honestly, if Apple started popping up notifications that say, “Your battery is old, and we had to slow the phone down,” they’d be ragged on for telling people to buy a new phone.

                                                                      1. 1

                                                                        Well, my parents’ MacBook Air is saying the battery “Needs Servicing”, so it’s not like they aren’t warning their computer users…

                                                                    1. 8

                                                                      There have been several different vulnerabilities reported, is this true for each and every one of them?

                                                                      What about unreported vulnerabilities? Do you really have “nothing to worry about” when your freakin’ CPU is hardwired to listen for packets from the Internet?

                                                                      1. 4

                                                                        Amen.

                                                                        Also, on the physical-access angle, how do you know whether you’re a high-value target or just that paranoid? Just how wide-scale are shipping shenanigans nowadays? Who else might be up to similar tricks? And don’t forget the evil maids… maybe put your laptop in a safe when you’re not using it? Hmm… and who might have access to your remote cloud-based resources? This whole thing looks like a really big mess to me.

                                                                        1. 2

                                                                          Especially if there is widespread malware in the future that exploits ME vulns; it will hardly be nobody’s problem then.

                                                                        1. 14

                                                                          I’m shocked that 1) they didn’t anticipate $1 donors getting angry at this change 2) the change wasn’t telegraphed ahead of time. How can a company change such a fundamental part of their business like this? I worry that their VC-funded business is going to result in all sorts of shenanigans once they have to start showing 10x returns on venture capital.

                                                                          1. 10

                                                                            Yup, my thoughts exactly. I’d like to see a co-op model where creators have control over company decisions and share in the profits.

                                                                            1. 3

                                                                              Yes! Something like the way Vanguard is structured. It’s a workable model for a company that commits to it.

                                                                            2. 10

                                                                              Wow, I had no idea Patreon was VC-funded. Talk about an oxymoron.

                                                                              1. 7

                                                                                It’s why I recommended against it. A lean nonprofit is best for that kind of thing.

                                                                            1. 4

                                                                              So IMO there are two problems you’re citing here: 1) Motivation and 2) Organization.

                                                                              The Pomodoro system is fantastic for organization and even for motivation in that it helps you focus and rewards completion.

                                                                              However, it cannot motivate you to set out upon the path to begin with. For cases like this I find gamification rather effective. Pomodoro can be viewed that way too, I know, but in this case I tend to pull out tools like Exercism or CheckIO.

                                                                              1. 1

                                                                                falsetto voice naaaaaailed it

                                                                                Motivation and Organization are the primary stumbling blocks here. Thanks for the gamification stuff. I’m not a gamer* at all, really, so I don’t know if I’ll get the same satisfaction someone else might, but I do like the idea of tracking whether a day is a net positive or net negative.

                                                                                *love me some Cribbage

                                                                              1. 9

                                                                                If they manage to commercialize any high-temp chips, that could be a huge market. I worked in engineering oilfield tools, and high-temperature electronics were a constant struggle. Any company in that industry would be willing to spend billions of dollars on microcontrollers with an MTBF significantly above 100 hours or so at temperatures above 150 °C. That’s the rough order of magnitude they’re dealing with, and going an order of magnitude or more above that would revolutionize how deep directional oil wells could be drilled.

                                                                                1. 1

                                                                                  Not to mention cross-room milkshake drinking!

                                                                                  1. 2

                                                                                    I’m thinking this might be because everyone else is misspelling it and Twitter is providing those suggestions?

                                                                                    1. 3

                                                                                      I think the title is missing a “know” between customers and about.

                                                                                      1. 1

                                                                                        Thanks, I thought it was correct without the know.

                                                                                      1. 10

                                                                                        I don’t really see a lot of smaller open source projects having their own LTS releases.

                                                                                        What I see is them suffering from trying to support their ecosystem’s LTS releases. When CentOS ships with a Python that the Python core team has dropped, I’m stuck supporting it in my packages, because there are users there (and they blame me, not CentOS, for their troubles if I drop support).

                                                                                        1. 2

                                                                                              I don’t understand CentOS. Is enterprise really so inflexible that it can’t handle a shorter release cycle?

                                                                                          1. 12

                                                                                            Yes. Change is bad. Especially when you have scary SLAs (that is, downtime on your end costs your company thousands of dollars per minute) you tend to be very careful about what and when you upgrade, especially if things are working (if it ain’t broke, don’t fix it).

                                                                                            1. 1

                                                                                                  I wonder why we don’t build better software and devops practices to handle change. Maybe the pain of changing once in a while is less than rolling with tighter LTS windows?

                                                                                              1. 7

                                                                                                    Because starting to use a new methodology to handle change is a change of its own. And so a new technology can only climb the scale relatively slowly (“so many projects half our size have used this technology that we might as well run a small trial”). This means that some important kinds of feedback are received on a timescale of years, not weeks…

                                                                                                1. 4

                                                                                                      Exactly, it’s not that enterprises don’t want to change; it’s that change in and of itself is hard. It is also expensive in time, which means money. Which basically means: keep things as static as possible to minimize breakage. If you have N changes amongst M things, debugging what truly broke is non-trivial. Reducing the scope of a regression is probably the number one motivator for never upgrading things.

                                                                                                      As an example, at work we modify the Linux kernel for $REASONS; needless to say, testing this is just plain hard. Random fixes to one part of the kernel can drastically alter how often, say, the OOM killer triggers. Sometimes you don’t see issues until several weeks of beating the crap out of things. When the feedback cycle is literally a month, I am not sure one could argue that we want more change to be possible.

                                                                                                  I don’t see much of a way to improve the situation beyond just sucking it up and accepting that certain changes cannot be rushed without triggering unknown unknowns. Even with multi week testing you still might miss regressions.

                                                                                                  This is a different case entirely than making an update to a web application and restarting.

                                                                                                  1. 2

                                                                                                        First of all, thanks for a nice presentation of these kinds of issues.

                                                                                                    This is a different case entirely than making an update to a web application and restarting.

                                                                                                    I am not sure what you mean here, because a lot of web applications have a lot of state, and a lot of inner structure, and a lot of rare events affecting the behaviour. I don’t want to deny that your case is more complicated than many, I just want to say that your post doesn’t convey that it is qualitatively different as opposed to quantitatively.

                                                                                                    I am not sure one could argue that we want more change to be possible.

                                                                                                        What you might have wanted is a comparison of throwing more hardware at the problem (so that you can run more code in a less brittle part of the system) with continuing in the current situation. And then there would be questions of managing deployments and their reproducibility, possibility or impossibility of redundancy, fault isolation, etc. I guess in your specific case the current situation is optimal by some parameters.

                                                                                                        Then of course, the author of the original post might effectively have an interest, opposed to yours, in making your current situation more expensive to maintain (this could be linked to a change that might make something they want less expensive to get). Or maybe not.

                                                                                                    How much do the things discussed in the original post apply to your situation by the way? Do you try to cherry-pick fixes or to stabilize an environment with minimal necessary minor-version upgrades?

                                                                                                    1. 3

                                                                                                      I am not sure what you mean here, because a lot of web applications have a lot of state, and a lot of inner structure, and a lot of rare events affecting the behaviour. I don’t want to deny that your case is more complicated than many, I just want to say that your post doesn’t convey that it is qualitatively different as opposed to quantitatively.

                                                                                                          I’m not entirely sure I can reasonably argue that kernel hacking is qualitatively different from, say, web application development, but here goes. Mind that some of this is going to be specific to the use cases I encounter and thus can be considered an edge case; however, edge cases are always great for challenging assumptions you may not realize you had.

                                                                                                          Let’s take the case of doing 175 deployments in one day that another commenter linked. For a web application, there are relatively easy ways of doing updates with minimal impact on end users. This is possible mostly because the overall stack is so far removed from the hardware that it’s relatively trivial to do. Mind you, I’m not trying to discount the difficulty, but overall it amounts to some sort of HA or load-balancing setup, say via DNS, haproxy, etc., to handle flipping a switch from the old version to the new.

                                                                                                          One might also have an in-application way to do A/B version flips in place; whatever the case, the ability to update is in lots of ways a feature of the application space.

                                                                                                          A con to this very feature is that restarting the application and deploying a new version inherently destroys the state the application is in. Let’s say you have a memory bug: restarting fixes it magically, but you upgrade so often you never notice it. I am almost 99% sure that any user-space developer would catch bugs like this if they ran their application for longer than a month. I doubt that will happen, but it’s something to contemplate. The ability to do rapid updates is a two-edged sword.

                                                                                                          Now let’s compare to the kernel. Take a trivial idea like adding a 64-bit pointer to the sk_buff struct. Easy, right? Shouldn’t impact a thing; it’s just 64 bits. What is 64 bits of memory amongst friends? A lot, it turns out. Let’s say you’re running network traffic at 10 Gb/s while a user-space application is using up as much memory as it can, probably overcommitting memory as well just to be annoying. Debugging why this application triggers the OOM killer after a simple change like that is definitely non-trivial. The other problem is that you need to trigger the exact circumstances to hit the bug. And worst of all, it can often be a confluence of bugs that triggers it: some network driver will leak a byte every so often once some queue is over a certain size, meaning you have to run things a long time to get back to that state again.

                                                                                                          I’m using a single example, but I could give others where the filesystem plays into similar situations.

                                                                                                          Note, since I’m talking about Linux, let’s review the things a kernel update cannot reasonably do, namely update in place. This severely limits how a user-space application can be run and for how long. Say this user-space application can’t be shut down without some effect on the user’s end goal. Unreasonable? Sure, but note that a lot of long-running processes are not designed for rapid updates or for things like checkpointing so they can be re-run from a point-in-time snapshot. And despite things like ksplice for updating the kernel in place, it has… limitations. Limitations relating to struct layout tend to cause things to go boom.

                                                                                                          In my aforementioned case, struct layout and its impact on memory can also severely change how well user-space code runs. Say you add another byte to a struct that was at 32 bytes of memory already; with 8-byte alignment you’re now requiring 40 bytes per struct. This means you’re likely wasting 24 bytes (only one such struct now fits in a 64-byte cache line, where two used to) and hurting caching of data in the processor in ways you might not see. Say you decide to make it a pointer instead: now you’re hitting memory differently and also changing the overall behavior of how everything runs.

                                                                                                          I’m only scratching the surface here, but I’m not sure how one can conclude that kernel development isn’t qualitatively different, state-wise, from a web application. I’m not denigrating web app developers either, but I don’t know of many web app developers worrying that adding a single byte to a struct causes more cache invalidation, making things ever so slightly slower for what a user-space process sees. Both involve managing state, but making changes in the kernel can be frustrating when a simple 3-line change can cause odd space leaks in how user applications run. If you’re wondering why Linus is such a stickler about breaking user space, it’s because it’s really easy to do.

                                                                                                          I also wish I could magically trip every heisenbug related to long-running processes abusing the scheduler, VM subsystem, filesystem, and network, but much like any programmer, I find bugs at the boundary hard to replicate. It’s also hard to debug when all you’ve got is a memory image of the state of the kernel when things broke; what happened leading up to that is normally the important part, but it’s entirely gone.

                                                                                                      What you might have wanted, is comparing throwing more hardware at the problem (so that you can run more code in a less brittle part of the system) with continuing with the current situation. And then there would be questions of managing deployments and their reproducibility, possiibility or impossibility of redundancy, fault isolation, etc. I guess in your specific case the current situation is optimal by some parameters.

                                                                                                          Not sure it’s optimal, but it’s a fact of life: if you have to run a suite of user-land programs that have been known to trigger bad behavior, and run them for a month straight to be reasonably certain things aren’t broken, throwing more hardware at it doesn’t help. Just like throwing nine women at the baby-making problem won’t make a baby any faster, sometimes the time it takes to know things work just can’t be reduced. You can test more in parallel, sure, but even then you run into cost issues for the hardware.

                                                                                                      How much do the things discussed in the original post apply to your situation by the way? Do you try to cherry-pick fixes or to stabilize an environment with minimal necessary minor-version upgrades?

                                                                                                          Pretty much that: cherry-pick changes as needed and stick to a single kernel revision. Testing is mostly done on major version changes, i.e. upgrading from version N to version M, reapplying the changes, and letting things loose to see what the tree-shaking finds on the ground. Then debugging whatever introduced a bug and fixing it, along with more testing.

                                                                                                          Generally, though, the month-long runs tend to turn up freak-of-nature bugs. But god are they horrible to debug.

                                                                                                          Hopefully that helps explain my vantage point a bit. If it’s unconvincing, feel free to ask for more clarification. It’s hard to get too specific for legal reasons, but I’ll try to do as well as I can. Let’s just say I envy every user-space application for the debugging tools it has. I wish to god the kernel had something like rr, to debug back in time and watch a space leak, for example.

                                                                                                      1. 1

                                                                                                        Thanks a lot.

                                                                                                        Sorry for a poor word choice — I meant that the end-goal problems you solve are on a continuum with no bright cutoffs that passes through the tasks currently solved by the most complicated web systems, by other user-space systems, by embedded development (let’s say small enough to have no use for FS), other kinds of kernel development etc. There are no clear borders, and there are large overlaps and crazy outliers. I guess if you said «orders of magnitude», I would just agree.

                                                                                                        On the other hand, poor word choice is the most efficient way to make people tell interesting things…

                                                                                                        I think a large subset of examples you gave actually confirm the point I have failed to express.

                                                                                                        Deploying web applications doesn’t have to reset the process, it is just that many large systems now throw enough hardware to reset the entire OS instance. Reloading parts of the code inside the web application works fine, unless a library leaks an fd on some rare operations and the server process fails a week later. Restarting helps, that’s true. Redeploying a new set of instances takes more resources, needs to be separately maintained, but allows to shrug off some other problems (many of which you have enumerated).

                                                                                                        And persistent state management still requires effort for web apps, but less than before more resources were thrown at it.

                                                                                                        I do want to hope that at some point kernel debugging (yes, device drivers excluded, that’s true) by running Bochs-style CPU emulator under rr becomes feasible. After all, this is a question of throwing resources at the problem…

                                                                                                        1. 1

                                                                                                          Deploying web applications doesn’t have to reset the process, it is just that many large systems now throw enough hardware to reset the entire OS instance. Reloading parts of the code inside the web application works fine, unless a library leaks an fd on some rare operations and the server process fails a week later. Restarting helps, that’s true. Redeploying a new set of instances takes more resources, needs to be separately maintained, but allows to shrug off some other problems (many of which you have enumerated).

                                                                                                              Correct, but this all depends on the application. A binary, for example, would necessarily have to be restarted somehow, even if that means re-exec()ing the process to get at the new code. Unless you’re going to dynamically load in symbols on something like a HUP, it seems simpler to just do a load-balanced setup: bleed off connections, restart, and let connections trickle back in. But I don’t know; I’m not really a web guy. :)

                                                                                                          I do want to hope that at some point kernel debugging (yes, device drivers excluded, that’s true) by running Bochs-style CPU emulator under rr becomes feasible. After all, this is a question of throwing resources at the problem…

                                                                                                              I highly doubt that will ever happen, but I wish it would. qemu/bochs etc. all have issues with perfect emulation of CPUs, sadly.

                                                                                                2. 2

                                                                                                  It’s not like we don’t have the software. GitHub deployed to production 175 times in one day back in 2012. Tech product companies often do continuous deployment, with gradual rollout of both app versions across servers and features across user accounts, all that cool stuff.

                                                                                                  The “enterprise” world is just not designed for change, and no one seems to be changing that yet.

                                                                                                3. 1

                                                                                                  if it ain’t broke, don’t fix it

                                                                                                  And if it isn’t seriously affecting profitability yet, it ain’t broke.

                                                                                                      Even if there are known unpatched vulnerabilities that expose people to whom you have a duty to increased risk.

                                                                                              2. 1

                                                                                                    The article’s recommendation seems to be to create a separate branch for LTS backports (so that new development can initially happen more easily) and to hand over, maybe gradually, most of the control of the backports; unless, that is, these users are already a significant share of the project contributors (regardless of the form of contribution).

                                                                                                Whether this recommendation is aligned with your motivation for the project is another question.