1. 1

    just claim you quit for the reasons you were planning on quitting

    1. 5

      Strong disagree – please don’t lie on your resume or during your interview.

    1. 3

      Dependabot is a GitHub-acquired tool that scans for pinned dependencies in your repositories and automatically creates PRs to update them. I’ve been using it at work and on personal projects for a few weeks now and it’s been nice.
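      For reference, the current GitHub-native setup is just a config file checked into the repo. A minimal sketch of a `dependabot.yml` (the npm ecosystem and weekly schedule here are placeholder choices; adjust to your project):

      ```yaml
      # .github/dependabot.yml
      version: 2
      updates:
        - package-ecosystem: "npm"   # which dependency manifest to scan
          directory: "/"             # where that manifest lives
          schedule:
            interval: "weekly"       # how often to open update PRs
      ```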

      1. 1

        I’ve just set it up! It’ll be fun to forget all about this and receive a PR a while later, I hope.

      1. 1

        I’ve boycotted Scott Adams because he’s gone so far on the wrong side of history it isn’t even funny.

        1. 10

          How is this comment relevant to the content of the linked story? It looks like pure virtue signaling.

          1. 5

            Because Scott Adams is an internet provocateur, and it’s hard to separate any factual content he writes from the weird manipulative writing style he’s adopted (and is proud of).

            Additionally, if the post is about writing style, it probably would make sense to evaluate the post in context with other posts written by the author, would it not?

            1. 1

              It’s hosted on Scott Adams’s blog and he would derive a minuscule amount of revenue from a visit.

              1. 3

                …I mean, would he? He doesn’t even host it, a business called typepad does.

              2. 1

                I personally think that this site is for technical articles, which this is not, and I would prefer to keep such articles out of here. - tt

                Because I don’t want to see Scott’s work on this site? Just like you want to keep lobste.rs a tech only site ;-)

                Also, people are motivated either by results or by virtue ethics. There is nothing wrong with showing your virtues on your sleeve. I think maybe too many people are afraid to do so and we need more people being honest.

            1. 2

              I wrote a small tool along similar lines to remember steps to do before submitting a change for code review: https://github.com/wickedchicken/checklist. I have a few ideas on how to expand it, but I’d be happy to hear people’s thoughts!

              1. 2

                I can’t seem to find an http link to download a file served from this. Does anyone know of one? I want to test its latency.
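                If a link does turn up, curl’s -w timing variables are a handy way to measure the latency of a single request (the URL below is a placeholder):

                ```shell
                # Print per-phase timings for one request; the URL is a placeholder.
                curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
                  'https://example.com/somefile'
                ```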

                1. 6

                  “through powers granted via the “EAR” (Export Administration Regulation 15 CFR, subchapter C, parts 730-774), along with a sometimes surprisingly broad definition of what qualifies as export-controlled US technology.”

                  Boom! I told people they might do that back in the crypto discussions. Custom crypto and high-assurance security are still munitions, with only a few things re-classified, like mass-market, one-size-fits-all software and the use of ciphers in browsers. This is what they might do to the rest with that leverage if it were ever truly threatening. They’re already doing it to companies over Huawei.

                  I also speculated they might have done this to get backdoors in products. A combo of offering payment and threats together. We know they do the payments. I don’t know if they do export threats, though.

                  “some independent security research would have already found and published a paper on this. Given the level of fame and notoriety such a researcher would gain for finding the “smoking gun””

                  Bunny is being really naive here or maybe doesn’t understand computer espionage. Most subversion must be done in a way that doesn’t look like subversion. The system just has to be remotely exploitable. The best route to that is to intentionally leave in memory safety bugs or a configuration that enables privilege escalation. Hackers find those all the time in all kinds of devices. They say, “Hey, they just made a common mistake.” Maybe it was there on purpose. We won’t know.

                  “It’s no secret that the US has outsourced most of its electronics supply chain overseas. From the fabrication of silicon chips, to the injection molding of plastic cases, to the assembly of smartphones, it happens overseas, with several essential links going through or influenced by China.”

                  And this is why what the U.S. government is doing is incredibly stupid. You could substitute other industries in here. It’s a smarter move to minimize one’s dependency on a country before pissing that country off in a way that can prevent them getting what they depend on.

                  1. 3

                    The best route to that is to intentionally leave in memory safety bugs or a configuration that enables privilege escalation.

                    There are many routes, and often it does not make sense to focus on only one.

                    Yet, as long as organizations are not held in any way responsible for making very vulnerable software, exploits will remain a very good “deniable backdoor”.

                    1. 2

                      “often it does not make sense to focus on only one”

                      I mentioned several classes of problems that cause almost all hacks in the field for these kinds of devices. Each class, such as memory unsafety or poor configurations/services, can lead to a multitude of specific exploits.

                      “as long as organizations are not held in any way responsible for making very vulnerable software, exploits will remain a very good “deniable backdoor”.”

                      You nailed it there. It’s an externality to them.

                      1. 2

                        If corporations are liable for bugs, then no software will ever be made except by super corps that can afford extremely thorough processes.

                        Imagine writing a script to search YouTube for cat videos and putting it online, and then somebody uses it, somehow through some chain of events ends up dying, and you get sued for millions?

                        1. 5

                          Liability law in Germany kind of works this way, and as a result nearly everyone has personal liability insurance that costs a few Euro a month and covers up to tens of millions of Euro of damage. Two examples I’ve been given: if you accidentally spill coffee on someone’s laptop at a coffee shop you’re liable to pay for the laptop, and if you jaywalk (therefore breaking the law) causing a car to swerve into a building you’re liable for basically all the damage caused. In both cases (and, I believe, the example you cite), you would be covered by Privathaftpflichtversicherung. The insurers are solvent at such a low cost because the heavy-hitting events are relatively rare.

                          1. 1

                            That gives me an idea along the lines of patent trolls. You sue the companies making insecure crap to fund high-quality, open alternatives. Each time, do write-ups on how little it cost to increase security with fairly-high velocity of features developed. They’ll constantly be reminded they can lose a huge pile of money or spend a fraction of it doing secure process. Some might even do it.

                            Don’t know German law, though. Can’t assess practicality.

                            1. 3

                              Alternately, insurance companies could base premiums off of audits or other evaluations of risk: https://www.dhs.gov/cisa/cybersecurity-insurance.

                          2. 1

                            That’s too broad a statement. It would be too broad a legal standard, too. What I advocated in similar discussions is that they be required to achieve a few goals or do a few things that cover the majority of problems. These things would be cost-effective. Examples include memory-safe languages, using a secure approach to remote access (not Telnet), property-based testing on what logic they can encode, fuzzing, a secure OS, and, if they have the money, independent assessment by hackers. How much they’re expected to do goes up with what resources they’re earning off the product.

                            So, a small player building software to be resistant to at least code injection might use Rust with overflow checking on, deployed on OpenBSD with OpenSSH for remote access. Nobody is blowing any budgets making this choice. They’re highly unlikely to be sued for hacks since it’s safer and more secure by default. That’s the kind of thing I’m thinking about. As a side effect, the market would shift piles of resources into creating ecosystems using all that stuff.

                    1. 1

                      On desktop, I browse using Chrome with third-party cookies disabled, and the web works fine. I just found out that Chrome on iOS doesn’t have that setting :(. Pretty sure it used to…

                      1. 45

                        I hope there’s an uproar about the name.

                        Really shitty move for a giant company to create a competing library with such a similar name to an existing project. Bound to cause confusion and potentially steal libcurl users because so many people associate Google with networking and the internet.

                        1. 22

                          I wonder how long it takes for google autosuggest to correct libcurl to libcrurl.

                          1. 11

                            Looks like crurl was just an internal working name for the library[0]. They’ve changed it already in their bug tracker to libcurl_on_cronet[1].

                            [0] https://news.ycombinator.com/item?id=20228237

                            [1] https://chromium-review.googlesource.com/c/chromium/src/+/1652540

                            1. 7

                              Holy shit! It’s with a Ru in the middle instead of a Ur! I actually missed that until I read your comment and reread the whole thing letter by letter. Google knows full well that this will cause confusion, since they added a feature to Chrome for this exact problem. Egregious and horrible.

                              1. 14

                                Google knows full well that this will cause confusion

                                I’m not part of the team anymore and have no connection to this project, but my guess is that some engineers thought it was a funny pun/play on words and weren’t trying to “trick” people into downloading their library. I’m not saying you shouldn’t be careful about perceptions when your company has such an outsized influence, but I highly doubt this was an intentional, malicious act.

                                1. 6

                                  I’d bet this is exactly what happened. I’ve given projects dumb working names before release, and had them snowball out of my control before.

                              2. 2

                                Honestly, I had to double check that I wasn’t reading libcrule.

                                1. 2

                                  Honestly, their lack of empathy here and the need to extend rather than collaborate indicate, in my opinion, a concerning move away from OSS. I hope to be corrected, though.

                                1. 2

                                  USB - a shitty, problems-introducing, half-baked solution, designed in terms of the shittiest version of everything, to a problem that could perhaps have been left unsolved for a little longer.

                                  Now we’re going to go with this for who knows how long, with all the mess it lugs along: the 6-simultaneous-key-press limit on keyboards and everything.

                                  Plus, with constant idiotic updates, the USB cables are becoming the issue they were attempting to solve. Great job!

                                  1. 10

                                    The 6-key limit is a myth. Competently designed USB keyboards can support NKRO fine. The problem seems more that a lot of keyboard makers don’t actually understand the HID standard, or don’t care.

                                    There’s plenty about USB that’s crap though.

                                    1. 1

                                      Did look around and found Ergodox firmware drivers that have NKRO. Will look into it when I’m more pissed about the limit than I am now. Thank you.

                                    2. 3

                                      You really think leaving the problem unsolved for longer would have resulted in a better solution?

                                      1. 0

                                        It’s more about whether anybody needed to solve it in the first place. I’m sure they had already thought of a universal connection for peripherals in the 1960s, but they couldn’t build it yet back then. Also, the existing serial ports would have kept getting smaller and faster in any case. Possibly we could have managed without USB perfectly well.

                                        The answer to your question is yes, though. You can use the Internet protocol suite for communication between small devices as well. By now it could have been extended to all peripherals. Instead of USB we could have had yet another entry at the link layer.

                                        1. 7

                                          I think it’s important to view USB in the context of where it came from, rather than comparing it to current technology and evaluating it only in hindsight.

                                          It’s more about whether anybody was needed to solve it in the first place.

                                          The experience of using USB today completely outclasses the ISA, PCI, Parallel Port, and PS/2 connections of the day. I used to have to set physical jumpers on a sound card to make sure that the IRQ and DMA settings matched what my motherboard/OS supported and didn’t conflict with other installed cards. 20 minutes on my knees with a manual and screwdriver in hand, every time, only knowing if you got it right after booting up the OS each time and testing it with some software. Yes, I think someone needed to solve this.

                                          Possibly we could have handled without USB perfectly well.

                                          I honestly feel that we had to go through a painful phase (non-flippable connectors, manual jumpers, plethora of cable types, screwed-in vs non-screwed in connectors, manually setting non-conflicting IRQs, power distribution) before we could get to a decent one, and I’d rather that painful phase be in the past than the future. Same as with Bluetooth – there was a bad time, and now things “generally” work unless you’re doing something at the fringes. Waiting for the next thing would have just delayed any lessons the industry could have learned.

                                          Did you know the USB spec required the ‘trident’ logo to be on the top side of the connector, meaning you always knew which way to plug it in? This seems like a great solution, until you witness millions of people messing it up every time (without even knowing this was part of the standard), compounded by dubious manufacturers flooding the market and ignoring the spec (sometimes making cables without any trident, let alone on the wrong side). You only witness these things by having a product in the wild or having seen another products/specs suffer these problems in the wild. In either case, there is a painful phase that eventually stabilizes into something useful.

                                      2. 2

                                        Plus, with constant idiotic updates, the USB cables are becoming the issue they were attempting to solve.

                                        This, exactly! The U stands for Universal, the idea that any device could connect to another. If I recall correctly, even before USB 1.0 was released there were two incompatible plug types in widespread use: A and B. Supposedly this was to separate the host and client, but as devices quickly appeared that could be either host or client (think of plugging a camera directly into a printer) the mess became apparent. It’s only gotten worse from there, with USB C, mini- then micro-USB, and the micro versions of USB B and 3 (I still daily drive a Note 3 with what I think is Micro USB 3).

                                        1. 1

                                          What are you doing that requires more than six keys being pushed down at one time?

                                          1. 3

                                            In my case, hotseat multiplayer games like Liero (think realtime Worms). Playing with two kids on one keyboard is super fun!

                                            1. 2

                                              Nothing, but it’s still a thing that limits the use of a keyboard, and it’s a stupidly low number for a key buffer. It should be at least 24 keys, preferably 4000 keys. Pointless to have such a small buffer.

                                              1. 1

                                                I don’t know about you, but I only have ten fingers, and I only really use eight of them for typing.

                                                Probably should’ve made the limit 8 instead of 6. You could fit the full set of keycodes (assuming I’m reading this correctly and all USB scan codes are one byte) evenly into four 16-bit registers, or, nowadays, one 64-bit register.

                                                1. 3

                                                  FWIW it’s not actually 6 keys total; modifier keys don’t count towards the limit.

                                          1. 1

                                            I like Alpine and appreciate its extremely small image size compared to something like Debian. My main annoyance with it is that there is no specified update policy with respect to packages (specifically, whether each release keeps packages to a major and minor version and only updates point releases or patchsets). hadolint really wants you to pin packages, and Alpine removes old versions of packages from the mirrors upon publishing new ones, so you have to use apk’s ~= syntax for this to make any sense. Without clear guidance from the Alpine maintainers it’s hard to decide how specific to make the ~=. To be honest, I’m not sure why hadolint enforces this rule for apk at all…
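                                            For anyone hitting the same wall, this is roughly what ~= pinning looks like in a Dockerfile. The package name and versions here are placeholders, and how many version components to pin is exactly the judgment call described above:

                                            ```dockerfile
                                            FROM alpine:3.19
                                            # '~=1.24' matches any 1.24.x release, so the build keeps working after
                                            # Alpine drops a superseded patch version from its mirrors; pinning the
                                            # exact 1.24.0-r5 would break as soon as the mirror moves on.
                                            RUN apk add --no-cache 'nginx~=1.24'
                                            ```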

                                            1. 9

                                              First off, congrats! You’ll do great! I made a list of things I’ve discovered over time, but I don’t want to stress you out thinking that you have to memorize all this stuff. You don’t have to do any of it; I’ve just found that these have made my own speaking clearer and better received.

                                              • Practice speaking at 80% speed. You want to train your brain to get used to a feeling of speaking almost uncomfortably slowly. When you’re in front of an audience you will likely tend to rush; forcing yourself to slow down will counteract that tendency and make you talk at a normal speed. This is also a natural counter to “ums” and “ahs”, which are usually the result of speaking faster than your brain can think.
                                              • Practice finding opportunities to stretch out words where possible, usually along vowels. When you need to give your brain time to think, instead of saying “um” or “ah” you can just stretch out the vowels in the words you are already speaking. Seriously, just walk around speaking to yourself in your head, except trail and hold the last vowel of the word you’re saying. Suddenlyyyyyyy you’ll souuuuuuuund like thiiiis, and if you practice stretching out your words you’ll be able to do it when you actually need it.
                                              • “Make eye contact.” I put this in quotes because each member of the audience isn’t expecting you to make personal eye contact with them – they just want to see your eyes flash up to look in the vague direction of the audience. All you have to do is flick your eyes up every now and then and scan the room a little bit. You can imagine trying to look at people’s foreheads instead of their eyes to make it less intimidating.
                                              • Put in more pictures than you think you need. Every time I finish a talk, I always look back and regret not adding more explanatory pictures, diagrams or charts. Even if they don’t add any new informational content, pictures give some visual variety to your presentation and give time for the audience’s eyes to rest. It may seem stupid, but even just putting the logos of the products/languages/tools you’re talking about can help.
                                              • Try not to read off your slides. This may be hard since you’re relying on your slides to guide what you’re saying, but I try to speak about the important parts of the topic and let the slide text be the more extended, complete version of the idea.
                                              • Make your font size way bigger than you think it should be.
                                              • If you have to show code, be minimal about it – with a large block of code, your eye isn’t drawn to any point and the audience will struggle to find where the code you’re speaking about is. Maybe only show a function and its call signature, or a single line to show off a cool operator in a language. If you really, really, really need to show a block of code, you might want to ghost it out and highlight each line of interest as a separate “slide.” This gives the audience a visual anchor to look at as you’re going through each line.
                                              • The audience wants you to do well! They are on your side, and are actively looking to forgive any mistakes you might make. If you do make a mistake, give yourself some time and space to recover and keep going! People will remember your talk, not the 5 second pause you took to remember where you were in your slides.
                                              • If you’re giving a longer talk (maybe 15 minutes or more), it can help to show a table of contents slide at the beginning and refer to it throughout your talk. Not only does this remind the audience how all the pieces fit together, but it can help you write the talk since you have an outline to work from.

                                              Good luck!

                                              1. 4

                                                Try not to read off your slides.

                                                This is super important! Reading your slides is one of the most common and most annoying presenter mistakes. I’ve taken to creating slides that don’t even have sentences on them in order to avoid this. A word or two at most; but mostly just images.

                                              1. 3
                                                • It would cache data off-machine into fault-tolerant storage.
                                                • If the machine broke I would like to go to any other machine and resume work within a few seconds without losing any data.
                                                • If my machine was stolen I would like it to be totally unusable after a short time - so nobody would bother to steal it.

                                                Back when I worked at Google, these problems were effectively solved with Chromebooks. I did my main development by SSHing from a desktop Chromebox into a relatively powerful workstation. When I traveled to different offices, they had loaner Chromebooks available. I would simply check out a loaner Chromebook, sign in, and after a few seconds Chrome Sync would provide me with a mobile version of my home setup. You had to accept a few compromises in your workflow, but once you did that the benefits were great.

                                                1. 2

                                                  My issue with Chromebooks is the Google data concerns. If I could get something like a Chromebook but with my own server behind it, that’d be wonderful. Oh, and decent access to the computer itself would be nice (there’s only so much a browser can do). But for other people, I recommend Chromebooks as the easiest computer for consumers.

                                                  I wish that Plan 9 from Bell Labs had caught on more. A few people have pined after that in their interviews. What particularly stood out to me was Rob Pike’s comment:

                                                  it used to be that phones worked without you having to carry them around, but computers only worked if you did carry one around with you. The solution to this inconsistency was to break the way phones worked rather than fix the way computers work.

                                                  It would have been nice if the Plan 9 vision had come to fruition.

                                                1. 14

                                                  Seems like he completely missed Nix, and this makes the whole article a bit more questionable.

                                                  1. 5

                                                    This was what I was going to say. I switched to Nix and never looked back. OK, Darwin support in Nix is definitely second-tier (because it has fewer active contributors), so you have to fix things once in a while. Especially combined with home-manager, Nix has large benefits: e.g. on a new Mac, I just clone the git repository with my home-manager configuration, run home-manager switch, and my (UNIX-land) configuration is as it was before.

                                                    1. 2

                                                      I wasn’t aware that Nix could be used for this kind of purpose! I’ll have to look into it.

                                                      1. 1

                                                        I tried to live the Nix life on Mac, but a package I absolutely needed wasn’t available for Mac and creating a package turned out to be a lot more work than I was willing to put into it. The Linux version of the package actually modifies the binary, I guess to point it at the right path to find its libraries (which seems to be a fairly common practice) and doing the same thing on a Mac was… non-obvious. With Homebrew it’s a one-liner.

                                                        1. 1

                                                          Just out of curiosity: do you remember which package?

                                                          1. 2

                                                            Dart, the programming language. Here’s the nix file: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/interpreters/dart/default.nix. The binary is patched on line 62. I have a branch where I added the latest versions of the interpreter for Linux but I had hoped to also support Mac since that’s what I use at work. I should probably go ahead and PR the Linux stuff at least, I suppose.

                                                            1. 1

                                                              FYI, here’s my PR for the Linux versions :-) https://github.com/NixOS/nixpkgs/pull/60607

                                                        2. 5

                                                          There’s also pkgsrc (macOS), though it’s very hard to say how comprehensive macOS support is there.

                                                          1. 5

                                                            The best thing about MacPorts is all the patches we can re-use for nixpkgs. The few times I had some trouble with packaging, there was an answer already in their package repository. Major props to their engineering skills.

                                                          1. 1

                                                            AWESOME. As someone who has made simple, mostly-text websites for a long time I’ve been looking for something like this.

                                                            1. 8

                                                              It’s really satisfying to fix up old, broken code and get it running again, especially when the results are as visible as a game.

                                                              1. 1

                                                                Totally! A while back I ported BSD rain to Linux (original source is here). I was surprised my distro didn’t have it. While it wasn’t broken (it obviously compiled on NetBSD), it was nice to have an old friend back.

                                                              1. 10

                                                                It’s going to be interesting to see how much this is going to affect the future of how the WWW functions. GDPR sure didn’t manage to be as severe a measure as we’d hoped it would be. Heck, I’m having trouble getting the relevant authorities to understand the clear violations I’ve forwarded to them; they just end up being dismissed.

                                                                But this law here is of course not for the people, no… This is here for the copyright holders, and they carry much more power. So will this actually result in the mess we expect it to be?

                                                                1. 25

                                                                  GDPR and the earlier cookie law have created a huge number of pointless popup alert boxes on sites everywhere.

                                                                  1. 10

                                                                    The one thing I can say is that, due to the GDPR, you have the choice to reject many cookies, which you couldn’t do before (without ad-blockers or such). That’s at least something.

                                                                    1. 10

                                                                      Another amazing part of GDPR is data exports. Before, hardly any website had them, in order to lock you in.

                                                                      1. 4

                                                                      You had this choice before, though; it’s normal to make a cookie whitelist, for example in Firefox with no add-ons. The GDPR has you trust the site that wants to track you not to give you the cookies, instead of you having personal autonomy and choosing not to save the cookies with your own client.

                                                                        1. 26

                                                                          I think this attitude is a bit selfish, since not every non-technical person wants to be tracked, and it’s also counterproductive, since even the way you block cookies is gonna be used to track you. The race between tracker and trackee can never be won by either of them if governments don’t make it illegal. I for one am very happy about the GDPR, and I’m glad we’re finally tackling privacy at scale.

                                                                          1. 2

                                                                            it’s not selfish, it’s empowering

                                                                            if a non-technical person is having trouble we can volunteer to teach them and try to get browsers to implement better UX

                                                                            GDPR isn’t goverments making tracking illegal

                                                                            1. 15

                                                                              I admire your spirit, but I think it’s a bit naive to think that everyone has time for all kinds of empowerment. My friends and family want privacy without friction, without me around, and without becoming computer hackers themselves.

                                                                          2. 18

                                                                            It’s now illegal for the site to unnecessarily break functionality based on rejecting those cookies, though. It’s also their responsibility to identify which cookies are actually necessary for functionality.

                                                                        2. 4

                                                                          In Europe we’re starting to sign GDPR papers for everything we do… even for buying glasses…

                                                                          1. 12

                                                                            Goes to show how much information about us is being implicitly collected, in my honest opinion, whether for advertisement or administration.

                                                                            1. 1

                                                                              Most of the time, you don’t even get a copy of the document; it’s mostly legal jargon that nobody reads… it might be a good thing, but it’s far from perfect.

                                                                        3. 4

                                                                          “The Net interprets censorship as damage, and routes around it.”

                                                                          1. 22

                                                                            That old canard is increasingly untrue as governments and supercorps like Google, Amazon, and Facebook seek to control as much of the Internet as they can by building walled gardens and exerting their influence on how the protocols that make up the internet are standardized.

                                                                            1. 13

                                                                              I believe John Gilmore was referring to old-fashioned direct government censorship, but I think his argument applies just as well to the soft corporate variety. Life goes on outside those garden walls. We have quite a Cambrian explosion of distributed protocols going on at the moment, and strong crypto. Supercorps rise and fall. I think we’ll be OK.

                                                                              Anyway, I’m disappointed by the ruling as well; I just doubt that the sky is really falling.

                                                                              1. 4

                                                                                I agree that it is not the sky falling. It is a burden for startups and innovation in Europe, though. We need new business ideas for the news business. Unfortunately, we have now committed to life support for big old publishers like Springer.

                                                                                At least we will probably have some startups applying fancy AI techniques to implement upload filters. If they become profitable enough, then Google will start its own service for free (in exchange for sniffing all the data, of course). Maybe some lucky ones will get bought before they go bankrupt. I believe this decision is neutral or positive for Google.

                                                                                The hope is that creatives earn more, but Germany already tried it with the ancillary copyright for press publishers (German: Leistungsschutzrecht für Presseverleger) in 2013. It did not work.

                                                                                1. 2

                                                                                  Another idea for a nice AI startup I had: summarizing news with natural language processing. I do not see how writing news with an AI would be illegal; only copying the words/sentences would be.

                                                                                  Maybe, however, you cannot make public where you aggregated the original news that you feed into your AI :)

                                                                              2. 4

                                                                                Governments, corporations, and individual political activists are certainly trying to censor the internet, at least the most popularly-accessible portions of it. I think the slogan is better conceptualized as an aspiration for technologists interested in information freedom - we should interpret censorship as damage (rather than counting on the internet as it currently works to just automatically do it for us) and we should build technologies that make it possible for ordinary people to bypass it.

                                                                            2. 2

                                                                              I can see a real attitude shift coming when the EU finally gets around to imposing significant fines. I’ve worked with quite a few organisations that have taken a ‘bare minimum, wait and see’ attitude and that would make big changes if the law were shown to have teeth. Obviously pure speculation, though.

                                                                            1. 3

                                                                              Respectfully, is that something an org can brag about?

                                                                              The time-to-patch metric heavily depends on the nature of the bug to patch.

                                                                              I don’t know the complexity of fixing these two vulns. Fixing things fast is surely something to be proud of, but if they don’t want people pointing fingers at Mozilla when a bug stays in the backlog for more than a week, they shouldn’t brag when one doesn’t.

                                                                              1. 18

                                                                                Assuming that the title refers to fixing and successfully releasing a bugfix, a turnaround of less than 24 hours is a huge accomplishment for something like a browser. Don’t forget that a single CI run can take several hours, careful release management/canarying is required, and it takes time to measure crash rates to make sure you haven’t broken anything. The 24 hours is more a measure of the Firefox release pipeline than the developer fix time; it’s also a measure of its availability and reliability.

                                                                                1. 10

                                                                                  This. I remember a time when getting a release like this out took longer than a week. We’ve been able to do it this fast for a few years now, though, so it’s not that impressive anymore.

                                                                                2. 6

                                                                                  As far as I can tell, the org isn’t bragging; the “less than 24h” boast is not present on the security advisory.

                                                                                  1. 1

                                                                                    To be fair, you’re right.

                                                                                  2. 2

                                                                                    Also, the bugs are not viewable, even when logged in,

                                                                                    so it’s hard to get any context on this.

                                                                                    1. 2

                                                                                      It is possible to check the revisions between both versions, and they do not seem so trivial.

                                                                                      These are the revisions (without the one that blocks some extensions):
                                                                                      https://hg.mozilla.org/mozilla-unified/rev/e8e770918af7
                                                                                      https://hg.mozilla.org/mozilla-unified/rev/eebf74de1376
                                                                                      https://hg.mozilla.org/mozilla-unified/rev/662e97c69103

                                                                                      1. 1

                                                                                        Well, sorta the same, but the context is them fixing pwn2own security vulnerabilities in less than 24 hours, 12 months ago:

                                                                                        https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

                                                                                      2. 2

                                                                                        Respectfully, is that something an org can brag about?

                                                                                        I always assume it’s a P.R. stunt. Doubly true if the product is in a memory-unsafe language without lots of automated tooling to catch vulnerabilities before they ship. Stepping back from that default: Mozilla is also branding themselves on privacy, and this fits into that, too.

                                                                                        EDIT: Other comments indicate the 24 hrs part might be editorializing. If so, I stand by the claim as a general case for “we patched fast after unsafe practices = good for PR.” The efforts that led to it might have been sincere.

                                                                                      1. 1

                                                                                        @pushcx / @alynpost / @Irene, does this seem like enough support to add the tags?

                                                                                        1. 3

                                                                                          Whenever you learn something new, take this mental model: Never do things for their own sake. Which translates to: Never learn Rust just because you want to learn Rust.

                                                                                          This is great advice to follow! I have a related rule for personal projects: I can either write something I know in a language I don’t know, or I can write something I don’t know in a language I know. Mixing the two means bad news.

                                                                                          (side-note: I just signed up for Rust and Tell Berlin! see you there)

                                                                                          1. 15

                                                                                            After the recent announcement of the F5 purchase of NGINX we decided to move back to Lighttpd.

                                                                                            Would be interesting to know why, instead of just a blog post which is basically an annotated lighttpd configuration.

                                                                                            1. 6

                                                                                              If history has taught us anything, the timeline will go a little something like this. New cool features will only be available in the commercial version, because $$. The license will change, because $$. Dead project.

                                                                                              And it’s indeed an annotated lighttpd configuration, as this is roughly a replication of the nginx config we were using and… the documentation of lighttpd isn’t that great. :/

                                                                                              1. 9

                                                                                                The lighttpd documentation sucks. Or at least it did three years ago when https://raymii.org ran on it. Nginx is better, but still missing comprehensive examples. Apache is best, on the documentation front.

                                                                                                I wouldn’t move my entire site to another webserver anytime soon (it runs nginx), but for new deployments I regularly just use Apache. 2.4 is much, much faster and just does everything you want, and it being open source and not bound to a corporation helps.

                                                                                                1. 1

                                                                                                  Whatever works for you. We used to run all our websites on lighttpd, before the project stalled. So it seemed a good idea to move back, before nginx frustration kicked in. :)

                                                                                                  1. 3

                                                                                                    I’m a bit confused. You’re worried about Nginx development stalling or going dead in the future, so you switched to one that’s already stalled in the past? Seems like the same problem.

                                                                                                    Also, I thought Nginx was open source. If it is, people wanting to improve it can contribute to and/or fork it. If not, the problem wouldn’t be the company.

                                                                                                    1. 2

                                                                                                      The project is no longer stalled, and if it stalls again we’re going to move, again. Which open source project did well after the parent company got acquired?

                                                                                                      1. 3

                                                                                                        I agree with you that there’s some risk after a big acquisition. I didn’t know lighttpd was active again. That’s cool.

                                                                                                        1. 2

                                                                                                          If it was still as dead as it was a couple of years ago I would have continued my search. :)

                                                                                                          1. 1

                                                                                                            Well, thanks for the tip. I was collecting lightweight servers and services in C to use for tests of analysis and testing tools later. Lwan was the main one for the web; lighttpd seems like a decent higher-feature server. I read that Nginx was a C++ app, which would mean I have less tooling to use on it unless I build a C++-to-C compiler. That’s… not happening… ;)

                                                                                                            1. 3

                                                                                                              nginx is 97% C with no C++ so you’re good.

                                                                                                              1. 1

                                                                                                                Thanks for correction. What’s other 3%?

                                                                                                                1. 2

                                                                                                                  Mostly vim script with a tiny bit of ‘other’ (according to github so who knows how accurate that is).

                                                                                                                  1. 1

                                                                                                                    Alright. I’ll probably run tools on both then.

                                                                                                                    1. 2

                                                                                                      Nginx was “heavily influenced” by Apache 1.x; a lot of the same architecture, like memory pools etc. FYI.

                                                                                                        2. 2

                                                                                                          SuSE has been going strong, and has been acquired a few times.

                                                                                                          1. 1

                                                                                                            SuSE is not really an open-source project though, but a distributor.

                                                                                                            1. 3

                                                                                                              They do have plenty of open-source projects of their own, though. Like OBS, which is used by plenty outside of SuSE too.

                                                                                                  2. 5

                                                                                                    It’s a web proxy with a few other features, in at least 99% of all cases.

                                                                                                    What cool new features are people using?

                                                                                                    Like, reading a few books on the topic suggested to me that despite the neat things Nginx can do we only use a couple workhorses in our daily lives as webshits:

                                                                                                    • Virtual hosts
                                                                                                    • Static asset hosting
                                                                                                    • Caching
                                                                                                    • SSL/Let’s Encrypt
                                                                                                    • Load balancing for upstream servers
                                                                                                    • Route rewriting and redirecting
                                                                                                    • Throttling/blacklisting/whitelisting
                                                                                                    • Websocket stuff

                                                                                                    Like, sure you can do streaming media, weird auth integration, mail, direct database access, and other stuff, but the vast majority of devs are using a default install or some Docker image. But the bread and butter features? Those aren’t going away.

                                                                                                    If the concern is that new goofy features like QUIC or HTTP3 or whatever will only be available under a commercial license…maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                                                    It just seems like much ado about nothing to me.
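
                                                                                                    For what it’s worth, most of the workhorse features above fit in a few lines of config. This is only a minimal sketch; the hostname, certificate paths, and upstream addresses are made up, and the rate-limit zone would need a matching `limit_req_zone` in the `http {}` block:

                                                                                                    ```nginx
                                                                                                    # Load balancing across upstream app servers (addresses are hypothetical)
                                                                                                    upstream app {
                                                                                                        server 127.0.0.1:8080;
                                                                                                        server 127.0.0.1:8081;
                                                                                                    }

                                                                                                    server {
                                                                                                        listen 443 ssl;
                                                                                                        server_name example.com;          # virtual host

                                                                                                        # SSL via Let's Encrypt (paths assume certbot defaults)
                                                                                                        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
                                                                                                        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

                                                                                                        location /static/ {
                                                                                                            root /srv/www;                # static asset hosting
                                                                                                            expires 7d;                   # client-side caching headers
                                                                                                        }

                                                                                                        location /old-path {
                                                                                                            return 301 /new-path;         # route redirecting
                                                                                                        }

                                                                                                        location / {
                                                                                                            limit_req zone=main burst=10; # throttling (zone defined in http{})
                                                                                                            proxy_pass http://app;
                                                                                                            proxy_http_version 1.1;
                                                                                                            proxy_set_header Upgrade $http_upgrade;    # websocket passthrough
                                                                                                            proxy_set_header Connection "upgrade";
                                                                                                        }
                                                                                                    }
                                                                                                    ```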

                                                                                                    1. 6

                                                                                                      maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                                                      They don’t work well enough on mobile networks. In particular, QUIC’s main advantage over TCP is it directly addresses the issues caused by TCP’s congestion-avoidance algorithm on links with rapidly fluctuating capacities. I share your concern that things seem like they’re changing faster than they were before, but it’s not because engineers are bored and have nothing better to do.

                                                                                                    2. 4

                                                                                                      New cool features will only be available in the commercial version, because $$.

                                                                                                      Isn’t that already the case with nginx?