1. 42

    I hope there’s an uproar about the name.

    Really shitty move for a giant company to create a competing library with such a similar name to an existing project. Bound to cause confusion and potentially steal libcurl users because so many people associate Google with networking and the internet.

    1. 20

      I wonder how long it takes for google autosuggest to correct libcurl to libcrurl.

      1. 9

        Looks like crurl was just an internal working name for the library[0]. They’ve changed it already in their bug tracker to libcurl_on_cronet[1].

        [0] https://news.ycombinator.com/item?id=20228237

        [1] https://chromium-review.googlesource.com/c/chromium/src/+/1652540

        1. 5

          Holy shit! It’s with a Ru in the middle instead of a Ur! I actually missed that until I read your comment and reread the whole thing letter-by-letter. Google knows full well that this will cause confusion since they added a feature to chrome for this exact problem. Egregious and horrible.

          1. 11

            Google knows full well that this will cause confusion

            I’m not part of the team anymore and have no connection to this project, but my guess is that some engineers thought it was a funny pun/play on words and weren’t trying to “trick” people into downloading their library. I’m not saying you shouldn’t be careful about perceptions when your company has such an outsized influence, but I highly doubt this was an intentional, malicious act.

            1.  

              I’d bet this is exactly what happened. I’ve given projects dumb working names before release, and had them snowball out of my control before.

          2.  

            Honestly, I had to double check that I wasn’t reading libcrule.

            1.  

              Honestly, their lack of empathy here, and the need to extend rather than collaborate indicates in my opinion a concerning move away from OSS. I hope to be corrected though.

            1. 2

              USB - A shitty problems-introducing half-baked solution, designed in the terms of the shittiest version of everything, to a problem that could have been perhaps left unsolved for a little longer.

              Now we’re going to go with this for who knows how long, with all the mess it lugs behind. 6-simultaneous-key-press-limit on keyboards and everything.

              Plus, with constant idiotic updates, the USB cables are becoming the issue they were attempting to solve. Great job!

              1. 10

                The 6-key limit is a myth. Competently designed USB keyboards can support NKRO fine. The problem seems more that a lot of keyboard makers don’t actually understand the HID standard, or don’t care.

                There’s plenty about USB that’s crap though.

                1. 1

                  Did look around and found ErgoDox firmware drivers that have NKRO. Will look into it when I’m more pissed about the limit than I am now. Thank you.

                2. 3

                  You really think leaving the problem unsolved for longer would have resulted in a better solution?

                  1. 0

                    It’s more about whether anybody needed to solve it in the first place. I’m sure they had already thought of a universal connection for peripherals in the 1960s, but they couldn’t build it yet back then. Also, the existing serial ports would have kept getting smaller and faster in any case. Possibly we could have managed without USB perfectly well.

                    The answer to your question is yes, though. You can use the Internet protocol suite for communication between small devices as well. By now it could have been extended to all peripherals. Instead of USB we could have had yet another entry at the link layer.

                    1. 7

                      I think it’s important to view USB in the context of where it came from, rather than comparing it to current technology and evaluating it only in hindsight.

                      It’s more about whether anybody was needed to solve it in the first place.

                      The experience of using USB today completely outclasses the ISA, PCI, Parallel Port, and PS/2 connections of the day. I used to have to set physical jumpers on a sound card to make sure that the IRQ and DMA settings matched what my motherboard/OS supported and didn’t conflict with other installed cards. 20 minutes on my knees with a manual and screwdriver in hand, every time, only knowing if you got it right after booting up the OS each time and testing it with some software. Yes, I think someone needed to solve this.

                      Possibly we could have handled without USB perfectly well.

                      I honestly feel that we had to go through a painful phase (non-flippable connectors, manual jumpers, plethora of cable types, screwed-in vs non-screwed in connectors, manually setting non-conflicting IRQs, power distribution) before we could get to a decent one, and I’d rather that painful phase be in the past than the future. Same as with Bluetooth – there was a bad time, and now things “generally” work unless you’re doing something at the fringes. Waiting for the next thing would have just delayed any lessons the industry could have learned.

                      Did you know the USB spec required the ‘trident’ logo to be on the top side of the connector, meaning you always knew which way to plug it in? This seems like a great solution, until you witness millions of people messing it up every time (without even knowing this was part of the standard), compounded by dubious manufacturers flooding the market and ignoring the spec (sometimes making cables without any trident, let alone on the wrong side). You only witness these things by having a product in the wild or having seen another products/specs suffer these problems in the wild. In either case, there is a painful phase that eventually stabilizes into something useful.

                  2. 2

                    Plus, with constant idiotic updates, the USB cables are becoming the issue they were attempting to solve.

                      This, exactly! The U stands for Universal, the idea that any device could connect to another. If I recall correctly, even before USB 1.0 was released there were two incompatible plug types in widespread use: A and B. Supposedly this was to separate the host and client, but as devices quickly appeared that could be either host or client (think of plugging a camera directly into a printer) the mess became apparent. It’s only gotten worse from there, with USB C, mini- then micro-USB, and the micro versions of USB B and 3 (I still daily drive a Note 3 with, I think, Micro USB 3).

                    1. 1

                      What are you doing that requires more than six keys being pushed down at one time?

                      1. 3

                        In my case, hotseat multiplayer games like Liero (think realtime Worms). Playing with two kids on one keyboard is super fun!

                        1. 2

                            Nothing, but it’s still a thing that limits the use of a keyboard, and it’s a stupidly low number for a key buffer. It should be at least 24 keys, preferably 4000. Pointless to have such a small buffer.

                          1. 1

                            I don’t know about you, but I only have ten fingers, and I only really use eight of them for typing.

                              Probably should’ve made the limit 8 instead of 6. You could fit the full set of keycodes (assuming I’m reading this correctly and all USB scan codes are one byte) evenly into four 16-bit registers, or, nowadays, one 64-bit register.

                            1. 3

                              FWIW it’s not actually 6 keys total; modifier keys don’t count towards the limit.

                      1. 1

                        I like Alpine and appreciate its extremely small image size compared to something like Debian. My main annoyance with it is that there is no specified update policy with respect to packages (specifically, whether each release keeps packages to a major and minor version and only updates point releases or patchsets). hadolint really wants you to pin packages, and Alpine removes old versions of packages from the mirrors upon publishing new ones, so you have to use apk’s ~= syntax for this to make any sense. Without clear guidance from the Alpine maintainers it’s hard to decide how specific to make the ~=. To be honest, I’m not sure why hadolint enforces this rule for apk at all…
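
                        To make the pinning problem concrete, here is a rough sketch of the two styles; the package name and version numbers are made up for illustration:

                        ```shell
                        # Exact pin: breaks as soon as Alpine's mirrors drop this revision,
                        # which happens whenever a new package build is published.
                        apk add --no-cache nginx=1.24.0-r6

                        # Fuzzy pin with ~=: accepts any 1.24.x build, so it keeps working
                        # across point releases while still excluding 1.25.
                        apk add --no-cache 'nginx~=1.24'
                        ```

                        How many version components to put after the ~= is exactly the judgment call that the lack of an official policy makes hard.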

                        1. 9

                          First off, congrats! You’ll do great! I made a list of things I’ve discovered over time, but I don’t want to stress you out thinking that you have to memorize all this stuff. You don’t have to do any of it; I’ve just found that these have made my own speaking clearer and better-received.

                          • Practice speaking at 80% speed. You want to train your brain to get used to a feeling of speaking almost uncomfortably slowly. When you’re in front of an audience you will likely tend to rush; forcing yourself to slow down will counteract that tendency and make you talk at a normal speed. This is also a natural counter to “ums” and “ahs”, which are usually the result of speaking faster than your brain can think.
                          • Practice finding opportunities to stretch out words where possible, usually along vowels. When you need to give your brain time to think, instead of saying “um” or “ah” you can just stretch out the vowels in the words you are already speaking. Seriously, just walk around speaking to yourself in your head, except trail and hold the last vowel of the word you’re saying. Suddenlyyyyyyy you’ll souuuuuuuund like thiiiis, and if you practice stretching out your words you’ll be able to do it when you actually need it.
                          • “Make eye contact.” I put this in quotes because each member of the audience isn’t expecting you to make personal eye contact with them – they just want to see your eyes flash up to look in the vague direction of the audience. All you have to do is flick your eyes up every now and then and scan the room a little bit. You can imagine trying to look at people’s foreheads instead of their eyes to make it less intimidating.
                          • Put in more pictures than you think you need. Every time I finish a talk, I always look back and regret not adding more explanatory pictures, diagrams or charts. Even if they don’t add any new informational content, pictures give some visual variety to your presentation and give time for the audience’s eyes to rest. It may seem stupid, but even just putting the logos of the products/languages/tools you’re talking about can help.
                          • Try not to read off your slides. This may be hard since you’re relying on your slides to guide what you’re saying, but I try to speak about the important parts of the topic and let the slide text be the more extended, complete version of the idea.
                          • Make your font size way bigger than you think it should be.
                          • If you have to show code, be minimal about it – with a large block of code, your eye isn’t drawn to any point and the audience will struggle to find where the code you’re speaking about is. Maybe only show a function and its call signature, or a single line to show off a cool operator in a language. If you really, really, really need to show a block of code, you might want to ghost it out and highlight each line of interest as a separate “slide.” This gives the audience a visual anchor to look at as you’re going through each line.
                          • The audience wants you to do well! They are on your side, and are actively looking to forgive any mistakes you might make. If you do make a mistake, give yourself some time and space to recover and keep going! People will remember your talk, not the 5 second pause you took to remember where you were in your slides.
                          • If you’re giving a longer talk (maybe 15 minutes or more), it can help to show a table of contents slide at the beginning and refer to it throughout your talk. Not only does this remind the audience how all the pieces fit together, but it can help you write the talk since you have an outline to work from.

                          Good luck!

                          1. 4

                            Try not to read off your slides.

                            This is super important! Reading your slides is one of the most common and most annoying presenter mistakes. I’ve taken to creating slides that don’t even have sentences on them in order to avoid this. A word or two at most; but mostly just images.

                          1. 3
                            • It would cache data off-machine into fault-tolerant storage.
                            • If the machine broke I would like to go to any other machine and resume work within a few seconds without losing any data.
                            • If my machine was stolen I would like it to be totally unusable after a short time - so nobody would bother to steal it.

                            Back when I worked at Google, these problems were effectively solved with Chromebooks. I did my main development by SSHing from a desktop Chromebox into a relatively powerful workstation. When I traveled to different offices, they had loaner Chromebooks available. I would simply check out a loaner Chromebook, sign in, and after a few seconds Chrome Sync would provide me with a mobile version of my home setup. You had to accept a few compromises in your workflow, but once you did that the benefits were great.

                            1. 2

                              My issue with Chromebooks is the Google data concerns. If I could get something like a Chromebook but with my own server behind it, that’d be wonderful. Oh, and decent access to the computer itself would be nice (there’s only so much a browser can do). But for other people, I recommend Chromebooks as the easiest consumer computer.

                              I wish that Plan 9 from Bell Labs had caught on more. A few people have pined after that in their interviews. What particularly stood out to me was Rob Pike’s comment:

                              it used to be that phones worked without you having to carry them around, but computers only worked if you did carry one around with you. The solution to this inconsistency was to break the way phones worked rather than fix the way computers work.

                              It would have been nice if the Plan 9 vision had come to fruition.

                            1. 14

                              Seems like he completely missed Nix, and this makes the whole article a bit more questionable.

                              1. 5

                                This was what I was going to say. I switched to Nix and never looked back. OK, Darwin is definitely a second-tier platform (because it has fewer active contributors), so you have to fix things once in a while. Especially combined with home-manager, Nix has large benefits: e.g. on a new Mac, I just clone the git repository with my home-manager configuration, run home-manager switch, and my (UNIX-land) configuration is as it was before.
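
                                The new-machine workflow described above looks roughly like this; the repository URL and file layout are placeholders, and the exact paths and flags can differ between home-manager versions:

                                ```shell
                                # Clone the repo holding the home-manager configuration (URL is hypothetical).
                                git clone https://example.com/me/nix-config.git ~/nix-config

                                # Point home-manager at the configuration in the repo.
                                mkdir -p ~/.config/nixpkgs
                                ln -s ~/nix-config/home.nix ~/.config/nixpkgs/home.nix

                                # Build and activate: packages and dotfiles come back
                                # exactly as declared in the repository.
                                home-manager switch
                                ```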

                                1. 2

                                  I wasn’t aware that Nix could be used for this kind of purpose! I’ll have to look into it.

                                  1. 1

                                    I tried to live the Nix life on Mac, but a package I absolutely needed wasn’t available for Mac and creating a package turned out to be a lot more work than I was willing to put into it. The Linux version of the package actually modifies the binary, I guess to point it at the right path to find its libraries (which seems to be a fairly common practice) and doing the same thing on a Mac was… non-obvious. With Homebrew it’s a one-liner.

                                    1. 1

                                      Just out of curiosity: do you remember which package?

                                      1. 2

                                        Dart, the programming language. Here’s the nix file: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/interpreters/dart/default.nix. The binary is patched on line 62. I have a branch where I added the latest versions of the interpreter for Linux but I had hoped to also support Mac since that’s what I use at work. I should probably go ahead and PR the Linux stuff at least, I suppose.
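
                                        For context, the kind of binary patching referred to above is commonly done with patchelf, pointing a prebuilt executable at the Nix store’s dynamic linker; the paths below are placeholders, not the actual commands from that Nix expression:

                                        ```shell
                                        # Make a prebuilt Linux binary runnable under Nix by rewriting its
                                        # ELF interpreter and library search path (paths are illustrative).
                                        patchelf --set-interpreter "$glibc/lib/ld-linux-x86-64.so.2" bin/dart
                                        patchelf --set-rpath "$glibc/lib" bin/dart
                                        ```

                                        macOS uses Mach-O rather than ELF, so the same fixup needs entirely different tooling (presumably something like install_name_tool), which is likely part of why the Darwin port was non-obvious.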

                                        1. 1

                                          FYI, here’s my PR for the Linux versions :-) https://github.com/NixOS/nixpkgs/pull/60607

                                    2. 5

                                      There’s also pkgsrc (macOS), though it’s very hard to say how comprehensive macOS support is there.

                                      1. 5

                                        The best thing about MacPorts is all the patches we can re-use for nixpkgs. The few times I had some trouble with packaging, there was an answer already in their package repository. Major props to their engineering skills.

                                      1. 1

                                        AWESOME. As someone who has made simple, mostly-text websites for a long time I’ve been looking for something like this.

                                        1. 8

                                          It’s really satisfying to fix up old, broken code and get it running again, especially when the results are as visible as a game.

                                          1. 1

                                            Totally! A while back I ported BSD rain to Linux (original source is here). I was surprised my distro didn’t have it. While it wasn’t broken (it obviously compiled on NetBSD), it was nice to have an old friend back.

                                          1. 10

                                             It’s going to be interesting to see how much this affects the future of how the WWW functions. GDPR sure didn’t manage to be as severe a measure as we’d hoped it would be. Heck, I’m having trouble getting the relevant authorities to understand clear violations that I’ve forwarded to them; they just end up being dismissed.

                                            But this law here is of course not for the people, no… This is here for the copyright holders, and they carry much more power. So will this actually result in the mess we expect it to be?

                                            1. 25

                                              GDPR and the earlier cookie law have created a huge amount of pointless popup alert boxes on sites everywhere.

                                              1. 10

                                                The one thing I can say is that, due to the GDPR, you have the choice to reject many cookies which you couldn’t do before (without ad-blockers or such). That’s at least something.

                                                1. 10

                                                  Another amazing part of GDPR is data exports. Before, hardly any website had them, the better to lock you in.

                                                  1. 4

                                                    You had this choice before, though; it’s normal to make a cookie whitelist, for example in Firefox with no add-ons. The GDPR makes you trust the site that wants to track you not to give you the cookies, instead of you exercising personal autonomy and choosing not to save the cookies with your own client.

                                                    1. 26

                                                      I think this attitude is a bit selfish, since not every non-technical person wants to be tracked, and it’s also counter-productive, since even the way you block cookies is gonna be used to track you. The race between tracker and trackee can never be won by either of them if governments don’t make it illegal. I for one am very happy about the GDPR, and I’m glad we’re finally tackling privacy at scale.

                                                      1. 2

                                                        It’s not selfish, it’s empowering.

                                                        If a non-technical person is having trouble, we can volunteer to teach them and try to get browsers to implement better UX.

                                                        GDPR isn’t governments making tracking illegal.

                                                        1. 15

                                                          I admire your spirit, but I think it’s a bit naive to think that everyone has time for all kinds of empowerment. My friends and family want privacy without friction, without me around, and without becoming computer hackers themselves.

                                                      2. 18

                                                        It’s now illegal for the site to unnecessarily break functionality based on rejecting those cookies, though. It’s also their responsibility to identify which cookies are actually necessary for functionality.

                                                    2. 4

                                                      In Europe we’re starting to sign GDPR papers for everything we do… even for buying glasses…

                                                      1. 12

                                                        Goes to show how much information about us is being implicitly collected, in my honest opinion, whether for advertisement or administration.

                                                        1. 1

                                                          Most of the time you don’t even get a copy of the document, and it’s mostly legal jargon that nobody reads… it might be a good thing, but it’s far from perfect.

                                                    3. 4

                                                      “The Net interprets censorship as damage, and routes around it.”

                                                      1. 22

                                                        That old canard is increasingly untrue as governments and supercorps like Google, Amazon, and Facebook seek to control as much of the Internet as they can by building walled gardens and exerting their influence on how the protocols that make up the internet are standardized.

                                                        1. 13

                                                          I believe John Gilmore was referring to old-fashioned direct government censorship, but I think his argument applies just as well to the soft corporate variety. Life goes on outside those garden walls. We have quite a Cambrian explosion of distributed protocols going on at the moment, and strong crypto. Supercorps rise and fall. I think we’ll be OK.

                                                          Anyway, I’m disappointed by the ruling as well; I just doubt that the sky is really falling.

                                                          1. 4

                                                            I agree that it is not the sky falling. It is a burden for startups and innovation in Europe, though. We need new business ideas for the news business. Unfortunately, we have now committed to life support for big old publishers like Springer.

                                                            At least we will probably have some startups applying fancy AI techniques to implement upload filters. If they become profitable enough, then Google will start its own service for free (in exchange for sniffing all the data, of course). Maybe some lucky ones get bought before they go bankrupt. I believe this decision is neutral or positive for Google.

                                                            The hope is that creatives earn more, but Germany already tried it with the ancillary copyright for press publishers (German: Leistungsschutzrecht für Presseverleger) in 2013. It did not work.

                                                            1. 2

                                                              Another idea I had for a nice AI startup: summarizing news with natural language processing. I do not see how writing news with an AI would be illegal; only copying the words/sentences would be.

                                                              Maybe, however, you could not make public where you aggregated the original news that you feed into your AI :)

                                                          2. 4

                                                            Governments, corporations, and individual political activists are certainly trying to censor the internet, at least the most popularly-accessible portions of it. I think the slogan is better conceptualized as an aspiration for technologists interested in information freedom - we should interpret censorship as damage (rather than counting on the internet as it currently works to just automatically do it for us) and we should build technologies that make it possible for ordinary people to bypass it.

                                                        2. 2

                                                          I can see a real attitude shift coming when the EU finally gets around to imposing significant fines. I worked with quite a few organisations that’ve taken a ‘bare minimum, wait and see’ attitude who’d make big changes if the law was shown to have teeth. Obviously pure speculation though.

                                                        1. 3

                                                          Respectfully, is that something an org can brag about?

                                                          The time-to-patch metric heavily depends on the nature of the bug to patch.

                                                          I don’t know the complexity of fixing these two vulns. Surely fixing things fast is something to be proud of, but if they don’t want people pointing fingers at Mozilla when a bug stays more than a week in the backlog, they shouldn’t brag about it when one doesn’t.

                                                          1. 18

                                                            Assuming that the title refers to fixing and successfully releasing a bugfix, a turnaround of less than 24 hours is a huge accomplishment for something like a browser. Don’t forget that a single CI run can take several hours, careful release management/canarying is required, and it takes time to measure crash rates to make sure you haven’t broken anything. The 24 hours is more a measure of the Firefox release pipeline than the developer fix time; it’s also a measure of its availability and reliability.

                                                            1. 10

                                                              This. I remember a time when getting a release like this out took longer than a week. I think we’ve been able to do it this fast for a few years now, though, so it’s not that surprising anymore.

                                                            2. 6

                                                              As far as I can tell, the org isn’t bragging; the “less than 24h” boast is not present on the security advisory.

                                                              1. 1

                                                                To be fair, you’re right.

                                                              2. 2

                                                                Also, the bugs are not viewable, even when logged in, so it’s hard to get any context on this.

                                                                1. 2

                                                                  It is possible to check the revisions between both versions, and they do not seem so trivial.

                                                                  These are the revisions (without the one that blocks some extensions):
                                                                  https://hg.mozilla.org/mozilla-unified/rev/e8e770918af7
                                                                  https://hg.mozilla.org/mozilla-unified/rev/eebf74de1376
                                                                  https://hg.mozilla.org/mozilla-unified/rev/662e97c69103

                                                                  1. 1

                                                                    Well, sorta the same, but with the context being that they fixed Pwn2Own security vulnerabilities in less than 24 hours, 12 months ago:

                                                                    https://hacks.mozilla.org/2018/03/shipping-a-security-update-of-firefox-in-less-than-a-day/

                                                                  2. 2

                                                                    Respectfully, is that something an org can brag about?

                                                                    I always assume it’s a P.R. stunt. Doubly true if the product is in a memory-unsafe language without lots of automated tooling to catch vulnerabilities before they ship. Stepping back from that default, Mozilla is also branding themselves on privacy. This fits into that, too.

                                                                    EDIT: Other comments indicate the 24 hrs part might be editorializing. If so, I stand by the claim as a general case for “we patched fast after unsafe practices = good for PR.” The efforts that led to it might have been sincere.

                                                                  1. 1

                                                                    @pushcx / @alynpost / @Irene, does this seem like enough support to add the tags?

                                                                    1. 3

                                                                      Whenever you learn something new, take this mental model: never do things for their own sake. Which translates to: never learn Rust just because you want to learn Rust.

                                                                      This is great advice to follow! I have a related rule for personal projects: I can either write something I know in a language I don’t know, or I can write something I don’t know in a language I know. Mixing the two means bad news.

                                                                      (side-note: I just signed up for Rust and Tell Berlin! see you there)

                                                                      1. 15

                                                                        After the recent announcement of the F5 purchase of NGINX we decided to move back to Lighttpd.

                                                                        Would be interesting to know why, instead of just a blog post which is basically an annotated lighttpd configuration.

                                                                        1. 6

                                                                          If history has taught us anything, the timeline will go a little something like this. New cool features will only be available in the commercial version, because $$. The license will change, because $$. Dead project.

                                                                          And it’s indeed an annotated lighttpd configuration as this roughly a replication of the nginx config we were using and… the documentation of lighttpd isn’t that great. :/

                                                                          1. 9

                                                                            The lighttpd documentation sucks. Or at least it did three years ago when https://raymii.org ran on it. Nginx’s documentation is better, but still missing comprehensive examples. Apache’s is best, on the documentation front.

                                                                            I wouldn’t move my entire site to another webserver anytime soon (it runs nginx) but for new deployments I regularly just use Apache. With 2.4 being much much faster and just doing everything you want, it being open source and not bound to a corporation helps.

                                                                            1. 1

                                                                              Whatever works for you. We used to run all our websites on lighttpd, before the project stalled. So it seemed a good idea to move back, before nginx frustration kicked in. :)

                                                                              1. 3

                                                                                I’m a bit confused. You’re worried about Nginx development stalling or going dead in the future. So you switched to one that’s already stalled in the past? Seems like the same problem.

                                                                                Also, I thought Nginx was open source. If it is, people wanting to improve it can contribute to and/or fork it. If not, the problem wouldn’t be the company.

                                                                                1. 2

                                                                                  The project is no longer stalled, and if it stalls again we’re going to move again. Which open source project did well after the parent company got acquired?

                                                                                  1. 3

                                                                                    I agree with you that there’s some risk after a big acquisition. I didnt know lighttpd was active again. That’s cool.

                                                                                    1. 2

                                                                                      If it was still as dead as it was a couple of years ago I would have continued my search. :)

                                                                                      1. 1

                                                                                        Well, thanks for the tip. I was collecting lightweight servers and services in C language to use for tests on analysis and testing tools later. Lwan was main one for web. Lighttpd seems like a decent one for higher-feature server. I read Nginx was a C++ app. That means I have less tooling to use on it unless I build a C++ to C compiler. That’s… not happening… ;)

                                                                                        1. 3

                                                                                          nginx is 97% C with no C++ so you’re good.

                                                                                          1. 1

                                                                                            Thanks for correction. What’s other 3%?

                                                                                            1. 2

                                                                                              Mostly vim script with a tiny bit of ‘other’ (according to github so who knows how accurate that is).

                                                                                              1. 1

                                                                                                Alright. I’ll probably run tools on both then.

                                                                                                1. 2

                                                                                  Nginx was “heavily influenced” by apache 1.x; a lot of the same architecture, like memory pools etc. FYI.

                                                                                    2. 2

                                                                                      SuSE has been going strong, and has been acquired a few times.

                                                                                      1. 1

                                                                                        SuSE is not really an open-source project though, but a distributor.

                                                                                        1. 3

                                                                                          They do have plenty of open-source projects on their own, though. Like OBS, used by plenty outside of SuSE too.

                                                                              2. 5

                                                                                It’s a web proxy with a few other features, in at least 99% of all cases.

                                                                                What cool new features are people using?

                                                                                Like, reading a few books on the topic suggested to me that despite the neat things Nginx can do we only use a couple workhorses in our daily lives as webshits:

                                                                                • Virtual hosts
                                                                                • Static asset hosting
                                                                                • Caching
                                                                                • SSL/Let’s Encrypt
                                                                                • Load balancing for upstream servers
                                                                                • Route rewriting and redirecting
                                                                                • Throttling/blacklisting/whitelisting
                                                                                • Websocket stuff

                                                                                Like, sure you can do streaming media, weird auth integration, mail, direct database access, and other stuff, but the vast majority of devs are using a default install or some Docker image. But the bread and butter features? Those aren’t going away.

                                                                                If the concern is that new goofy features like QUIC or HTTP3 or whatever will only be available under a commercial license…maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                                It just seems like much ado about nothing to me.

                                                                                1. 6

                                                                                  maaaaaybe we should stop encouraging churn in protocols that work well enough?

                                                                                  They don’t work well enough on mobile networks. In particular, QUIC’s main advantage over TCP is it directly addresses the issues caused by TCP’s congestion-avoidance algorithm on links with rapidly fluctuating capacities. I share your concern that things seem like they’re changing faster than they were before, but it’s not because engineers are bored and have nothing better to do.

                                                                                2. 4

                                                                                  New cool features will only be available in the commercial version, because $$.

                                                                                  Isn’t that already the case with nginx?

                                                                              1. 5

                                                                                It’s common to see beginners spending lots of energy switching back and forth between their editor and a terminal to run rustc, and then scrolling around to find the next error that they want to fix.

                                                                                For a while I’ve been really skeptical of the I in IDEs, but I’ve had a great experience with the rust-enhanced Sublime plugin. Admittedly I’ve only worked in small codebases, but I’ve found it to be extremely fast. I’m not sure precisely how Sublime plugins work, but they don’t seem to cause the main render thread to block so you can still scroll without jank while cargo check runs in the background. Additionally, having inline compiler suggestions with one-click acceptance really helps while you’re learning (that’s a clickable button that automatically replaces your text with the suggestion).

                                                                                1. 5

                                                                                  Berlin made International Women’s Day a public holiday, so I’ll probably celebrate by reading some Margaret Atwood. Other than that I have a side project in Rust that I’d like to complete. It’s nice to get back into “stack and heap” languages after writing Python for years.

                                                                                  1. 0

                                                                                    I suggest watching Angel Number 9 by Roberta Findlay instead. It’s fucking wild.

                                                                                  1. 7
                                                                                    • Professionalism. This means treating coworkers with respect, treating customers with respect, treating yourself with respect, and providing adult feedback when you don’t feel respected. Respect comes in many forms.
                                                                                    • Service to your team. Typically this comes in the form of volunteering to do tasks that don’t yield immediate gains, but indirectly help your surroundings. This can mean signing up to do hiring interviews, delving into an unpopular cleanup task, or agreeing to cover for someone on paternity leave. Note: going back to “treating yourself with respect,” you can’t let these tasks consume you (especially if you keep doing them in lieu of “what you get paid for”). A proper balance between scheduled work, unscheduled work, and life is critical.
                                                                                    • Keeping your eyes open. It’s extremely easy to become established and set in your ways, especially if you’ve been at the same company or working on the same codebase for a long time. I’m not saying you should pay attention to every new fad, but it’s good to keep tabs on general trends in the industry. Outside of switching jobs, good sources are: Lobsters itself, conferences, new hires (especially ones new to the industry), and trying new programming languages/tooling “just to see what it’s like.”
                                                                                    • Keeping in touch with the world outside of programming. Maintaining hobbies or interests outside of programming spawns creativity and keeps you out of bubbles. Steve Jobs’ classic example was the calligraphy class he took that inspired him to have great typography on the original Mac. But you can’t force it! You never know when these things will intersect, so don’t take up photography because it will help your programming. You need to find things which genuinely drive you, and the connections will form later.
                                                                                    • Avoiding Second System Effect. The second system effect pops up everywhere, and I’m sure you will see it kill a promising project sometime during your career. Avoiding it means identifying when “perfection” is preventing a project from shipping, and also identifying when “perfection” isn’t actually perfection at all – sometimes it can just be a strong preference and orthogonal to the actual problem at hand. Rearranging deck chairs on the Titanic is perhaps a bit dramatic, because this kind of waste happens even for successful projects.
                                                                                    • Not being a hero. Hero developers are the ones who didn’t heed the advice given in the “service to your team” section. They always jump in and fix emergencies, often gleefully so. It isn’t even limited to fixing application errors, heroes can swoop in and fix planning emergencies or fill in gaps in documentation and training. It’s awesome that they have capabilities to do that, but as the linked article says: “crisis management is not the same as crisis prevention,” and “having a hero on call makes those real problems seem less urgent than they really are.”
                                                                                    • Properly handling Unicode. Yes it’s hard. We need to do it anyway.
                                                                                    • Service to your community. There are a lot of imbalances in the tech industry: minorities, women, and LGBT people face a lot of structural imbalances (in addition to outright discrimination and harassment). If you’re in a position of privilege, it’s on you to read up about these imbalances and how to correct them. In addition to being the right thing to do, not doing so means keeping qualified people out of jobs and driving qualified people away from the jobs they have. It’s a long road but we can get there.
                                                                                    1. 3

                                                                                      Thanks for such a thoughtful list. Even when good is defined qualitatively, as you have, it’s interesting to consider how to measure success towards manifesting those qualities. I think it’s interesting not as a way of measurement for the sake of measurement, or measuring to keep score, but as a way to identify unrealized potential or areas to improve.

                                                                                    1. 2

                                                                                      If you’re interested in things like this, you may want to check out the excellently-written Statistics Done Wrong by Alex Reinhart.

                                                                                      1. 2

                                                                                        The company I work at has semiregular talks for developers and I’ve been meaning to write “How to Not Be Afraid of Large (and Small) Numbers” for a while. I want to explain back-of-the-envelope estimation “at the edges” and why it’s useful. “At the edges” means what might happen if you take some variable of a problem to the limit. A practical example for my team would be asking “how many servers could we ever possibly use?,” then seeing what it would cost to actually do that. Thinking along these lines reveals hidden bottlenecks and contours in the problem, and being comfortable doing these thought experiments lets you brainstorm and navigate scaling issues better.
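
                                                                                        A minimal sketch of that kind of edge-case estimate (every number below is an invented assumption for illustration, not a real figure from my team):

```python
# Back-of-the-envelope: "how many servers could we ever possibly use?"
# All constants are made-up assumptions to demonstrate the technique.

REQUESTS_PER_DAY = 10_000_000_000    # assume we someday serve planet-scale traffic
REQUESTS_PER_SERVER_PER_SEC = 1_000  # assume one server handles ~1k req/s
COST_PER_SERVER_PER_MONTH = 200      # assume ~$200/month per server

# Take the variable to its limit: peak load at the largest plausible scale.
peak_rps = REQUESTS_PER_DAY / 86_400 * 3        # assume peak is ~3x the daily average
servers_needed = int(peak_rps / REQUESTS_PER_SERVER_PER_SEC) + 1
monthly_cost = servers_needed * COST_PER_SERVER_PER_MONTH

print(f"peak ~{peak_rps:,.0f} req/s -> {servers_needed} servers, ~${monthly_cost:,}/month")
```

                                                                                        The point isn’t the exact numbers, it’s that a few lines of arithmetic turn “could we ever possibly” into a concrete figure you can sanity-check against your budget.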

                                                                                        1. 12

                                                                                          Author here, happy to be thoroughly corrected on German or linguistics in general.

                                                                                          1. 2

                                                                            Not a correction, but you may want to learn the reason why prepositions are so difficult: They aren’t Indo-European. Most of the nouns and verbs we use have some root in Indo-European, but the prepositions were mostly (entirely?) created after the great divisions, so there’s less reason for them to pair nicely with prepositions in other Indo-European languages.

                                                                                            1. 2

                                                                              The claim that “prepositions aren’t Indo-European” is poorly-defined and incorrect in most reasonable more-specific senses. Many prepositions in English, German, and other modern Indo-European languages are straightforwardly traceable to Proto-Indo-European roots. The English prepositions off and of and their German cognate ab, for instance, are reflexes of the reconstructed PIE root *apo, which also yields Greek απο and Latin ab (and then Spanish a). The common English preposition in, which is cognate with the similarly-pronounced German in and Latin in (and then Spanish en), is a reflex of a PIE root *en meaning, more or less, “in”.

                                                                                              It’s true that not every single preposition in English or any other modern Indo-European language is traceable to a PIE root, and that some roots that yield prepositions in modern IE languages were not necessarily prepositions in PIE (if PIE even had a distinct syntactic category of prepositions), and that some prepositions in English or German are cognate with morphemes in other Indo-European languages that are not necessarily prepositions (German um for instance is cognate with the Latinate prefix ambi-, which is not a preposition in Latin). But I don’t think any of these facts are inconsistent with the claim that prepositions in modern Indo-European languages by and large are shared Indo-European vocabulary, traceable to the proto-language.

                                                                                              1. 1

                                                                                I took a random set of prepositions now (the German accusative prepositions durch für gegen ohne um, for no particular reason other than having the Kluge dictionary on a shelf in front of me) and looked them up. They are all traceable a little over a thousand years back, one has much older roots and another may have, but those much older roots don’t seem to be Indo-European prepositions.

                                                                                                When you write “not every… IE root”, are you suggesting that most prepositions are traceable to an IE preposition?

                                                                                              2. 1

                                                                                                Wow! Super interesting, thank you for the info.

                                                                                                1. 2

                                                                                                  I saw this really cool diagram once somewhere with prepositions in different languages, including Finnish which doesn’t have prepositions. The idea was, iirc, to demonstrate conceptualization.

                                                                                                  Couldn’t find it now, but this one on dativ/akkusativ is pretty neat too ;)

                                                                                            1. 3

                                                                              I’ve heard some people say that homebrew is an awful package manager compared to, say, apt. Is this true, and if so, why?

                                                                                              1. 11

                                                                                                It’s fucking ridiculous how bad this thing is, and the issues around how it’s run are almost as bad as the technical ones.

                                                                                                For years it was a source only tool - it did all compilation locally. Then they caught up to 1998 and realised that the vast majority of people want binary distribution. So they added in pre-compiled binaries, but never bothered to adapt the dependency management system to take that into account.

                                                                                                So for instance, if you had two packages that provide the same binary - e.g. mysql-server and percona-server (not sure if that’s their exact names in Homebrew), and then wanted to install say “percona-toolkit” as well, which has a source requirement of “something that provides mysql libs/client” - the actual dependency in the binary package would be whatever had been installed on the machine it was built on. This manifested itself in an issue where you couldn’t install both percona-server and percona-toolkit from binaries.

                                                                                                When issues like this were raised - even by employees of the vendor (e.g. as in https://github.com/Homebrew/homebrew-core/issues/8717) the official response was “not our problem buddy”.

                                                                                                No fucks given, just keep hyping the archaic design to the cool kids.

                                                                                                I haven’t even got into the issue of permissions (what could go wrong installing global tools with user permissions) or the ridiculous way package data is handled on end-user machines (git is good for some things, this is not one of them)

                                                                                                If you get too vocal about the problems the tool has, someone (in my case, the founder of the project) will ask you to stop contacting them (these were public tweets with the homebrew account referenced) about the issues.

                                                                                                1. 4

                                                                                  It’s good, easy to use, and has a big community with a well-maintained list of packages. It’s the main package manager for macOS. It’s been around for a long time in the macOS ecosystem and is much better and easier to use than the older solutions we had, such as MacPorts. A cool thing is it has a distinction between command line tools and libraries vs desktop applications (called casks).

                                                                                  For example, you can install wget with brew install wget, but you’d install Firefox with brew cask install firefox.

                                                                                  On Linux I would stick to the system’s default package manager, but maybe it’s worth giving it a try, I guess.

                                                                                                  1. 3

                                                                                                    A cool thing is it has a distinction between command line tool and libraries vs desktop applications (called casks)

                                                                                                    Why is that cool? It seems pretty pointless to me.

                                                                                                    1. 2

                                                                                      Yeah, the distinction between them at install time isn’t that cool, but the fact that it does support installing desktop apps is nice. No need for different tooling like snap. And you get to know where it’s going to be installed according to the command used. Desktop apps are usually located in /Applications on macOS and CLI tools are linked in /usr/local/bin.

                                                                                                  2. 4

                                                                                                    Pro:

                                                                                                    • Has every package imaginable (on Mac)
                                                                                                    • Writing your own formulae is stupidly easy

                                                                                                    Con:

                                                                                                    • You can only get the latest version of packages due to how the software repo works.
                                                                                                    • It’s slower than other package managers

                                                                                                    Meh:

                                                                                                    • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).
                                                                                                    • Might be too easy to add formulae. Everyone’s small projects are in homebrew.
                                                                                                    • The entire system is built on Ruby and Git, so it inherits any problems from them (esp Git).
                                                                                                    1. 1

                                                                                                      Someone told me that it doesn’t do dependency tracking, does that tie in with:

                                                                                                      Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                                                                                                      Also, I’m not very knowledgeable on package managers, but not being able to get older versions of a package and basing everything on Git seems kind of a questionable choice to me. Also, I don’t like Ruby, but that’s a personal matter. Any reason they chose this?

                                                                                                      1. 1
                                                                                                        • You can only get the latest version of packages due to how the software repo works.
                                                                                                        • Keeps every single package you have ever installed around, just in case you need to revert (because remember, you can only get the latest version of packages).

                                                                                                        This is very similar to how Arch Linux’s pacman behaves. Personally, I would put both of these under the “pro” header.

                                                                                                      2. 4

                                                                                        The author of Homebrew has repeatedly said this himself (e.g. in this Quora answer). He usually says the dependency resolution in Homebrew is substantially less sophisticated than apt’s.

                                                                                        Homebrew became successful because it didn’t try to be a Linux package manager. Instead it generally tries to build on top of macOS rather than build a parallel ecosystem. The macOS base distribution is pretty big, so its dependency resolution doesn’t need to be that sophisticated. On my system I have 78 Homebrew packages; of those, 43 have no dependencies and 13 have just 1.

                                                                                        Homebrew Cask also supports macOS-native application / installer formats like .app, .pkg, and .dmg, rather than insisting on repackaging them. It then extends normal workflows by adding tooling around those formats.

                                                                                                        So, yes, Homebrew isn’t as good at package management compared to apt, because it didn’t originally try to solve all the same problems as apt. It’s more of a developer app store than a full system package manager.

                                                                                                        Linuxbrew still doesn’t try to solve the same problems. It focuses on the latest up to date versions of packages, and home dir installations. It doesn’t try to package an entire operating system, just individual programs. I doubt you could build a Linux distribution around the Linuxbrew packages, because it doesn’t concern itself with bootstrapping an operating system. Yes, it only depends on glibc and gcc on Linux, but that doesn’t mean any of the packages in Linuxbrew are set up to work together like they are on an actual Linux distribution.

                                                                                                        1. 2

                                                                                                          I don’t want to rag on the homebrew maintainers too much (it’s free software that’s important enough to me that it’s probably the second thing I install on a new mac), but I do have one big UX complaint: every time I run a homebrew command, I have no idea how long it will take. Even a simple brew update can take minutes because it’s syncing an entire git repo instead of updating just the list of packages.

                                                                                                          brew install might take 30 seconds or it might take two hours. I have no intuition how long it will take before I run it and am afraid to ctrl-c during the middle of a run. I’ll do something like brew install mosh and suddenly I’m compiling GNU coreutils. Huh?

                                                                                          While I wish they’d fix this variance head-on, at a minimum I’d appreciate it if it did something like apt and warned you when you’re about to do a major operation. Admittedly apt only does this with disk size, but homebrew could store rough compile times somewhere and ask if I’d like to continue.

                                                                                                          1. -3

                                                                                                            I think it’s awful because it’s written in Ruby and uses Github as a CDN.

                                                                                                            1. 0

                                                                                                              This comment isn’t helpful. Please be more constructive.

                                                                                                              1. 0

                                                                                                                Who are you to judge? He wanted opinions, I gave mine.

                                                                                                                The Ruby VM is awfully slow and using Github as a CDN is so bad it requires no elaboration.

                                                                                                                1. 3

                                                                                                                  Saying it’s slow is much more helpful than what you said above.

                                                                                                                  1. 1

                                                                                                                    Yeah.