Threads for bitemyapp

  1. 57

    The way this PR was written made it almost seem like a joke

    Nobody really likes C++ or CMake, and there’s no clear path for getting off old toolchains. Every year the pain will get worse.

    and

    Being written in Rust will help fish continue to be perceived as modern and relevant.

    To me this read a lot like satire poking fun at the Rust community. Took me some digging to realize this was actually serious! I personally don’t care what language fish happens to be written in. As a happy user of fish I just really hope this doesn’t disrupt the project too much. Rewrites are hard!

    1. 52

      This is what it looks like when someone is self-aware :-)

      They looked at the tradeoffs, made a technical decision, and then didn’t take themselves too seriously.

      1. 14

        Poe’s Law is strong with this one. Not knowing the author of Fish, I genuinely can’t tell whether the commentary is 100% in earnest, or an absolutely brilliant satire.

        1. 30

          Given the almost 6,000 lines of seemingly high quality Rust code, I’m going to say it’s not a joke.

          1. 27

            Gotta commit to the bit.

            1. 3

              Oh, sure! I meant the explanation in the PR, not the code itself.

            2. 3

              Same. After doing some research into the PR though, I’m pretty sure it’s in earnest. XD

            3. 1

For sure! After I looked deeper and found that this person is a main contributor to fish, things made more sense. I totally respect their position and hope things go well. I just thought the way it was phrased made it hard to take seriously at first!

            4. 28

              The author understands some important but often underappreciated details. Since they aren’t paying anyone to work on the project, it has to be pleasant and attractive for new contributors to want to join in.

              1. 3

                It only “has to be” if the project wants to continue development at an undiminished pace. For something like a shell that seems like a problematic mindset, albeit an extremely common one.

                1. 13

                  For something like a shell that seems like a problematic mindset

                  Must it?

Fish seldom plays the role of “foundational scripting language”. More often it’s the interactive frontend to the rest of your system. This port enables further pursuit of UX and will allow for features I’ve been waiting for for ages.

                  1. 2

                    For something like an interactive shell, I generally feel that consistency beats innovation when it comes to real usability. But if there are features that still need to be developed to satisfy the fish user base, I suppose more development is needed. What features have you been waiting for?

                    1. 11

                      https://github.com/fish-shell/fish-shell/pull/9512#issuecomment-1410820102

                      One large project has been to run multiple fish builtins and functions “at the same time”, to enable things like backgrounding functions (ideally without using “subshells” because those are an annoying environment boundary that shows up in surprising places in other shells), and to simply be able to pipe two builtins into each other and have them actually process both ends of the pipe “simultaneously”.

There have been multiple maintainer comments over the years in various issues alluding to the difficulty of adding concurrency features to the codebase. e.g. https://github.com/fish-shell/fish-shell/issues/238#issuecomment-150705108

              2. 24

                Nobody really likes C++ or CMake, and there’s no clear path for getting off old toolchains. Every year the pain will get worse.

                I think that the “Nobody” and “pain” there may have been referring to the dev team, not so much everyone in the world. In that context it’s a little less outlandish a statement.

                1. 28

                  It’s also not really outlandish in general. Nobody likes CMake. How terrible CMake is, is a common topic of conversation in the C++ world, and C++ itself doesn’t exactly have a reputation for being the language everyone loves to use.

                  I say as someone who does a whole lot of C++ development and would pick it above Rust for certain projects.

                  1. 13

                    Recent observation from Walter Bright on how C++ is perceived:

                    He then said that he had noticed in discussions on HN and elsewhere a tectonic shift appears to be going on: C++ appears to be sinking. There seems to be a lot more negativity out there about it these days. He doesn’t know how big this is, but it seems to be a major shift. People are realizing that there are intractable problems with C++, it’s getting too complicated, they don’t like the way code looks when writing C++, memory safety has come to the fore and C++ doesn’t deal with it effectively, etc.

                    From https://forum.dlang.org/post/uhcopuxrlabibmgrbqpe@forum.dlang.org

                    1. 9

                      That’s totally fine with me.

                      My retirement gig: maintaining and rescuing old C++ codebases that most devs are too scared/above working on. I expect it to be gross, highly profitable, and not require a ton of time.

                      1. 7

                        C programmers gonna have their COBOL programmer in 1999 moment by the time 2037 rolls around.

                      2. 4

                        C++ appears to be sinking

                        And yet, it was the ‘language of the year’ from TIOBE’s end-of-year roundup for 2022, because it showed the largest growth of all of the languages in their list, sitting comfortably at position 3 below Python and C. D shows up down at number 46, so might be subject to some wishful-thinking echo-chamber effects. Rust was in the top 20 again, after slipping a bit.

                        TIOBE’s rankings need to be taken with a bit of a grain of salt, because they’re tracking a lot of secondary factors, OpenHub tracks more objective things and they’re also showing a steady increase in the number of lines of code of C++ changed each month over the last few years.

                        1. 40

                          TIOBE has +/- 50% error margin and even if the data wasn’t unusable, it’s misrepresented (measuring mentions picked by search engine algorithms over a historical corpus, not just current year, not actual usage). It’s so bad that I think it’s wrong to even mention it with “a grain of salt”. It’s a developer’s horoscope.

TIOBE thinks C popularity has halved one year and tripled the next. It thinks a niche db query language from a commercial product discontinued in 2007 is more popular in 2023 than TypeScript. I can’t emphasize enough how garbage this data is, even the top 10. It requires overlooking so many grave errors that it exists only to reinforce preexisting beliefs.


Out of all the flawed methods, I think RedMonk is the least flawed one: https://redmonk.com/rstephens/2022/10/20/top20-jun2022/ although both RedMonk and OpenHub are biased towards open-source, so e.g. we may never learn how much Ada the DoD actually uses.

                          1. 10

                            My favourite part about the RedMonk chart is that it shows Haskell going out through the bottom of the chart, and Rust emerging shortly afterwards, but in a slightly darker shade of red which, erm, explains a lot of things.

                  2. 17

The rationale provided tracks for me as someone who is about to replace an unpopular C++ project at work with Rust. Picking up maintenance of a C++ project from someone who is no longer at the company vs. picking up someone else’s Rust project has looked very different in terms of expected pain / risk IME.

                    “Getting better at C++” isn’t on my team’s dance card but “getting better at Rust” is which helps here. Few working programmers know anything about or understand native build tooling these days. I’m the resident expert because I know basics like why you provide a path argument to cmake. I’m not actually an expert but compared to most others in my engineering-heavy department I’m as good as it gets. Folks who do a lot of C++ at work or at home might not know how uncommon any thoroughgoing familiarity with C and C++ is getting these days. You might get someone who took one semester of C to say “yeah I know C!” but if you use C or C++ in anger you know how far that doesn’t go.

I’m 34 years old and got my start compiling C packages for Slackware and the like. I don’t know anyone under 30 that’s had much if any exposure unless they chose to work in embedded software. I barely know what I’m doing with C/C++ despite dribs and drabs over the years. I know enough to resolve issues with native libraries, FFI, dylibs, etc. That’s about it beyond modest modifications though.

                    tl;dr it’s difficult getting paid employees to work on a C++ project. I can’t imagine what it’s like getting unpaid volunteers to do so.

                    1. 13

                      It does seem weird. We find it easier to hire C programmers than Rust programmers and easier to hire C++ programmers than either. On the other hand, there do seem to be a lot of people that want a project to hack on to help them learn Rust, which might be a good opportunity for an open source project (assuming that you are happy with the code quality of learning-project Rust contributions).

                      1. 27

The difficulty is that you need to hire good C++ programmers. Every time some vulnerability or footgun in C++ is discussed, people say it’s not C++’s fault, it’s just a crappy programmer.

                        OTOH my experience from hiring at Cloudflare is that it’s surprisingly easy to onboard new Rust programmers and have them productively contribute to complex projects. You tell them not to use unsafe, and they literally won’t be able to cause UB in the codebase.

                      2. 4

                        I personally don’t care what language fish happens to be written in

                        You might not, but a lot of people do.

I wrote a tool for myself on my own time that I used often at work. Folks really liked what it could do, there’s not a tool like it, and it handled “real” workloads being thrown at it. Not a single person wanted anything to do with it, since it was written in an esoteric language. I’m rewriting it in a “friendlier” language.

                        It seems like the Fish team thought it through, weighed risks and benefits, have a plan, and have made good progress, so I wish them the best.

                        1. 4

                          Not a single person wanted anything to do with it, since it was written in an esoteric language.

                          Oo which language?

                          1. 1

                            I’d rather not say, I don’t want anyone to feel bad. It’s sufficient to say, “As of today, not in the TIOBE Index top 20.”

                            The bigger point is that it was a tool I had been using for over a year, which significantly improved my efficiency and quality of life, and it got rejected for being an esoteric tech, even though I provided executable binaries.

                            1. 1

                              That sucks. Yeah, I don’t mean to ask to hurt anyone’s feelings, I’m just always curious to know what people think are “esoteric”, cuz esoteric on lobste.rs (Factor, J, one of the advent of code langs) is going to be very different than esoteric at my job (haskell, rust).

                        2. 4

                          As a happy user of fish I just really hope this doesn’t disrupt the project too much. Rewrites are hard!

Same here. As a user, it doesn’t bother me which language it is written in. They should absolutely pick the language that allows them to be more productive and deliver more. I have been a happy fish user for 13 years; it is software that proved useful from day one. And every release there are clear, important improvements, oftentimes new UX additions. I wish them a smooth migration.

                          1. 4

If you’re curious about the size of the rewriting project: I ran tokei on the repo and it counted 49k lines of C++, 8k lines of headers, and 1k lines of CMake (and 57k lines of Fish, so there’s also a lot that won’t need to be rewritten).

                            1. 3

                              They posted this little bit later:

                              Since this PR clearly escaped our little bubble, I feel like we should add some context, because I don’t think everyone caught on to the joking tone of the opening message (check https://fishshell.com/ for similar writing - we are the shell for the 90s, after all), and really got what the idea here is.

                              1. 3

                                The follow up contains:

                                Fish is a fairly old codebase. It was started in 2005

                                Which means I still can’t tell the degree to which he’s joking. The idea that a codebase from 2005 is old is mind boggling to me. It’s not even 20 years old. I’ve worked on a lot of projects with code more than twice that age.

                                1. 1

                                  To put things into perspective, 2005 to 2023 is 18 years — that is the entire lifespan of the classic MacOS.

Or, to put things into perspective, the Mac has switched processor architectures twice since the Fish project was started.

                                  Most software projects just rot away in 18 years because needs or the surrounding ecosystems change.

                                  1. 2

                                    To put things into perspective, 2005 to 2023 is 18 years — that is the entire lifespan of the classic MacOS.

                                    Modern macOS is a direct descendent of NeXTSTEP though, which originally shipped in 1989 and was, itself, descended from 4BSD and CMU Mach, which are older. Most of the GNU tools are a similar age. Bash dates back to 1989.

                                    Most software projects just rot away in 18 years because needs or the surrounding ecosystems change.

                                    That’s probably true, but it’s a pretty depressing reflection on the state of the industry. There are a lot of counter examples and a lot of widely deployed software is significantly older. For example, all of the following have been in development for longer than fish:

                                    • The Linux kernel (1991)
                                    • *BSD (1991ish, depending on when you count, pre-x86 BSD is older)
                                    • Most of the GNU tools (1980s)
                                    • zsh (1990)
                                    • NeXTSTEP / OPENSTEP / macOS (1989)
                                    • Windows NT (1993)
                                    • MS Office (1990)
                                    • SQL Server (1989)
                                    • PostgreSQL (1996)
                                    • Apache (1995)
• StarOffice / OpenOffice / LibreOffice (original release was 1985!)
                                    • MySQL (1995)
                                    • NetScape Navigator / Mozilla / Firefox (1994)
                                    • KHTML / WebKit / Blink (1998)
                              2. 2

                                This is the actual world we live in. This is what people really think.

                                1. 1

                                  Why does everyone hate CMake so much?

                                  I find it far easier to understand than Makefiles and automake.

                                  Plus it runs on ancient versions of Windows (like XP) and Linux, which is not something most build systems support. And it mostly “just works” with whatever compiler you have on your system.

                                  1. 20

                                    Makefiles and automake are a very low bar.

Cargo can’t do 90% of the things that CMake can, but it’s so loved because most projects don’t need to write any build script at all. You put your files in src/ and they build, on every Rust-supported platform. You put #[test] on unit tests, and cargo test runs them, in parallel. You can’t write your own doxygen workflow, but cargo doc gives you a generated reference out of the box for every project. The biggest criticism Cargo gets about dependency management is that it’s too easy to use dependencies.

                                    This convention-over-configuration makes any approach requiring maintaining a DIY snowflake build script a chore. It feels archaic like writing header files by hand.
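To make the “zero build script” point concrete, here’s roughly what a whole Cargo project’s build and test setup looks like (a minimal sketch with a made-up add function, not anything from the fish PR):

    // src/lib.rs -- there is no CMakeLists.txt or Makefile; `cargo build`
    // and `cargo test` work out the rest from the directory layout.
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[cfg(test)]
    mod tests {
        use super::add;

        // `cargo test` discovers this, runs it in parallel with other
        // tests, and reports it by name, with no runner configuration.
        #[test]
        fn adds_small_numbers() {
            assert_eq!(add(2, 2), 4);
        }
    }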

                                    1. 15

                                      I find it far easier to understand than Makefiles and automake.

                                      Why does everyone hate being punched in the face? I find it far more pleasant than being ritually disemboweled.

                                      And it mostly “just works” with whatever compiler you have on your system.

                                      CMake is three things:

                                      • A set of core functionality for running some build tasks.
                                      • A truly awful macro language that’s been extended to be a merely quite bad configuration language.
                                      • A set of packages built on the macro language.

                                      If the things that you want to do are well supported by the core functionality then CMake is fairly nice. If it’s supported by existing packages, then it’s fine. If it isn’t, then extending it is horrible. For example, when using clang-cl, I was bitten by the fact that there’s hard-coded logic in CMake that adds the /TC or /TP flags to override the language detection based on the filename and tell it to use C or C++. This made it impossible to compile Objective-C. A few releases later, CMake got support for Objective-C, but I can’t use that support to build the Objective-C runtime because it has logic in the core packages that checks that it can compile and link an Objective-C program, and it can’t do that without the runtime already existing.

I’ve tried to use CMake for our RTOS project, but adding a new kind of target is incredibly hard because CMake’s language is really just a macro language, so you can’t add a new kind of object with properties on it; you are just using a macro language to set strings in a global namespace.

                                      I’ve been using xmake recently and, while there’s a lot I’ve struggled with, at least targets are objects and you can set and get custom properties on them trivially.

                                      1. 3

                                        Plus it runs on ancient versions of Windows (like XP)

                                        Only versions no one wants to run anymore (i.e. 3.5 and older).

                                        1. 3

It’s an entire set of new things to learn, and it generates a makefile, so I worry that I’ll still have to deal with the problems of makefiles as well as the new problems CMake brings.

                                      1. 16

                                        GATs are pretty huge right? I feel like I’ve seen “we could do X if we had GATs” all over the place.

                                        1. 9

                                          It will allow us to specialize our callbacks with their owning type and therefore rely on static dispatch instead of dynamic dispatch.

                                          1. 4

                                            That’s pretty sick, in what context if you can share?

                                            1. 1

                                              Callback evaluation for asynchronous I/O through a bespoke I/O runtime. Think something like Socket<Delegate> where Delegate is your callback-handling trait, in places where you have a HttpProtocol that specializes on Socket and needs to self register as the Delegate. Impossible without HKT, but GATs enable this with a little trait tomfoolery.

                                          2. 7

The funny thing is at my last job I needed GATs to do something tricky. Now for the life of me I can’t remember the details, but it’s a pretty big deal to have associated types that are easily generic. Just the lending iterator alone can allow things that are rather simple in scripting languages but restricted in earlier versions of Rust.
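For anyone who hasn’t hit this: a minimal sketch of the lending iterator idea that GATs make expressible, where each item borrows from the iterator itself (illustrative code, not whatever I needed at that job):

    // The associated type takes a lifetime, which is exactly what GATs add.
    trait LendingIterator {
        type Item<'a> where Self: 'a;
        fn next(&mut self) -> Option<Self::Item<'_>>;
    }

    // Reuses one internal buffer and lends it out on every call, so there
    // is no per-item allocation. The std Iterator trait can't express this
    // because Item can't borrow from the iterator.
    struct Lines<R> {
        reader: R,
        buf: String,
    }

    impl<R: std::io::BufRead> LendingIterator for Lines<R> {
        type Item<'a> = &'a str where Self: 'a;

        fn next(&mut self) -> Option<Self::Item<'_>> {
            self.buf.clear();
            match self.reader.read_line(&mut self.buf) {
                Ok(0) | Err(_) => None,
                Ok(_) => Some(self.buf.trim_end()),
            }
        }
    }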

                                            1. 7

                                              I need GATs so the Presto client library can go stable instead of nightly only.

                                              1. 3

                                                This is the client I’m talking about: https://github.com/nooberfsh/prusto

                                          1. 13

Wow, I must not’ve been paying attention to the libc portion of this, because it’s grown into a really interesting cross-platform library, including some unexpected features like RAII/defer:

                                            _gc: Frees memory when function returns.

                                            This garbage collector overwrites the return address on the stack so that the RET instruction calls a trampoline which calls free(). It’s loosely analogous to Go’s defer keyword rather than a true cycle gc.

                                            Good golly miss Molly.

                                            My next question is how feasible it is to use this in higher level code that sits atop other C/++ libraries — I suspect those libraries will have to be tweaked for Cosmo compatibility first. (Especially big/complex ones like, in my case, Cap’nProto.)

                                            1. 3

                                              I agree that the cross-platform library component is pretty cool, and keeps getting more powerful.

                                              I bet that porting another language runtime to run on top of Cosmopolitan (like GNU or LLVM C++) is challenging. I think you’d have to recompile the C++ runtime of your choice from source against cosmopolitan libc, patching the sources to eliminate glibc dependencies. Like what is done to host LLVM on top of musl libc, for example. So probably somebody has to do a C++ Cosmopolitan implementation before it’s ready for Cap’nProto.

                                              1. 7

                                                Cosmopolitan Libc has libcxx now. You can give those things a try with our new CXX=cosmoc++ toolchain script. https://github.com/jart/cosmopolitan/blob/master/tool/scripts/cosmoc%2B%2B

                                                1. 4

                                                  Mike Burrows wrote *NSYNC

Also the Burrows of the Burrows–Wheeler transform, which is how I know the name, because I work in data platforms.

                                                  1. 2

                                                    Thanks! Does this work because Cosmopolitan Libc is ABI compatible with Musl Libc on Linux? (Does cosmoc++ only work on Linux?)

                                                    1. 2

                                                      Cosmo only builds on Linux right now, but you can run the binaries you create anywhere. Cosmopolitan Libc is not ABI compatible with Musl or Glibc. For example, we’ve added fields like st_birthtim to struct stat to preserve data from different platforms like BSDs. The cosmoc++ toolchain script uses -nostdlib -nostdinc for that reason.

                                              1. 19

                                                This certainly puts a lot more weight behind https://lobste.rs/s/d0lh6w/we_want_make_nix_better, which, in typical lobste.rs fashion, was shot dead on arrival by a small subset of its userbase.

                                                1. 31

                                                  Riff would’ve made an excellent first post.

                                                  1. 4

                                                    “small subset” yet it has over 20 flags. That’s a lot of engagement in my experience here.

                                                    1. 1

                                                      I’m pretty sure that any group of Lobsters users is still a small subset of Nix’s user base. 🙂

                                                      1. 5

                                                        Sure, but that’s not the takeaway from the parent commenter. “in typical lobste.rs fashion, was shot dead on arrival by a small subset of [lobst.er’s] userbase”

                                                  1. 4

Fair enough. I wish they had a shorter turnaround time though - time from reporting to a fix being available in production seems to typically be on the order of months or years. Of my nine reported issues:

• one was auto-closed after a year with no feedback at all except thumbs up,
• one was fixed with a couple lines of documentation update after three months,
• one has been open for a year with only thumbs up,
• three unrelated issues have been open for 10 months with only thumbs up,
• one was closed as a duplicate of an issue that has been open for a year,
• one was “solved” with a horrible hack after four months, and
• the last one has been open for four months with no feedback.

                                                    Are they understaffed, or are they underestimating how much the architecture is holding them back from change?

                                                    1. 4

                                                      Are they understaffed, or are they underestimating how much the architecture is holding them back from change?

                                                      The much more likely possibility would seem to be simply that the bugs you’ve reported aren’t considered to be priorities internally.

                                                      1. 2

                                                        IME people are pretty myopic about how much faster they could be going. GitLab has plenty of employees.

                                                        1. 7

                                                          IMHO people are pretty good at whining from a safe distance and a high horse.

                                                          No affiliations to gitlab, but all the commentary in this thread is a bit … “meh”.

                                                          1. 2

True. Still, they are a company. And as a user of their product (and an admin maintaining a mid-sized instance) I can say that their product feels unbearably slow and annoying to navigate, while their competition (gitea and definitely github) is a much better experience. Sure, gitlab has features for everything and the kitchen sink (except when it doesn’t, like a required number of MR approvals). But they’re still the company that is known in my environment for breaking 700 repos with a stable release update.

                                                      1. 8

                                                        I suppose it depends on the company, time, and luck, and “YMMV” as always. However, my experience working in staff roles was quite miserable, and many of my friends had the same experience.

                                                        Your manager may report to the COO (or the CEO in smaller companies), but it may not mean anything for either of you. If executives see you as a cost center that steals money from the real business, you will have to fight tooth and nail to keep your department funded. You may not even win: at quite a few places I’ve seen, such internal departments were staffed mainly by inexperienced people who would leave for a better job as soon as they could find one. But when disaster happens, you will be blamed for everything.

                                                        I’m pretty sure there are companies that don’t mistreat their staff IT personnel, but no assumption is universal.

                                                        1. 10

                                                          IME: the harder it is for execs to see that “person/group X does their job which directly leads to profit” the more of an uphill battle it is. Even a single hop can have a big effect: note the salary differences between skilled sales people and skilled engineers.

                                                          1. 6

                                                            Can confirm. This is particularly challenging for “developer experience” or “productivity” teams, where all of the work is definitionally only an indirect contribution to the bottom line—even if an incredibly important and valuable one.

                                                            1. 2

                                                              Gotta be able to sell everything you do. It’s hard when metrics are immaterial but in those specific areas, you have to be showing “oh, I save business line X this many person-hours daily/weekly/etc.” constantly in order to advance

                                                              1. 5

As an idea that sounds good, but in practice no one knows how to even estimate that in a lot of categories of important tech investment for teams like mine. I have spent a non-trivial amount of time with both academic and industrial literature on the subject, and… yeah, nobody knows how to measure or even guesstimate this stuff in a way that I could remotely sign my name to in good conscience.

                                                            2. 1

                                                              note the salary differences between skilled sales people and skilled engineers.

                                                              The latter usually have a higher salary or total compensation so I’m not sure if I understood your point. Maybe sales make more in down-market areas of the industry that don’t pay more than $100k for programmers if they can help it?

                                                              1. 6

                                                                $100k for programmers exists in the companies that have effectively scaled up their sales pipeline. Most programmers work on some kind of B2B software (like the example in the article, internal billing for an electricity company), where customers don’t number in the millions, engineer salaries have five digits, and trust me, their package can’t touch the compensation of the skilled sales person who manages the expectations of a few very important customers.

                                                                1. 3

                                                                  I can confirm that I have never worked for companies where the sales people were paid less than the engineers. At least not to my knowledge.

                                                                  In fact, in most companies I worked for, the person I reported to had a sales role.

                                                                  1. 2

                                                                    I think a good discriminant for this might be software-as-plumbing vs. software-is-the-product. I suspect SaaS has driven down the costs a lot of glue type stuff like this.

                                                              2. 5

                                                                I’ve had exactly the opposite experience. Being in staff roles has been the most enjoyable because we could work on things that had longer term payoffs. When I’ve been a line engineer we weren’t allowed to discuss anything unless it would increase revenue that quarter. The staff roles paid slightly less but not too much less.

                                                                1. 2

                                                                  I had a similar experience. I worked on a devops team at a small startup, and we did such a good job that when covid hit and cuts needed to be made, our department was first on the chopping block. I landed on my feet just fine, finding a job that paid 75% more (and have since received a promotion and a couple of substantial raises), but I was surprised to learn that management may keep a floundering product/dev org over an excellent supporting department (even though our department could’ve transitioned to dev and done a much better job).

                                                                1. 14

                                                                  This reads like a puff piece. It’s an interesting project but I wouldn’t say there was a real takeaway except that you have YC funding now.

                                                                  1. 11

                                                                    Ouch; this is very unconstructive criticism.

                                                                    1. 4

                                                                      I liked the article as an experience report - you can build something Erlang-ish in Rust on wasm and end up at least convincing yourself (and YC?) that it works. I agree that the article doesn’t have a strong central thesis, but I found it interesting.

                                                                    2. 11

                                                                      Sadly I believe you’re correct, especially given the post history here.

For folks that quibble with this dismissal as a “puff piece”: for me at least, if this post had any code at all showing how the APIs changed, how this mirrored GenServers or other BEAM idioms, how various approaches like the mentioned channels approach changed the shape of code, or anything like that, I wouldn’t be so dismissive. Alas, it seems like a growth-hacking attempt with lots of buzzwords (I mean christ, look at the tags here).

                                                                      Marketing spam and bad actors still exist folks.

                                                                      1. 2

                                                                        Hi friendlysock, I do mention in the post “Check out the release notes for code examples”. Here is a direct link to them: https://github.com/lunatic-solutions/rust-lib/releases/tag/v0.9.0

                                                                        1. 6

                                                                          From (successful) personal experience: you can get away with promoting your stuff if you offer people something of real value in exchange for taking their time & attention. Nobody cares what’s in your GitHub: make content that is on the page you are posting that is worth reading.

                                                                          1. 5

                                                                            Friend, your only contributions to this site have been entirely self-promotion for your Lunatic project. It’s a neat project, but you are breaking decorum and exhibiting poor manners by using us in a fashion indistinguishable from a growth hacker. Please stop.

                                                                            1. 1

                                                                              I don’t think it’s fair to call a blog that has 3 posts in 2 years “marketing spam”. This submission is currently #1, so it’s obviously of interest to the community. But with this backlash in the comments I’m definitely going to refrain from posting in the future.

                                                                              1. 19

                                                                                I don’t think it’s fair to call a blog that has 3 posts in 2 years “marketing spam”.

                                                                                In one year, as I write this comment, you have:

                                                                                • Submitted 3 stories, all self promotion.
                                                                                • Made 5 comments, all on stories that you submitted, promoting your own project.

                                                                                That is not engaging with this community, that is using the community for self promotion, which is actively contrary to the community norms, and has been the reason for a ban from the site in the past.

                                                                                This submission is currently #1, so it’s obviously of interest to the community.

                                                                                The rankings are based on the number of votes, comments, and clicks. At the moment, all of the comments in this article are either by you, or are complaining about the submission. This will elevate the post but not in a good way.

                                                                                But with this backlash in the comments I’m definitely going to refrain from posting in the future.

                                                                                I would say that you have two choices:

                                                                                1. Stop posting altogether.
                                                                                2. Engage with the community, comment on other stories, submit things that are not just your own work.

                                                                                The general rule of thumb that I’ve seen advocated here is that posts of your own things should make up no more than 10% of your total contributions to the site. At the moment, for you, they are 100%. If they were under 50%, you’d probably see a lot fewer claims that you were abusing lobste.rs for self promotion.

                                                                                1. 4

                                                                                  I don’t know how to resolve the problem that this is both an interesting project but only being posted by you, and that there’s a business wrapped around it, where you’re the ‘CEO’ - which just makes it a bit awkward when people are interested in the tech but opposed to ‘spam’.

                                                                                  I’m certainly interested in following the project, so I’d prefer that you keep posting!

                                                                        1. 3

                                                                          I kind of get what the article is talking about but “Why TDD?” isn’t discussed at all. It just ends with a completely non-sensical conclusion that doesn’t appear to have any connection to the article.

                                                                          1. 1

                                                                            I proofed this with two folks that most people would consider thought-leaders in TDD. Both have written books and appeared as experts. They got it.

                                                                            Testing is where these two common cognition threads in coding and thinking intersect. We experience both kinds of errors whether talking or coding, and we use tests to get ourselves out of them.

But like it says, if you don’t feel like testing is the intersection, what difference would it make to provide more detail about testing? After all, it’s subtitled “Why TDD?”, not “How to do TDD”. By the time you get to the end, you should be nodding, not confused.

                                                                            If you have some specific criticism, I would love the opportunity to make the essay better. Let me know! (And thanks for the feedback)

                                                                            1. 4

                                                                              I proofed this with two folks that most people would consider thought-leaders in TDD. Both have written books and appeared as experts. They got it.

                                                                              But your audience isn’t TDD thought leaders, it’s regular developers. Experts are terrible at knowing what’s understandable to non-experts!

                                                                              But like it says, if you don’t feel like testing is the intersection, what difference would it make to provide more detail about testing?

                                                                              It’d make a difference by helping me understand why testing is the intersection.

                                                                              1. 1

                                                                                This is an interesting dilemma.

Typically, folks who have never used TDD don’t see the reason for it. After all, they code, they debug, they refactor, they write unit tests, and it all works.

Once they start using it, however, they begin to realize the massive assumptions they’re making mentally when they code. TDD catches most of these immediately. It can be a very painful experience at first, at least until you get used to it.

                                                                                If you’ve experienced this direct code-to-feedback pain, you might want to wonder why TDD is required. Why can’t we just code without this constant stream of tests telling us that we’re making coding assumptions that are not entirely true? I believe I’ve made a first stab at an answer.

                                                                                If you want to know “Why TDD?” as somebody who has tried it and sees that it works but doesn’t know why, this essay is for you. If you want to know “Why TDD?” as somebody who wants to know why they should fool with it in the first place, it is not.

                                                                                Oddly, the confusion over the title is kinda the point here. People clicked on “Why TDD?” and expected something they didn’t get. The rule here is that in communications, it’s always the sender’s fault. I will change the subtitle. Maybe “Why Is TDD Necessary?” (also thanks)

                                                                              2. 3

                                                                                My specific criticism based on your comments here is you say a lot of words, but you’re not providing enough context. This will not be fixed by more words!

                                                                                I see a lot of ideas dumped but they don’t feel thought through or supported by the ideas around them. I call this “writing about the story” when beta reading fiction.

                                                                                What this means is you’re putting your ideas down but you’re not organizing them in a way that flows from one paragraph to another.

A lot of people do this because it’s easy to look at your own words and understand what you’re thinking. This problem also extends to the people you ask to review your writing. If they know you too well or are experts in the field, they will be able to mentally fill in the gaps with their own knowledge.

                                                                                It’s not enough to put words on a page. Ideas need to be organized and supported. Otherwise, people are just reading your private diary.

                                                                                1. 1

The place my mind went after skimming the article to try to find out how your content pointed to TDD was https://timecube.2enp.com/

                                                                                  I think I vaguely get the idea it has something to do with the disconnect between computers and the human mind. I don’t think anyone denies there’s a mismatch there. I think readers would want to know why TDD is the best or foremost way to overcome the challenges of understanding code and not being surprised by abstractions.

                                                                                  1. 1

                                                                                    Great site! Let’s not forget TempleOS! https://tech.slashdot.org/story/14/11/25/1847254/the-schizophrenic-programmer-who-built-an-os-to-talk-to-god

Like it starts, we programmers suck at sitting down and writing bug-free code in the same way we’d order a pizza. Humans in general seem to have a very tough time talking about anything but the weather. If you’ve wondered whether there are any commonalities between these two things, I try to explain them.

                                                                                    I get the fact that many don’t get it. That doesn’t make it any less important. This is why I’ve been flailing away for the magic analogy or metaphor to help folks with the realization. No delusions, speaking lizards, or supernatural creatures involved. All of this is stuff that has been worked on in separate fields for centuries. Like I said, we programmers are just the schmucks who are stuck with daily joining together a bunch of separate disciplines. If software hadn’t taken over the world, we would have never had to think about so many varied disciplines and keep everything together in a consistent and logical manner.

                                                                                    It has nothing to do with me. All of us have a unique insight here just by the nature of our jobs being involved with solving problems for others. I figured by loading this up with code the message would have been clearer. This is important. You would want me to explain this to you. I’ll keep working on it.

                                                                              1. 45

                                                                                I learned AWS on the job.

                                                                                1. 8

                                                                                  I am learning on the job at the moment, and this youtube channel is the best resource I’ve found.

He’s got a nice style and a knack for cutting through irrelevant details to expose the essence of the services. You can try this excellent 18 minute overview of the core AWS services to see if you like it.

                                                                                  I’ve also done (or partially done) a few on LinkedIn learning and somewhere else I forgot, and they weren’t as good as this free resource.

                                                                                  1. 1

                                                                                    very strange list of “most important” IMO. But what is most important to one will not be most important to another.

                                                                                  2. 1

+1. I did a 3 day course offered by AWS. It was helpful to learn the foundational blocks like access management and ARNs, and the course taught us how to build a web service.

                                                                                    For any new services, I’ve found their docs helpful and there are usually tutorial style documents that teach you how to do a specific thing with the service.

                                                                                    1. 1

I did this as well, but I recommend not having to learn how to deploy, configure, and maintain Hadoop clusters at the same time, as happened in my case.

                                                                                    1. 4

                                                                                      Pic in tweet: https://twitter.com/bitemyapp/status/1479945582764560384

                                                                                      Work machine:

                                                                                      Ubuntu LTS, Dell Precision 5560 i7-11850H 32gb RAM hooked up to a Razer Core X Chroma with a GTX 1070 in it

• Keyboard: Topre Typeheaven
• Mouse: Razer Deathadder V2
• Monitors: two Dell P2415Q 4k/60
• eGPU enclosure: Razer Core X Chroma

The Chroma part is important, it has two TB3 chips for driving the GPU and ethernet/USB separately to avoid cut-outs that GPU enclosures are known for.

                                                                                      The eGPU actually solved a lot of problems for me. This laptop came with just the Intel iGPU and a single 4k external display, nothing else running, was putting the GPU usage at 70-80%. Dual monitors, even 1440p, was over-saturating the GPU and pegging it at 100%. I was getting hard-freezes that would clear up after 30-300 seconds from running builds during Zoom meetings and the like. Zoom calls and screen-sharing were very difficult too.

                                                                                      With the GTX 1070 hooked up via the TB3 eGPU (the laptop actually has TB4), the GPU usage with 2 external 4k displays and nothing else going on is 5%. In a Zoom call, 10%. Screensharing one of the 4k displays, 50%. 50% isn’t ideal but the important part is it isn’t eating up my RAM or CPU or making the system unstable. Chrome was contributing significantly to GPU load in some cases too.

                                                                                      1. 16

                                                                                        Cool! And it was thoughtful to include & highlight the disclaimers.

                                                                                        To deter using QuickServ in production, it runs on port 42069.

                                                                                        Another good safety technique is to bind only to the loopback interface (127.0.0.1) by default. That means only processes on the same host can connect, which is what you’re mostly doing in development. By requiring an extra arg or config setting to allow access over the network, you make it less likely someone can accidentally run something that can compromise their machine.
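A minimal sketch of that default (illustrative Rust, not QuickServ’s actual code, and the --expose flag name is made up):

    use std::net::TcpListener;

    fn main() -> std::io::Result<()> {
        // Exposing the server beyond the local machine requires an explicit
        // opt-in flag; any real CLI parsing would work the same way.
        let expose = std::env::args().any(|arg| arg == "--expose");
        let host = if expose { "0.0.0.0" } else { "127.0.0.1" };

        // Loopback-only by default: other hosts on the network cannot
        // connect unless --expose was passed.
        let listener = TcpListener::bind((host, 42069))?;
        println!("listening on {}", listener.local_addr()?);
        Ok(())
    }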

                                                                                        1. 10

                                                                                          Thanks for the kind words!

                                                                                          I actually considered only binding to the loopback interface, but in the end decided not to. I wanted to ensure the server is visible to other devices on the local network specifically for the use case of Raspberry Pi projects. I was concerned it would be hard for a user who didn’t know about that configuration option to figure out why they couldn’t see the running server from other devices on the network, so I compromised in favor of more usable defaults over more secure defaults.

                                                                                          1. 3

                                                                                            Have you considered also announcing the service via Avahi (mDNS)? That would help with local discovery, no need to mess with IP addresses, just, hostname.local:port.

                                                                                            1. 2

                                                                                              I have a sorta-functional prototype of an Airdrop knockoff that announces via mDNS here if that’s of use to anyone: https://gitlab.com/bitemyapp/coilgun

                                                                                              I’ve been thinking about tightening it up, daemonizing it, and making a systray icon for it.

                                                                                        1. 7

                                                                                          you have to produce custom error types to use in a function or method that can error in more than one way.

                                                                                          This is an area where Zig really shines. Automatic error unions, required error handling, errdefer, and the try keyword really give it my favorite error handling feel of any language.

                                                                                          1. 14

You don’t have to, actually; just use anyhow or any of the other libs that do Box<dyn Error> automatically for you. It’s meant for applications and I’m happily using that.

                                                                                            Edit: Also you can get things like backtraces for free on top with crates like stable-eyre
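For anyone who hasn’t tried it, this is roughly what the anyhow style looks like in application code (a small sketch; the config-reading function is made up):

    use anyhow::{Context, Result};

    // anyhow::Result<T> is Result<T, anyhow::Error>. The `?` operator
    // converts ordinary std errors into it, so no custom error enum is
    // needed, and .with_context() adds a human-readable breadcrumb.
    fn read_config(path: &str) -> Result<String> {
        let raw = std::fs::read_to_string(path)
            .with_context(|| format!("failed to read config at {path}"))?;
        Ok(raw)
    }

    fn main() -> Result<()> {
        let cfg = read_config("app.toml")?;
        println!("loaded {} bytes of config", cfg.len());
        Ok(())
    }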

                                                                                            1. 8

                                                                                              Yeah this is what my team does. We use anyhow for applications, thiserror for libraries. It’s nicer than what I had in Haskell.

                                                                                              Only other thing we had to make was an actix-compatible error helper.

                                                                                              This:

                                                                                              
                                                                                              // Imports assumed by this snippet:
                                                                                              use actix_web::{error, http::StatusCode};
                                                                                              use log::error; // the error! macro (or the tracing equivalent)

                                                                                              pub trait IntoHttpError<T> {
                                                                                                  fn http_error(
                                                                                                      self,
                                                                                                      message: &str,
                                                                                                      status_code: StatusCode,
                                                                                                  ) -> core::result::Result<T, actix_web::Error>;
                                                                                              
                                                                                                  fn http_internal_error(self, message: &str) -> core::result::Result<T, actix_web::Error>
                                                                                                  where
                                                                                                      Self: std::marker::Sized,
                                                                                                  {
                                                                                                      self.http_error(message, StatusCode::INTERNAL_SERVER_ERROR)
                                                                                                  }
                                                                                              }
                                                                                              
                                                                                              impl<T, E: std::fmt::Debug> IntoHttpError<T> for core::result::Result<T, E> {
                                                                                                  fn http_error(
                                                                                                      self,
                                                                                                      message: &str,
                                                                                                      status_code: StatusCode,
                                                                                                  ) -> core::result::Result<T, actix_web::Error> {
                                                                                                      match self {
                                                                                                          Ok(val) => Ok(val),
                                                                                                          Err(err) => {
                                                                                                              error!("http_error: {:?}", err);
                                                                                                              Err(error::InternalError::new(message.to_string(), status_code).into())
                                                                                                          }
                                                                                                      }
                                                                                                  }
                                                                                              }
                                                                                              

                                                                                              Lets us do this:

                                                                                                  let conn = app_state
                                                                                                      .db
                                                                                                      .get()
                                                                                                      .http_internal_error("Could not get database connection")?;
                                                                                              ...
                                                                                                  let job_events = models::get_recent_job_events(&conn)
                                                                                                      .http_internal_error("Could not query datasets from the database")?;
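
                                                                                              On the library side, the thiserror half of that split looks roughly like this. A sketch only: DataStoreError and load_record are hypothetical (close to thiserror’s own README example), just to show the shape:

                                                                                                  use thiserror::Error;

                                                                                                  // A library-style error enum: callers can match on variants,
                                                                                                  // and the Display/Error impls are derived from the attributes.
                                                                                                  #[derive(Debug, Error)]
                                                                                                  pub enum DataStoreError {
                                                                                                      #[error("record {0} not found")]
                                                                                                      NotFound(u64),

                                                                                                      #[error("underlying I/O failure")]
                                                                                                      Io(#[from] std::io::Error),
                                                                                                  }

                                                                                                  pub fn load_record(id: u64) -> Result<Vec<u8>, DataStoreError> {
                                                                                                      // #[from] gives us a From<std::io::Error> impl, so `?` just works.
                                                                                                      let bytes = std::fs::read(format!("records/{}.bin", id))?;
                                                                                                      if bytes.is_empty() {
                                                                                                          return Err(DataStoreError::NotFound(id));
                                                                                                      }
                                                                                                      Ok(bytes)
                                                                                                  }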
                                                                                              
                                                                                              1. 1

                                                                                                Yeah, it’s a bit sad they settled on failure in an incompatible way, but oh well - they can’t really change that until the next major version, and back then it was a sensible approach. To be fair, you may just want to decide manually what happens when a specific error reaches the actix stack, to give some different response. I actually use that on purpose to return specific JSON / status codes when, for example, a user doesn’t exist.

                                                                                                1. 2

                                                                                                  I’m not sure what they should’ve done differently. I wouldn’t want anyhow errors silently turning into 500 errors with no explicit top-level message for API consumers and end-users to receive. I also wouldn’t want API concerns to infect the rest of the crate.

                                                                                          1. 8

                                                                                            I was a little surprised they mentioned KSUID but not Snowflake or Flake, which are how a lot of teams originally learned about k-ordered IDs. Perhaps I’m just old. I re-implemented flake in Haskell for fun a while back.

                                                                                            It also doesn’t talk about the motivations for k-ordering, or about sortability as it pertains to database indexes. UUIDv4 has a habit of spraying inserts across btree indexes in unpleasant ways, since random keys mean random pages get touched. This post summarizes the motivations for Flake’s design: http://yellerapp.com/posts/2015-02-09-flake-ids.html
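
                                                                                            For anyone who hasn’t seen the layout: Flake packs a millisecond timestamp into the most significant bits, then a worker id, then a per-millisecond sequence, so ids sort roughly by creation time. A sketch of the 128-bit packing (field widths per the Flake design: 64-bit timestamp, 48-bit worker id, 16-bit sequence; the function name is mine):

                                                                                                /// Pack a Flake-style 128-bit id: 64-bit ms timestamp | 48-bit worker id | 16-bit sequence.
                                                                                                /// Because the timestamp occupies the most significant bits, ids are k-ordered:
                                                                                                /// sorting them numerically roughly sorts them by creation time, which keeps
                                                                                                /// btree inserts appending near the right-hand edge of the index.
                                                                                                fn flake_id(timestamp_ms: u64, worker_id: u64, sequence: u16) -> u128 {
                                                                                                    let worker = worker_id & 0xFFFF_FFFF_FFFF; // keep only the low 48 bits
                                                                                                    ((timestamp_ms as u128) << 64) | ((worker as u128) << 16) | (sequence as u128)
                                                                                                }

                                                                                                fn main() {
                                                                                                    let a = flake_id(1_700_000_000_000, 0x1234_5678_9ABC, 0);
                                                                                                    let b = flake_id(1_700_000_000_001, 0x1234_5678_9ABC, 0);
                                                                                                    assert!(a < b); // later timestamp => larger id
                                                                                                    println!("{:032x}\n{:032x}", a, b);
                                                                                                }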

                                                                                            1. 67

                                                                                              I don’t understand how heat maps are used as a measuring tool; on their own they seem pretty useless. If something is rarely clicked, does that mean people don’t need the feature, or that they don’t like how it’s implemented? And how do you know whether people would really like something that isn’t there to begin with?

                                                                                              It reminds me of the Feed icon debacle: it was neglected for years and fell out of active use, which led Mozilla to say “oh look, people don’t need the Feed icon, let’s move it away from the toolbar”. Then, a couple of versions later, they said “oh look, even fewer people use the Feed functionality, let’s remove it altogether”. Every time I see a click heatmap used to drive UI decisions, I can’t shake the feeling that it’s only there to rationalize arbitrary product choices already made.

                                                                                              (P.S. I’ve been using Firefox since it was called Netscape and never understood why so many people left for Chrome, so no, I’m not just a random hater.)

                                                                                              1. 11

                                                                                                Yeah, reminds me of some old Spiderman game where you could “charge” your jump to jump higher. They removed the visible charge meter in a sequel but kept the functionality, then removed the functionality in the sequel after that because nobody was using it (because newcomers didn’t know it was there, because there was no visible indication of it!).

                                                                                                1. 8

                                                                                                  It’s particularly annoying that the really cool things, which might actually have a positive impact for everyone – if not now, at least in a later release – are buried at the end of the announcement. Meanwhile, some of the things gathered through metrics would be hilarious were it not for the pretentious marketing language:

                                                                                                  There are many ways to get to your preferences and settings, and we found that the two most popular ways were: 1) the hamburger menu – button on the far right with three equal horizontal lines – and 2) the right-click menu.

                                                                                                  Okay, first off, this is why you should proofread/fact-check even the PR and marketing boilerplate: there’s no way to get to your preferences and settings through the right-click menu. Not in a default state at least, maybe you can customize the menu to include these items but somehow I doubt that’s what’s happening here…

                                                                                                  Anyway, assuming “get to your preferences and settings” should’ve actually been “do things with the browser”: the “meatball” menu icon has no indication that it’s a menu, and a fourth way – the old-style menu bar – is hidden by default on two of the three desktop platforms Firefox supports, and isn’t even available on mobile. If you leave out the menubar through sheer common sense, you can skip the metrics altogether; a fair dice throw gets you 66% accuracy.

                                                                                                  People love or intuitively believe what they need is in the right click menu.

                                                                                                  I bet they’ll get the answer to this dilemma if they:

                                                                                                  • Look at the frequency of use for the “Copy” item in the right-click menu, and
                                                                                                  • For a second-order feature, if they break down right-click menu use by input device type and screen size

                                                                                                  And I bet the answer has nothing to do with love or intuition ;-).

                                                                                                  I have also divined in the data that the frequency of use for the right-click menu will further increase. The advanced machine learning algorithms I have employed to make this prediction consist of the realisation that one menu is gone, and (at least the screenshots show) that the Copy item is now only available in the right-click menu.

                                                                                                  Out of those 17 billion clicks, there were three major areas within the browser they visited:

                                                                                                  A fourth is mentioned in addition to the three in the list and, as one would expect, these four (out of… five?) areas are: the three areas with the most clickable widgets, plus the one you have to click in order to get to a new website (i.e. the navigation bar).

                                                                                                  1. 12

                                                                                                    They use their UX experts & measurements to rationalize decisions that, they claim, are made to make Firefox more attractive to (new) users, but … when do we actually see the results?

                                                                                                    The market share has kept falling for years; whatever they claim to be doing, it is exceedingly obvious that they are unable to deliver.

                                                                                                    Looking back, the only thing I remember Mozilla doing in the last 10 years is

                                                                                                    • a constant erosion of trust
                                                                                                    • making people’s lives miserable
                                                                                                    • running promising projects into the ground at full speed

                                                                                                    I would be less bitter about it if Mozilla peeps weren’t so obnoxiously arrogant about it.


                                                                                                    Isn’t this article pretty off-topic, considering how many stories are removed for being “business analysis”?

                                                                                                    This is pretty much “company losing users posts this quarter’s effort to attract new users by pissing off existing ones”.

                                                                                                    1. 14

                                                                                                      The whole UI development strategy seems to be upside down: Firefox has been hemorrhaging users for years, at a rate that the UI “improvements” have, at best, not influenced much, to the point where a good chunk of the browser “market” consists of former Firefox users.

                                                                                                      Instead of trying to get the old users back, Firefox is trying to appeal to a hypothetical “new user” who is technically illiterate to the point of being confused by too many buttons, but somehow cares about tracking through 3rd-party cookies and has hundreds of tabs open.

                                                                                                      The result is a cheap Chrome knock-off that’s not appealing to anyone who is already using Chrome, alienates a good part of their remaining user base who specifically want a browser that’s not like Chrome, and pushes the few remaining Firefox users who don’t specifically care about a particular browser further towards Chrome (tl;dr if I’m gonna use a Chrome-like thing, I might as well use the real deal). It’s not getting anyone back, and it keeps pushing people away at the same time.

                                                                                                      1. 16

                                                                                                        The fallacy of Firefox, and quite a few other projects and products, seems to be:

                                                                                                        1. Project X is more popular than us.
                                                                                                        2. Project X does Y.
                                                                                                        3. Therefore, we must do Y.

                                                                                                        The fallacy is that a lot of people are using your software exactly because it’s not X and does Z instead of Y.

                                                                                                        It also assumes that the popularity is because of Y, which may be the case but may also not be the case.

                                                                                                        1. 3

                                                                                                          You’re not gonna win current users away from X by doing what X does, unless you do it much cheaper (not an option) or 10x better (hard to see how you could do Chrome better than Chrome does).

                                                                                                          1. 1

                                                                                                            You might, however, stop users from switching to X by doing what X does, even if you don’t do it quite as well.

                                                                                                        2. 4

                                                                                                          The fundamental problem with Firefox is that it’s just slow. Slower than Chrome for almost everything: slower at games (seriously, its canvas performance is really bad), slower at interacting with big apps like Google Docs, less smooth at scrolling, with more latency between hitting a key and the letter showing up in the URL bar. This stuff can’t be solved with UI design changes.

                                                                                                          1. 3

                                                                                                            Well, but there are reasons why it’s slow - and at least one good one.

                                                                                                            Most notably, because Firefox makes an intentionally different implementation trade-off than Chrome. Mozilla prioritizes lower memory usage in FF, while Google prioritizes lower latency/greater speed.

                                                                                                            (I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me)

                                                                                                            That’s partially why you see so many Linux users complaining about Chrome’s memory usage.

                                                                                                            These people are getting exactly what they asked for, and in an age where low CPU usage is king (slow mobile processors, limited battery life, more junk shoved into web applications, and plentiful RAM for people who exercise discipline and only do one thing at once), Chrome’s tradeoff appears to be the better one. (yes, obviously that’s not the only reason that people use Chrome, but I do see people noticing it and citing it as a reason)

                                                                                                            1. 2

                                                                                                              I rarely use Google Docs; basically just when someone sends me an Office document or spreadsheet that I really need to read. It’s easiest to just import that into Google Docs; I never use this kind of software myself, and it happens so infrequently that I can’t be bothered to install LibreOffice (my internet isn’t too fast, and downloading all its updates takes a while and isn’t worth it for the one time a year I need it). But every time it’s a frustrating experience because it’s just so darn slow. Actually, maybe it would be faster to just install LibreOffice.

                                                                                                              I haven’t used Slack in almost two years, but before this it was sometimes so slow in Firefox it was ridiculous. Latency when typing could be in the hundreds or thousands of ms. It felt like typing over a slow ssh connection with packet loss.

                                                                                                              CPU vs. memory is a real trade-off with a lot of various possible ways to do this and it’s a hard problem. But it doesn’t change that the end result is that for me, as a user, Firefox is sometimes so slow to the point of being unusable. If I had a job where they used Slack then this would be a problem as I wouldn’t be able to use Firefox (unless it’s fixed now, I don’t know if it is) and I don’t really fancy having multiple windows.

                                                                                                              That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                                                                                                              1. 1

                                                                                                                That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                                                                                                                I agree. I absolutely prefer Firefox to Chrome - it’s generally a better browser with a much better add-on ecosystem (Tree Style Tabs, Container Tabs, non-crippled uBlock Origin) and isn’t designed to allow Google to advertise to you. My experience with it is significantly better than with Chrome.

                                                                                                                It’s because I like Firefox so much that I’m so furious about this poor design tradeoff.

                                                                                                                (also, while it contributes, I don’t blame all of my slowdowns on Firefox’s design - there are many cases where it’s crippled by Google introducing some new web “standard” that sites started using before Firefox could catch up (most famously, the Shadow DOM v0 scandal with YouTube))

                                                                                                              2. 1

                                                                                                                I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me

                                                                                                                I’m interested in your citations :)

                                                                                                                1. 1

                                                                                                                  Here’s one about Google explicitly trading off memory for CPU that I found on the spot: https://tech.slashdot.org/story/20/07/20/0355210/google-will-disable-microsofts-ram-saving-feature-for-chrome-in-windows-10

                                                                                                          2. 4

                                                                                                            I remember more things from Mozilla. One is also a negative (integration of a proprietary application, Pocket, into the browser; it may be included in your “constant erosion of trust” point), but the others are more positive.

                                                                                                            Mozilla is the organization that let Rust emerge. I’m not a Rust programmer myself but I think it’s clear that the language is having a huge impact on the programming ecosystem, and I think that overall this impact is very positive (due to some new features of its own, popularizing some great features from other languages, and a rather impressive approach to building a vibrant community). Yes, Mozilla is also the organization that let go of all their Rust people, and I think it was a completely stupid idea (Rust is making it big, and they could be at the center of it), but somehow they managed to wait until the project was mature enough to make this stupid decision, and the project is doing okay. (Compare to many exciting technologies that were completely destroyed by being shut out too early.) So I think that the balance is very positive: they grew an extremely positive technology, and then they messed up in a not-as-harmful-as-it-could-be way.

                                                                                                            Also, I suspect that Mozilla is doing a lot of good work participating in the web standards ecosystem. This is mostly a guess, as I’m not part of this community myself, so it could have changed in the last decade and I wouldn’t know. But this stuff matters a lot to everyone: we need technical people from several browsers actively participating, it’s a lot of work, and (despite the erosion of trust you mentioned) I still trust the Mozilla standards engineers to defend the web better than Google (surveillance incentives) or Apple (locking-down-stuff incentives). (Defend, in the sense that I suspect I like their values and their view of the web, and I guess that sometimes this makes a difference during standardization discussions.) Unfortunately this part of Mozilla’s work gets weaker as their market share shrinks.

                                                                                                            1. 3

                                                                                                              Agreed. I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off – an unexpected, pleasant surprise that Rust didn’t end in the premature death-spiral that Mozilla projects usually end up in.

                                                                                                              Negative things I remember most are Persona, FirefoxOS and the VPN scam they are currently running.

                                                                                                              1. 4

                                                                                                                I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off

                                                                                                                Hard disagree there. Pernosco is a revolution in debugging technology (a much, much bigger revolution than what Rust is to programming languages) and wouldn’t exist without Mozilla spending engineering resources on RR. I don’t know much about TTS/STT but the Deepspeech work Mozilla has done also worked quite nicely and seemed to make quite an impact in the field. I think I also recall them having some involvement in building a formally-proven crypto stack? Not sure about this one though.

                                                                                                                Mozilla has built quite a lot of very popular and impressive projects.

                                                                                                                Negative things I remember most are Persona, FirefoxOS and the VPM scam they are currently running.

                                                                                                                None of these make me as angry as the Mister Robot extension debacle they caused a few years ago.

                                                                                                                1. 2

                                                                                                                  To clarify, I didn’t mean it’s a one-off that it was popular, but that it’s a one-off that it didn’t get mismanaged into the ground. Totally agree otherwise.

                                                                                                                2. 4

                                                                                                                  the VPM [sic] scam they are currently running

                                                                                                                  Where have you found evidence that Mozilla is not delivering what they promise - a VPN in exchange for money?

                                                                                                                  1. 0

                                                                                                                    They are trying to use the reputation of their brand to sell a service to a group of “customers” that has no actual need for it and barely an understanding of what it does or what purposes it would be useful for.

                                                                                                                    What they do is pretty much the definition of selling snake oil.

                                                                                                                    1. 7

                                                                                                                      I am a Firefox user and I’m interested in their VPN. I have a need for it, too - to prevent my ISP from selling information about me. I know how it works and what it’s useful for. I can’t see how they’re possibly “selling snake oil” unless they’re advertising something that doesn’t work or that they won’t actually deliver…

                                                                                                                      …which was my original question, which you sidestepped. Your words seem more like an opinion disguised as fact than actual fact.

                                                                                                            2. 2

                                                                                                              It’s a tool, like a lot of other things. Sure, you can abuse it in many ways, but unless we know how the results are used we can’t tell whether it’s a good or bad scenario. A good use for a heatmap could be, for example, looking at where people like to click in a menu and how far down the “expand” button should go.

                                                                                                              As an event counter, they’re not great - they can get that info in better/cheaper ways.

                                                                                                              1. 2

                                                                                                                This is tricky, and the same goes for surveys. I often find myself in a situation where one asks me “What do you have the hardest time with?” or “What prevents you from using language X on your current project?”, and when the answer essentially boils down to “I am doing scripting and not systems programming” or something similar, I don’t intend to tell them that they should make a scripting language out of a systems language or vice versa.

                                                                                                                And I know these answers are often taken the wrong way when the results are read and interpreted. There is rarely an “I like it how it is” option, or a “Doesn’t need changes”, or even a “Please don’t change this!”.

                                                                                                                I am sure this is true about other topics too, but programming language surveys seem to be a trend so that’s where I often see it.

                                                                                                                1. 1

                                                                                                                  I feel like they’re easily gamed, too. I feel like this happened with Twitter and the “Moments” tab. When they introduced it, it was in the top bar to the right of the “Notifications” tab. Some time after introduction, they swapped the “Notifications” and “Moments” tab, and the meme on Twitter was how the swap broke people’s muscle memory.

                                                                                                                  I’m sure a heat map would’ve shown that after the swap, the Moments feature suddenly became a lot more popular. What that heat map wouldn’t show was user intent.

                                                                                                                  1. 1

                                                                                                                    from what I understand, the idea behind heat maps is not to decide about which feature to kill, but to measure what should be visible by default. The more stuff you add to the screen, the more cluttered and noisy the browser becomes. Heat maps help Mozilla decide if a feature should be moved from the main visible UX to some overflowing menu.

                                                                                                                    Most things they moved around can be re-arranged by using the customise toolbar feature. In that sense, you do have enough bits to make your browser experience yours to some degree.


                                                                                                                    The killing of the feed icon was not decided with heat maps alone. From what I remember, that feature was seldom used (something they can tell from telemetry and heat maps), but it was also legacy bit rot that added friction to maintenance and to whatever else they wanted to do. Sometimes features that are loved by a few are simply in the way of features that will benefit more people; it is sad, but it is true for codebases as old as Firefox’s.

                                                                                                                    Anyway, feed reading is one WebExtension away from any user, and those add-ons usually do a much better job than the original feature ever did.

                                                                                                                    1. 1

                                                                                                                      I’m wondering how this whole heatmaps/metrics thing works for people who have customized their UI.

                                                                                                                      I’d assume that the data gained from e. g. this is useless at best and pollution at worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                                                                                                      1. 1

                                                                                                                        @soc, I expect the browser to know its own UI and to mark heat maps with context, so that clicking on a tab is flagged the same way regardless of whether tabs are on top or on the side. Also, IIRC the majority of Firefox users do not customise their UI. We live in a bubble of devs and power users who do, but that is a small fraction of the user base. Seeing what the larger base is doing is still beneficial.

                                                                                                                        worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                                                                                                        I’m pretty sure they can get meaningful results without assuming everyone is the same ideal user. Heat maps are just a useful way to visualise something, especially when you’re writing a blog post.

                                                                                                                    2. 1

                                                                                                                      never understood why so many people left for Chrome,

                                                                                                                      The speed difference is tangible.

                                                                                                                      1. 2

                                                                                                                        I don’t find it that tangible. If I was into speed, I’d be using Safari here which is quite fast. There are lots of different reasons to choose a browser. A lot of people switched to Chrome because of the constant advertising in Google webapps and also because Google has a tendency of breaking compatibility or reducing compatibility and performance with every other browser, thus making Google stuff work better on Chrome.

                                                                                                                    1. 4

                                                                                                                      All these compiler errors make me worry that refactoring anything reasonably large will get brutal and demoralizing fast. Does anyone have any experience here?

                                                                                                                      1. 20

                                                                                                                        I’ve got lots of experience refactoring very large rust codebases and I find it to be the opposite. I’m sure it helps that I’ve internalized a lot of the rules, so most of the errors I’m expecting, but even earlier in my rust use I never found it to be demoralizing. Really, I find it rather freeing. I don’t have to think about every last thing that a change might affect, I just make the change and use the list of errors as my todo list.

                                                                                                                        1. 6

                                                                                                                          That’s my experience as well. Sometimes it’s a bit inconvenient because you need to update everything to get it to compile (can’t just test an isolated part that you updated) but the confidence it gives me when refactoring that I updated everything is totally worth it.

                                                                                                                        2. 9

                                                                                                                          In my experience (more with OCaml, but they’re close), errors are helpful because they tell you what places in the code are affected by the refactoring. The ideal scenario is one where you make the initial change, then fix all the places that the compilers errors at, and when you’re done it all works again. If you used the type system to its best this scenario can actually happen in practice!

                                                                                                                          1. 4

                                                                                                                            I definitely agree. Lots of great compiler errors make refactoring a joy. I somewhat recently wanted to add line+col numbers to my error messages, so I simply made the breaking change of defining the location field on my error type and then fixed compile errors for about 6 hours. When the code compiled for the first time it worked (save a couple of off-by-one errors)! It is so powerful to be able to trust the compiler to tell you the places where you need to make changes when doing a refactoring, and to catch a lot of the other mistakes you make as you quickly rip through the codebase. (For example, even if you would get similar errors for missing arguments in C++, quickly jumping to random places in the codebase makes it easy to introduce lifetime issues, because you don’t always grasp the lifetime constraints of the surrounding code as quickly as you think you have.) It is definitely way nicer than dynamic languages, where you get hundreds of test failures and have to map those back to the actual location where the problem occurred.

                                                                                                                          2. 7

                                                                                                                            In my experience refactoring is one of the strong points of Rust. I can “break” my code anywhere I need it (e.g. make a field optional, or remove a method, etc.), and then follow the errors until it works again. It sure beats finding undefined is not a function at run time instead.

                                                                                                                            The compiler takes care to avoid displaying multiple redundant errors that have the same root cause. The auto-fix suggestions are usually correct. Rust-analyzer’s refactoring actions are getting pretty good too.
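
                                                                                                                            A toy version of the “make a field optional and follow the errors” workflow, just to show what the compiler hands you (the struct and function are hypothetical):

                                                                                                                                struct User {
                                                                                                                                    name: String,
                                                                                                                                    // Was `email: String`; making it optional breaks every use site,
                                                                                                                                    // and the compiler flags each place that still expects a String.
                                                                                                                                    email: Option<String>,
                                                                                                                                }

                                                                                                                                fn contact_line(user: &User) -> String {
                                                                                                                                    // The old body, format!("{} <{}>", user.name, user.email),
                                                                                                                                    // no longer compiles - which is exactly the todo list we want.
                                                                                                                                    match &user.email {
                                                                                                                                        Some(email) => format!("{} <{}>", user.name, email),
                                                                                                                                        None => user.name.clone(),
                                                                                                                                    }
                                                                                                                                }

                                                                                                                                fn main() {
                                                                                                                                    let u = User { name: "Ada".into(), email: None };
                                                                                                                                    println!("{}", contact_line(&u));
                                                                                                                                }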

                                                                                                                            1. 3

                                                                                                                              Yes. My favourite is when a widely-used struct suddenly gains a generic parameter and there are now a hundred function signatures and trait bounds that need updating, along with possibly infecting any other structs that contained it. CLion has some useful refactoring tools but it can only take you so far. I don’t mean to merely whinge - it’s all a trade-off. The requirement for functions to fully specify types permits some pretty magical type inference within function bodies. As sibling says, you just treat it as a todo list and you can be reasonably sure it will work when you’re done.

                                                                                                                              1. 2

                                                                                                                                I think generics are kind of overused in rust tbh.

                                                                                                                              2. 2

                                                                                                                                I just pick the errors one at a time and fix them. Usually it’s best to comment out as much broken code as possible until you get a clean compile, then work through them one at a time.

                                                                                                                                It is a grind, but once you finish, the code usually works immediately with few if any problems.

                                                                                                                                1. 2

                                                                                                                                  No it makes refactors much better. Part of the reason my coworkers like Rust is because we can change our minds later.

                                                                                                                                  All those compile errors would be runtime exceptions or race conditions or other issues that fly under the radar in a different language. You want the errors. Some experience is involved in learning how to grease the rails on a refactor and set the compiler up to create a checklist for you. My default strategy is striking the root by changing the core datatype or function and fixing all the code that broke as a result.

                                                                                                                                  1. 1

                                                                                                                                    As a counterpoint to what most people are saying here…

                                                                                                                                    In theory the refactoring is “fine”. But the lack of a GC (meaning that object lifetimes are a core part of the code), combined with the relatively few tools you have to nicely monkeypatch things, means that “trying out” a code change is a lot more costly than, say, in Python (where you can throw a descriptor onto an object to try out some new functionality quickly, for example).

                                                                                                                                    I think this is alleviated when you use traits well, but raw structs are a bit of a pain in the butt. I think this is mostly limited to modifying underlying structures though, and when refactoring functions etc, I’ve found it to be a breeze (and like people say, error messages make it easier to find the refactoring points).

                                                                                                                                  1. 4

                                                                                                                                    For the query planning issue, is there a reason I shouldn’t be using https://github.com/ossc-db/pg_hint_plan to work around that problem?

                                                                                                                                    1. 3

                                                                                                                                      You really don’t want to use Nomad in production.

                                                                                                                                      Aggressive feature gating was mentioned. I also just found it bafflingly flaky. An experience we never had with any of our K8S clusters.

                                                                                                                                      1. 2

                                                                                                                                        You really don’t want to use Nomad in production.

                                                                                                                                        Would you please share your experience that leads you to say this?

                                                                                                                                        1. 1

                                                                                                                                          Yet I do. I find it delightfully easy to operate. I know others who run it in prod at larger scale too.

                                                                                                                                        1. 1

                                                                                                                                          Speaking as a professional user of Haskell (5 years) and Rust (2 years): Rust isn’t a functional programming language, but that’s okay.

                                                                                                                                          1. 2

                                                                                                                                            The note about reference counting is something I’d never thought of before and kind of mind blowing if true. I’m not convinced it’s true: a language like Go uses traditional GC but is quite memory efficient also.

                                                                                                                                            1. 10

                                                                                                                                              a language like Go uses traditional GC but is quite memory efficient also.

                                                                                                                                              Not especially, it only seems so in contrast with Java, Python, and Ruby.

                                                                                                                                              1. 7

                                                                                                                                                The main difference is that Go can reduce heap consumption because it also has a stack. Java, Python, and Ruby only have a heap. This removes a lot of pressure on the GC for smaller objects.

                                                                                                                                                1. 4

                                                                                                                                                  The other responses seem to be people misinterpreting what you’re trying to say. I assume your point is that Go has value types in addition to heap-allocated objects, which Ruby etc. do not.

                                                                                                                                                  However, once you get beyond low-performance interpreters (Ruby, Python, etc.), languages that are ostensibly based on heap-only allocation are very good at lowering values. The core value types in Java and some of the primitives in JS engines are essentially lowered to value types that live on the stack or directly in objects.

                                                                                                                                                  Enterprise (ugh) JVM setups, the ones that have long runtimes, are very good at lowering object allocations (and inlining ostensibly virtual method calls), so in many cases “heap allocated” objects do in fact end up on the stack.

                                                                                                                                                  The killer problem with GC is that pauses are unavoidable unless you take a significant performance hit, both in CPU time and memory usage.

                                                                                                                                                  1. 1

                                                                                                                                                    are very good at lowering object allocations […]

                                                                                                                                                    Escape analysis – while being an improvement – can’t save the day here, it works – if it works – for single values. No amount of escape analysis is able to rewrite your array-of-references Array[Point] to a reference-less Array[(x, y)] for instance.

                                                                                                                                                    killer problem with GC is that pauses are unavoidable […]

                                                                                                                                                    That’s not really a “killer problem”, not even with the qualification you attached to it. GCs – especially those you mention explicitly – give you lots of tuning knobs to decide how GC should run and whether to minimize pauses, maximize throughput, etc.

                                                                                                                                                    With reference counting pauses are unavoidable: when the reference count to the head of that 1 million element singly-linked list hits zero, things are getting deallocated until the RC is done.

                                                                                                                                                    (Note that things like deferred RC and coalesced RC refer to techniques that try to avoid writing the count, not to the deallocation.)

                                                                                                                                                    (And no, “strategically” keeping references alive is not a solution – if you had such a good track of your alive and dead references, you wouldn’t need to use RC in the first place.)
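
                                                                                                                                                    For the pause point, a small illustration using Rust’s Rc (the manual Drop impl is only there to keep the walk iterative rather than recursive; the effect is the same in any refcounting scheme): dropping the last handle to the head of a long list does the whole cascade of deallocations right there, synchronously.

                                                                                                                                                        use std::rc::Rc;
                                                                                                                                                        use std::time::Instant;

                                                                                                                                                        // A singly-linked list where each node owns an Rc to the next one.
                                                                                                                                                        struct Node {
                                                                                                                                                            _value: u64,
                                                                                                                                                            next: Option<Rc<Node>>,
                                                                                                                                                        }

                                                                                                                                                        // The default recursive drop would blow the stack on a million nodes,
                                                                                                                                                        // so walk the list in a loop; the amount of work is the same.
                                                                                                                                                        impl Drop for Node {
                                                                                                                                                            fn drop(&mut self) {
                                                                                                                                                                let mut next = self.next.take();
                                                                                                                                                                while let Some(rc) = next {
                                                                                                                                                                    match Rc::try_unwrap(rc) {
                                                                                                                                                                        Ok(mut node) => next = node.next.take(),
                                                                                                                                                                        Err(_) => break, // someone else still keeps this tail alive
                                                                                                                                                                    }
                                                                                                                                                                }
                                                                                                                                                            }
                                                                                                                                                        }

                                                                                                                                                        fn main() {
                                                                                                                                                            // Build a million-node list.
                                                                                                                                                            let mut head: Option<Rc<Node>> = None;
                                                                                                                                                            for i in 0..1_000_000u64 {
                                                                                                                                                                head = Some(Rc::new(Node { _value: i, next: head.take() }));
                                                                                                                                                            }

                                                                                                                                                            // Dropping the last strong reference frees node after node, all at once:
                                                                                                                                                            // with reference counting, the "pause" lands wherever this drop runs.
                                                                                                                                                            let start = Instant::now();
                                                                                                                                                            drop(head);
                                                                                                                                                            println!("freed the whole list in {:?}", start.elapsed());
                                                                                                                                                        }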

                                                                                                                                                  2. 5

                                                                                                                                                    Because the other replies try hard to misunderstand you: Yes, you are right about the main difference.

                                                                                                                                                    The main difference is that in Go most stuff can be a value type which mostly keeps GC out of the equation, while in Java, Python and Ruby the GC is pretty much involved everywhere except for a small selection of special-cased types.

                                                                                                                                                    And no, escape analysis – while being an improvement – can’t save the day here, it works – if it works – for single values. No amount of escape analysis is able to rewrite your array-of-references Array[Point] to a reference-less Array[(x, y)] for instance.

                                                                                                                                                    Go’s GC is decades behind e.g. Hotspot, but it’s not that big of an issue for Go, because it doesn’t need GC for everything, unlike Java.
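
                                                                                                                                                    The Array[Point] vs Array[(x, y)] distinction, spelled out in Rust terms (where the layout is explicit in the types): a Vec of plain structs is one contiguous buffer, while a Vec of boxed structs is an array of pointers plus one heap object per element for the allocator or collector to chase.

                                                                                                                                                        // A plain value type: two floats, no header, no indirection.
                                                                                                                                                        #[derive(Clone, Copy)]
                                                                                                                                                        struct Point {
                                                                                                                                                            x: f64,
                                                                                                                                                            y: f64,
                                                                                                                                                        }

                                                                                                                                                        fn main() {
                                                                                                                                                            // The "Array[(x, y)]" case: one contiguous buffer of 1_000_000 * 16 bytes.
                                                                                                                                                            let values: Vec<Point> = vec![Point { x: 0.0, y: 0.0 }; 1_000_000];

                                                                                                                                                            // The array-of-references case: a buffer of pointers,
                                                                                                                                                            // plus a million separate heap allocations behind them.
                                                                                                                                                            let boxed: Vec<Box<Point>> = (0..1_000_000)
                                                                                                                                                                .map(|_| Box::new(Point { x: 0.0, y: 0.0 }))
                                                                                                                                                                .collect();

                                                                                                                                                            println!(
                                                                                                                                                                "inline element: {} bytes, boxed element: {} bytes (plus the pointee)",
                                                                                                                                                                std::mem::size_of::<Point>(),
                                                                                                                                                                std::mem::size_of::<Box<Point>>()
                                                                                                                                                            );
                                                                                                                                                            println!("{} {}", values.len(), boxed.len());
                                                                                                                                                        }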

                                                                                                                                                    1. 3

                                                                                                                                                      Java, Python, and Ruby only have a heap.

                                                                                                                                                      Java does have a stack. This is also part of the Java VM specification:

                                                                                                                                                      https://docs.oracle.com/javase/specs/jvms/se15/html/jvms-2.html#jvms-2.5.2

                                                                                                                                                      Some implementations even have stack overflows (though growable stacks are also in-spec).

                                                                                                                                                      1. 2

                                                                                                                                                        I meant in terms of allocations. I should have been more precise.

                                                                                                                                                      Most OOP languages only allocate on the heap. It’s a nice simplification in terms of language design, but it also means that more garbage gets generated. I am sure that advanced JVMs can use static analysis to move some of the allocations to the stack as well, but it’s not a default feature of the language the way it is in Go.

                                                                                                                                                        1. 1

                                                                                                                                                          Thanks for the clarification.

                                                                                                                                                          and move some of the allocations to the stack as well but it’s not a default feature of the language like in Go.

                                                                                                                                                      Sorry for being a bit pedantic ;), but it’s not a feature of the Go language but of the default implementation. The Go specification does not mandate a stack or a heap (it never uses those terms). It’s a feature of the implementation, and it only works if the compiler can prove through escape analysis that a value does not escape (which matters whenever the value is used through a pointer, or as a pointer inside an interface value); a small example is sketched just below. This differs from languages whose specifications separate stack and heap memory and have clear rules about stack vs. heap allocation.
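                                                                                                                                                      For instance (the functions below are made up, and the exact wording of the diagnostics varies by Go version), compiling this with go build -gcflags=-m prints the compiler’s escape decisions: the local value in sum can stay on the stack, while the one returned by pointer from leak is reported as escaping to the heap.

                                                                                                                                                            package escapes

                                                                                                                                                            type point struct{ x, y int }

                                                                                                                                                            // sum keeps p entirely local; no pointer to it outlives the call,
                                                                                                                                                            // so the compiler is free to place it on the stack.
                                                                                                                                                            func sum() int {
                                                                                                                                                                p := point{x: 1, y: 2}
                                                                                                                                                                return p.x + p.y
                                                                                                                                                            }

                                                                                                                                                            // leak returns a pointer to p, so p provably escapes and must be
                                                                                                                                                            // heap-allocated; the -m output reports it as escaping.
                                                                                                                                                            func leak() *point {
                                                                                                                                                                p := point{x: 1, y: 2}
                                                                                                                                                                return &p
                                                                                                                                                            }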

                                                                                                                                                      When I last looked at a large amount of machine code output from the Go compiler, 3 years ago or so, escape analysis was pretty terrible, and ‘objects’ that people would consider value types ended up allocated on the heap as a result. One of the problems was that Go did not perform much (or any) mid-stack inlining, so it was not clear to the compiler whether pointers that are passed around persist beyond the scope of the function call.

                                                                                                                                                          So, I am not sure whether there is a big difference here between Go and heavily JIT’ed Java in practice.

                                                                                                                                                      What does help in Go is that the little genericity it has (arrays, slices, maps) is not implemented through type erasure in the default implementation. So a []FooBar is actually a contiguous block of memory on the stack or heap, whereas e.g. an ArrayList<FooBar> in Java is just an ArrayList<Object> after erasure, requiring much more pointer chasing. A rough illustration follows below.
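                                                                                                                                                      Roughly, in Go terms (FooBar is just a placeholder type): the typed slice stores the structs inline, while pushing the same elements through interface{} boxes each one behind a pointer, which is much closer to what an erased ArrayList<Object> looks like in memory.

                                                                                                                                                            package main

                                                                                                                                                            import "fmt"

                                                                                                                                                            // FooBar is a placeholder element type.
                                                                                                                                                            type FooBar struct{ A, B int64 }

                                                                                                                                                            func main() {
                                                                                                                                                                // Contiguous: len * sizeof(FooBar) bytes in one block; iterating
                                                                                                                                                                // is a linear walk with no pointer chasing.
                                                                                                                                                                typed := make([]FooBar, 4)

                                                                                                                                                                // Boxed: a slice of interface values, each pointing at a separately
                                                                                                                                                                // allocated copy of a FooBar; much closer to ArrayList<Object>.
                                                                                                                                                                boxed := make([]interface{}, 4)
                                                                                                                                                                for i := range boxed {
                                                                                                                                                                    boxed[i] = FooBar{A: int64(i)}
                                                                                                                                                                }

                                                                                                                                                                fmt.Println(len(typed), len(boxed))
                                                                                                                                                            }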

                                                                                                                                                      2. 0

                                                                                                                                                        Ruby very much has a stack.

                                                                                                                                                    1. 6

                                                                                                                                                      I solved this problem by lifting their query DSL into types and making it (as much as I could, anyhow) impossible to construct an invalid query: https://github.com/bitemyapp/bloodhound

                                                                                                                                                      1. 2

                                                                                                                                                      My problem was less invalid queries and more that the code which built ES queries from user input wasn’t very clear. In fact, it was hard to read and hard to update.

                                                                                                                                                        So I think types may help, but the overall approach is what’s more important.

                                                                                                                                                        1. 2

                                                                                                                                                          more that code which built queries to ES from user’s input wasn’t very clear. In fact, it was hard to read and hard to update.

                                                                                                                                                      Yes, that’s why I wrote Bloodhound. I know people who don’t use Haskell but still use Bloodhound to generate complicated queries, using the Haskell code as a nicer, more maintainable template in effect. Look at the tests for example:

                                                                                                                                                          https://github.com/bitemyapp/bloodhound/blob/master/tests/Test/Query.hs#L24-L27

                                                                                                                                                          I mentioned invalid queries because that’s the harder problem to solve. Just making something that’ll at least tidy up the API is the first step. Tightening it up so you eliminate opportunities for users to make query structures that don’t make sense is where it starts to really come together.

                                                                                                                                                          https://github.com/bitemyapp/bloodhound/blob/master/src/Database/Bloodhound/Internal/Query.hs#L513-L529

                                                                                                                                                      The types are the interface that makes it self-documenting and easier to maintain. I’ve been using ES off and on since pre-1.0, and I hated the string-blob templates I had in Python before. After I learned Haskell, I had the idea that you could use types to reify the query DSL into an interface. And I was right: it works great.

                                                                                                                                                          1. 2

                                                                                                                                                            Sorry, I fail to see how types are more maintainable than just plain maps. This thing:

                                                                                                                                                                  let query = TermQuery (Term "user" "bitemyapp") Nothing
                                                                                                                                                            

                                                                                                                                                            is not better than just {:term {"user" "bitemyapp"}}. I’d argue it’s worse since you have to know the mapping rather than just writing this stuff directly.

                                                                                                                                                      What I’m talking about is one step higher: the design of a query builder, not of the query itself. The query language itself is awful (ElasticSearch receives a lot of heat online for its query language design, and not for nothing), but building those queries has nothing to do with types. Invalid queries were not my problem.

                                                                                                                                                            1. 4

                                                                                                                                                      is not better than just {:term {"user" "bitemyapp"}}.

                                                                                                                                                      It is better because Elasticsearch changes their API with some regularity, and with types, when we update the types in Bloodhound to match the new API structure, you get a list of type errors everywhere in your code that you need to fix before your stuff will work. You can get migrations done a lot faster. This isn’t hypothetical: we have production users that love this. This is also a facility of types in general.

                                                                                                                                                      You probably aren’t aware, but I was a Clojure user before Haskell and maintained some libraries like Korma. I know what it’s like to maintain production Clojure code.
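                                                                                                                                                      The general facility is easy to show outside Haskell too. Here is a contrived Go sketch (none of these types are Bloodhound’s, and the JSON shape is invented): rename a field in the typed struct and every construction site stops compiling, while the map version happily keeps sending whatever keys you typed.

                                                                                                                                                            package main

                                                                                                                                                            import (
                                                                                                                                                                "encoding/json"
                                                                                                                                                                "fmt"
                                                                                                                                                            )

                                                                                                                                                            // TermQuery is a stand-in for a typed query body. If the upstream API
                                                                                                                                                            // renames or removes a field, changing this struct turns every caller
                                                                                                                                                            // into a compile error, which is the migration aid described above.
                                                                                                                                                            type TermQuery struct {
                                                                                                                                                                Field string `json:"field"`
                                                                                                                                                                Value string `json:"value"`
                                                                                                                                                            }

                                                                                                                                                            func main() {
                                                                                                                                                                typed, _ := json.Marshal(TermQuery{Field: "user", Value: "bitemyapp"})

                                                                                                                                                                // The map version compiles no matter which keys you use (note the
                                                                                                                                                                // deliberate "feild" typo), so a mistake or an API change only
                                                                                                                                                                // surfaces as a runtime error from the server, if at all.
                                                                                                                                                                loose, _ := json.Marshal(map[string]interface{}{"feild": "user", "value": "bitemyapp"})

                                                                                                                                                                fmt.Println(string(typed))
                                                                                                                                                                fmt.Println(string(loose))
                                                                                                                                                            }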