1. 16

    Cool! And it was thoughtful to include & highlight the disclaimers.

    To deter using QuickServ in production, it runs on port 42069.

    Another good safety technique is to bind only to the loopback interface (127.0.0.1) by default. That means only processes on the same host can connect, which is what you’re mostly doing in development. By requiring an extra arg or config setting to allow access over the network, you make it less likely someone can accidentally run something that can compromise their machine.
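
    For illustration, here's a minimal sketch of that pattern in Rust (hypothetical function and flag names, not QuickServ's actual code):

        use std::net::TcpListener;

        // Bind to loopback unless the user explicitly opts in to network exposure.
        fn bind_listener(expose_to_network: bool, port: u16) -> std::io::Result<TcpListener> {
            let addr = if expose_to_network {
                ("0.0.0.0", port) // reachable from other devices on the LAN
            } else {
                ("127.0.0.1", port) // only processes on this host can connect
            };
            TcpListener::bind(addr)
        }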

    1. 10

      Thanks for the kind words!

      I actually considered only binding to the loopback interface, but in the end decided not to. I wanted to ensure the server is visible to other devices on the local network specifically for the use case of Raspberry Pi projects. I was concerned it would be hard for a user who didn’t know about that configuration option to figure out why they couldn’t see the running server from other devices on the network, so I compromised in favor of more usable defaults over more secure defaults.

      1. 3

        Have you considered also announcing the service via Avahi (mDNS)? That would help with local discovery: no need to mess with IP addresses, just hostname.local:port.

        1. 2

          I have a sorta-functional prototype of an Airdrop knockoff that announces via mDNS here if that’s of use to anyone: https://gitlab.com/bitemyapp/coilgun

          I’ve been thinking about tightening it up, daemonizing it, and making a systray icon for it.

    1. 7

      you have to produce custom error types to use in a function or method that can error in more than one way.

      This is an area where Zig really shines. Automatic error unions, required error handling, errdefer, and the try keyword really give it my favorite error handling feel of any language.

      1. 14

        You don't have to, actually: just use anyhow or any of the other libs that do Box<dyn Error> automatically for you. It's meant for applications, and I'm happily using it.

        Edit: Also you can get things like backtraces for free on top with crates like stable-eyre
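
        For anyone who hasn't used it, here's a minimal sketch of what that looks like with anyhow (the file name and message are made up):

            use anyhow::{Context, Result};

            // `?` converts any error type implementing std::error::Error into
            // anyhow::Error, so one return type covers every failure mode.
            fn read_config(path: &str) -> Result<String> {
                let contents = std::fs::read_to_string(path)
                    .with_context(|| format!("could not read config file {path}"))?;
                Ok(contents)
            }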

        1. 8

          Yeah this is what my team does. We use anyhow for applications, thiserror for libraries. It’s nicer than what I had in Haskell.

          Only other thing we had to make was an actix-compatible error helper.

          This:

          
          use actix_web::{error, http::StatusCode};
          use log::error; // assuming the log crate's error! macro here; tracing::error! works the same way

          pub trait IntoHttpError<T> {
              fn http_error(
                  self,
                  message: &str,
                  status_code: StatusCode,
              ) -> core::result::Result<T, actix_web::Error>;
          
              fn http_internal_error(self, message: &str) -> core::result::Result<T, actix_web::Error>
              where
                  Self: std::marker::Sized,
              {
                  self.http_error(message, StatusCode::INTERNAL_SERVER_ERROR)
              }
          }
          
          impl<T, E: std::fmt::Debug> IntoHttpError<T> for core::result::Result<T, E> {
              fn http_error(
                  self,
                  message: &str,
                  status_code: StatusCode,
              ) -> core::result::Result<T, actix_web::Error> {
                  match self {
                      Ok(val) => Ok(val),
                      Err(err) => {
                          error!("http_error: {:?}", err);
                          Err(error::InternalError::new(message.to_string(), status_code).into())
                      }
                  }
              }
          }
          

          Lets us do this:

              let conn = app_state
                  .db
                  .get()
                  .http_internal_error("Could not get database connection")?;
          ...
              let job_events = models::get_recent_job_events(&conn)
                  .http_internal_error("Could not query datasets from the database")?;
          
          1. 1

            Yeah, it's a bit sad they settled on failure in an incompatible way, but oh well: they can't really change that until the next major version, and back then it was a sensible approach. To be fair, you may just want to manually decide what happens when a specific error reaches the actix stack, to give some different response. I actually use that on purpose to return specific JSON / status codes when, for example, a user doesn't exist.

            1. 2

              I’m not sure what they should’ve done differently. I wouldn’t want anyhow errors silently turning into 500 errors with no explicit top-level message for API consumers and end-users to receive. I also wouldn’t want API concerns to infect the rest of the crate.

      1. 8

        I was a little surprised they mentioned KSUID but not Snowflake or Flake, which are how a lot of teams learned about k-ordered ids originally. Perhaps I'm just old. I re-implemented flake in Haskell for fun a while back.

        The article doesn't talk about the motivations for k-ordering, or sortability as it pertains to database indexes, either. UUIDv4 has a habit of spraying btree indexes in unpleasant ways. This summarizes the motivations for Flake's design: http://yellerapp.com/posts/2015-02-09-flake-ids.html
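
        For anyone who hasn't seen one, here's a rough sketch of the Snowflake/Flake idea (the bit widths below are a Snowflake-style layout, not necessarily what Flake or KSUID actually use):

            // 64-bit id = 42-bit millisecond timestamp | 10-bit worker id | 12-bit sequence.
            // Because the timestamp sits in the high bits, ids sort roughly by creation time,
            // so new rows land at the right-hand edge of a btree index instead of being
            // sprayed across it the way random UUIDv4 keys are.
            fn flake_id(timestamp_ms: u64, worker_id: u64, sequence: u64) -> u64 {
                (timestamp_ms << 22) | ((worker_id & 0x3FF) << 12) | (sequence & 0xFFF)
            }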

        1. 67

          I don't understand how heat maps are used as a measuring tool; they seem pretty useless on their own. If something is rarely clicked, does it mean people don't need the feature, or that people don't like how it's implemented? Or how do you know if people would really like something that's not there to begin with?

          It reminds me of the Feed icon debacle: it had been neglected for years and fell out of active use, which led Mozilla to say "oh look, people don't need the Feed icon, let's move it away from the toolbar". And then after a couple of versions they said "oh look, even fewer people use the Feed functionality, let's remove it altogether". Every time I see a click heatmap as a means to drive UI decisions I can't shake the feeling that it's only used to rationalize arbitrary product choices already made.

          (P.S. I’ve been using Firefox since it was called Netscape and never understood why so many people left for Chrome, so no, I’m not just a random hater.)

          1. 11

            Yeah, reminds me of some old Spiderman game where you could “charge” your jump to jump higher. They removed the visible charge meter in a sequel but kept the functionality, then removed the functionality in the sequel after that because nobody was using it (because newcomers didn’t know it was there, because there was no visible indication of it!).

            1. 8

              It’s particularly annoying that the really cool things, which might actually have a positive impact for everyone – if not now, at least in a later release – are buried at the end of the announcement. Meanwhile, some of the things gathered through metrics would be hilarious were it not for the pretentious marketing language:

              There are many ways to get to your preferences and settings, and we found that the two most popular ways were: 1) the hamburger menu – button on the far right with three equal horizontal lines – and 2) the right-click menu.

              Okay, first off, this is why you should proofread/fact-check even the PR and marketing boilerplate: there’s no way to get to your preferences and settings through the right-click menu. Not in a default state at least, maybe you can customize the menu to include these items but somehow I doubt that’s what’s happening here…

              Anyway, assuming “get to your preferences and settings” should’ve actually been “do things with the browser”: the “meatball” menu icon has no indication that it’s a menu, and a fourth way – the old-style menu bar – is hidden by default on two of the three desktop platforms Firefox supports, and isn’t even available on mobile. If you leave out the menubar through sheer common sense, you can skip the metrics altogether, a fair dice throw gets you 66% accuracy.

              People love or intuitively believe what they need is in the right click menu.

              I bet they’ll get the answer to this dilemma if they:

              • Look at the frequency of use for the “Copy” item in the right-click menu, and
              • For a second-order feature, if they break down right-click menu use by input device type and screen size

              And I bet the answer has nothing to do with love or intuition ;-).

              I have also divined in the data that the frequency of use for the right-click menu will further increase. The advanced machine learning algorithms I have employed to make this prediction consist of the realisation that one menu is gone, and (at least the screenshots show) that the Copy item is now only available in the right-click menu.

              Out of those 17 billion clicks, there were three major areas within the browser they visited:

              A fourth is mentioned in addition to the three in the list and, as one would expect, these four (out of… five?) areas are: the three areas with the most clickable widgets, plus the one you have to click in order to get to a new website (i.e. the navigation bar).

              1. 12

                They use their UX experts & measurements to rationalize decisions made to make Firefox more attractive to (new) users, as they claim, but … when do we actually see the results?

                The market share has kept falling for years, whatever they claim to be doing, it is exceedingly obvious that they are unable to deliver.

                Looking back, the only things I remember Mozilla doing in the last 10 years are

                • a constant erosion of trust
                • making people’s lives miserable
                • running promising projects into the ground at full speed

                I would be less bitter about it if Mozilla peeps weren't so obnoxiously arrogant about it.


                Isn’t this article pretty off-topic, considering how many stories are removed for being “business analysis”?

                This is pretty much “company losing users posts this quarter’s effort to attract new users by pissing off existing ones”.

                1. 14

                  The whole UI development strategy seems to be upside down: Firefox has been hemorrhaging users for years, at a rate that the UI "improvements" have, at best, not influenced much, to the point where a good chunk of the browser "market" consists of former Firefox users.

                  Instead of trying to get the old users back, Firefox is trying to appeal to a hypothetical “new user” who is technically illiterate to the point of being confused by too many buttons, but somehow cares about tracking through 3rd-party cookies and has hundreds of tabs open.

                  The result is a cheap Chrome knock-off that’s not appealing to anyone who is already using Chrome, alienates a good part of their remaining user base who specifically want a browser that’s not like Chrome, and pushes the few remaining Firefox users who don’t specifically care about a particular browser further towards Chrome (tl;dr if I’m gonna use a Chrome-like thing, I might as well use the real deal). It’s not getting anyone back, and it keeps pushing people away at the same time.

                  1. 16

                    The fallacy of Firefox, and quite a few other projects and products, seems to be:

                    1. Project X is more popular than us.
                    2. Project X does Y.
                    3. Therefore, we must do Y.

                    The fallacy is that a lot of people are using your software exactly because it's not X and does Z instead of Y.

                    It also assumes that the popularity is because of Y, which may be the case but may also not be the case.

                    1. 3

                      You're not gonna win current users away from X by doing what X does, unless you do it much cheaper (not an option) or 10x better (hard to see how you could do Chrome better than Chrome does).

                      1. 1

                        You might, however, stop users from switching to X by doing what X does, even if you don't do it quite as well.

                    2. 4

                      The fundamental problem with Firefox is: It's just slow. Slower than Chrome for almost everything. Slower at games (seriously, its canvas performance is really bad), slower at interacting with big apps like Google Docs, less smooth scrolling, even more latency between when you hit a key on the keyboard and when the letter shows up in the URL bar. This stuff can't be solved with UI design changes.

                      1. 3

                        Well, but there are reasons why it’s slow - and at least one good one.

                        Most notably, because Firefox makes an intentionally different implementation trade-off than Chrome. Mozilla prioritizes lower memory usage in FF, while Google prioritizes lower latency/greater speed.

                        (I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me)

                        That’s partially why you see so many Linux users complaining about Chrome’s memory usage.

                        These people are getting exactly what they asked for, and in an age where low CPU usage is king (slow mobile processors, limited battery life, more junk shoved into web applications, and plentiful RAM for people who exercise discipline and only do one thing at once), Chrome’s tradeoff appears to be the better one. (yes, obviously that’s not the only reason that people use Chrome, but I do see people noticing it and citing it as a reason)

                        1. 2

                          I rarely use Google Docs; basically just when someone sends me some Office or Spreadsheet that I really need to read. It's easiest to just import that in Google Docs; I never use this kind of software myself and this happens so infrequently that I can't be bothered to install LibreOffice (my internet isn't too fast, and downloading all the updates for it takes a while and isn't worth it for the one time a year I need it). But every time it's a frustrating experience as it's just so darn slow. Actually, maybe it would be faster to just install LibreOffice.

                          I haven’t used Slack in almost two years, but before this it was sometimes so slow in Firefox it was ridiculous. Latency when typing could be in the hundreds or thousands of ms. It felt like typing over a slow ssh connection with packet loss.

                          CPU vs. memory is a real trade-off with a lot of possible ways to balance it, and it's a hard problem. But it doesn't change that the end result is that for me, as a user, Firefox is sometimes slow to the point of being unusable. If I had a job where they used Slack then this would be a problem, as I wouldn't be able to use Firefox (unless it's fixed now, I don't know if it is) and I don't really fancy having multiple windows.

                          That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                          1. 1

                            That being said, I still feel Firefox gives a better experience overall. In most regular use it’s more than fast enough; it’s just a few exceptions where it’s so slow.

                            I agree. I absolutely prefer Firefox to Chrome - it's generally a better browser with a much better add-on ecosystem (Tree Style Tabs, Container Tabs, non-crippled uBlock Origin) and isn't designed to allow Google to advertise to you. My experience with it is significantly better than with Chrome.

                            It’s because I like Firefox so much that I’m so furious about this poor design tradeoff.

                            (also, while it contributes, I don't blame all of my slowdowns on Firefox's design - there are many cases where it's crippled by Google introducing some new web "standard" that sites started using before Firefox could catch up (most famously, the Shadow DOM v0 scandal with YouTube))

                          2. 1

                            I don’t have a citation on me at the moment, but I can dig one up later if anyone doesn’t believe me

                            I’m interested in your citations :)

                            1. 1

                              Here’s one about Google explicitly trading off memory for CPU that I found on the spot: https://tech.slashdot.org/story/20/07/20/0355210/google-will-disable-microsofts-ram-saving-feature-for-chrome-in-windows-10

                      2. 4

                        I remember more things from Mozilla. One is also a negative (integration of a proprietary application, Pocket, into the browser; it may be included in your “constant erosion of trust” point), but the others are more positive.

                        Mozilla is the organization that let Rust emerge. I’m not a Rust programmer myself but I think it’s clear that the language is having a huge impact on the programming ecosystem, and I think that overall this impact is very positive (due to some new features of its own, popularizing some great features from other languages, and a rather impressive approach to building a vibrant community). Yes, Mozilla is also the organization that let go of all their Rust people, and I think it was a completely stupid idea (Rust is making it big, and they could be at the center of it), but somehow they managed to wait until the project was mature enough to make this stupid decision, and the project is doing okay. (Compare to many exciting technologies that were completely destroyed by being shut out too early.) So I think that the balance is very positive: they grew an extremely positive technology, and then they messed up in a not-as-harmful-as-it-could-be way.

                        Also, I suspect that Mozilla is doing a lot of good work participating in the web standards ecosystem. This is mostly a guess as I'm not part of this community myself, so it could have changed in the last decade and I wouldn't know. But this stuff matters a lot to everyone: we need technical people from several browsers actively participating, it's a lot of work, and (despite the erosion of trust you mentioned) I still trust the Mozilla standards engineers to defend the web better than Google (surveillance incentives) or Apple (locking-down-stuff incentives). (Defend, in the sense that I suspect I like their values and their view of the web, and I guess that sometimes this makes a difference during standardization discussions.) Unfortunately this part of Mozilla's work gets weaker as their market share shrinks.

                        1. 3

                          Agreed. I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off – an unexpected, pleasant surprise that Rust didn’t end in the premature death-spiral that Mozilla projects usually end up in.

                          Negative things I remember most are Persona, FirefoxOS and the VPN scam they are currently running.

                          1. 4

                            I consider Rust a positive thing in general (though some of the behavioral community issues there seem to clearly originate from the Mozilla org), but it’s a one-off

                            Hard disagree there. Pernosco is a revolution in debugging technology (a much, much bigger revolution than what Rust is to programming languages) and wouldn’t exist without Mozilla spending engineering resources on RR. I don’t know much about TTS/STT but the Deepspeech work Mozilla has done also worked quite nicely and seemed to make quite an impact in the field. I think I also recall them having some involvement in building a formally-proven crypto stack? Not sure about this one though.

                            Mozilla has built quite a lot of very popular and impressive projects.

                            Negative things I remember most are Persona, FirefoxOS and the VPM scam they are currently running.

                            None of these make me as angry as the Mister Robot extension debacle they caused a few years ago.

                            1. 2

                              To clarify, I didn’t mean it’s a one-off that it was popular, but that it’s a one-off that it didn’t get mismanaged into the ground. Totally agree otherwise.

                            2. 4

                              the VPM [sic] scam they are currently running

                              Where have you found evidence that Mozilla is not delivering what they promise - a VPN in exchange for money?

                              1. 0

                                They are trying to use the reputation of their brand to sell a service to a group of "customers" that has no actual need for it and barely an understanding of what it does or which purposes it would be useful for.

                                What they do is pretty much the definition of selling snake oil.

                                1. 7

                                  I am a Firefox user and I’m interested in their VPN. I have a need for it, too - to prevent my ISP from selling information about me. I know how it works and what it’s useful for. I can’t see how they’re possibly “selling snake oil” unless they’re advertising something that doesn’t work or that they won’t actually deliver…

                                  …which was my original question, which you sidestepped. Your words seem more like an opinion disguised as fact than actual fact.

                        2. 2

                          It's a tool like a lot of other things. Sure, you can abuse it in many ways, but unless we know how the results are used we can't tell if it's a good or bad scenario. A good usage for a heatmap could be, for example, looking at where people like to click on a menu item and how far the "expand" button should go.

                          As event counters, they're not great - they can get that info in better/cheaper ways.

                          1. 2

                            This is tricky, and the same goes for surveys. I'm often in a situation where a survey asks me "What do you have the hardest time with?" or "What prevents you from using language X on your current project?" and when the answer essentially boils down to "I am doing scripting and not systems programming" or something similar, I don't intend to tell them that they should make a scripting language out of a systems language or vice versa.

                            And I know these answers are often taken the wrong way when the results are read and interpreted. There rarely is an "I like it how it is" option, or a "Doesn't need changes", or even a "Please don't change this!".

                            I am sure this is true about other topics too, but programming language surveys seem to be a trend so that’s where I often see it.

                            1. 1

                              I feel like they’re easily gamed, too. I feel like this happened with Twitter and the “Moments” tab. When they introduced it, it was in the top bar to the right of the “Notifications” tab. Some time after introduction, they swapped the “Notifications” and “Moments” tab, and the meme on Twitter was how the swap broke people’s muscle memory.

                              I’m sure a heat map would’ve shown that after the swap, the Moments feature suddenly became a lot more popular. What that heat map wouldn’t show was user intent.

                              1. 1

                                From what I understand, the idea behind heat maps is not to decide which feature to kill, but to measure what should be visible by default. The more stuff you add to the screen, the more cluttered and noisy the browser becomes. Heat maps help Mozilla decide if a feature should be moved from the main visible UX to some overflow menu.

                                Most things they moved around can be re-arranged by using the customise toolbar feature. In that sense, you do have enough bits to make your browser experience yours to some degree.


                                The killing of the Feed icon was not decided with heat maps alone. From what I remember, that feature was seldom used (something they can get from telemetry and heat maps), but it was also some legacy bit rot that added friction to maintenance and whatever else they wanted to do. Sometimes features that are loved by few are simply in the way of features that will benefit more people; it is sad, but it is true for codebases that are as old as Firefox.

                                Anyway, feed reading is one WebExtension away from any user, and those add-ons usually do a much better job than the original feature ever did.

                                1. 1

                                  I’m wondering how this whole heatmaps/metrics thing works for people who have customized their UI.

                                  I’d assume that the data gained from e. g. this is useless at best and pollution at worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                  1. 1

                                    @soc, I expect the browser to know its own UI and mark heat maps with context, so that clicking on a tab is flagged the same way regardless of whether tabs are on top or on the side. Also, IIRC the majority of Firefox users do not customise their UI. We live in a bubble of devs and power users who do, but that is a small fraction of the user base. Seeing what the larger base is doing is still beneficial.

                                    worst to Mozilla’s assumption of a perfectly spherical Firefox user.

                                    I'm pretty sure they can get meaningful results without assuming everyone is the same ideal user. Heat maps are just a useful way to visualise something, especially when you're doing a blog post.

                                2. 1

                                  never understood why so many people left for Chrome,

                                  The speed difference is tangible.

                                  1. 2

                                    I don’t find it that tangible. If I was into speed, I’d be using Safari here which is quite fast. There are lots of different reasons to choose a browser. A lot of people switched to Chrome because of the constant advertising in Google webapps and also because Google has a tendency of breaking compatibility or reducing compatibility and performance with every other browser, thus making Google stuff work better on Chrome.

                                1. 4

                                  All these compiler errors make me worry that refactoring anything reasonably large will get brutal and demoralizing fast. Does anyone have any experience here?

                                  1. 20

                                    I’ve got lots of experience refactoring very large rust codebases and I find it to be the opposite. I’m sure it helps that I’ve internalized a lot of the rules, so most of the errors I’m expecting, but even earlier in my rust use I never found it to be demoralizing. Really, I find it rather freeing. I don’t have to think about every last thing that a change might affect, I just make the change and use the list of errors as my todo list.

                                    1. 6

                                      That’s my experience as well. Sometimes it’s a bit inconvenient because you need to update everything to get it to compile (can’t just test an isolated part that you updated) but the confidence it gives me when refactoring that I updated everything is totally worth it.

                                    2. 9

                                      In my experience (more with OCaml, but they’re close), errors are helpful because they tell you what places in the code are affected by the refactoring. The ideal scenario is one where you make the initial change, then fix all the places that the compilers errors at, and when you’re done it all works again. If you used the type system to its best this scenario can actually happen in practice!

                                      1. 4

                                        I definitely agree. Lots of great compiler errors make refactoring a joy. I somewhat recently wanted to add line+col numbers to my error messages and simply made the breaking change of defining the location field on my error type, then fixed compile errors for about 6h. When the code compiled for the first time it worked! (save a couple of off-by-one errors) I have to say that it is so powerful to be able to trust the compiler to point out the places where you need to make changes when doing a refactoring, and to catch a lot of the other errors you may make as you quickly rip through the codebase. (For example, even if you get similar errors for missing arguments in C++, quickly jumping to random places in the codebase makes it easy to introduce lifetime issues, as you don't always grasp the lifetime constraints of the surrounding code as quickly as you think you have.) It is definitely way nicer than dynamic languages, where you get hundreds of test failures and have to map those back to the actual location where the problem occurred.

                                      2. 7

                                        In my experience refactoring is one of the strong points of Rust. I can “break” my code anywhere I need it (e.g. make a field optional, or remove a method, etc.), and then follow the errors until it works again. It sure beats finding undefined is not a function at run time instead.
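
                                        A toy example of that workflow (hypothetical types, not from any real codebase): make the breaking change first, then let the error list drive the rest.

                                            struct User {
                                                name: String,
                                                email: Option<String>, // was `String`; this one edit "breaks" every use site
                                            }

                                            fn contact_line(user: &User) -> String {
                                                // The compiler flags each old use site (expected `String`, found `Option<String>`);
                                                // fixing every reported site like this completes the refactor.
                                                match &user.email {
                                                    Some(email) => format!("{} <{}>", user.name, email),
                                                    None => user.name.clone(),
                                                }
                                            }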

                                        The compiler takes care to avoid displaying multiple redundant errors that have the same root cause. The auto-fix suggestions are usually correct. Rust-analyzer’s refactoring actions are getting pretty good too.

                                        1. 3

                                          Yes. My favourite is when a widely-used struct suddenly gains a generic parameter and there are now a hundred function signatures and trait bounds that need updating, along with possibly infecting any other structs that contained it. CLion has some useful refactoring tools but it can only take you so far. I don’t mean to merely whinge - it’s all a trade-off. The requirement for functions to fully specify types permits some pretty magical type inference within function bodies. As sibling says, you just treat it as a todo list and you can be reasonably sure it will work when you’re done.

                                          1. 2

                                            I think generics are kind of overused in rust tbh.

                                          2. 2

                                            I just pick one error at a time and fix them. Usually it's best to comment out as much broken code as possible until you get a clean compile, then work through the rest one error at a time.

                                            It is a grind, but once you finish, the code usually works immediately with few if any problems.

                                            1. 2

                                              No it makes refactors much better. Part of the reason my coworkers like Rust is because we can change our minds later.

                                              All those compile errors would be runtime exceptions or race conditions or other issues that fly under the radar in a different language. You want the errors. Some experience is involved in learning how to grease the rails on a refactor and set the compiler up to create a checklist for you. My default strategy is striking the root by changing the core datatype or function and fixing all the code that broke as a result.

                                              1. 1

                                                As a counterpoint to what most people are saying here…

                                                In theory the refactoring is "fine". But the lack of a GC (meaning that object lifetimes are a core part of the code), combined with the relatively few tools you have to nicely monkeypatch things, means that "trying out" a code change is a lot more costly than, say, in Python (where you can throw a descriptor onto an object to try out some new functionality quickly, for example).

                                                I think this is alleviated when you use traits well, but raw structs are a bit of a pain in the butt. I think this is mostly limited to modifying underlying structures though, and when refactoring functions etc, I’ve found it to be a breeze (and like people say, error messages make it easier to find the refactoring points).

                                              1. 4

                                                For the query planning issue, is there a reason I shouldn’t be using https://github.com/ossc-db/pg_hint_plan to work around that problem?

                                                1. 3

                                                  You really don’t want to use Nomad in production.

                                                  Aggressive feature gating was mentioned. I also just found it bafflingly flaky. An experience we never had with any of our K8S clusters.

                                                  1. 2

                                                    You really don’t want to use Nomad in production.

                                                    Would you please share your experience that leads you to say this?

                                                    1. 1

                                                      Yet I do. I find it delightfully easy to operate. I know others who run it in prod at larger scale too.

                                                    1. 1

                                                      Speaking as a professional user of Haskell (5 years) and Rust (2 years): Rust isn't a functional programming language, but that's okay.

                                                      1. 2

                                                        The note about reference counting is something I'd never thought of before, and it's kind of mind-blowing if true. I'm not convinced it's true: a language like Go uses traditional GC but is quite memory efficient also.

                                                        1. 10

                                                          a language like Go uses traditional GC but is quite memory efficient also.

                                                          Not especially, it only seems so in contrast with Java, Python, and Ruby.

                                                          1. 7

                                                            The main difference is that Go can reduce heap consumption because it also has a stack. Java, Python, and Ruby only have a heap. This removes a lot of pressure on the GC for smaller objects.

                                                            1. 4

                                                              The other responses seem to be people misinterpreting what you're trying to say. I assume that what you're trying to say is that Go has value types in addition to heap-allocated objects, which Ruby etc. do not.

                                                              However, once you get beyond low-performance interpreters (Ruby, Python, etc.), languages that are ostensibly based on heap-only allocation are very good at lowering values. The core value types in Java and some of the primitives in JS engines are all essentially lowered to value types that live on the stack or directly in objects.

                                                              Enterprise (ugh) JVM setups, the ones that have long runtimes, are very good at lowering object allocations (and inlining ostensibly virtual method calls), so in many cases “heap allocated” objects do in fact end up on the stack.

                                                              The killer problem with GC is that pauses are unavoidable, unless you take a significant performance hit, both in CPU time and memory usage.

                                                              1. 1

                                                                are very good at lowering object allocations […]

                                                                Escape analysis – while being an improvement – can't save the day here; it works – if it works – for single values. No amount of escape analysis is able to rewrite your array-of-references Array[Point] to a reference-less Array[(x, y)], for instance.

                                                                killer problem with GC is that pauses are unavoidable […]

                                                                That’s not really a “killer problem”, not even with the qualification you attached to it. GCs – especially those you mention explicitly – give you lots of tuning knobs to decide how GC should run and whether to minimize pauses, maximize throughput, etc.

                                                                With reference counting pauses are unavoidable: when the reference count to the head of that 1 million element singly-linked list hits zero, things are getting deallocated until the RC is done.

                                                                (Note that things like deferred RC and coalesced RC refer to techniques that try to avoid writing the count, not to the deallocation.)

                                                                (And no, "strategically" keeping references alive is not a solution – if you were keeping such careful track of your live and dead references, you wouldn't need to use RC in the first place.)
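
                                                                To make that concrete, a small Rust sketch (list length picked arbitrarily):

                                                                    use std::rc::Rc;

                                                                    struct Node {
                                                                        value: u64,
                                                                        next: Option<Rc<Node>>,
                                                                    }

                                                                    fn main() {
                                                                        let mut head: Option<Rc<Node>> = None;
                                                                        for value in 0..10_000u64 {
                                                                            head = Some(Rc::new(Node { value, next: head.take() }));
                                                                        }
                                                                        // Dropping the last reference deallocates the entire chain right here,
                                                                        // before `drop` returns - that is the RC "pause". (For much longer lists
                                                                        // the default recursive drop can even overflow the stack.)
                                                                        drop(head);
                                                                    }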

                                                              2. 5

                                                                Because the other replies try hard to misunderstand you: Yes, you are right about the main difference.

                                                                The main difference is that in Go most stuff can be a value type which mostly keeps GC out of the equation, while in Java, Python and Ruby the GC is pretty much involved everywhere except for a small selection of special-cased types.

                                                                And no, escape analysis – while being an improvement – can't save the day here; it works – if it works – for single values. No amount of escape analysis is able to rewrite your array-of-references Array[Point] to a reference-less Array[(x, y)], for instance.

                                                                Go’s GC is decades behind e.g. Hotspot, but it’s not that big of an issue for Go, because it doesn’t need GC for everything, unlike Java.
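
                                                                To make the array-of-values vs. array-of-references distinction concrete (in Rust only because that's what the other code in this thread uses):

                                                                    struct Point { x: f64, y: f64 }

                                                                    fn main() {
                                                                        // One contiguous allocation of plain values: no per-element headers,
                                                                        // no pointer chasing, nothing for a GC to trace per element.
                                                                        let values: Vec<Point> = (0..3).map(|i| Point { x: i as f64, y: 0.0 }).collect();

                                                                        // One heap allocation per element plus a backing array of pointers -
                                                                        // roughly the layout an erased ArrayList<Point> ends up with.
                                                                        let references: Vec<Box<Point>> =
                                                                            (0..3).map(|i| Box::new(Point { x: i as f64, y: 0.0 })).collect();

                                                                        println!("{} values, {} boxed", values.len(), references.len());
                                                                    }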

                                                                1. 3

                                                                  Java, Python, and Ruby only have a heap.

                                                                  Java does have a stack. This is also part of the Java VM specification:

                                                                  https://docs.oracle.com/javase/specs/jvms/se15/html/jvms-2.html#jvms-2.5.2

                                                                  Some implementations even have stack overflows (though growable stacks are also in-spec).

                                                                  1. 2

                                                                    I meant in terms of allocations. I should have been more precise.

                                                                    Most OOP languages only allocate on the heap. It’s a nice simplification in terms of language design, but it also means that more garbage gets generated. I am sure that advanced JVMs can use static analysis and move some of the allocations to the stack as well but it’s not a default feature of the language like in Go.

                                                                    1. 1

                                                                      Thanks for the clarification.

                                                                      and move some of the allocations to the stack as well but it’s not a default feature of the language like in Go.

                                                                      Sorry for being a bit pedantic ;), but it’s not a feature of the Go language, but the default implementation. The Go specification does not mandate a stack or heap (it never uses those terms). It’s a feature of the implementation and it only works if the compiler can prove through escape analysis that a value does not escape (when the value is used as a pointer or pointer in an interface value). This differs from languages which have specifications that separate stack and heap memory and have clear rules about stack vs. heap allocation.

                                                                      When I last looked at a large amount of machine code output from the Go compiler, 3 years ago or so, escape analysis was pretty terrible and 'objects' that people would consider to be value types would be allocated on the heap as a result. One of the problems was that Go did not perform any or much mid-stack inlining, so it's not clear to the compiler whether pointers that are passed around persist beyond the function call scope.

                                                                      So, I am not sure whether there is a big difference here between Go and heavily JIT’ed Java in practice.

                                                                      What does help in Go is that the little generics that it has (array, slice, maps) are not implemented through type erasure in the default implementation. So a []FooBar is actually a contiguous block of memory on the stack or heap, whereas e.g. ArrayList<FooBar> in Java is just an ArrayList<Object> after erasure, requiring much more pointer chasing.

                                                                  2. 0

                                                                    Ruby very much has a stack.

                                                                1. 6

                                                                  I solved this problem by lifting their query DSL into types and making it (as much as I could, anyhow) impossible to construct an invalid query: https://github.com/bitemyapp/bloodhound

                                                                  1. 2

                                                                    My problem was less invalid queries and more that code which built queries to ES from user’s input wasn’t very clear. In fact, it was hard to read and hard to update.

                                                                    So I think types may help, but the overall approach is what’s more important.

                                                                    1. 2

                                                                      more that code which built queries to ES from user’s input wasn’t very clear. In fact, it was hard to read and hard to update.

                                                                      Yes, that’s why I wrote Bloodhound. I know people that don’t use Haskell that still use Bloodhound anyway to generate complicated queries, using the Haskell code as a nicer, more maintainable template in effect. Look at the tests for example:

                                                                      https://github.com/bitemyapp/bloodhound/blob/master/tests/Test/Query.hs#L24-L27

                                                                      I mentioned invalid queries because that’s the harder problem to solve. Just making something that’ll at least tidy up the API is the first step. Tightening it up so you eliminate opportunities for users to make query structures that don’t make sense is where it starts to really come together.

                                                                      https://github.com/bitemyapp/bloodhound/blob/master/src/Database/Bloodhound/Internal/Query.hs#L513-L529

                                                                      The types are the interface that make it self-documenting and easier to maintain. I’ve been using ES off and on since pre-1.0 and I hated the string blob templates that I had in Python before. After I learned Haskell, I had the idea that you could use types to reify the query DSL into an interface. And I was right, it works great.
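
                                                                      If it helps to see the shape of the idea outside Haskell, here's a tiny sketch of the same technique (in Rust to match the other snippets in this thread; this is not Bloodhound's API, just the general "reify the DSL into types" approach):

                                                                          // Only queries the types admit can be constructed; the serializer then
                                                                          // decides exactly what JSON Elasticsearch sees.
                                                                          enum Query {
                                                                              Term { field: String, value: String },
                                                                              Bool { must: Vec<Query>, should: Vec<Query> },
                                                                          }

                                                                          fn to_json(q: &Query) -> String {
                                                                              match q {
                                                                                  Query::Term { field, value } => format!(r#"{{"term":{{"{field}":"{value}"}}}}"#),
                                                                                  Query::Bool { must, should } => format!(
                                                                                      r#"{{"bool":{{"must":[{}],"should":[{}]}}}}"#,
                                                                                      must.iter().map(to_json).collect::<Vec<_>>().join(","),
                                                                                      should.iter().map(to_json).collect::<Vec<_>>().join(","),
                                                                                  ),
                                                                              }
                                                                          }

                                                                          fn main() {
                                                                              let q = Query::Term { field: "user".into(), value: "bitemyapp".into() };
                                                                              println!("{}", to_json(&q)); // {"term":{"user":"bitemyapp"}}
                                                                          }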

                                                                      1. 2

                                                                        Sorry, I fail to see how types are more maintainable than just plain maps. This thing:

                                                                              let query = TermQuery (Term "user" "bitemyapp") Nothing
                                                                        

                                                                        is not better than just {:term {"user" "bitemyapp"}}. I’d argue it’s worse since you have to know the mapping rather than just writing this stuff directly.

                                                                        What I’m talking about is one step higher: it’s a design of a query builder. Not of a query itself, this thing is awful and ElasticSearch receives a lot of heat online for its query language design, and not for nothing. But building those queries has nothing to do with types. Invalid queries were not my problem.

                                                                        1. 4

                                                                          is not better than just {:term {“user” “bitemyapp”}}.

                                                                          It is better because Elasticsearch changes their API with some regularity, and with types, when we update the type in Bloodhound to match the new API structure, you get a list of type errors everywhere in your code that you need to fix before your stuff will work. You can get migrations done a lot faster. This isn't hypothetical: we have production users that love this. This is also a facility of types in general.

                                                                          You probably aren’t aware but I was a Clojure user before Haskell and maintained some libraries like Korma. I know what it’s like to maintain production Clojure code.

                                                                    1. 6

                                                                      I’d tl;dr as:

                                                                      • OO (defined ala Alan Kay: encapsulation / message passing / late binding) is a technical detail
                                                                      • OO has nothing to do with how we "analyze stuff and the way we think"

                                                                      They then, in a very disjointed way, recapitulate the ’90s - ’00s agile and DDD movements with all their design principles whilst also haphazardly making references to Turing machines, lambdas, and Simula.

                                                                      I’m trying to be charitable.

                                                                      1. 2

                                                                        I would add that there’s some interesting back and forth around minute 22 distinguishing OOP proper from Object Oriented Design and possibly Object Oriented Analysis. I think it would be fair to summarize this as saying that Uncle Bob doesn’t really think there’s anything much to criticize about OOP but some of the things done at the design and analytic levels can be problematic. This goes from around the beginning of minute 22 into minute 26 but obviously the conversation drifts around a bit.

                                                                        There’s some interesting criticism of inheritance in minute 26.

                                                                        Just a rant about definitions here, but I don't think those Youtube "transcripts" really qualify as transcripts: they're just the subtitle file and don't include punctuation or speaker attribution as a true transcript should. Subtitles are expected to be viewed on the screen, so contextual information about who is speaking (from the visual) is available. Transcriptions should stand on their own and contain notes about that contextual information so that the transcript can be read as its own text.

                                                                        1. 2

                                                                          Also those auto-generated subtitles are awful in my experience.

                                                                      2. 2

                                                                        There’s an autogenerated transcript on YouTube.

                                                                      1. 22

                                                                        When I saw the text “At the beginning of 2030”, I thought this essay was going to use the narrative conceit of claiming to be a historical retrospective on the present day, published in the future. But the essay doesn’t do anything with that premise, so now I suspect it’s just a typo for 2020. I see that at the end it claims to be an existing talk about Smalltalk with Haskell and Rust substituted in appropriately. I’m not familiar with that original Smalltalk talk though.

                                                                        In any case, I don’t think the object-level point this essay is making holds. In general, I’m skeptical of arguments saying that “such and such language community is arrogant, therefore Y”. It’s easy for an essay-writer or internet commenter to remember a small number of particular interactions with a particular group of people who happened to be advocating for the merits of some programming language over another, and incorrectly attribute this to a property of that programming language. Note the argument in the comments about whether it’s actually Go users who are prototypical examples of arrogance. I don’t generally expect to agree with programming language essayists over whether some example of discourse actually constitutes arrogance - it’s a common trait for people who are arrogant themselves, to accuse others of being arrogant.

                                                                        Haskell also has some important differences from Rust in design and purpose that do a lot to explain why Rust is more popular among industry developers than Haskell is. Haskell is fundamentally a research project in lazy functional programming design, whereas Rust is fundamentally a project to make well-established ideas from programming language research readily available to industry programmers, particularly ideas for statically checking memory safety (which isn't a research concern of Haskell's). Rust notably borrowed some ideas from Haskell, such as traits (Haskell typeclasses), and I don't think anyone working on Haskell would consider it a failure of theirs that an industry-focused language that borrowed ideas from their work happens to be more widely used in industry.

                                                                        Still, I don’t think it’s accurate to say that Haskell is “killed” (certainly it’s a lot easier to get a job in 2020 writing Haskell than writing Smalltalk!). It’s not among the most popular programming languages, but software-writing organizations do in fact use it. The language is being actively developed, (better) tooling around the language is being actively developed. People who care about functional programming reference it quite a bit, even if it’s in the context of applying Haskell ideas to non-Haskell programming languages.

                                                                        1. 7

                                                                          In general, I’m skeptical of arguments saying that “such and such language community is arrogant, therefore Y”. It’s easy for an essay-writer or internet commenter to remember a small number of particular interactions with a particular group of people who happened to be advocating for the merits of some programming language over another, and incorrectly attribute this to a property of that programming language.

                                                                          Plus, of course, the trolls: There’s plenty of people on Reddit, for example, who are “advocating for Rust” in a way that makes Rust advocates seem unhinged, which is the point. Maybe some of them hate the language, but most of them are downvote trolls (“Look at me! Downvote me! LOOK! AT! ME!”) riding a fad.

                                                                          1. 13

                                                                            I’m not so blind as to confuse the arrogance of Rust users with a flaw in the language itself, but I have noticed a fair amount of arrogance from the Rust community. This isn’t just “a few bad experiences in a sea of good ones”: every time I’ve interacted with the Rust community, I’ve had at least one or two responses that would be considered unwelcoming and arrogant. It’s not just trolling, I don’t think.

                                                                            I’ve had this happen on Twitter, here on lobste.rs, Stack Overflow, and various other places.

                                                                            (I avoid Reddit for the most part; if I wanted to know that COVID-19 was a hoax designed to destroy Trump or find out that same-sex marriage is Satanism, I’d just talk to my mother’s side of the family.)

                                                                            I think the perceived arrogance of the Rust community has some basis in reality, and I also think that it comes from Rust being a relatively new community, a relatively small community, a relatively technically-superior community, and a relatively unsupported community.

                                                                            Rust is technically superior compared to many other languages. Rust is the new kid on the systems-programming block. Rust has a relatively small community. Rust is not supported by Google or Oracle or Microsoft or Apple (yeah, Mozilla, but Mozilla is peanuts compared to those guys). All of this adds up to a “siege” mentality. The Amiga community was and is like that, and for often very similar reasons. I think a lot of the Rust community’s arrogance is perhaps a degenerate case of Amiga Persecution Complex.

                                                                            That being said, while I like Rust and I’m learning it and I plan to make it my next big language for major projects…the community needs to get better. I think that will happen with time, as it grows and the ratio of True Believers to Just Want To Get Work Done people shifts.

                                                                            1. 7

                                                                              I think the perceived arrogance of the Rust community has some basis in reality, and I also think that it comes from Rust being a relatively new community, a relatively small community, a relatively technically-superior community, and a relatively unsupported community.

                                                                              I agree with all of this, but I think there’s another thing at play: Rust is a technically different community in a way that matters. It isn’t GC’d, it’s ownership-managed, and that isn’t unimpeachable yet. If Rust fails, ownership-management fails, too, at least in terms of mainstream adoption, so your intellectual investment in that paradigm begins to look like getting taken for a ride by a silver-tongued scammer. OTOH, if Go fails, nobody’s going to Seriously Reevaluate the value of GC’d programming languages.

                                                                              1. 5

                                                                                Don’t worry, linear types failed many times before they had their first glimpse of mainstream success in Rust. ;)

                                                                                And so did GC, it failed even more times before it finally succeeded.

                                                                                1. 2

                                                                                  I’m not waiting another 10-20 years.

                                                                              2. 4

                                                                                It’s interesting that you say this, as I’ve actually found the community very approachable. Almost every major project has a Discord (pour one out for IRC…) and I’ve yet to have a really negative experience in one of them. I contrast this with older communities like the extremely toxic C and Linux IRC channels, where it was more of a surprise not to have a bad experience (at least as a beginner). General discussion forums vary, of course, particularly on sites like Reddit, but I’ve also never seen anything in Rust to rival the little nucleus of horrible people who floated around the Haskell and Scala communities for a long time. Even on Github issues, I’ve had pretty good reception and discussion, whereas in more mainstream communities like Python or Javascript I’ve seen (or been on the receiving end of) very negative behavior.

                                                                                The only really bad community behavior I’ve seen was surrounding actix, where I actually think the community was too nice. The maintainer’s behavior was extremely abrasive and negative, but people in the community acted like the reactions to this toxicity were themselves problematic. It reminded me of someone getting suspended from school for punching their bully.

                                                                                1. 5

                                                                                  Rust is very interesting in that – in terms of community – there is a massive delta between those who actually use it and those who are hobbyists or, even more strangely, non-user evangelists.

                                                                                  Those I have spoken to / worked with who either touch Rust core or ship real-world code have been extremely kind, aware of language pros and cons and more than willing to discuss both.

                                                                                  But, as you go outside of that group, you reach a group of people who are borderline non-users. These people have never shipped Rust code anywhere, open-source or elsewhere. Yet they will promote it as being world-saving or viciously attack those who dare slander it.

                                                                                  What makes me suspect Rust will overcome this is that delta: as the community of active users grows, hopefully the voices of those more on the fringes will become less noticeable or, in the best case, they will start to emulate those they respect in the community.

                                                                                  1. 1

                                                                                    That’s interesting, and would explain my experience. I very rarely interact with “fringe” communities, instead primarily interfacing with the community as a question of practice while writing code (e.g. if I find a bug or non-obvious behavior in a library, I go and ask about it). In your assessment I would therefore be interacting only with the “core” of the community, and avoiding the “fringe.” Hopefully this core – which, again, I’ve had nothing but positive experiences with – can overcome whatever horribleness newcomers are encountering at the edges.

                                                                                    1. 1

                                                                                      Well – the risks are two-fold. The first is that the “fringe” in this case might well be the majority (in raw numbers). The second is that the “fringe” is often the first point of contact someone has with the community as they explore the idea of using Rust. Unfortunately, many people first explore a language not via important projects, but via subreddits and various Medium articles about the language.

                                                                                      1. 1

                                                                                        This honestly makes me think that we (e.g. people actually using Rust for things) need to write more articles focused on presenting Rust in a friendly/conceptual way besides just the usual beginner texts. It’s hard, though, because I think there’s a kind of analogue of the “explaining monads” problem: Rust changes the way you think about programming somewhat, but there isn’t a real language of practice around it yet. Developing those metaphors will be a challenge.

                                                                                        1. 2

                                                                                          I think you nailed it. The practice of being a Rust developer is still rather new. Additionally, Rust’s differences from other languages make it hard to tell someone to just “dive in and get involved” with a project they like. It can feel absolutely intractable at first. That is a benefit that Go tends to have – you find an interesting project, jumping in is easy, and the distance from completely lost to enlightened tends to be a fairly straight line.

                                                                                2. 2

                                                                                  I’m not so blind as to confuse the arrogance of Rust users with a flaw in the language itself

                                                                                  that’s kinda like a foundational line of thinking in media studies in general; the notion that the specific affordances of a medium affect the users of that medium, and that the affordances of the medium are expressed through the content made in that medium. Hence the saying “the medium is the message”. McLuhan takes it pretty far (much farther and more literally than I would be thrilled to support), but the idea that a specific medium (a programming language, in this case) has a character and has its own message is … a widely studied concept worthy of consideration, and shouldn’t be disposed of so casually. For programming languages, the question then becomes: what is the language’s character and what is expressed by the language itself?

                                                                                  1. 1

                                                                                    This line of thinking permeates HCI, even back when it was just “human factors” research. But the oldest and perhaps deepest form I know is the venerable and well-worn Sapir-Whorf hypothesis. It’s a difficult subject to study rigorously. Experiments are almost impossible to conduct, factors hard to control for… but there is nonetheless a rich body of perhaps-not-entirely-useless scholarship.

                                                                                    One need not be a scholar to observe that language and culture are everywhere intertwined. Although the diversity of “natural” human languages has suffered an epoch of mass extinction, we computer people are living in a tiny but ongoing explosion of artificial (and thus perhaps even more human) linguistic diversity, and inevitably dragging in all our predilections for cliques and status games, to enact amidst all the mathematical and mechanical concerns. Perhaps someday there will be a true sociolinguistics of programming languages. As it stands now, we barely have even scraps of historical consciousness… like this joking-not-joking article.

                                                                                3. 6

                                                                                  There’s plenty of people … who are “advocating for Rust” in a way that makes Rust advocates seem unhinged

                                                                                  that’s increasingly my experience of lobste.rs

                                                                                4. 2

                                                                                  In any case, I don’t think the object-level point this essay is making holds. In general, I’m skeptical of arguments saying that “such and such language community is arrogant, therefore Y”.

                                                                                  I think the original talk was using the demise of Ruby as a pretense to talk about professionalism in programming rather than earnestly attempting to analyze the demise of a language. Professionalism - Martin’s version of it - is something that he likes to pontificate about.

                                                                                1. 9

                                                                                  I would say especially in Go, since Go makes concurrency pretty hard compared to systems I’m used to…

                                                                                  1. 11

                                                                                    What systems are you used to?

                                                                                    1. 6

                                                                                      He’s probably thinking of Haskell, maybe secondarily Scala.

                                                                                      I mostly use Haskell, Rust, and Java and I have to concur.

                                                                                      1. 2

                                                                                        Yeah, Rust+Tokio is also pretty good.

                                                                                      2. 4

                                                                                        If I want to do concurrency I’ll always reach for Haskell, but also comfortable in Ruby+EventMachine

                                                                                        1. 2

                                                                                          This is super confusing to me. What do you use concurrency for?

                                                                                          1. 1

                                                                                            High-throughput network servers and clients, mostly.

                                                                                            1. 2

                                                                                              I think you might be the first person I’ve ever encountered who defaults to Haskell for network servers. This isn’t a criticism in any way, just an expression of mild astonishment.

                                                                                              1. 2

                                                                                                I certainly didn’t use to, but at this point I haven’t been able to find something else that even comes close in terms of concurrency abilities, especially with the GHC runtime. In something like Ruby+EventMachine or Rust+Tokio you have to manage your async much more explicitly, whereas in GHC Haskell all IO operations are async all the time within the idiomatic programming model. With lower-level systems like Go, you can have thread-safety problems and non-atomic operations, whereas in Haskell all IO operations are atomic (unless you use the FFI in unsafe ways) and of course most code is pure and has no possible thread-safety problems at all.

                                                                                                Probably more reasons, but that’s what comes to my mind.
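
                                                                                                To make that contrast concrete, here is a minimal sketch of the “explicit” style for Rust+Tokio (assuming the tokio crate with its default networking and runtime features; not anyone’s production code). Every suspension point has to be spelled out with .await and each connection is handled on an explicitly spawned task, whereas idiomatic GHC code just forks lightweight threads and writes ordinary-looking blocking IO:

// Cargo.toml (assumed): tokio = { version = "1", features = ["full"] }
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Binding and accepting are both explicit suspension points.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut socket, _addr) = listener.accept().await?;
        // Each connection runs on an explicitly spawned task.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            // Echo bytes back until the peer closes or an error occurs.
            while let Ok(n) = socket.read(&mut buf).await {
                if n == 0 {
                    break;
                }
                if socket.write_all(&buf[..n]).await.is_err() {
                    break;
                }
            }
        });
    }
}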

                                                                                                1. 1

                                                                                                  What kind of RPS and p99 latency do you get with a Haskell service serving HTTP and doing some nontrivial work per request?

                                                                                                  1. 1

                                                                                                    Looks like the Haskell web server (warp) comes in at 20th on a webserver benchmark from four months ago.

                                                                                                    At my last job I did a lightning talk on Python vs Haskell for a simple webapp. I wanted to focus on simplicity of the code, but my coworkers wanted to see benchmarks. Haskell was much faster than Python for 99% of the requests, though laziness and garbage collection made the slowest fraction of 1% of responses slower than Python’s. Python was slow, but consistently slow.

                                                                                                    1. 1

                                                                                                      Hmm, I have done some Haskell HTTP stuff, but not for high performance. If you’re really curious about HTTP I’d look up warp benchmarks.

                                                                                                      1. 1

                                                                                                        OK, then whatever you’ve done: I’m just trying to get a sense of Haskell’s ballpark.

                                                                                                  2. 1

                                                                                                    I’m also a fan, lots of benefits to doing network servers in Haskell.

                                                                                                    Perhaps this tour of Go in Haskell would help illustrate some benefits?

                                                                                        1. 1

                                                                                          This is not the fully edited and final version of the text btw. Still usable, just be aware that there are errors.

                                                                                          e.g. “Efficient compilation of pattern-matching” and “Transforming the enriched Lambda Calculus” in the table of contents are both numbered 5.

                                                                                          1. 5

                                                                                            That copy is a highly compressed djvu to pdf conversion and that error is actually a compression artifact from the djvu version, see: https://en.wikipedia.org/wiki/JBIG2#Disadvantages

                                                                                            For a copy without the compression artifact (but also uncropped), see: https://www.microsoft.com/en-us/research/publication/the-implementation-of-functional-programming-languages/

                                                                                            1. 1

                                                                                              That’s really interesting, I had no idea! Thank you.

                                                                                          1. 1

                                                                                            I mean, there are enough Linux distributions trying to be Solaris already. A different take would be welcome.

                                                                                            1. 9

                                                                                              If we’re gonna open that can of worms I’d also assert that rather than creating new distributions it’d be awesome if folks would pour their efforts into improving one of the umpteen skittledezillion that already exist.

                                                                                              However, that’s the beauty of open source right?

                                                                                              1. 5

                                                                                                The beauty and the tragedy of open source.

                                                                                                1. 4

                                                                                                  I don’t like distro proliferation, either.

                                                                                                  But this doesn’t fit the “New DE with distro centered around it” pattern nor the “Let’s make another Debian/Arch derivative and market it as user friendly so that some suckers donate to us” pattern.

                                                                                                  This one seems to have some merit to it, by going llvm/libc++/musl.

                                                                                                  1. 1

                                                                                                    Why not, say, an Ubuntu variant though? Or a spin of Fedora?

                                                                                                    SOMETHING to help these new developments broaden an existing community.

                                                                                                2. 2

                                                                                                  Which ones? I’m curious.

                                                                                                  1. 1

                                                                                                    That was poorly phrased. I was thrashing around for the idea that there are a lot of distributions trying to do the same thing, not so much that they’re trying to be like Solaris.

                                                                                                1. 49

                                                                                                  I saw this described on the IRC channel as ‘what-color-are-your-underpants threads’ - lots of easy engagement stuff, crowding more interesting stuff off the front page. My perception is that there is now a lot less of the stuff that differentiated lobste.rs from the other hundredty-dillion tech sites - it was good at bridging computer-science-proper topics and real applications, e.g. someone’s experience report of using formal verification in a real product, or how property testing uncovered some tasty bug in some avionics, or how to synthesize gateware with Z3. That sort of thing.

                                                                                                  It doesn’t have to be the case that underwear-threads exist at the cost of quickcheck threads, but as they increasingly crowd the front page and stay there, it means the slightly more cerebral stuff has less chance to be seen, and new people get a different perception of what lobste.rs is about, and so the tone of the place gradually shifts. Some people might think that’s fine, I think it’s a shame. Differentiation is good.

                                                                                                  As for ‘if it gets upvotes then by definition it belongs’, I’ve always thought that ‘just leave it to the market’ attitude is total and utter cow-dung. Of course there should be regulation. If you applied that confusion everywhere you’d have sport be replaced by gladiatorial combat, McDonalds purchasing every farm until that was all you could eat, and other kinds of dystopia that unfortunately some americans are beginning to suffer a flavor of (choosing between insulin and death, $1000 toilet roll…). There is nothing inevitable about letting upvotes decide the tone of the site, it’s not a fundamental physical force. You’re allowed to intervene, complain, and so on. It should be encouraged, I think.

                                                                                                  1. 21

                                                                                                    crowding more interesting stuff off the front page

                                                                                                    Come on, there’s very rarely more than one of these threads on the front page, how is that crowding?

                                                                                                    1. 7

                                                                                                      Well, I counted three at one point today, which is over 10% of the front page. I’d like to nip this virus in the bud! It’s too facile to make corona references, but regardless, we can go from ‘15 deaths out of 300 million people, no big deal’ to We Have A Problem in a fairly short space of time.

                                                                                                      One of the more useful and formative talks I watched when helping to start my business was Ed Catmull [computer graphics pioneer and former Pixar president]’s talk entitled ‘Keep your crises small’, in which he makes the case that businesses are fundamentally unstable and it’s especially hard to notice the bad stuff during periods of high growth or profitability. He contends that you must always have your hand on the tiller and make steering corrections before problems get too big. I see an analogous situation on lobste.rs.

                                                                                                      Look at my posting history here. It’s crap. I am a consumer and not a contributor. I have no right to voice my opinion really because I have not done my bit to try and steer lobsters in the direction I want. I am a mechanical engineer with no formal CS background and I stayed here merely because I learned a great deal, and my industry is one built on MScs and PhDs committing abominations in Excel and Matlab, in which a bit of solid CS and solid industrial best-practice would reduce the friction in aerospace R&D by an order of magnitude. It took me five years to get one of my customers to switch to python. Now one of them is using Hypothesis (!) and advocating its usage more widely in a reasonably large aerospace company. I am a True Believer in the value of Advocating the fruits of Computer Science in a field where most participants think the low hanging fruit lies elsewhere. All I’ve been doing is sharing the good stuff that lobsters introduced me to. And this is why I lament the fact that it’s being muscled out by what vim colorscheme do we all prefer, and why I therefore am moved to leave a comment like the grandparent.

                                                                                                      Yes, I will make more effort to upvote and comment on the bits of lobsters I value from now on.

                                                                                                    2. 11

                                                                                                      is that there is now a lot less of the stuff that differentiated lobste.rs from the other hundredty-dillion tech sites

                                                                                                      There is and it’s due to less demand. What the audience wants has changed. I was still doing submissions like you described. They rarely hit the front page. The things getting upvoted were a mix of Lobsters-like content and stuff that gets high votes on other sites. Sometimes cross-posted from those sites. I stopped submitting as much for personal reasons (priority shift) but lack of demand/interest could’ve done it by itself.

                                                                                                      1. 8

                                                                                                        I stopped submitting as much for personal reasons (priority shift)…

                                                                                                        For what it’s worth, I noticed that you have been posting less. Hope all is well.

                                                                                                        1. 12

                                                                                                          I’ll message you the details if you want. It’s been a trip with me back to the Lord, surviving some Gone Girl shit, and facing the COVID workload right after. Right now, I’m focused on fighting COVID and the problems it causes however I can. A city-wide shortage of toilet paper, soap, cleaners, etc., and nurses having no alcohol/masks made me source them first. Gotta block hoarders and unscrupulous resellers, though.

                                                                                                          Gonna have to find some web developers who can build a store or subscription service. Plan to let people pick up limited quantities that I order in bulk and resell just over cost. Might also scan what’s in local stores to reduce people’s time in them. After that, maybe a non-profit version of InstaCart with advantages that may or may not be easy to code. Got an anti-microbial scanner on the way for whatever.

                                                                                                          Once everything settles, I’ll get back to my security projects. I just go where I’m needed the most. Worse, people aren’t social distancing here: enveloping around me constantly. COVID can kill me. So, I’m tired after work from holding my breath and dodging people up to 14hrs a day. Had no energy for doing CompSci papers either.

                                                                                                          So, there’s a brief summary of some things I’ve been up to, for anyone wondering.

                                                                                                          1. 4

                                                                                                            I’m sorry to hear that. I assumed that you must be busy with other stuff or taking a break, but I wouldn’t have guessed how hard of a time you were having. I hope that things start looking up for you soon.

                                                                                                            1. 4

                                                                                                              I really appreciate it. Everyone supporting these comments, too, more than I thought I’d see. I’m good, though. (taps heart) Good where I need to be.

                                                                                                              The possibilities and challenges do keep coming, though. Hope and pray those of us fighting this keep making progress both inside ourselves and outside getting things done in the real world. I’ll be fine with that result. :)

                                                                                                        2. 2

                                                                                                          Speaking of which, where do you find your papers?

                                                                                                          1. 6

                                                                                                            I applied old-school methods of using search engines to paper discovery. I pay attention to good papers that cite other work. Each sub-field develops key words that are in most of the papers. I type them into DuckDuckGo and/or Startpage with quotation marks followed by a year. Maybe “pdf” with that. This produces a batch of papers. I open all of them glancing at abstracts, summaries, and related work. I’ll go 5-10 pages deep in search results. I repeat the process changing the terms and years to get a batch for that sub-field. Then, I used to post the better ones one by one over a week or two. Then, do a different sub- or sub-sub-field next.

                                                                                                            The Lobsters didn’t like seeing it that way. Too much on the same topic. So, I started getting the batches, saving them in a file, batch another topic when I have time/energy, and trickling out submissions over time with varying topics. So, I might have 50-100 papers across a half dozen to a dozen topics alternating between them. I just pick one, submit it, and mark it as submitted. Eventually, when there’s not much left, I would just grab some more batches.

                                                                                                            1. 2

                                                                                                              Wow that’s amazing! Thank you so much for doing this! I’ve seen some really nice papers here but I didn’t realize there would be this kind of work behind the posting.

                                                                                                              1. 2

                                                                                                                I thought people like you just happened to read an enormous amount.

                                                                                                                Equally impressed now, just for a different reason.

                                                                                                          2. 2

                                                                                                            I get the idea of that. I think that what makes the distinction between good and not-so-good ask threads is the length of the responses. For “share your blog” - what is there to say except a link to your blog? I didn’t bother looking at that one. On the other hand, the distro question generated a ton of long responses about various Linux distros and the pros and cons thereof, interesting stuff. I wonder if there’s some way we could discourage short responses on ask threads, or ask thread topics that tend to generate short responses.

                                                                                                            1. 1

                                                                                                              Of course there should be regulation. If you applied that confusion everywhere you’d have sport be replaced by gladiatorial combat, McDonalds purchasing every farm until that was all you could eat, and other kinds of dystopia that unfortunately some americans are beginning to suffer a flavor of (choosing between insulin and death, $1000 toilet roll…).

                                                                                                              It’s not that I’m looking forward to opening a discussion about this topic, but are you sure this would be the case? Lots of pathological actions done by monopolies are the result of regulating the market in a way which effectively removes competition, leaving the power to the monopolies (in fact, lots of megacorporations that exist nowadays wouldn’t be able to grow to such sizes if it wasn’t for the help from the government). I wouldn’t be so sure that the lack of regulations is the main problem.

                                                                                                            1. 11

                                                                                                              It’s curious how, of the successful Haskell projects in the wild, few if any go crazy with the type system.

                                                                                                              (This observation is based primarily on PostgREST and Pandoc, and now ShellCheck. GHC might be a counter-example.)

                                                                                                              1. 3

                                                                                                                It’s curious how, of the successful Haskell projects in the wild, few if any go crazy with the type system.

                                                                                                                Considering how popular e.g. lens, servant, and aeson are, I’m not sure this follows. What are your criteria for being a “successful Haskell project in the wild?” What’s your definition of “go crazy with the type system?”

                                                                                                                1. 6

                                                                                                                  Those are libraries though. PostgREST, Pandoc and ShellCheck are not meant to be used only by Haskell developers.

                                                                                                                  1. 0

                                                                                                                    So, let’s say “user-facing apps” then. Now what was meant by “go crazy with the type system” I wonder?

                                                                                                                  2. 4

                                                                                                                    In what sense does Aeson go crazy with the type system?

                                                                                                                1. 69

                                                                                                                  So, I love Rust, and all of the nice things they say about Rust are true. Having said that, I’m now going to completely ignore the Rust angle and focus on something else that occurred to me.

                                                                                                                  To summarize Discord’s problem - after extensively optimizing a Go service to produce minimal garbage, they found that the garbage collector was still triggering every two minutes (which appears to be a hard minimum frequency) regardless of the lack of garbage produced. Further, each GC sweep was expensive because it had to walk the entire heap full of live objects.

                                                                                                                  The interesting point to me is that this use case (latency-sensitive service with extremely low rates of garbage production) was pathological for a tracing GC, and that optimizing it to produce less garbage made it even more so. Tracing collectors operate on every live object and ignore dead objects, so a heap full of live objects and very few dead ones is a bad fit for a tracing collector. They solved their problem by switching to a reference counting system (well, “reference counting” where everything has exactly one owner and so you don’t actually need to count). Reference counting ignores live objects and operates on dead ones, so of course it would be a better fit for this service. If Go had a collector based on reference counting they probably could have gotten much of the same benefit without rewriting.

                                                                                                                  This reminded me of “A Unified Theory of Garbage Collection” by Bacon et al., but it hadn’t occurred to me before how optimizing the app to produce less garbage could make the GC’s job harder in some ways. It’s still better to reduce garbage production than to not do so, but it may not give as much benefit as one might expect because of this.
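
                                                                                                                  A toy Rust sketch of that “exactly one owner” point (nothing to do with Discord’s actual code): long-lived data is simply owned, short-lived allocations are freed deterministically the moment their owner goes out of scope, and nothing ever has to walk the live heap:

use std::collections::HashMap;

// A large, long-lived structure. A tracing GC would have to walk all of
// this on every cycle even though none of it is garbage; with single
// ownership it just sits there until its one owner is dropped.
struct Cache {
    entries: HashMap<u64, Vec<u8>>,
}

fn handle_request(cache: &Cache, key: u64) -> usize {
    // A short-lived allocation: freed deterministically when `scratch`
    // goes out of scope at the end of this function.
    let scratch: Vec<u8> = cache.entries.get(&key).cloned().unwrap_or_default();
    scratch.len()
}

fn main() {
    let mut cache = Cache { entries: HashMap::new() };
    cache.entries.insert(1, vec![0u8; 1024]);
    println!("{} bytes", handle_request(&cache, 1));
} // `cache` is dropped here: its single owner (main) went out of scope.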

                                                                                                                  1. 3

                                                                                                                    They solved their problem by switching to a reference counting system (well, “reference counting” where everything has exactly one owner and so you don’t actually need to count). Reference counting ignores live objects and operates on dead ones, so of course it would be a better fit for this service.

                                                                                                                    Aside from your wider point, it’s a little more subtle than that, because of abstractions like Rc, which gives a counted reference to a value, meaning multiple references. There’s also Arc, which is an atomic reference counter for use across multiple threads. The first simple Rust program I wrote, I was guided to using Rc, so it’s not even uncommon. Without seeing their code, I’m willing to bet there are plenty of such cases in it.
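
                                                                                                                    For anyone who hasn’t used them, a minimal sketch of the distinction (illustrative values only, not from the Discord code): Rc is a cheap, single-threaded reference count, while Arc uses atomic counting so clones can cross threads:

use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: non-atomic count, usable from one thread only.
    let shared = Rc::new(vec![1, 2, 3]);
    let another_owner = Rc::clone(&shared); // bumps the count to 2
    println!("rc strong count = {}", Rc::strong_count(&another_owner));

    // Arc: atomic count, so a clone can be moved into another thread.
    let config = Arc::new(String::from("listen=127.0.0.1:8080"));
    let for_worker = Arc::clone(&config);
    let worker = thread::spawn(move || println!("worker sees {}", for_worker));
    worker.join().unwrap();
    println!("main still sees {}", config);
}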

                                                                                                                    1. 6

                                                                                                                      The first simple Rust program I wrote, I was guided to using Rc, so it’s not even uncommon.

                                                                                                                      Do you mind sharing what you were trying to do? I’ve been writing Rust for a long time now, and I can count on one hand the number of times I’ve needed Rc. I’ve used Arc a fair number of times though. Still, I’d probably call both pretty uncommon. But there are certainly types of programs where they may be more common.

                                                                                                                      1. 4

                                                                                                                        I’m currently making a game engine in Rust (rewriting my old Idris stuff) and I use it all the time, from day one. Some of it may be due to the problem at hand necessitating it, but some of it is surely my lack of experience in Rust. I think some of the problems might be solved with a more refined use of lifetimes… but I’ve been burned by structs+lifetimes before, so I’d rather opt for something I have a better grasp of, even if it’s a more inelegant solution.

                                                                                                                        For example, my game has a Timeline object, which is basically the central source of truth about important game data (stuff that has to be saved). But it’s not a regular field, it’s an Rc, because I need to share it with Scene objects (which actually run the game logic). I could make a complex messaging system to telegraph the state changes between Server and multiple Scenes but again… I don’t really wanna.
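
                                                                                                                        A guess at the shape of that pattern (the Timeline and Scene names come from the comment above; the code itself is only an illustrative sketch, not the actual engine):

use std::cell::RefCell;
use std::rc::Rc;

// Central, savable game state shared by everything that mutates it.
struct Timeline {
    score: u32,
}

// Each scene holds a shared handle instead of threading lifetimes
// through every struct that needs access.
struct Scene {
    timeline: Rc<RefCell<Timeline>>,
}

impl Scene {
    fn run(&self) {
        // Borrow mutably at runtime; panics if another borrow is live.
        self.timeline.borrow_mut().score += 1;
    }
}

fn main() {
    let timeline = Rc::new(RefCell::new(Timeline { score: 0 }));
    let scenes = vec![
        Scene { timeline: Rc::clone(&timeline) },
        Scene { timeline: Rc::clone(&timeline) },
    ];
    for scene in &scenes {
        scene.run();
    }
    println!("score = {}", timeline.borrow().score); // prints "score = 2"
}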

                                                                                                                        1. 2

                                                                                                                          Yeah I’ve never made a game engine, so it’s hard for me to know whether Rc is truly beneficial there. In any case, I’m mostly just trying to push back against the notion that reference counting is common in Rust code. (And specifically, Rc.) I was just very curious about the “first simple Rust program” that someone wrote where they were guided towards using Rc.

                                                                                                                          This is important because if reference counting were very common, then that would diminish the usefulness of the borrow checker. e.g., “What good is the borrow checker if you wind up needing to use reference counting so much?” Well, you don’t wind up needing to use reference counting a lot. There are of course many cases where reference counting is very useful, but that doesn’t mean it’s common among the entire body of Rust code.

                                                                                                                        2. 1

                                                                                                                          Just as an off-hand example from my experience: you basically can’t get anything done with GTK and Rust without Rc. Cf. https://github.com/bitemyapp/boxcar-willie/blob/master/src/main.rs#L108

                                                                                                                          I wrote boxcar-willie with assistance from the gtk-rs people.

                                                                                                                          Some common web-app stuff will force you into that too.

                                                                                                                          There are other situations and libraries that force it, but these are the ones that come to mind from my own background. GUI apps and web apps already touch >80% of programmers.

                                                                                                                          1. 2

                                                                                                                            What parts of web apps use Rc in Rust? There is more nuance to this. A better metric might be “of all the types you define in your project, what proportion of them use reference counting?” If you have to have one reference-counted type among dozens of other non-reference-counted types, then I’d say it’s pretty uncommon. For example, if most web apps have a database handle, and that database handle uses an Arc to be efficiently shared between multiple threads simultaneously, and since database handles are pretty common in web apps, would you then conclude that “reference counting is common” in Rust? I don’t think I would. Because it doesn’t pervade and infect everything else in your code. There’s still going to be a lot of other stuff that doesn’t use reference counting at all.

                                                                                                                            The GTK case is indeed known, and was on my mind when writing the above comments. But it’s not clear to me that this is a GTK problem or whether it generalizes to “GUI apps.”
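
                                                                                                                            To illustrate the database-handle case from the comment above, here is a sketch using plain threads to stand in for a framework’s request handlers (the DbPool type is hypothetical): the pool is the one Arc-shared value, and everything else in the handler is ordinary owned data:

use std::sync::Arc;
use std::thread;

// Hypothetical stand-in for a connection pool from some database crate.
struct DbPool {
    url: String,
}

impl DbPool {
    fn query(&self, sql: &str) -> String {
        // A real pool would talk to the database; this just echoes.
        format!("ran `{}` against {}", sql, self.url)
    }
}

fn handle_request(pool: Arc<DbPool>, user_id: u32) -> String {
    // Only the pool is reference counted; the rest is plain owned data.
    pool.query(&format!("SELECT name FROM users WHERE id = {}", user_id))
}

fn main() {
    let pool = Arc::new(DbPool { url: "postgres://localhost/app".into() });
    let workers: Vec<_> = (0..4)
        .map(|id| {
            let pool = Arc::clone(&pool);
            thread::spawn(move || handle_request(pool, id))
        })
        .collect();
    for worker in workers {
        println!("{}", worker.join().unwrap());
    }
}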

                                                                                                                            1. 1

                                                                                                                              Well usually it’d be an Arc, particularly in cases where the framework doesn’t provide a way to share data between request handlers.

                                                                                                                              I was just proffering where I’d run into it. I’m not trying to make some kind of polemical point. I rather like using Rust.

                                                                                                                              that database handle uses an Arc to be efficiently shared between multiple threads simultaneously, and since database handles are pretty common in web apps, would you then conclude that “reference counting is common” in Rust?

                                                                                                                              I’m speaking to people’s subjective experience of it and how they’re going to react to your characterization of it being rare. We’re not taking a pointer head-count here. You get someone comfortable with but new to Rust, have them spin up a few typical projects, and they’re going to say, “but I kept running into situations where I needed ${X}”, and it doesn’t feel rare because it occurred at least a couple of times per project. I’m personally very comfortable and happy with the performance implications of a handful of reference-counted pointers and everything else being automatically allocated/de-allocated on the stack or heap. That being said, if you use wording like you used above, you’re going to continue to elicit this reaction if you don’t qualify the statement.

                                                                                                                              Edited follow-up thought: I think part of what’s needed here perhaps is an education push about Rc/Arc, their frequency in Rust programs, when and why it’s okay, and how it isn’t going to ruin the performance of your program if a few are floating around.

                                                                                                                              1. 2

                                                                                                                                My initial reaction was to the use of Rc. If they had said Arc, I probably would not have responded at all.

                                                                                                                                1. 1

                                                                                                                                  I apologize for communicating and interpreting imprecisely. I mentally glob them together.

                                                                                                                                  I think GTK is in fact the only time I’ve really used Rc. Everything else has been Arc I’m pretty sure!

                                                                                                                      2. 1

                                                                                                                        The interesting point to me is that this use case (latency-sensitive service with extremely low rates of garbage production) was pathological for a tracing GC, and that optimizing it to produce less garbage made it even more so. Tracing collectors operate on every live object and ignore dead objects, so a heap full of live objects and very few dead ones is a bad fit for a tracing collector.

                                                                                                                        I wouldn’t draw a general conclusion from the behavior of the Go garbage collector.

                                                                                                                        Optimizing a system to produce less garbage is a standard optimization technique for the JVM and .NET.
                                                                                                                        It is effective on these platforms because they both use generational garbage collectors.
                                                                                                                        Long-lived or large objects are traced rarely or not at all.
                                                                                                                        The LMAX Disruptor, for example, allocates all memory at startup.

                                                                                                                        This technique isn’t effective for Go because the Go garbage collector forces a collection every 2 minutes.

                                                                                                                        Go uses a non-moving, non-generational GC.
                                                                                                                        Here are some of the tradeoffs of this design:

                                                                                                                        • Non-moving - objects are never moved in memory and the heap is not compacted.
                                                                                                                          Interoperability with C is easier but heap fragmentation is a potential risk.
                                                                                                                          Memory usage is lower than with a generational GC.
                                                                                                                        • Non-generational - Generational garbage collectors can scan only recently allocated objects while ignoring old large objects. Most objects die young so this is often beneficial.
                                                                                                                        • Pause times may be lower than a generational GC.
                                                                                                                        • Lower implementation complexity.

                                                                                                                        Neither design is right or wrong but I would be leery of using Go for a high performance system.

                                                                                                                      1. 27

                                                                                                                        Was anyone else surprised Bram works on Google Calendar?

                                                                                                                        1. 14

                                                                                                                          I’ve been using Vim for almost 20 years and had absolutely no idea (1) what Bram looked like or (2) that he worked for Google.

                                                                                                                          1. 3

                                                                                                                            Definitely.

                                                                                                                            Though I shouldn’t be, it seems like they hired a ton of the previous generation of OSS devs: thinking of things like vim, afl (uncertain, though the repo is under Google’s name now), kismet, etc.

                                                                                                                            1. 2

                                                                                                                              It’s just not what I would’ve guessed would be the highest and best use of his talents.

                                                                                                                              I’m not saying I believed he was working on vim, I know better than that. I’m just surprised it was something so…ordinary and corporate.

                                                                                                                            2. 3

                                                                                                                              Yes! And that he sounds as if Google is still a start-up and not one of the biggest companies in the world. Had to check the date of the article. Of course it doesn’t feel like a startup, Bram…

                                                                                                                              1. 2

                                                                                                                                Maybe he means Google Zurich, which seems to have expanded by a lot lately?

                                                                                                                              2. 2

                                                                                                                                Me, honestly.

                                                                                                                              1. 6

                                                                                                                                The post author does not mention it, but there is also Haskell Programming From First Principles, which she co-authored with Chris Allen. Many beginner learners say good things about the book. Some, like me, who appreciate concise writing found it lacking. In the end, whatever book you use, you won’t make much headway in Haskell (or any radically new tech, for that matter) without actually trying to develop projects in it. That’s how we learn. Hands-on work.

                                                                                                                                There is also Real World Haskell. Although it is a bit outdated (but being revived by volunteers), it contains great practical recipes.

                                                                                                                                1. 9

                                                                                                                                  Totally agree with your point about actually getting your hands dirty. Haskell is no different from any other language in that regard. You’ll get nowhere simply flirting with the language and pondering poor monad analogies.

                                                                                                                                  The post author does not mention it

                                                                                                                                  I think there’s a reason for that, though I hope this thread doesn’t descend into an argument about that bit of history.

                                                                                                                                  1. 3

                                                                                                                                    Huh, did the two coauthors of that book have a falling out after it was published? I read it myself and liked it well enough, although I found it aimed at a level of Haskell understanding a little more basic than my own at the time I read it.

                                                                                                                                    1. 5

                                                                                                                                      Yes, though as I said I hope this thread doesn’t turn into all of us discussing it. There are statements from the authors and other discussions elsewhere, but let’s all leave it at that.

                                                                                                                                      Instead, we can talk about Julie’s subsequent work.

                                                                                                                                      I bought an early access copy of Finding Failure (and Success) in Haskell and I thought it was really good, especially for people new to the language. The exercises are practical, and help you understand the why behind the what. Motivating examples are so important. Otherwise, I think most people who see a tutorial like “Here’s how monad transformers work” would be like “Ok? But so what?”
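                                                                                                                                      To make that concrete (this is just a toy sketch of mine, not an example from the book; the file name and messages are invented): one motivating use of a transformer is combining “this step can fail with a message” with IO, so the happy path reads straight down instead of nesting case expressions.

                                                                                                                                      import Control.Monad.Trans.Class (lift)
                                                                                                                                      import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
                                                                                                                                      import Text.Read (readMaybe)

                                                                                                                                      -- Read a port number from a config file: each step needs IO *and*
                                                                                                                                      -- can fail with a message. ExceptT String IO handles both at once.
                                                                                                                                      readPort :: FilePath -> ExceptT String IO Int
                                                                                                                                      readPort path = do
                                                                                                                                        contents <- lift (readFile path)                   -- plain IO, lifted into the transformer
                                                                                                                                        case readMaybe contents of
                                                                                                                                          Nothing -> throwE ("not a number: " ++ contents) -- short-circuits everything after it
                                                                                                                                          Just n  -> pure n

                                                                                                                                      main :: IO ()
                                                                                                                                      main = do
                                                                                                                                        result <- runExceptT (readPort "port.txt")         -- unwrap back to IO (Either String Int)
                                                                                                                                        case result of
                                                                                                                                          Left err   -> putStrLn ("config error: " ++ err)
                                                                                                                                          Right port -> putStrLn ("starting on port " ++ show port)

                                                                                                                                      The “so what” is that once you chain two or three failure-prone IO steps, ExceptT does the Either plumbing for you and the code keeps reading like a recipe.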

                                                                                                                                      1. 2

                                                                                                                                        Chris Allen (the other co-author) has branched off on his own as well, looking to publish the “next in series” book, titled Haskell Almanac. Sadly, however, there has been no update on that book, just as there is none on the much-anticipated Intermediate Haskell. Luckily, there is Thinking with Types by the author of polysemy.

                                                                                                                                        As I see it, Haskell lacks intermediate-level books more than beginner books.

                                                                                                                                        1. 2

                                                                                                                                          The final release of Haskell Programming from First Principles now has the OK. I’m releasing it by the end of this month, and I’ll work on the print version after that. I have a printer ready to go that can do an offset print run; it’s just a matter of figuring out how many copies I should run and how to finance it. I have a climate-controlled storage unit ready for the books. I never found a suitable solution for third-party logistics, so my wife and I will be shipping all the print books ourselves.

                                                                                                                                          As I see it, Haskell lacks intermediate-level books more than beginner books.

                                                                                                                                          You’re right that this is the more immediate problem now. Over five years ago, when I started on HPFFP, making sure no beginner was left behind was the more pressing issue.

                                                                                                                                          I have work to do on https://lorepub.com before it’s ready to sell print books (roughly 1-3 days of coding and ops work from seeing the deployment for digital sales go live). Once the HPFFP print version is available for sale and that situation is stable, I’ll get back to the Almanac. After the Almanac, I’ll see whether I can be more productively employed as a publisher than as an author. I believe the process we hammered out for HPFFP can be applied well to other topics and educational goals.

                                                                                                                                  2. 4

                                                                                                                                    Speaking of getting your hands dirty… There is the https://github.com/qfpl/applied-fp-course/, where you actually build a small REST backend in Haskell. It’s a sort of fill-in-the-blanks course, with independent levels of increasing complexity. :) There’s a rough sketch of that flavour of code below the disclaimer.

                                                                                                                                    Disclaimer: I’m biased, I wrote it.
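
                                                                                                                                    To give a taste without spoiling any of the exercises: the snippet below is just an illustrative sketch of mine, not lifted from the course materials (the route, port, and strings are made up), but a tiny handler built directly on wai and warp looks roughly like this.

                                                                                                                                    {-# LANGUAGE OverloadedStrings #-}

                                                                                                                                    import Network.HTTP.Types (status200, status404)
                                                                                                                                    import Network.Wai (Application, pathInfo, responseLBS)
                                                                                                                                    import Network.Wai.Handler.Warp (run)

                                                                                                                                    -- A minimal wai Application: look at the request path and answer with
                                                                                                                                    -- plain text. Real handlers also check the method and decode the body,
                                                                                                                                    -- but the overall shape stays the same.
                                                                                                                                    app :: Application
                                                                                                                                    app req respond =
                                                                                                                                      case pathInfo req of
                                                                                                                                        ["hello"] -> respond (responseLBS status200 [("Content-Type", "text/plain")] "hello from Haskell")
                                                                                                                                        _         -> respond (responseLBS status404 [("Content-Type", "text/plain")] "not found")

                                                                                                                                    main :: IO ()
                                                                                                                                    main = run 8080 app -- serve on port 8080 until interrupted

                                                                                                                                    Everything past that is layering more structure onto the same request-in, response-out function.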