Threads for dark_grimoire

  1. 1

    This gets pretty complicated once there are additional requirements involved:

    • support for macOS :),
    • scaling issues, so this will need another layer, a scheduler, to be built on top of vmtest, and then the “host machine” would be just one host of many,
    • kernel panic handling in case there is kernel development going on,
    • not sure if it’s currently possible to write a test that requires restarting the VM?
    • some tests may require more than running a simple binary, so there should be a way of preparing the environment for the actual test (e.g. VMs with a particular Python version and set of packages installed, or with applications or libraries installed that conflict with our software, so we can test for that),
    • an automatic way of creating fresh VMs with a new OS version; for macOS this means a new VM is created for each new update, and on Linux it would probably mean a fresh copy of the kernel/distribution we want to test our software on.

    I’ve been maintaining a similar thing at my company for several years, but it only supports macOS and is built on top of VMware Fusion. It’s working, but it’s not great, e.g. because of bugs and limitations of Fusion (and it’s hard to find an employee who cares, but that’s a different story).

    1. 2

      Sanitizers are developer tools; they’re not designed with security in mind. I wouldn’t be surprised if some sanitizer one day introduced a feature that aids developers in finding bugs, but would be classified as a security hole if shipped in a “release” version of an application. I think using developer tools for security is a terrible idea.

      1. 4

        I feel like I might be taking this more seriously than the author intended but:

        Are there many mail clients that speak POP3 but not IMAP4? IMAP was standardised at around the time the Internet was opened to non-academic, non-government connections. Most of the older mail clients that I remember spoke weird and wonderful proprietary protocols.

        It feels like IMAP would be a good choice for this, since it’s authenticated and can handle posting, threaded replies, and so on. Even in-place editing.

        NNTP is an even older protocol and is very simple (I wrote a client in about 100 lines of C) and the model for NNTP is quite similar to ActivityPub (multiple servers, can post on your own, your posts are pushed to other servers, comments are threaded).
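        For a sense of how small the protocol surface is, here’s a rough sketch in Python rather than C. The host and group names are placeholders, and `fetch_article` is an untested sketch against a hypothetical server, but the line-parsing helpers show most of what a minimal reader needs:

```python
import socket

def recv_line(f):
    """Read one CRLF-terminated NNTP line from a binary file object."""
    return f.readline().decode("utf-8", "replace").rstrip("\r\n")

def parse_status(line):
    """Split a status line like '211 5 1 5 misc.test' into (code, rest)."""
    code, _, rest = line.partition(" ")
    return int(code), rest

def read_multiline(f):
    """Read a dot-terminated multi-line NNTP response body."""
    lines = []
    while True:
        line = recv_line(f)
        if line == ".":
            return lines
        # Dot-stuffing: a leading '..' encodes a literal leading '.'
        lines.append(line[1:] if line.startswith("..") else line)

def fetch_article(host, group, number, port=119):
    """Select a group and fetch one article (untested sketch;
    'host' is whatever NNTP server you have access to)."""
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rb")
        parse_status(recv_line(f))             # greeting, e.g. 200
        sock.sendall(f"GROUP {group}\r\n".encode())
        parse_status(recv_line(f))             # 211 count first last group
        sock.sendall(f"ARTICLE {number}\r\n".encode())
        code, _ = parse_status(recv_line(f))
        body = read_multiline(f) if code == 220 else []
        sock.sendall(b"QUIT\r\n")
        return code, body
```

        The whole protocol is line-oriented text like this, which is why a usable client fits in so few lines.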

        1. 4

          An AP to NNTP gateway would be cool (although I don’t think I have access to any textual NNTP servers any more) - could even add the reverse and use Usenet as a transport mechanism.

          It feels like IMAP would be a good choice for this

          From what I know, IMAP is a complex spec with a million gotchas that everyone implements slightly differently. POP3/NNTP are things you can lash up without going insane.

          1. 2

            An AP to NNTP gateway would be cool

            Years ago, I hacked together a NNTP gateway to Drupal. It reads articles and comments from the Drupal database and publishes them over NNTP. Posting new comments through this protocol is also possible.

            1. 2

              I’ve written a lobsters/reddit to NNTP proxy prototype (with posting support), but it quickly turned out that POP3/NNTP are not ideal protocols for this; IMAP would be better. The core reason is that NNTP doesn’t support syncing between multiple machines, and IMAP does.

              So I’m a little bit sad that MOP3 is not MIMAP.

              It’s true that one can create another layer with dovecot+fetchmail to convert NNTP/POP3 to IMAP, but this starts to get somewhat complicated.

            2. 3

              Oh yes, I forgot: I wrote a Lobsters to NNTP adapter a while back. I would not call NNTP simple - between the statefulness, MIME, and keeping a mapping between article IDs (which must be linear and monotonic) and your actual representation, it can be quite tricky to do right.
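              The ID-mapping concern can be sketched in a few lines of Python (hypothetical names; a real adapter would persist this state somewhere):

```python
class ArticleNumberMap:
    """Assigns each external story ID a stable, monotonically
    increasing NNTP article number, as the protocol expects."""

    def __init__(self):
        self.by_number = {}   # article number -> external ID
        self.by_id = {}       # external ID -> article number
        self.next_number = 1

    def number_for(self, external_id):
        """Return the existing number for an ID, or allocate the next one."""
        if external_id in self.by_id:
            return self.by_id[external_id]
        n = self.next_number
        self.next_number += 1
        self.by_number[n] = external_id
        self.by_id[external_id] = n
        return n
```

              Numbers must never be reused, even when an article disappears upstream, because clients remember the highest article number they have seen in each group.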

              1. 2

                Modern (past 15 years or so) GUI mail clients almost certainly speak IMAP.

                But terminal clients are probably another story – many of them historically have been developed by people who roll their eyes and sigh and re-explain “this is an MUA” over and over when asked about features like accessing mail stored on a remote server. Admittedly this was years and years ago, but the last time I tried mutt, even it didn’t have IMAP support and seemed like it had only grudgingly supported POP3.

                1. 1

                  Are there many mail clients that speak POP3 but not IMAP4? IMAP was standardised at around the time the Internet was opened to non-academic, non-government connections. Most of the older mail clients that I remember spoke weird and wonderful proprietary protocols.

                  A lot of 90s mail clients before ~1998 didn’t speak IMAP - POP was much more popular then, and mapped well to how dialup users used email.

                1. 11

                  What they never tell you is what they’re bound to. If they’re bound to my phone and I lose my phone I’m absolutely up seven ways to Sunday in trouble. How do I back them up? How do I recover them if my phone is stolen?

                  1. 9

                    Normal reply from Google, Apple, etc., would be that you should enable CloudSync(TM)(C)(R) and send all your personal information to their servers, so they can protect you from whoever might want to get your personal data. This is how you can be sure that even if you lose your phone, your passkeys(TM)(R)(C) will be safe*. (*See our EULA, part 17A, section 4.5.6.7, for our definition of passkey safety.)

                    1. 7

                      For Apple devices, Passkeys are stored in and synced by iCloud Keychain, which is end-to-end encrypted. (As far as I’m aware, it’s not possible to get iCloud Keychain not to use E2EE. Some categories of iCloud data are only encrypted if you turn on “advanced data protection,” but Keychain is not one of them.) I think Apple’s response would be that if you lose your iPhone, then what you need to do is to get a new iPhone, sign in to iCloud, let it sync, and then you have all of your passkeys again.

                      One counterargument to this is, “well, what about the time between losing my old phone and getting the new one?” I saw one Apple employee point out that if you had used a strong password—one that you were relying on a password manager to remember for you—then you’d be in the same situation. I’m not sure I totally buy that; at the moment, password managers (generally construed) run on a wider variety of computers than passkey-storing password managers. It’s entirely possible that you could lose your iPhone (thus losing access to your passkeys) but still have, you know, pass running on your Linux desktop, so that you’d still have access to your passwords. But over time I’d expect more and more systems to add support for storing passkeys.

                      1. 3

                        My understanding: you have a passkey per device. You log into your account with another device and revoke the passkey for your stolen phone. If you get the phone back, you re-enroll it with a new passkey.

                        1. 3

                          Presumably this means you log in to the new device with a password, so you still need passwords? (or some alternative authentication)

                      1. 2

                        Gradle sucks. But is there anything better?

                          1. 69

                            Not really. That one accepts everything automatically, which is pretty much the opposite of the right thing, especially because you’re accepting a full privacy policy, not just allowing cookies.

                            Consent-O-Matic is better: https://github.com/cavi-au/Consent-O-Matic

                            It also seems to be better than the feature described in the OP, which can only do “reject all” or fall back to “accept all” if you set it to do that (why???)

                            1. 20

                              I uninstalled “I don’t care about cookies” and switched to Consent-O-Matic when it got bought by Avast.

                              1. 11

                                I wish Mozilla would just mainline Consent-O-Matic; its behavior is a reasonable default.

                                I mean don’t get me wrong; I’m glad we have the ability to do this in extensions, but it’s a shame we have to.

                                1. 1

                                  Fair enough. Although I’m also using Cookie AutoDelete: https://chrome.google.com/webstore/detail/cookie-autodelete/fhcgjolkccmbidfldomjliifgaodjagh?hl=en, so I really don’t care if they gather my temporary one-time cookies :), I just don’t want to see these annoying cookie banners.

                                  1. 12

                                    Agreeing to random privacy policies means you’re agreeing to a lot more than just storing some cookies you can delete later. Cookies are not, and never were, the real problem.

                                    1. 1

                                      I think the “real problem” is different for different people.

                                      For me, the “real problem” is that if I cared about all the privacy policies of all the websites I visit each day, I wouldn’t have time to do any work or read any articles. Yesterday alone I visited 26 different websites, and looking at the domain names alone, for most of them I couldn’t even tell you why I was there. We can also argue about what the cookie-rejection action is actually defined to do, and how an “essential cookie, without which the website can’t work” works, because without knowing how the law defines these, we can’t make an informed decision about whether we want those things to happen or not.

                                      If we say that a cookie is required for basic use of the website, then what does that mean? To be sure, we’d have to read the official GDPR legal texts, and even once we gained that knowledge, we’d need to decode lots of legal documents that define how the law ensures GDPR is even followed, what the legal enforcement mechanism is, and what the risk is that a website will illegally (by omission, or by intentional action) misuse “essential” cookies for marketing purposes. Because if I check “use essential cookies only”, I implicitly trust that some random website is following the law, which is a bit naive in my opinion. This is also a real problem for me.

                                      The cost of managing the privacy policy of one-off websites is simply too high. You have to take a shortcut somewhere. I choose to take it right at the beginning of that path, because I don’t believe GDPR is designed with users in mind; it’s part of some shady business, and someone wants to make money off of it. I refuse to pay the price with my attention.

                                      1. 11

                                        The problem that @robert_tweed alludes to is that you are not granting consent to cookies. The law was never about cookies. The consent is required to handle and process information about you. Cookies are one mechanism that may be used for this, but the site can also use a mix of IP address and browser fingerprinting to fairly accurately identify you as an individual. By consenting and then clearing cookies, you are granting them a legal framework to collect this data and share it with third parties for a very broad range of purposes. In contrast, if you reject tracking then you have denied them the legal framework to justify this, and that opens them to the fines of up to 4% of global turnover that the regulator may apply, which any moderately sane company will consider far too high a risk.

                                        1. 3

                                          If every stranger you met asked you for your personal information, would you tell them just to avoid wasting time?

                                          1. 2

                                            I don’t think this comparison is apt. It would be more like: every stranger I meet asks to take my picture, but in order to decline, they hand me a short book of regulations, each stranger with their own rules. I’d need to read and understand those regulations to know how to decline the request. So I choose to wear a mask and allow myself to be photographed. They have a photograph of my mask instead of me, and are free to use it as they see fit.

                                            1. 1

                                              A real-world comparison could be that strangers approach you and ask to put colourful ribbons on your wrist. What does it matter if you agree to let them put on the wristband, when you discard it the moment you are out of their sight?

                                              1. 7

                                                Do you think websites just discard your personal information when you stop using them?

                                                1. 7

                                                  No, it’s like someone comes up to you and says they want to put a wristband on you. To do so, they ask you to agree to a contract that grants them consent to track you using any means they wish. They then give you a bright shiny wristband that you throw away, and they track you with CCTV cameras and drones.

                                                  1. 6

                                                    That’s not what’s happening, though. It’s not about the wristband (the cookie). It’s that, if you consent, effectively any means they want (your IP, ETags, browser fingerprinting, and most likely a combination of these) can legally be used. Even if you delete everything, they still have the IP, and if you somehow hide or switch that, they still have browser fingerprinting.

                                                    It’s a lot more like having a camera drone following you, watching you discard your wristband, or watching you put your mask on, or whatever the real-world comparison of changing your IP would be.

                                                    There are ways to work around browser fingerprinting as well, but doing that properly is rare.

                                                    Of course, working around these things, like deleting cookies, is great if you don’t trust the reject button, which is reasonable. Just go on a random website and see what you get despite rejecting everything.

                                                    1. 1

                                                      Browser fingerprinting works by estimation, not an exact match. Even if a fingerprint is unique, it can be differently unique between different sessions of the same user. So it’s another tool for statistical analysis, not a “following camera drone” that is always able to detect intentional evasion. And even when it is able to detect it, there is still statistical uncertainty in the result.

                                                      1. 1

                                                        This is a great point! Fingerprinting is hard to evade.

                                        1. 3

                                          I’m using Kotlin for prototyping. New versions of the JVM have reduced startup to sub-second levels, so tools written in Kotlin for the JVM are perfectly capable of working as small tools. And if not, there’s always Kotlin Native.

                                          1. 1

                                            Which JVM versions? Programming in Clojure was always a pain because of JVM startup speed.

                                            1. 2

                                              Java 18:

                                              $ time java -cp startuptime-1.0-SNAPSHOT.jar org.example.Main                                        
                                              Hello world!
                                              java -cp startuptime-1.0-SNAPSHOT.jar org.example.Main  0.02s user 0.01s system 111% cpu 0.026 total
                                              

                                              Some comparisons:

                                              $ hyperfine "java -cp startuptime-1.0-SNAPSHOT.jar org.example.Main"          
                                              Benchmark 1: java -cp startuptime-1.0-SNAPSHOT.jar org.example.Main
                                              Time (mean ± σ):      24.7 ms ±   1.0 ms    [User: 17.6 ms, System: 10.5 ms]
                                              Range (min … max):    23.3 ms …  28.7 ms    108 runs
                                              
                                              $ hyperfine "python3 test.py"                                                
                                              Benchmark 1: python3 test.py
                                              Time (mean ± σ):      19.3 ms ±   0.8 ms    [User: 15.5 ms, System: 3.8 ms]
                                              Range (min … max):    18.0 ms …  22.2 ms    131 runs
                                              
                                              $ hyperfine "perl test.pl"                                                  
                                              Benchmark 1: perl test.pl
                                              Time (mean ± σ):       1.4 ms ±   0.3 ms    [User: 1.1 ms, System: 1.4 ms]
                                              Range (min … max):     0.9 ms …   3.2 ms    689 runs
                                              
                                              $ hyperfine "ruby test.rb"                                                   
                                              Benchmark 1: ruby test.rb
                                              Time (mean ± σ):      54.0 ms ±   1.3 ms    [User: 45.9 ms, System: 7.8 ms]
                                              Range (min … max):    52.0 ms …  59.4 ms    48 runs
                                              
                                              1. 1

                                                JVM startup is a tiny fraction of the Clojure startup time, unless things have changed a lot in the last couple of years.

                                                1. 1

                                                  They blamed the JVM when I asked why it was taking so long.

                                                  1. 2

                                                    On any computer made in the last decade, the JVM typically takes 100ms or less to start. In contrast, a minimal Clojure program - i.e. literally just nil - takes around a second to run. (At least, it did a couple of years ago, which is when I last used Clojure.)

                                                    If your Clojure program actually does something then it is not unusual for it to take ten seconds to start. That’s a couple of orders of magnitude longer than the JVM startup time.

                                                    I’m pretty sure there used to be something on the Cognitect wiki about this but I don’t know if it’s still around. The problem is that even a minimal Clojure program loads a lot of classes: every Clojure function is implemented as its own Java class and every Clojure program starts by loading at least clojure.core. That’s thousands of Java classes. Once loaded, the classes also have to be initialized and some of them have non-trivial initialization.

                                                    JVM applications are not known for their speedy startup… but that’s compared to native binaries. Normal Java applications still start in a tiny fraction of the time that it takes to start a typical Clojure application.

                                                    However, this isn’t necessarily a problem. If your applications are long-running servers then it’s not a huge problem to wait 30 seconds for them to start. If you develop locally by reloading code into a long-running REPL then you have fast feedback while developing without restarting your application. But it’s definitely not the JVM’s fault that Clojure is slow to start!

                                                    1. 1

                                                      Well nobody in the Clojure community bothered to tell me that it loads thousands of classes. Maybe it’s a dirty little secret.

                                                      I’m unlikely to do much more with Clojure unless I maybe get back into logic programming.

                                              2. 1

                                                Or Scala Native, or Graal Native 😉

                                                1. 1

                                                  True, Graal can even compile Clojure to native code ;)

                                              1. 8

                                                I’m using make mostly as a command runner. If I need to create some testing environment, populate some directory before running a development version of some tool, or manage some log files, I encode it using makefile rules, and it works nicely most of the time. I just wish it weren’t used so much for building C++ software.

                                                1. 4

                                                  If what you want is an easy way to set up a repeatable dev environment, there are so many better ways! (And I say this as a make aficionado with tens of thousands of lines of Makefile under my belt.)

                                                  1. 1

                                                    Can you elaborate?

                                                    One of my dev testing environments consists of a tool “C” that acts as a converter from format “F1” to format “F2”. The file that is being converted is “B”, but I can only build it to the “F1” format. Also I have a test runner tool “R” that can only run the “F2” format. So generally my makefile:

                                                    1. makes sure the C converter is up to date,
                                                    2. builds the B binary in F1 format by using a system compiler,
                                                    3. converts file B stored in format F1 to format F2 by using the converter C,
                                                    4. makes sure the test runner R is up to date,
                                                    5. runs tests from B converted to F2 format by using the R test runner.

                                                    The makefile setup makes it easy to specify what depends on what, and rebuilds just the necessary things, as I’m concurrently developing C, B and R. I also actually have lots of different B’s, and by encoding the dependency rules in the makefile I can run all of the necessary conversions in parallel.
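                                                    A hedged sketch of what such a makefile can look like; the paths, the C/R tool names, and the F1/F2 extensions are placeholders for the tools and formats described above, not real names:

```make
# Placeholder paths for the converter C and the test runner R.
CONVERTER := ./tools/C
RUNNER    := ./tools/R

# Every B source builds to an F1 binary, which converts to F2.
SOURCES   := $(wildcard tests/*.c)
F1_FILES  := $(SOURCES:tests/%.c=build/%.f1)
F2_FILES  := $(F1_FILES:.f1=.f2)

# Step 5: run the tests once everything below is up to date.
test: $(F2_FILES) $(RUNNER)
	$(RUNNER) $(F2_FILES)

# Step 2: build B in F1 format with the system compiler.
build/%.f1: tests/%.c
	@mkdir -p $(@D)
	$(CC) -o $@ $<

# Steps 1 and 3: reconvert when either the F1 file or C changes.
%.f2: %.f1 $(CONVERTER)
	$(CONVERTER) $< $@
```

                                                    With the dependencies encoded like this, `make -j` converts all the different B’s in parallel.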

                                                    1. 2

                                                      Based on what you’re saying, it sounds like you’re using it as more than just a command runner, since you are defining dependencies and execution graphs. For contrast, look at https://github.com/casey/just, which is a much more streamlined application, purely used as a way to easily store and run multiple commands across a repo.

                                                      1. 5

                                                        Just as a datapoint, it doesn’t look like anyone has packaged just, at least for Ubuntu (based on a search in apt-cache). While deploying from GitHub “by hand” is possible, not having a tool in a base Linux install, or even installable via the package manager, is a drawback.

                                                        1. 2

                                                          Yeah, definitely. I like it as a way to simplify remembering which commands I run in which repos (./gradlew for Java projects, npm run for Node projects, etc.), but the lack of easy installation on Debian-based distros can be a bit annoying.

                                                          It is available in the MPR repo, but that’s hardly an ideal solution.

                                                          1. 2

                                                            There’s a $200 bounty for doing so: https://github.com/casey/just/issues/429

                                                             That said, when I’ve needed to use just on Ubuntu systems, I found it easy enough to install the prebuilt Linux binary from the official website, or to build and install it from source with cargo install. It’s not difficult software to build from source if you already have a Rust toolchain set up.

                                                            1. 4

                                                              I figured I’d give it a fair shake, so I compiled it from source. I usually avoid Rust projects[1], so I guess cargo had to download a ton of stuff. The build process took ~7min on my wimpy VPS, and the source dir ballooned to 745M after the build finished.

                                                              This is quite a lot of work for just a command runner!

                                                              [1] this is not for ideological reasons, just some bad experiences with gemini server projects a while back.

                                                        2. 1

                                                          I’ll try to expand on this later, but nix is the one I would look into. There are some other comments here about using it to provide dependencies for a dev environment; then you can have a much simpler Makefile that is just for building your project, not managing your system.

                                                          1. 2

                                                            NixOS as in the operating system? Can’t do it, I need to test my stuff on a defined set of Linux distributions, and macOS. And make is available by default there. Even Windows has “nmake”, not the same dialect, but works the same way, for the most part.

                                                            Also, I’m not sure using a custom tool will actually be cheaper if every new testing OS I install on a VM requires me to install a new toolkit as well. Installing custom software always has a cost.

                                                            1. 1

                                                              No, not NixOS. Just nix. It is a standalone binary already available as a package on most other OSes, and one nice feature is that you can set up a file that lists your project’s dev tooling requirements and then enter a shell that has those requirements supplied, no matter what the host OS has. It’s especially useful for developer tooling: even if you still want your project to work out kinks on particular OSes, not mixing the setup of dev tooling into that makes the project a lot easier to manage. Using a nix shell is a lot less overhead than maintaining a Makefile that does system management stuff.
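                                                              For what it’s worth, a minimal sketch of such a file; the package names are examples from nixpkgs, not your actual toolchain:

```nix
# shell.nix: declares the dev tooling this project needs.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # Example tools; replace with whatever your project uses.
  packages = [ pkgs.gnumake pkgs.python3 pkgs.gcc ];
}
```

                                                              Running `nix-shell` in the project directory then drops you into a shell where those tools are on the PATH, regardless of what the host OS has installed.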

                                                              1. 1

                                                                You’ve convinced me to try it, thanks.

                                                              2. 1

                                                                No, they meant “nix” the package manager/build tool. NixOS is just a package (kinda).

                                                                But sure, if you need native Windows build, nix won’t help you much… There’s still ways in which it could help, but probably not worth it.

                                                        3. 1

                                                          this is so backwards it got me laughing

                                                        1. 6

                                                          One of the things that a lot of people haven’t appreciated yet is that you no longer need makefiles for C and C++ codebases, thanks to better, faster computers and hard disks: you can use a simple linear list of gcc commands and run it with ccache set up. You get fast incremental builds, just like make used to be required to get.

                                                          1. 8

                                                            This sounds like nonsense for large projects. When I do a clean build of LLVM, there are over 4000 build steps. When I do an incremental build with ninja (or make, though I have no idea why anyone would choose make when ninja exists) I typically do under 20, and I don’t have to relink any of the binaries I haven’t modified (which, alone, would take longer than most incremental builds). The process creation time alone for the 4000 redundant ccache processes would be huge, but ccache also needs to run the preprocessor, so my incremental build time would likely increase by a factor of 10 if I did this.

                                                            1. 4

                                                              I think you are totally right in this case, thank you for the strong counter example!

                                                            2. 2

                                                              Do you suggest dropping makefiles in favor of bash scripts when building C++ software?

                                                              1. 2

                                                                I don’t suggest that everyone do this, but I think it’s useful to re-assess technology we have carried forward from history in the context of modern hardware that has much more resource available.

                                                                What I’m really getting at by pointing this out is that make might be an overcomplex solution (today) for the problem of incremental builds.

                                                              2. 1

                                                                 I’m sure you can. I’m also sure you can manually run a sequence of rustc commands to compile all your Rust crate dependencies and your own app, and link them, instead of using cargo. Such a linear list of commands sounds less fun than the alternative, though.

                                                                1. 2

                                                                   cargo is better than a script for Rust.

                                                              1. 4

                                                                God I miss the days of native applications. Why does every app now have to be web based?

                                                                1. 12

                                                                  Because it’s better economics for the app makers.

                                                                  1. 2

                                                                    I like to believe it’s more because writing UI code is much easier than alternatives.

                                                                    1. 1

                                                                      Maybe, but writing UI code in something like JavaScript seems harder and more error-prone than the classic GUI builders that were made for native platforms. The point, of course, is that what you write for a browser will run on any platform the browser runs on.

                                                                      1. 2

                                                                        Classic UI builders let you choose your screen size, and it was a PITA to target multiple resolutions.

                                                                        A pity, because they were so much better than the mess we have now.

                                                                        1. 4

                                                                          Classic UI builders let you choose your screen size, and it was a PITA to target multiple resolutions.

                                                                          That depended a lot on the toolkit to some degree. NeXT and Qt, to name just two examples, were actually pretty good at “fluid” layouts – they targeted platforms with high-res capabilities from the start, so they had to support applications whose windows could be resized anywhere between 640x480-ish to 2048x1536-ish. It’s Visual Basic, VCL to some degree (and, if anyone remembers Glade, GTK, early on) that really sucked at this, and gave people GUI builder PTSD.

                                                                          Web-based technologies have brought us a lot of useful things, like React’s component-driven system. But in my subjective experience, putting together a GUI for a single device class with the best web-based tools is still an inferior experience compared to the best native tools from twenty years ago.

                                                                          Crossing device classes is a whole other story. I don’t know of any toolkit that does it well, but that’s also, in part, because early on, device vendors realised it’s probably a bad idea to even try it in the first place. Application vendors obviously disagreed, but 15 years after the iPhone, it’s still pretty easy to tell an application that uses web-based components from one that doesn’t – the slow one with bad scrolling is usually the web one.

                                                                  2. 7

                                                                    Because it is instantly deployed and reachable by everyone with an internet connection. This includes people in poorer countries with $70 phones.

                                                                    Those questioning which one is better were probably not everyday users of computers before the mid-2000s. Desktop apps were a breeze to develop, and their UX was superior in all aspects except perhaps how trendy their visuals were: keyboard navigation almost always covered the whole functionality, and latency, screen real-estate usage, intuitiveness, etc. were all better. Pretty much everything was superior. Even networked applications, ironic as that might sound.

                                                                    Having to provide a binary, convince the user to install it, or even buy it… it was just a value proposition that didn’t stand a chance against signing up for an online service.

                                                                    Heck, I am typing this on lobste.rs right now, using a web app no less. On my phone.

                                                                    A little fun fact from an acquaintance of mine: a medium-small accounting company almost closed down a couple of years ago, because they replaced a rock solid desktop app they used since 1993 with a modern web based alternative. Accountant productivity took a nose dive, many customers exited and costs were dangerously approaching revenue.

                                                                    I think serious productivity is still a non goal for 99% of all web apps.

                                                                    1. 1

                                                                      It should be a goal for more than 1% for sure… Especially in the web apps we live in day by day…

                                                                    2. 3

                                                                      Because native GUI dev is horribly painful in comparison?

                                                                      1. 4

                                                                        And web development is any better?

                                                                        But yes, native GUI development has gotten worse over the years. Visual Basic gave many non-programmers and barely-programmers the ability to write software, and a friend of mine still laments the loss of the development system on the NeXT, which made building applications nearly trivial.

                                                                        1. 3

                                                                          I’d love to know your friend’s perspective on how well or poorly Cocoa and Xcode’s version of Interface Builder preserved NEXTSTEP’s development ease in macOS and iOS.

                                                                          1. 1

                                                                            He used NeXTSTEP back in the early 90s, and then went on to work almost exclusively on Linux. These days he does use macOS, but not for native application development (he mostly does command-line stuff).

                                                                        2. 3

                                                                          It might look this way because GUI web devs skip 50% of the things that are normally required when using a native GUI framework:

                                                                          • error handling (“something went wrong, please click f5” is not error handling),
                                                                          • UI patterns (every website uses different patterns),
                                                                          • standard controls (it’s not clear what is clickable and what is a simple text),
                                                                          • keyboard handling (websites that can be conveniently operated by keyboard alone are very rare).

                                                                          Add in the missing pieces and I think it will be clear that it’s actually the other way around.

                                                                          1. 2

                                                                            What kind of pain do you mean? Forgive my ignorance; my experience is kind of specialized.

                                                                            I find native GUI dev mostly delightful these days because declarative UI tooling has arrived. SwiftUI, for instance, learned lessons from React and Flutter. Prototyping and iteration are both rapid. Jetpack Compose seems about the same. I have to imagine other systems would be somewhere on the same track?

                                                                            1. 4

                                                                              With native GUI dev there’s no REPL or inspector (AFAIK), you need (in practice) to use an IDE, and there’s a lot of up-front complexity (whereas with the web you can add the more complicated parts in stages as you need them).

                                                                              Like when I’m making a client side web app my usual workflow is

                                                                              • come up with idea for app (usually a little thing I think will be useful for myself or my friends)
                                                                              • copy index.html, script.js, style.css from my template loading dependencies from esm.sh so I don’t need to bundle yet
                                                                              • start writing a basic UI with either vanilla JS or preact
                                                                              • prototype the functions that actually do things using the js console
                                                                              • hook them up to the ui
                                                                              • add css using the browser devtools
                                                                              • if the project is getting more serious i’ll
                                                                                • set up bundling
                                                                                • make an icon and set up opengraph tags
                                                                                • potentially rewrite in typescript

                                                                              I don’t know of a way to replicate anything remotely like my usual workflow with native GUI dev. It’s very important that I’m able to get a barely-working thing started super fast so I don’t lose interest and give up and the web just works really well for that I guess.

                                                                              1. 4

                                                                                I think a lot of this boils down to what we’re used to. My workflow for GUI tools (with the caveat that I don’t need to do those too often) is basically the same as yours, except it uses Qt and C++ or Python, or Tk and Python for really “quick & dirty” hacks. I don’t really need to use an IDE for either. Nowadays I use QtCreator because it’s there but years ago I knew the Qt API well enough that I just used Emacs.

                                                                                A good debugger isn’t always a good substitute for the inspector, but it’s also been my experience that, unless I engage in all sorts of weird QtQuick skullfuckery which I still think shouldn’t have been a thing in the first place BUT ANYWAY, I don’t really need one. Even crowded UIs (think dozens of widgets, some of them dynamically-built) tend to be easy to navigate without an inspector if the widgets don’t sit behind six stacks of nested divs and CSS alignment hacks (which are straightforward if you’ve been doing CSS alignment hacks for your whole computer life but very much not straightforward otherwise).

                                                                                I don’t mean to disparage web-based UIs by this, I’m sure there’s someone out there thinking sure, CSS alignment hacks are kinda nasty but they’re a breeze compared to hoodoo shit like Qt’s event loops. I just want to point out that ease of development is very much a subjective experience. For me, even with things like React, which are a huge improvement over back when I had to do that shit by hand with jQuery (cue “that belongs in a museum!” Indiana Jones reference), writing web-based UIs is hell, and I secretly hope someone will just port OpenStep to WASM and release me from this never-ending “how do I just fucking place these things next to each other in CSS” punishment cycle.

                                                                                1. 2

                                                                                  I’m not super experienced in native GUI development, and I’m just starting to dip my toes into it with the Common Lisp bindings for GTK4. As far as the initial complaints, GTK has a pretty good inspector, and with the Common Lisp bindings, there is a REPL, and the usual Lisp image-based redefinition of functions and so on; you can live-code the GUI by re-evaluating the macro that defines the application. I suspect, but don’t know for sure, that you can do the same in Python or JavaScript, but not C or Vala. I’m using Emacs.

                                                                                  My main gripe so far is that it seems like a hard either-or between using a builder to lay out your widgets and so on vs. building the UI up programmatically, but that may just be a matter of my inexperience.

                                                                                  1. 2

                                                                                    My main gripe so far is that it seems like a hard either-or between using a builder to lay out your widgets and so on vs. building the UI up programmatically, but that may just be a matter of my inexperience.

                                                                                    I think that’s kind of specific to GTK, which has a complicated history with GUI builders and has pretty much shunned them lately from what I’ve heard. Back when I was doing a lot more GUI development (2012-ish) it was pretty easy to mix the two in Qt – I wrote a lot of code that tl;dr programmatically built some widget hierarchies inside builder-created interfaces.

                                                                                    1. 1

                                                                                      Oh, that sounds nice, I should try that!

                                                                                    2. 1

                                                                                      Speaking for iOS/macOS, it’s true that you’ll need the main IDE to work on GUIs, and its download/install size is obscene. It’s like the Call of Duty of IDEs; I don’t get it. But the fast project setup is there, the rapid iteration is there, as are the inspector, REPL, and other stuff you’re talking about.

                                                                                      Just an overview for those who haven’t tried native dev lately:

                                                                                      Basic coding methods available include the Swift REPL and SPM, a Jupyter-like thing called Playgrounds, an official VS Code+LSP extension, and Xcode’s project templates that scaffold various kinds of hello world in one step.

                                                                                      The UI workflow is based on SwiftUI Previews, with which you edit view code next to a preview canvas showing either a wysiwyg layout editor or a live instance with hot reloading (half a second or so). You don’t have to choose between a layout tool and pure code views; instead you write a mostly declarative DSL kind of like a C-family JSX, and the layout editor can alter that code, or you just type it and see it take effect. This works one view or screen at a time; you can set up a view’s previews with sample data or real backing services. The closest comparison I could make is to React+Storybook. I like to build apps iteratively as working wireframes, then style them minimally later.

                                                                                      For inspecting at runtime, there’s lldb, the view debugger (think web inspector for native—the 3D layer breakout is really useful), memory graph debugger, GPU shader debugger, and a huge variety of profiling tools in Instruments.

                                                                                      Now I really wish we had some kind of live REPL attached to the editor like SLIME. I think Swift is just too statically compiled by nature to get there… hot reloading surprised me though, and macro support is coming, so you never know.

                                                                                      I just have to think MS is also trying to provide this kind of cohesive, complete, productive toolchain. I can’t speak for Linux and friends; maybe it’s just too big an effort to organize all this except where a browser has done that work? Or maybe it’s happening and I just don’t know more of the landscape.

                                                                                2. 2

                                                                                  What do you think the benefits of native applications are?

                                                                                  1. 7

                                                                                    The big one is user experience consistency across the whole system. It’s easier to achieve and more likely to happen when apps are built with native tooling and product teams are mindful of native behavior.

                                                                                    For instance, on macOS, every app has File, Edit, other stuff, Window, Help. You’ve never used this app before, but you damn well know how to print, because the first app you ever used already taught you it’s in the File menu toward the bottom. Every user action goes in the menu first, so the menu is a complete index of what the app can do. That makes the app’s capabilities searchable, too, from the Help menu. The menus and items teach you the app’s objects and capabilities and their keyboard shortcuts, while you use the app, at your pace. But they also teach you about the computer in general because other apps match.

                                                                                    Another convention is that the main object types in the app get their own window type, and same-type windows can be docked together as tabs or split apart again. In Mail you can see a mailbox plus an optional focused message, but you can also open 15 messages in their own windows or tabs.

                                                                                    Now open Slack, an Electron app. See how its menu is sleeping on the job and doesn’t include that much of the app’s functionality. Try to open three conversations in three tabs or windows. Tabs would really help triage a lot of unreads. Windows would let you organize conversations among other apps’ related windows according to user tasks—like windows are for in a windowing operating system—but you get one window with two panes, tops. It’s a disaster as soon as you have several competing priorities to handle. In this regard Mail is way better, even though Slack’s entire job is to be better than email. But this single-window Electron app is also compulsory at work for lots of us.

                                                                                    It’s not like an Electron app couldn’t get the details right for each operating system it runs on, but if you were using the native tooling, the defaults would have helped in each platform’s case. It also has to be harder to keep a native behavior mindset when you’re living in a multi-platform abstraction layer, even if you mean to.

                                                                                    1. 6

                                                                                      Faster response for one (c is not just a good idea, it’s the law). Two, it keeps working even offline. Three, the interface is consistent (or should be) with the rest of the operating system. Four, it doesn’t gratuitously change the UI or work flow on me (yes, apps can be updated, but at least I get a warning, and can get an indication from others if it’s a “breaking change” or not). Five, I can keep using the app as long as I want (that is, if it doesn’t force some licensing check on me—seriously, I miss the days of owning software and not just renting it).

                                                                                      1. 2

                                                                                        I miss the days of owning software and not just renting it

                                                                                        This is interesting to think about… It could be a reason why FOSS software continues to be strong and even a bit more well known as time goes on. The current market of “X as a service” really helps the FOSS ideology.

                                                                                  1. 3

                                                                                    I don’t think anyone should encourage anyone to produce videos in GIF (/dʒɪf/) format. There are many better alternatives: APNG, H264, WEBP, etc.

                                                                                    1. 3

                                                                                      Added .mov for now (same as default on macOS). Will look at others.

                                                                                    1. 1

                                                                                      I’ve been meaning to look into converting the Windows mess we have in CHICKEN, where we support Cygwin with bash, mingw+msys with bash, or plain mingw with cmd.exe, into supporting only mingw with PowerShell (and perhaps still keeping Cygwin). The current situation is also very confusing for users, because if you run the “wrong” makefile (say, the mingw+msys makefile from cmd.exe, or the mingw makefile from msys) things just don’t work properly (rather than breaking immediately). Unfortunately, my willingness to put up with operating Windows (especially the shitty “modern IE” VMs that are time-limited), as well as my time to work on software beyond my day job, is very limited, so it’ll probably never happen.

                                                                                      One question, though: is PowerShell ubiquitous enough to simply assume it’s there on Windows? Or does it require installing extra stuff? And what happens when you try to run PowerShell stuff from cmd.exe?

                                                                                      1. 2

                                                                                        PowerShell is part of the default install (core bits of Windows depend on it), but you need to be a bit careful about versions. Newer versions are available as optional installs and add new features.

                                                                                        1. 1

                                                                                          is Powershell ubiquitous enough to simply assume it’s there on Windows?

                                                                                          On Win10 and above it’s simply there, but I’m not sure about the differences between versions. I think it has also been there since Win7, but I can’t verify this (I only have access to Win10).

                                                                                          And what happens when you try to run Powershell stuff from cmd.exe?

                                                                                          From what I see, cmd will fire up the default handler for “.ps1” files. If it’s Notepad, it will fire up Notepad. I’ve changed the default handler for “.ps1” files to powershell.exe, and now .ps1 scripts seem to run under cmd the same way as under PowerShell, but that required a manual step.
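
                                                                                          For what it’s worth, a way to avoid depending on the .ps1 file association at all is to invoke the interpreter explicitly from cmd.exe (the script name here is just a placeholder, and the flags are PowerShell’s documented command-line switches):

                                                                                          ```shell
                                                                                          :: cmd.exe does not execute .ps1 files itself; it hands them to the
                                                                                          :: registered file-type handler. Calling powershell.exe directly
                                                                                          :: sidesteps both the association and the execution policy:
                                                                                          powershell -NoProfile -ExecutionPolicy Bypass -File build.ps1
                                                                                          ```

                                                                                          This is also the usual way to call PowerShell scripts from makefiles and CI jobs, since it works regardless of per-machine handler configuration.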

                                                                                        1. 5

                                                                                          One thing I dislike about PowerShell is that it came out well after bash, zsh, and lots of other shells were already stable and popular, yet it still didn’t manage to create a “normal” language: .ps1 scripts are no better in terms of readability than bash scripts. It’s as if MS wasn’t aware that other shells existed, disregarded everything that’s wrong with them, and created something newer but with a lot of the old problems. It also really baffles me that today we are excited that a shell can output items in columns, when shells from 1988 (AmigaDOS + ARexx) could communicate over the network using shell scripting alone. A lot of things in technology really feel like a regression, but I guess one needs to be old enough to see it. I fear this will only get worse.

                                                                                          1. 7

                                                                                            I like a lot of the ideas that PowerShell copied from Smalltalk but I still can’t quite forgive it for building a verb-noun convention decades after noun-verb was widely known to be better for discovery.

                                                                                            1. 2

                                                                                              Would you have a source for that? I like verb-noun better but am very ready to have my mind changed. Much obliged!

                                                                                            2. 1

                                                                                              Yup, as far as I remember they borrowed [ $x -lt $y ] from shell, but they did it in an incompatible way!

                                                                                              If you’re going to be incompatible, why not just use (x < y) ?

                                                                                              It seems like bizarre cargo culting, to the point of indicating a lack of understanding… Unix shell SYNTAX is not why people use shell!

                                                                                              Pretty much everyone dislikes the syntax, and they did in 1993 too, more than a decade before PowerShell appeared.


                                                                                              The terrible syntax can mostly be traced to implementation hacks. For example, x=y is different from x = y because shell had words before it had variables, so they just “stuffed” an assignment into a word.

                                                                                              Likewise, test 1 -lt 2 and [ 1 -lt 2 ] were just easier to implement because you skip writing a lexer, like the one Awk has.
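
                                                                                              Both hacks are easy to see in a few lines of POSIX sh (the variable is just illustrative):

                                                                                              ```shell
                                                                                              # x=1 is a single word with no spaces, so the parser treats it as an
                                                                                              # assignment; "x = 1" would instead run a command named `x` with the
                                                                                              # arguments `=` and `1`.
                                                                                              x=1

                                                                                              # `[` is an ordinary command (a synonym for `test`), and `-lt` is just
                                                                                              # an argument string -- no lexer or expression grammar needed.
                                                                                              if [ "$x" -lt 2 ]; then
                                                                                                echo "x is less than 2"
                                                                                              fi

                                                                                              # The same comparison spelled as the `test` command directly:
                                                                                              test "$x" -lt 2 && echo "still less"
                                                                                              ```

                                                                                              So neither construct is syntax the shell itself understands; both are ordinary commands that happen to look like operators.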

                                                                                            1. 1

                                                                                              Buck2 is written in Rust - Buck1 was written in Java. One of the advantages of using Rust is the absence of GC pauses

                                                                                              Isn’t that an exaggeration? I mean, I understand GC pauses being a problem in gaming, where there’s a need to deliver a stable 60fps, but are GC pauses in a build system really significant enough to be the #1 reason for a rewrite?

                                                                                              I mean, why aren’t GC pauses significant in web servers, which is where Java is mostly used, but are somehow a problem for a build system?

                                                                                              1. 5

                                                                                                Build systems like Buck and Bazel are huge graphs of tiny objects, and that really stresses the GC more than most programs.

                                                                                                In the paper linked in the blog post you’ll see that the main things the program does are parsing, deserializing, and then lowering the target graph to the action graph. They are considering millions of files in a monorepo at once, which will easily lead to tens of millions of objects.

                                                                                                Although I would agree that you’re not dropping frames, and garbage collection is very useful for graph-based workloads. If the GC is efficient, and the JVM’s is, then it should be a win?

                                                                                                However, another problem is that the JVM lacks value types on the stack, so it creates more garbage. Go, for example, has value types.


                                                                                                I googled “Bazel Garbage Collection”, thinking it would be more of a problem, and ended up with this link instead! It does seem like the JVM isn’t ideal, but not because of its GC!

                                                                                                https://github.com/bazelbuild/bazel/issues/6514#issuecomment-675157826

                                                                                                There was a widespread desire for a more integrated implementation, so various interested people had a meeting some time in early 2006 to discuss what the next system should look like. There was no question that it had to be a typed language, which at that point meant C++ or Java. (Rob Pike was in the room. Sadly Go wasn’t invented for four more years, as it would have been the ideal tool for the job. Google uses very little Rust, Scala, and Haskell.) If memory serves, Java won primarily because it had a garbage collector—and half the job of a build tool is concurrent operations on often-cyclic graphs. And I’m sure that at least in part it was because Java was the language Johannes Henkel and I, who started the project, were using at the time.

                                                                                                The other half of a job of a build tool is interacting with the operating system: reading files and directories, writing files, communicating over a network, and controlling other processes. In hindsight, the JVM was a poor choice for this work, and many of Blaze’s problems stem from it. Most system calls are inaccessible. Strings use UTF-16, requiring twice as much space and expensive conversions at I/O boundaries. Its objects afford the user little control over memory layout, making it hard to design efficient core data structures, and no means of escape for performance-critical code. Also, compiling for the JVM is slow—surprisingly, slower than C++ or Go even though the compiler does less—yet the resulting code is also slow to start and slow to warm up, pushing CPU costs that should be borne by the developer onto the user. (Google runs Blaze’s JVM on an 18-bit number of cores.) The JVM is opaque, noisy, and unpredictable, making it hard to get accurate CPU profiles, which are crucial to finding and optimizing the slow parts. The only thing I really like about the JVM is that it can run in a mode in which Object pointers occupy 32 bits but can address 32GB, which is a significant space saving in a pointer-heavy program.

                                                                                                This is after doing a project that I’m guessing has 100K to 500K lines of Java code written from scratch, maybe more. FWIW Alan and Johannes were my team-mates at the time! I worked on Blaze a tiny bit, but it wasn’t my main project.


                                                                                                FWIW I wrote a comment on “Rust is the future of JS infrastructure” here, explaining what I learned about similar workloads:

                                                                                                https://news.ycombinator.com/item?id=35045520

However, this is more about static typing than garbage collection. Java is better than, say, node.js or Python for sure.

                                                                                                1. 1

Thanks for that informative answer. However, the GitHub post you’ve linked is strange, because it mentions a lot of problems that are either solved or have good workarounds: the Java Native Interface, UTF-8 by default in JDK 18 (the plan to move to UTF-8 had been known since 2017). Also, the slow compilation of Java code is a mystery to me, since C++ has one of the slowest compilation times of any language I’ve encountered (Rust may be even slower). Fortunately the author mentions in one of the subsequent posts that the slow compilation was somewhat related to the build tool itself, not to the JDK. The amount of introspection available in the JVM is pretty high (profilers, agents), and value-type support is en route in Valhalla.

But well, it’s not like I’ve ever developed Blaze/Bazel/BuckN (I’ve never actually used it as a user), so I’ll stop arguing here. I’m just not really buying their arguments.

                                                                                                  1. 2

                                                                                                    It makes perfect sense to me, given that Blaze was written in 2006 :) JDK 18 doesn’t exactly help there

                                                                                                    I think Java was the best choice at the time, but these days I think Go and Rust make more sense.

A key issue is that high performance and predictable performance are greatly aided by control. For example, Java has escape analysis, which avoids garbage, but it’s opaque to the user. I’d rather have explicit types on the stack, as in Go, Rust, and C++.
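For what it’s worth, here’s a minimal Rust sketch (my own illustration, not code from Blaze or Oil) of what “explicit types on the stack” buys you: the allocation strategy is visible in the code, rather than decided by an opaque escape analysis.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Point {
    x: i64,
    y: i64,
}

// Stack placement is the default: this function's argument and return value
// need no heap allocation at all.
fn translate(p: Point, dx: i64, dy: i64) -> Point {
    Point { x: p.x + dx, y: p.y + dy }
}

fn main() {
    let p = Point { x: 1, y: 2 };        // lives on the stack
    let q = translate(p, 10, 20);        // still on the stack, no GC involved
    let boxed: Box<Point> = Box::new(q); // heap allocation is explicit and visible
    // `boxed` is freed deterministically when it goes out of scope;
    // no collector decides when (or whether) to reclaim it.
    assert_eq!(*boxed, Point { x: 11, y: 22 });
    println!("{:?}", boxed);
}
```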

AOT is just much simpler and more predictable than JIT. There’s no reason to throw away that info if you can encode it directly in the language (which is what we’re doing in Oil’s translation to C++).


                                                                                                    Also to repeat: my memory is that Alan wrote over 100K lines of Java from scratch AND optimized the hell out of it (there were also lots of other people on the team of course, but he drove a lot of the initial coding and architecture)

                                                                                                    So his criticisms are from hard-won experience

                                                                                                    Blaze is a very performance-sensitive and highly optimized program

                                                                                                2. 2

GC pauses get significant when you need either consistently low latency, or prolonged periods of high performance with a lot of memory operations. For web servers, there’s plenty of idle time between requests, where the GC can run “for free”, and a single request usually doesn’t take many memory allocations or much time. But for build systems, when you run one, it’s going to be a couple of seconds of high-intensity planning work with a lot of memory allocations (creating the dependency/task graphs, etc.), which will trigger GC several times during the work (there’s no idle time to fit the pause into, unlike web servers), and that will directly impact the performance the user feels during interactive compilation.

                                                                                                  1. 2

The thing is that the “normal” method of allocating memory pays a similar price to GC. A plain malloc() takes longer per call than a Java allocation; Java allocations are fast up front, but we pay the full cost later with a GC pause. So it’s not about which is faster, because in the end the overall speed is similar, but about which one stops the world at random points in time.

Games specifically can’t accept GC pauses, because each frame needs to take a predictable amount of time. So games accept slower but more predictable memory allocations, because for them it’s better to have slower alloc times than unpredictable GC pauses. But a build system? I’m not sure I understand why it must have predictable allocation times. Also, I’m not sure what the problem is with a GC pause while building the dependency graph. A build system is very similar to a web service: you just run the build, and the GC can run between builds “for free”.

But anyway, Java also allows you to manage memory manually by using off-heap buffers and arena allocators, although this requires specific programming patterns, not something Java developers do by default.

                                                                                                    1. 1

                                                                                                      Also I’m not sure what’s the problem with a GC pause during building the dependency graph.

                                                                                                      My only guess would be that it’s a pretty un-useful time for the GC to run since it’s just allocated a big pile of memory and it’s all still being used.

                                                                                                      But yeah, I wouldn’t expect the pauses to matter as much as the GC overhead and Java’s more profligate approach to heap memory in general. Would be interesting to see memory profiles for both Buck1 and Buck2!

                                                                                                  2. 1

                                                                                                    TBH, that whole paragraph looks rather incoherent. I think the goal there was “say something, fit it into two lines, don’t start Java vs Rust vs Go vs C++ flamewar”, rather than “summarize the actual reasoning”. GC pauses are certainly irrelevant for a build system.

                                                                                                    Here’s what I’d say instead (I do believe that today Rust is by far the best choice for build systems):

• Rust would use significantly less memory, as its object graph has far less indirection. Given that the build system runs compilers, which tend to devour RAM, setting less RAM aside for the build system is important.
• Rust would run somewhat faster. Less memory usage and less pointer chasing improve cache efficiency. Runtime speed is a bottleneck for a no-op build, and there are some rather CPU-heavy operations (dependency resolution & hashing).
• Rust starts up significantly faster. A no-op build doesn’t have time to warm the JIT up.
• Build systems are nastily concurrent, and Rust is great at that, as it tracks thread-safety in the type system, and is somewhat-adequate for evented programming.
• A build system can build anything except itself, so it needs to solve the “bootstrap problem” — getting the build system itself onto the user’s machine. That’s much easier with a statically linked binary.
• Build systems are IO-heavy and sometimes do systemsy shenanigans, so having direct access to OS facilities is a plus.
                                                                                                    1. 2

                                                                                                      Here’s what I’d say instead (I do believe that today Rust is by far the best choice for build systems):

                                                                                                      You only list advantages compared to Java so it should probably be “the better choice”. In particular, none of the items you list make Rust better than C++ (and I can think of a few items that make C++ better).

                                                                                                      1. 1

                                                                                                        What about this point:

                                                                                                        Build systems are nastily concurrent, and Rust is great at that, as it tracks thread-safety in the type system, and is somewhat-adequate for evented programming.

                                                                                                        C++ doesn’t track thread-safety in the type system. That automatically makes Rust a better fit, since you probably don’t want non-deterministic data races plaguing your build system.
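As a small sketch of what “tracking thread-safety in the type system” means in practice (my own illustration, not code from any real build system): sharing mutable state across worker threads in Rust forces you through types like Arc and Mutex; handing a plain mutable reference to several threads simply doesn’t compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Run `n` fake build tasks concurrently and return the sorted list of
// completed task ids.
fn run_tasks(n: usize) -> Vec<usize> {
    // Shared mutable state must go through thread-safe types (Arc + Mutex);
    // passing a bare `&mut Vec<_>` to several threads would be a compile error.
    let completed = Arc::new(Mutex::new(Vec::new()));

    let handles: Vec<_> = (0..n)
        .map(|task_id| {
            let completed = Arc::clone(&completed);
            thread::spawn(move || {
                // "Build" one target, then record it.
                completed.lock().unwrap().push(task_id);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let mut done = completed.lock().unwrap().clone();
    done.sort();
    done
}

fn main() {
    assert_eq!(run_tasks(4), vec![0, 1, 2, 3]);
    println!("all tasks finished");
}
```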

                                                                                                        1. 1

                                                                                                          In my experience (which is of writing a build system that is capable of building real-world projects like Qt), they are so “nastily concurrent” that the standard mechanisms (mutexes, etc) don’t cut it and one has to build custom ones out of atomics. And, as I understand, atomics in Rust are unsafe-only. This would appear to put C++ and Rust on equal footing (both unsafe) but I believe C++ has an edge, at least for now: the thread sanitizer. It did help detect and fix a few race conditions in our code.

                                                                                                          1. 1

                                                                                                            And, as I understand, atomics in Rust are unsafe-only.

                                                                                                            I don’t doubt that they’re lower-level and trickier to use than Mutex and RwLock, but I don’t see unsafe in the examples in https://doc.rust-lang.org/std/sync/atomic/.
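Indeed, the basic atomic operations are callable from safe code. A small sketch along the lines of the standard-library examples (my own code, assumptions: 8 threads incrementing a shared counter):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Increment a shared counter from several threads using only safe code.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // fetch_add is a safe method; no `unsafe` block required.
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    counter.load(Ordering::Relaxed)
}

fn main() {
    assert_eq!(parallel_count(8, 1000), 8000);
}
```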

                                                                                                            I believe C++ has an edge, at least for now: the thread sanitizer.

                                                                                                            While I’ve never tried using this and I imagine Rust’s integration with it is less mature than Clang’s, Rust can use the LLVM thread sanitizer, although it sounds like it entails building the Rust standard library from source, which sounds obnoxious but at least streamlined with tooling.

                                                                                                            Another thing I haven’t used is Loom.

                                                                                                            1. 2

                                                                                                              I swear I read somewhere that atomics are unsafe-only in Rust and the reasons made sense but I can’t remember where and what exactly those reasons were. But looks like you are right and I must have been dreaming.

                                                                                                  1. 85

Nearly every professional programmer works in some sort of specialized domain. Most programmers think that the domain they work in is representative of programming in its entirety, and they’re usually wrong. An example of a specialized domain is “stateless http servers”. If most of your work is in stateless http servers, most of your opinions about programming are actually opinions about programming stateless http servers, which is a smaller and more specific topic. If most of your work is in game engine programming, most of your opinions about programming are actually opinions about game engine programming, which is a smaller and more specific topic.

                                                                                                    Nearly every topic involving Casey Muratori and Jonathan Blow boils down to this:

                                                                                                    • Casey and Jon Blow work in the specialized domain of games. They’re both good at these things.
                                                                                                    • They think that what is true of their domain is universal.
• Techniques that make sense for programming http servers but do not make sense for game engines or game rendering, they say, are “wrong”.
                                                                                                    • They present this to a community of people who largely program stateless http servers, who proceed to lose their minds. Both sides are, in a sense, correct, but both sides see the other side as being wrong, because both sides believe they’re talking about the same topic when they are actually talking about different topics.
                                                                                                    • Repeat ad infinitum.

                                                                                                    That’s not to defend Clean Code specifically though, that book is pretty bad.

                                                                                                    1. 23

                                                                                                      There’s a good saying by Paul Bucheit that captures this:

                                                                                                      Limited life experience + generalization = Advice

                                                                                                      :-)

                                                                                                      1. 8

                                                                                                        Well said. I’m often baffled by the choices made in new versions of C++, but that’s because I have no idea how people use the language in embedded, real-time, or low-latency systems.

                                                                                                        I do think, though, they proselytize principles that cut across domains. Primarily: actually understanding what a computer is doing, actually caring about performance, actually molding your tool set to your domain. This isn’t all bad.

                                                                                                        1. 1

                                                                                                          Well said. I’m often baffled by the choices made in new versions of C++, but that’s because I have no idea how people use the language in embedded, real-time, or low-latency systems.

                                                                                                          How do you mean? I think the only place I routinely see C++ performance being worse than C is iostreams, which to me is more a byproduct of the era than C++ itself.

                                                                                                          1. 3

I think what GP was saying was not “these features are slow” but instead “the design of the API has decisions that I find questionable but assume make sense in other contexts”.

                                                                                                        2. 7

                                                                                                          I think you could characterize this as a dynamic we have observed, although I think you’re selling a lot of folks in the general programmer community short by generalizing it to “nearly every” or “most” and by saying they themselves over-generalize from their limited frame. Maybe, maybe not. It’s a vast community. As a stateless http server programmer by trade but a “person who likes to understand how things work” by disposition, I always get a lot of value out of hearing from the wisdom of experts in adjacent domains. It doesn’t always have to be relevant to my job for me to get that value from it. It’s not as if I come back to my team hollering that we have to implement a HTTP handler in assembly, but it does help form mental models that from time to time break through the layers of abstraction at which my code sits, increasing opportunities to pattern-match and make improvements that otherwise would have been hard for me to conceptualize.

                                                                                                          Relatedly, the creator of Zig recently drew on some of the same performance-oriented learning from the games community to restructure his language’s compiler and dramatically speed it up. Seems like he applied good judgment to determine these seemingly disparate areas could benefit each other.

                                                                                                          1. 11

                                                                                                            general programmer community

                                                                                                            I think perhaps the really interesting hot take here is that such a community doesn’t exist in any meaningful sense.

                                                                                                            1. 3

                                                                                                              I should have said the set of all programmers

                                                                                                              1. 6

                                                                                                                Sure, sure, but I think that you touched on a really interesting point, right? I think we could make the credible argument that we don’t have “general” programmers and instead have a (large) cluster of web programmers, an even larger cluster of folks who use SQL and Excel, another cluster of embedded programmers who mostly do C and assembly, another of game developers, and so on and so forth. All of those clusters experience the act of programming very differently.

                                                                                                                Anyways, I think you were on to something or at absolute worst had kicked off a really interesting idea. :)

                                                                                                              2. 3

                                                                                                                yeah, “the general programmer community” is about as substantive of a concept as “the general hammering community”. It puts the focus on the hammer instead of the blow. It’s a great way to get people to avoid thinking about the consequences of their work, which is really useful if what you want people to focus on is “I went from using Java to Rust” instead of “I am building systems that violate the consent and boundaries of a large number of people and cause harm to society”.

                                                                                                                1. 1

                                                                                                                  It would be a community of moving things around in memory for its own sake and nothing else. Even memtest86 would be too much. “I made a list of things and no one ever used it.” “I printed hello world to /dev/null”. An isolated unapplied spikes-only meetup.

                                                                                                                  1. 2

                                                                                                                    Hell, some programs for microcontrollers use only CPU registers. ;)

                                                                                                              3. 2

                                                                                                                What was bad about Clean Code?

                                                                                                                1. 6

                                                                                                                  Not parent, but I have read the book, and have an opinion: avoid. Much of it teaches fairly bad habits, shows the wrong heuristics, and the code examples range from “meh” to downright awful.

Strictly speaking the book is not all bad. Much of it is fairly reasonable, and some of its advice, as far as I recall, is actually good. The problem is, the only people capable of distinguishing the good from the bad are people who don’t need the book in the first place. The rest are condemned to take the whole thing at face value, and in the end we get SOLID zealots who blindly follow principles that make their programs 3 to 5 times bigger than they could have been (not even an exaggeration).

                                                                                                                  Unless you’re a historian, I would advise you to read A Philosophy of Software Design by John Ousterhout instead. Here’s a teaser.

                                                                                                                  1. 4

                                                                                                                    I like this article on the subject of Clean Code. In particular, the code examples that have been taken straight from the book just show the kind of havoc that following all the advice blindly can cause. For example, the prime generator example at the end of the article is 70 lines long and requires 7 functions with “readable” names such as smallestOddNthMultipleNotLessThanCandidate. By comparison, a simple sieve of Eratosthenes function takes roughly 20 lines of code and does not needlessly split the logic into unnecessary auxiliary functions.
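For reference, here’s what such a ~20-line sieve looks like (a Rust sketch of the standard algorithm; the book’s own examples are in Java):

```rust
/// Return all primes up to and including `limit`, using a sieve of Eratosthenes.
fn primes_up_to(limit: usize) -> Vec<usize> {
    if limit < 2 {
        return Vec::new();
    }
    let mut is_prime = vec![true; limit + 1];
    is_prime[0] = false;
    is_prime[1] = false;
    let mut n = 2;
    while n * n <= limit {
        if is_prime[n] {
            // Cross off every multiple of n, starting at n*n
            // (smaller multiples were already crossed off).
            let mut multiple = n * n;
            while multiple <= limit {
                is_prime[multiple] = false;
                multiple += n;
            }
        }
        n += 1;
    }
    (2..=limit).filter(|&i| is_prime[i]).collect()
}

fn main() {
    assert_eq!(primes_up_to(30), vec![2, 3, 5, 7, 11, 13, 17, 19, 23, 29]);
}
```

The single function with a conventional name arguably communicates more than seven helpers with names like smallestOddNthMultipleNotLessThanCandidate.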

                                                                                                                    1. 2

The function names you mention appear in an example (in the book) of refactoring another piece of example source code. I’d pick Bob’s exploded version with long names over the original source code any day. It’s not that I like the names; I prefer it to the original because the original is obfuscated in places.

Honestly, I really have the impression that most people criticizing the book didn’t read it. There is a lot of good advice in the book, and maybe some shitty details. But those shitty details shouldn’t invalidate the good advice, and I think most people believe they should. I’m really glad I didn’t follow the advice that the book shouldn’t be read.

                                                                                                                  2. 3

                                                                                                                    In a nutshell: it espouses premature abstraction and a dogmatic expert-beginner approach to programming.

                                                                                                                    https://en.wikipedia.org/wiki/Shuhari

                                                                                                                    Clean Code leaves the reader in the first of the three states of mastery.

                                                                                                                    1. 2

                                                                                                                      Too bad the tradition taught in Clean Code is the wrong one to follow.

                                                                                                                      We should start from a better one. Such as A Philosophy of Software Design.

                                                                                                                  3. 2

                                                                                                                    Games are pretty broad.

Even within a single game there are lots of modules with varied constraints: rendering, sound, gameplay, controls, platform compatibility, tools… Some of it needs top performance, some of it needs top flexibility. From the outside I would guess expertise acquired while developing indie games as significant as Braid and The Witness is very likely to be relevant in many other domains. Perhaps even most.

                                                                                                                    1. 4

                                                                                                                      The Witness took seven years to develop, was privately funded off of a prior success, uses a custom engine, has very sophisticated optics and rendering, and has no combat, no networking, and no skeletal animations. Even in games, The Witness itself is highly irregular in terms of development. Most people don’t have seven years and $800k to make a puzzle game with a fine-grained attention to rendering and no combat. It’s an extremely specific context. The other thing I find so weird is that on HN and Lobsters people constantly bring up The Witness, but working in games, it’s not a game I hear talked about often.

                                                                                                                      1. 3

                                                                                                                        Two points:

                                                                                                                        • Before The Witness there’s this success you mention: Braid. So it’s not just the one game, and he worked on other things before.
                                                                                                                        • Many things go into the development of a single game. Especially if you do most of those things yourself. The game may be specific, but the development required to achieve it… not so much.

                                                                                                                        The other thing I find so weird is that on HN and Lobsters people constantly bring up The Witness, […]

                                                                                                                        This is a link involving Casey Muratori, and a comment thread mentioning one of his closest peers, Jonathan Blow. Of course The Witness was gonna come up.

                                                                                                                    2. 1

                                                                                                                      Really well said. It’s hard, because on the one hand I’d like to think that there are some universal truths about software. And maybe there are. But, so much is context-dependent.

                                                                                                                    1. 2

                                                                                                                      IDEA for Kotlin and markdown, CLion for C++ and Rust.

                                                                                                                      1. 6

A practical defense against rootkits is not detection (which is a cat-and-mouse game), but control over what is loaded into the kernel. So: kernel-module digital signing and whitelisting, or compiling the modules statically into the kernel and disallowing the loading of kernel modules entirely.

After we implement this, we “only” need to care about security holes and privilege-escalation exploits. Of course we lose extensibility, so a third-party company can’t release a driver for their hardware. But Apple, for example, has partly solved this by providing a lot of user-mode frameworks that allow writing user-mode drivers, which are properly process-separated from crucial parts of the kernel (so firewalls, USB devices, and AV real-time scanning don’t require kernel-mode drivers at all).

                                                                                                                        1. 3
                                                                                                                          • Java Standard Library
                                                                                                                          • Akka for Java/Scala
                                                                                                                          1. 1

                                                                                                                            What do you find compelling about Akka? Some higher ups have begun talking about using it at $WORK so I’m curious what the appeal is.

                                                                                                                          1. 55

                                                                                                                            This guy is way more generous than I am. I would have yanked the whole project from the internet ages ago. Some big company would take over or launch a new project to protect their bottom line, probably paying an entire team of developers a lot more than they would have ever needed to donate to him. And I wouldn’t have a single shred of guilt about any of it.

                                                                                                                            I don’t even believe corporations are doing this on purpose to kill individual-driven open source. I just don’t think they care. They have no financial incentive to care. The aforementioned cost of $BIGCORP funding a replacement project with an entire team of developers compared to donating a pittance to this poor bastard? A rounding error on their quarterly financial reports. Why would they care?

                                                                                                                            1. 23

                                                                                                                              I would have yanked the whole project from the internet ages ago.

                                                                                                                              Agreed, I’d have stopped working on this thing long ago. The hate this guy gets is absurd and undeserved. I honestly don’t see why he puts up with it.

                                                                                                                              1. 21

                                                                                                                                Ever since spending some time maintaining something small, I’ve long dreamed of dual-licensing, with a free-license clause allowing revocation in the case of abusive behavior towards maintainers.

                                                                                                                              There would be something deeply satisfying about forwarding some troll’s abusive messages to the legal department of their employer, explaining that the employer is no longer in compliance with the free license because they employ this person.

                                                                                                                                1. 7

                                                                                                                                  I think it is all about having a “throat to choke”, as one boss of mine crassly put it. You use open source, it causes problems, and no one is blamed; no one gets in trouble (for the most part). “You got it for free? What did you expect?” is a safety net for a lot of people, even managers. Hire a team, pay their salaries and benefits, and they don’t deliver? Well, there are lots of bottoms to be spanked (to stick with the crassness for continuity’s sake). Open source exposes the beauty and ugliness of human nature and incentives. Unless people have integrity and take their responsibilities seriously, no matter their link in the chain, things will keep being just “meh”.

                                                                                                                                  1. 1

                                                                                                                                    Why would anyone care about a hypothetical project, if you – the hypothetical project’s author – wouldn’t even care about it?

                                                                                                                                  1. 11

                                                                                                                                    I get why they’re packing features into KRunner (to have some kind of omni-tool with fast access to lots of the things a user might need), but I always end up disabling all of them and treating it exclusively as an app launcher. The reason is that with some features enabled, there is a noticeable lag on my KDE machines – 2 or 3 seconds – before KRunner reacts to the first letter, and I prefer it when typing registers immediately.

                                                                                                                                    Having written that, KDE is THE desktop for me. Pragmatic, no ideology, and it has tray icons. I really can’t fathom why GNOME hates them.

                                                                                                                                    1. 2

                                                                                                                                      Other than just app launching, the feature that works well with KRunner is “go to this window”. Especially when it has browser integration. This lets you search and switch to browser tabs. It ends the painful shuffle of windows when you hit something like “Okay, which browser window had the tab with the documentation for zyx REST API?” I do wish it was a bit snappier. OTOH it is going to find it a lot faster than I’m going to find it by rifling through all my browsers and tabs.