Threads for robey

  1. 2

The idea that git’s CLI is well-designed is a great spit-take, and the first 2/3 of the article is really just about open source philosophy, but I think the later point of the article is dead-on: good design doesn’t happen by sanding down a corner here or there; it comes from thinking about the entire app or workflow holistically. I had a front-row seat to this tragedy in the early days of GNOME: if non-designers get a vote equal in weight to designers, no good design can happen. It really requires breaking out of some comfy local maxima.

    1. 3

I constantly think about this ancient Gruber post - oftentimes when programmers think of design, it’s in terms of “we’ll make file management friendly by designing an X frontend to ls” instead of “let’s make a file manager”.

      As for designers vs non-designers, I wonder if HCI is an inherently authoritarian art form, and usual open source development is going to cause problems for it. I’m going to be a bit of a prick and link my own post (appendix is the relevant part), because it’s an interesting question.

      1.  

        ‘Designers’ covers a lot of different people and levels of expertise. HCI is closely tied to psychology and leads to a lot of objectively measurable things. Fitts’ Law is a bit more complex on touchscreens (and needs you to reason about parallax, something that the people who designed the on-screen keyboard that I’m currently using on my iPad ignored, leading me to repeatedly make the same typos), but for track pads and mice it gives some quite simple metrics for evaluating a UI. It tells you where to put the things that the user will want to click the most. The vast majority of GUIs (open and proprietary) get this spectacularly wrong. The easiest places to hit on a screen the size of a typical laptop are the corners. Think about what your favourite GUI puts there. Is it the four things that you want to click most often?
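
For reference, the usual Shannon formulation of Fitts’ Law predicts movement time from target distance and size, roughly:

    MT = a + b \log_2\left(\frac{D}{W} + 1\right)

where D is the distance to the target, W is its width along the axis of motion, and a and b are empirically fitted constants. Screen edges and corners behave as if they had effectively infinite width (the pointer stops there), which is why they are the easiest targets to hit.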

The same is true for things like button order: humans, even in countries with right-to-left reading order, perceive back as left and forward as right, so your buttons should respect this. The average human can keep 3-7 concepts in their active attention at a time, which limits how much complexity you should present at once. Humans approximate thinking using neural networks, which learn by repetition, so you should ensure that the same action has the same result as much as possible (e.g. put things in the same place in the UI across contexts, use the same shortcut keys, and so on). Only around 10-20% of humans naturally organise things in hierarchies; most have good spatial memory for up to about 20 things and organise everything else with overlapping labels.

        Just by paying attention to the rules in the above paragraph, you can design a GUI that’s better than most of what’s available today. And most of that is drawn from papers that I read 10-20 years ago, many of which were decades old then. Some designers (I’ve been privileged to work with some) study this stuff and apply it. A lot more are failed artists. Artists have a natural tendency to want to make things distinctive, which works directly in opposition to good usability.

    1. 66

I read this and felt weird about it initially but couldn’t quite put my finger on it. In my experience, using Rust has led to faster developer velocity (even for CRUD apps), simply because the strong type system allows you to encode invariants to be checked at compile time, and the tooling and libraries are amazing. This article seems to be about a project that was being developed right around the async/await release, which was a rough time for using libraries. Most of the sharp edges are gone now, but I don’t doubt that the state of the ecosystem at that point affected this person’s experience.

However, I do think there is a situation where this advice holds true (even for someone as heavily biased as I am), which is a very specific kind of startup: a hyper-growth startup with 100%+ YoY engineer hiring. The issue with Rust is not that it’s slower to develop in (I don’t think that is true); it’s that in order to develop quickly in Rust you have to program in Rust. And frankly, most developers new to Rust have no idea how to program in Rust, because so many languages do not feature strong type systems. The problem is that if your influx of new developers who need to learn Rust is too large, you won’t be able to properly onboard them. Trying to write Java using Rust is horrible (I’ve worked with a number of colleagues whom I’ve had to gently steer away from the OO design patterns they were used to, simply because they make for really difficult Rust code and are largely obsoleted by the type system).

It isn’t even lifetimes or borrowing that are necessarily tricky; in my experience, issues with lifetimes are fairly rare, and people almost always immediately seek out an experienced Rust dev for guidance (you only need a handful of those to field all the lifetime questions; my current team only has me and it’s been a non-issue). The bigger problems are around how to structure code. Type-driven development is not something most people have experience with, so they tend to stick to really simple structs and enums, to their detriment.

      For instance, I commonly see new Rust developers doing something like this:

fn double_or_multiply(x: i32, y: Option<i32>, double: bool) -> Result<i32, String> {
    if double {
        if y.is_some() {
            return Err("y should not be set".to_string());
        }
        Ok(x * 2)
    } else {
        if y.is_none() {
            return Err("y should be set".to_string());
        }
        Ok(x * y.unwrap())
    }
}
      

Yes, I know it’s a completely contrived example, but I’m sure you’re familiar with that kind of pattern in code. The issue is that this is using only the shallow aspects of Rust’s type system – you end up paying for all of Rust but only reaping the benefits of 10% of it. Compare that to what you could do by leveraging the type system fully:

enum OpKind {
    Double(i32),
    Multiply(i32, i32),
}

fn double_or_multiply(input: OpKind) -> i32 {
    match input {
        OpKind::Double(x) => x * 2,
        OpKind::Multiply(x, y) => x * y,
    }
}
      

Note how the error has disappeared, as there is no way to call this function improperly. That means fewer tests to write, less code to maintain, and APIs that can’t be used improperly. One of the most interesting questions I commonly get when I promote this style of code is “how do I test that it fails then?”; it’s always fun to explain that it can’t fail[1] and there is no need to write a test for the failure. The developer-efficiency benefit from applying this style of thinking everywhere is massive and more than pays for the cost of using Rust.

But developers from other languages take time to get to this point, and it does take time and effort from experienced Rust developers to get everyone on the same page. With too many new people, I can see more and more code like the first example leaking in, which means you get minimal benefit from Rust with all of the cost.

I can’t argue with this person’s experience, as much as I love Rust and think it has features that make it an incredibly productive language, but I think the core issue is that the majority of developers do not have experience with strong-type-system thinking. As more languages start adding types I’m hopeful this problem becomes less prevalent, because the productivity difference between developers who understand type-driven development and those who don’t is large (in languages with strong type systems).

[1] Technically it can panic, which I do explain, but for the majority of cases that is a non-issue. Generally, if there is a panic situation you think you might have to handle, you use a different method/function that returns a Result and bubble that up. Panics are largely unhandled (and for good reason; they aren’t exceptions and should be considered major bugs).
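
To make that concrete with the earlier example, here is a minimal sketch of “use a different method and bubble it up” for the overflow case (the String error type is just for illustration):

enum OpKind {
    Double(i32),
    Multiply(i32, i32),
}

// checked_mul returns None on overflow instead of panicking, so the
// caller gets a Result to handle rather than a crash.
fn double_or_multiply(input: OpKind) -> Result<i32, String> {
    match input {
        OpKind::Double(x) => x.checked_mul(2),
        OpKind::Multiply(x, y) => x.checked_mul(y),
    }
    .ok_or_else(|| "multiplication overflowed".to_string())
}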

      1. 10

        FWIW Matt Welsh is a very experienced programmer and former computer science professor at Harvard:

        https://en.wikipedia.org/wiki/Matt_Welsh_(computer_scientist)

        (I know of him from his research on SEDA, an influential web server architecture, as well as having a good blog.)

So this comment strikes me as a bit out in left field … I don’t doubt it’s your experience, but it doesn’t seem relevant to the article.

        1. 8

          I’m not familiar with who Matt Welsh is, but I found his post to be well written and accurate to his experience. My comment was simply a reflection of my own experience, which I think differs.

          I don’t see how my comment isn’t relevant to the article, but I am open to feedback if there is something specific you felt made my comment out of left field!

          1. 7

            Not GP, and I hope this doesn’t come off as too negative, but your comment is pretty dismissive of the idea that Matt Welsh could have substantive issues with the design of Rust. You seem to imply that the problems he ran into stem from a lack of experience:

            I think the core issue is that the majority of developers do not have experience with strong type-system thinking.

            Your example about double_or_multiply is great, but IMO it’s a pretty elementary example for most readers here, as well as Matt Welsh.

            The general vibe is like this: someone complains that a particular proof in graduate-level mathematics is hard to read, and then you respond with a tutorial on pre-calculus. I like your comment, and it is relevant, but it doesn’t feel like it responds to the article, or takes the author seriously.

            1. 3

Thanks for the explanation, I do appreciate it. I hoped my comment wouldn’t come across as dismissive of the article, but more as a refinement of which aspect it was talking about. I think the fundamental issue is that I completely disagree with Matt Welsh that programming in Rust lowers developer efficiency, and so my comment was exploring a reason why he may feel that way.

The example was simple, and more just for having something to ground my argument in (i.e. Rust makes you faster because you write less code overall), as well as something for developers who are unfamiliar with type-driven development to see. I didn’t mean to imply that Matt Welsh doesn’t know that, but rather that type-driven development is a completely different style of programming, and onboarding devs to a new language is easy while onboarding them to a new style is hard.

Clearly my point wasn’t made as clearly as I had hoped, and thank you for pointing out where it felt like I was off base. I do think it’s important to disagree, and I’ve never been one to defer to another just because of their accomplishments or prestige.

I’m thinking it might make sense for me to write my own post on my thoughts on developing quickly in Rust, and hopefully I can take these ideas and your feedback and make something that pushes the conversation forward on what makes a productive language without coming across as dismissive of others’ experience :)

              1.  

                You:

                …a hyper-growth startup with 100%+ YoY engineer hiring… in order to develop quickly in Rust you have to program in Rust. And frankly, most new developers to Rust have no idea how to program in Rust because so many languages do not feature strong type systems. And the problem is that if your influx of new developers who need to learn Rust is too large, you won’t be able to properly onboard them.

                Matt:

                We hired a ton of people during my time at this company, but only about two or three of the 60+ people that joined the engineering team had previous experience with Rust.

                This was in two years, which he claims was 10x growth in headcount, so from ~6 people to 60 in two years, with only 3 people who knew Rust. Basically, well above your 100% YoY hiring threshold for being able to onboard Rust engineers.

                You:

                I completely disagree with Matt Welsh that programming in Rust lowers developer efficiency

                I don’t think you do disagree with him :)

My interpretation of his claim is that in a rapidly growing organisation that needs to ship product rapidly, and where you don’t have a lot of Rust expertise already, Rust may not be a good fit. What you are claiming is that, given enough experience with Rust, it can accelerate your day-to-day development, but that to do so you need to give developers sufficient time to onboard to Rust so they can become familiar and efficient with it (which of course is true for any language, but Rust onboarding likely takes a lot longer than, say, Go, or Python, or C#).

                Both of these claims can be true because they are discussing different aspects of software engineering at different scales.

                Related, I’d be interested in hearing what your experience has been on how long it takes to onboard a complete Rust novice, to the point they are at close to 100% productivity.

        2. 8

          On a related note, I’m very curious to see what happens with the recent Twitter situation. If Twitter manages to thrive, I think many companies are going to take notice and cut back on developers. The easy pay-day for software engineers could be at an end, and fewer developers will have to handle larger and larger systems. In that case, I’d imagine building things which are robust will outweigh building things quickly. If you have 10% the number of engineers you want to minimize the amount of incident response you are doing (1 out of 100 devs dealing with oncall every week is very different from 1 out of 10 devs dealing with oncall every week in terms of productivity; now the buggy systems have a 10% hit on productivity rather than 1%).

I’m both worried (large-scale cutbacks to mimic Twitter would not be fun) and somewhat optimistic that it would lead to more reliable systems overall. Counter-intuitively, I think Rust/Haskell/OCaml and friends would thrive in that world, as choosing a language for quick onboarding of hundreds of devs would no longer be a constraint.

          1. 19

            I draw the exact opposite conclusion:

Tighter purse strings mean fewer resources allocated to engineers screwing around playing with new languages and overengineering solutions to stave off boredom.

            There will probably be cases where a business truly uses Rust or something else to differentiate itself technically in performance or reliability, but the majority of folks will go with cheaper tooling with easier-to-replace developers.

            1. 11

              I agree. People will go where the libraries are. If you have 1/10 the number of people you aren’t going to spend your time reimplementing the AWS SDK. You are going to stick to the beaten path.

              1. 2

                I’m sure you meant that as more of a general example than a specific one, but: https://aws.amazon.com/about-aws/whats-new/2021/12/aws-sdk-rust-developer-preview/

                1. 1

                  Yeah, I meant it generically. More niche languages are missing a lot of libraries. If you have fewer people you probably want to spend less time reinventing the wheel.

I know that for any one language people will probably come out of the woodwork and say “I don’t run into any issues,” but it’s more about perception in the large.

              2. 2

                You make a really good point, and I’ve been mulling on it. My logic was based on the idea that if the software industry suddenly shrank to 10% of its size, the engineers maintaining buggy systems would burn out, while those maintaining robust systems would not. Sort of a forced evolution-by-burnout.

                But I think you’re right, tighter purse strings means less experimentation, so the tried-and-true would benefit. So who knows! Hopefully it’s not something we will ever learn the answer to :)

                1.  

The department I run, after a casualty rate of over 50% this year, has made it a major focus to consolidate, simplify, and emphasize better-documented and smaller systems specifically to handle this. :)

                  I hope it works out, but these are going to be interesting times whatever happens. I just personally wish engineers in tech as a culture hadn’t overplayed their hand.

              3. 5

                I can probably set your mind at ease about Twitter (but not the other tech companies having layoffs, nor the new management there who is utterly clueless). Since at least 2009, Twitter’s implicit/unspoken policy was that engineers are cheaper than machines. In other words, it’s more cost-effective to hire a team to optimize the bejeezus out of some service or another, if they can end up cutting server load by 10%. If their policy was based on any real financial data (I have no idea), good dev and devops people will continue to be in high demand, if only to reduce overall machine costs.

              4. 3

Any recommended way to learn about that, in your experience (other than being lucky enough to have an experienced Rust programmer to help you out)?

                Maybe something like exercise.org?

                1. 4

I’m a fan of trial-by-fire, and if you really want to understand type-driven development then learning and using Haskell is what I’d recommend. Rust is a weird language because it seems really novel, but it really only has the ownership model as unique (and even then, Ada + SPARK had it first). Everything else is just the best bits borrowed from other languages.

                  Haskell’s type system is more powerful, and the docs for libraries heavily lean into the types-as-documentation. I’m not good enough at Haskell to write production software in it, but getting to that “aha!” moment with the language has paid dividends in using Rust effectively.

                  1. 3

Rust is a weird language because it seems really novel, but it really only has the ownership model as unique (and even then, Ada + SPARK had it first)

                    Ada/SPARK did not have Rust’s affine types ownership model.

                  2. 4

                    Just wanted to mention that it is actually https://exercism.org for others’ sake. Didn’t want to let that go unnoticed as it is a wonderful platform for learning!

                    1. 2

                      Elm! If you want a beginner-friendly way to epiphany, work through the Elm tutorial. That was my first, visceral, experience of the joy of sum types / zero runtime errors / a compiler that always has my back.

                      Why via Elm? Because it’s a small and simple language that is user-friendly in every fibre of its being. This leaves you with lots of brain cycles to play with sum types.

• Friendly error messages that always give you hints on what to do next. (Elm’s error messages were so spectacularly good, and that goodness was so novel, that for a while there was a whole buzz in all sorts of language communities saying “our error messages should be more like Elm’s”. Rust may be the most prominent success.)
                      • You’re building a web page, something you can see and interact with.
                      • Reliable refactoring, if your refactor is incomplete the compiler will tell you.
                    2. 2

fn double_or_multiply(x: i32, y: Option<i32>, double: bool)

my 2c: I know it’s a contrived example, but even outside of Rust it’s generally (not always) a bad idea (i.e. a code smell) to have a function that does different things based on a boolean.

                      Also, a good linter/code review should help with the kind of issue you’re pointing to.

                      1. 2

In hopes it’s instructive: your code samples are an instance of “parse, don’t validate”, where you push all your error-checking logic to one place in the code.
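
For anyone reading along, a minimal sketch of what that “parse” step could look like for the example upthread (the function name and error strings are just illustrative):

enum OpKind {
    Double(i32),
    Multiply(i32, i32),
}

// The only place that can fail: turn the loosely-typed inputs into a
// value that is correct by construction.
fn parse_op(x: i32, y: Option<i32>, double: bool) -> Result<OpKind, String> {
    match (double, y) {
        (true, None) => Ok(OpKind::Double(x)),
        (false, Some(y)) => Ok(OpKind::Multiply(x, y)),
        (true, Some(_)) => Err("y should not be set when doubling".to_string()),
        (false, None) => Err("y should be set when multiplying".to_string()),
    }
}

Everything downstream of parse_op can then take an OpKind and never worry about the invalid combinations again.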

                        1. 3

                          Yes it is :) I’m a huge fan of that article, though I’ve found it can sometimes be difficult for someone who isn’t familiar with strong types already. Thank you for sharing the link, I think it’s a great resource for anyone interested in reading more!

                        2. 1

                          I’m new to Rust, could you provide an example of calling your second function? I’ve only just passed the enum chapter of the book and that is the exact chapter that made me excited about working with Rust.

                          1. 7

                            Of course! You would call it like so:

                            double_or_multiply(OpKind::Double(33));
                            
                            double_or_multiply(OpKind::Multiply(33, 22));
                            

It’s good to hear your excitement about enums in Rust, as I think they are an under-appreciated aspect of the language. Combining structs + enums is super powerful for removing invalid inputs, especially when nesting them inside each other. The way I think about designing any API is: how can I structure the input the user supplies such that they can’t pass in something incorrect?
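
As a small illustration of the nesting idea (the types here are hypothetical, purely to show the shape):

struct Account {
    id: u64,
    status: AccountStatus,
}

// Payment details only exist while the account is active, so a
// "suspended account with a payment method" simply can't be represented.
enum AccountStatus {
    Active { payment: PaymentMethod },
    Suspended { reason: String },
}

enum PaymentMethod {
    Card { last_four: [u8; 4] },
    Invoice,
}

Any function that takes a PaymentMethod can only ever be handed data from an active account, with no runtime checks required.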

                            I wish I could find a source for how to design APIs, as there is some place out there which lists the different levels of quality of an API:

                            • low: the obvious way to use the API is incorrect, hard to use correctly
                            • medium: can be used incorrectly or correctly
                            • high: the obvious way to use the API is correct, hard to use incorrectly
                            • best: no way to incorrectly use the API, easy to use correctly
                            1. 6

                              You may be thinking of Rusty Russell’s API design levels.

                              1. 1

                                Yes! That was exactly what I was looking for, thank you!

                            2. 3
                              double_or_multiply(Double(2)) // = 2*x = 4
                              // Or
                              double_or_multiply(Multiply(3,7)) // = 3 * 7 = 21
                              
                            3. 1

                              Can you explain how it could panic?

                              1. 2

Multiplication overflow, which actually would only happen in debug mode (or in release with overflow checks enabled). So in practice it likely couldn’t panic, since usually nobody turns on overflow checks (see below).

                                1. 6

                                  It’s not that uncommon. Overflow checks are generally off because of perceived bad performance, but companies interested in correctness favor a crash over wrapping. Example: Google Android…

                                  https://source.android.com/docs/setup/build/rust/building-rust-modules/overview

                                  Overflow checking is on by default in Android for Rust, which requires overflow operations to be explicit.

                                  1. 2

                                    I stand corrected! I’m curious what the performance impact is, especially in hot loops. Though I imagine LLVM trickery eliminates a lot of overflow checks even with them enabled

                                    1. 2

I remember numbers flying around on Twitter; most of what I hear is that it’s in negligible ranges. Particularly that if it becomes a problem, there’s an API for actually doing wrapping ops.
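
Concretely, I mean the explicit integer methods in the standard library, e.g.:

fn main() {
    let a: i32 = i32::MAX;

    // Explicitly wrap (two's complement) instead of panicking:
    let wrapped = a.wrapping_mul(2);

    // Or keep the overflow signal and decide what to do with it:
    let checked = a.checked_mul(2);       // Option<i32>, None on overflow
    let saturated = a.saturating_mul(2);  // clamps at i32::MAX / i32::MIN
    let (value, overflowed) = a.overflowing_mul(2);

    println!("{wrapped} {checked:?} {saturated} {value} {overflowed}");
}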

Sadly, as is often the case, I can’t find a structured document that outlines this, even after a bit of searching. Sorry, I’d love to have more.

                                  2. 1

So it’s specific to this example: if the enum were over structs with different types and the function did something else, it wouldn’t necessarily panic, right?

                                    Is there a way to make this design panic-proof?

                                    1. 5

Yes, the panicking is specific to the example. And you can make it panic-proof if none of the function calls within can panic. IIRC it’s still an open design problem how to mark functions as “no panic” in Rust so the compiler checks it [1][2]. There are some libraries to do some amount of panic-proofing at compile time[3] but I haven’t used them. I thought there was a larger RFC ticket for the no-panic attribute but I can’t find it right now.

[1] https://github.com/rust-lang/project-error-handling/issues/49
[2] https://github.com/rust-embedded/wg/issues/551
[3] https://crates.io/crates/no-panic

                              1. 1

                                Is this being treated as off-topic because of the headline? Because among all the guff being posted about Twitter this actually had some interesting points. Maybe the headline should’ve been something more like “SRE worst-case scenarios for complex systems”. I found this via Reliability stuff worth reading.

                                1. 1

                                  The link goes to a random tweet deep into a thread about doom scenarios, but it’s not clear what the significance of that specific tweet is. Maybe they meant to link to the thread as a whole?

                                  1. 1

                                    Weird, I see the whole thread. Firefox on Linux.

                                1. 3

                                  I had a lot of thoughts about “Scaling [the fediverse] is impossible” which I dumped on my fedi account, but in summary:

                                  • I don’t think “distrust of governments/taxation” or “cryptocurrency” are the main drivers for interest in decentralization. I think the majority of us are disillusioned by large corporations with no oversight and terrible intentions (Twitter, Facebook) and believe that we can do better with small trust-based communities cooperating toward a larger goal: that’s federation.

                                  • Moderation, human relationships, and managing social interactions is hard. I don’t think a centralized service like Twitter solved any of those problems. It just handed the authority to a group of investment bankers who (shock!) turned out to make bad decisions on your behalf, with no recourse.

                                  • We’ve had federated online systems before: BBSs and usenet, at the very least. They don’t require everyone to run a server, and they don’t require everyone to be good at moderating. They build on “centralization in the small”: your town may only have a few BBSs, but they require the volunteer time of several people. It’s messy at times but it can and does work.

                                  1. 2

                                    OMG my parents totally had one of these, and yes they did use it to protect the Apple II.

                                    1. 0

                                      Cool.

                                    1. 8

                                      $50 is hardly low-cost for a microcomputer/microcontroller, not when you can get a RPi Zero for US$5. (With a significantly higher clock speed than 18MHz…)

                                      Clearly I’m not the target market for this. But wouldn’t a true retro fetishist be taken aback that this board has an ESP32 (32-bit SoC) on it, running the display? Sort of like the dad holding the baby on his lap letting it pretend to drive…

                                      1. 2

                                        from https://www.thebyteattic.com/p/about.html

                                        I focus on 8-bit computers because my kick is innovative, elegant computer architecture, not performance. Therefore, the number of bits is rather irrelevant to me, and I choose 8 bits because the architecture and design effort aren’t smothered by massive data and address buses.

                                        and

                                        I’ve found that only now, as a hobbyist, did I finally manage to do the kind of creative engineering work I dreamed of as a 17-year-old freshman in engineering school.

                                        so this is about passion, not efficiency

                                        1. 1

                                          I’m not sure why they conflate the two categories. $50 is okay for a microcomputer – they really are selling something that is more like a ZX Spectrum, and I’d definitely pay $50 for that – but definitely not… anywhere near okay for a microcontroller. I’m pretty sure you can get a $50 development board for a microcontroller.

                                          The Pi Zero is one tenth of that but you basically get an underpowered phone running Linux in some mysterious manner. $50 for an open source board with open source software (no idea if these claims are true, mind you!) that’s basically a beefed up Sinclair isn’t bad.

I can’t say I ever fell in love with the Pis for small hacks – it always felt like I spent two days putting a cool hack together, and another day or so dealing with systemd, Samba or NFS shenanigans, and God knows what else. I’d gladly pay $50 for something that takes me to a BASIC prompt and lets me peek and poke things instead of asking me to install some weird Python library with fifty GitHub stars which only works on Python 3.4.2. I’ve given up on cool ideas I had in an afternoon more than once just because, by the time I managed to put together a working image, it was eight in the evening and I had boring grown-up shit to do.

The true retro fetishist will probably scoff at things like Flash memory, too, but then again, if they’re truly retro, they can just get an actual Sinclair ZX Spectrum :-).

                                          1. 1

$20 ESP32 boards are nice if you don’t want a big OS getting in your way. They can be coded either like Arduinos or in full C/C++ with a bigger library.

                                            If you want a friendlier language there are a lot of boards that have MicroPython built in, making them super easy to start writing code for. Installing other Python libs can be annoying but that’s not a fair comparison with BASIC, where you’re limited to the features and commands that come with it.

                                            1. 3

The ESP32 is not FLOSS.

                                              1. 2

I am proooobably not the best person to talk to about the ESP32, either :-(. Maybe it’s because I come from an embedded background – working with these things is kind of my job, so I don’t have a lot of patience when doing it as a hobby. I love things that are either really good embedded development platforms – good documentation including schematics, lots of pins and few hoops to jump through for interfacing, lots of test pads, hardware that’s easy to debug, good cross-compiler support that shields you as little as possible from the underlying hardware, good JTAG interfacing etc. – or really good computers, like, I dunno, the Mega65.

                                                In my experience – but admittedly, that was years ago, when it was far less popular – the ESP32s were neither. It took a lot of fiddling to get them to work, and when they didn’t, you had to dig through a lot of magic to figure out why. Lots of times it boiled down to libraries stepping on each others’ toes, or to bugs in their HAL.

I don’t mind a large language or OS. However, I would really love a hobby platform that requires less fiddling than a “pro” platform, with the understanding that, sure, it’s not going to be as flexible, or maybe really not as capable. Most “maker”-category boards I tried require a comparable amount of fiddling, for about the same capabilities (if you’re willing to forgo the Arduino or MicroPython bits) or less, modulo weird design choices made in order to meet specific size requirements or manufacturing constraints.

                                                Mind you, I haven’t tried this board. But I’ve often seen the Pi marketed from a similar angle – something supposed to make computing fun again, something you can just plug in and use, like you could do with the BBC Micro or the Spectrum decades ago. Frankly, if the Spectrum, my first computer, would’ve been as annoying as a Pi or an ESP32, I probably would’ve chosen a very different career path :-D. From the specs, this looks far closer to the “modern reinterpretation of classical microcomputers” view.

                                                1. 2

Yeah, MicroPython (and its fork CircuitPython) really blew me away with how immediate & accessible it is, and reminded me a lot of the BASIC days. A Raspberry Pi Pico can boot directly into Python, and that seems like a better, smaller, cheaper equivalent to this Agon.

                                            1. 2

                                              The author declares that this is a response to a link to the orange site, but it ends up being a cross-link back to this lobsters post: https://lobste.rs/s/fgfxvu/risc_2022

                                              1. 35

This seems to miss the big issue: some of those threads would not exist somewhere else. It’s one thing to complain about the interface, which is a Twitter problem. But “stop writing Twitter threads” effectively means “I don’t like this so much, others shouldn’t have the chance to experience it at all”.

                                                There are people like foone and swiftonsecurity who could possibly write somewhere else but won’t change for $reasons and the medium works for them so you can only choose to not engage.

                                                1. 6

This is a good point. However, I would point out that there are a lot of blogging platforms that offer free accounts, and creating your own blog is not that complicated. I wish Internet Service Providers still had the same mentality as back in the day, when (at least in France) they would offer an FTP server with some disk space where you could host your (static or dynamic) website. This is how the whole blogging thing took off in the late 90s/early 2000s.

                                                  1. 25

                                                    It’s not about the availability of blogging platforms, but the way the medium works in the moment for some. For example see https://nitter.net/Foone/status/1066547670477488128

                                                    1. 3

                                                      Foone’s threads are delightful and I find them very easy to read. ❤️

                                                      I wish Twitter had a couple of extra features like a button to jump to the top of the current thread and an easy way to see all the replies to an individual tweet inside a thread.

                                                      1. 2

                                                        Interesting challenge for the fediverse: make a server where people like foone can jot out tiny posts in a thread, but readers can view it as an unadorned stream-of-consciousness post (no replies, no ads).

                                                        1. 3

                                                          Interesting idea, but I’d be sceptical that it works. A stream of consciousness, written bite-sized and then glued together, reads like a rough draft. Some form of visual separation needs to be applied to make the readers aware that they are not reading a polished product.

                                                          1. 1

                                                            I hear you, but I recall being forced to read “The Old Man and the Sea” at school, and many automatically-reconstituted tw––r threads have been at least as coherent and interesting as that. :)

                                                        2. 1

I sure hope Foone doesn’t find out about this thread; they have a very low opinion of people bashing their tweets as it is.

                                                          1. 1

                                                            Right - it closely models just talking about it in something like IRC, without the friction.

                                                          2. 6

                                                            Even if those blogging platforms were magically even easier to use than Twitter, it would still be close to meaningless because Twitter has the eyeballs.

                                                            (To be clear, I share your preferences but it’s a pointless battle to fight unless you can solve the real forces that drive why things are the way they are.)

                                                            1. 1

                                                              My periodic reminder to any uninitiated readers that there is a name for this phenomenon: a network effect.

                                                          3. 3

                                                            some of those threads would not exist somewhere else

                                                            What is your thesis here? That free Twitter accounts are easier to create than free Wordpress, Substack, or $your_favorite_blogging_platform accounts?

                                                            Or is it that content authored as Twitter threads would not otherwise exist because authors are already logged into Twitter, rendering it the easiest place to author one’s thoughts?

                                                            I can see some ways in which your argument is correct, and many in which it can be dismissed.


                                                            Good ideas deserve a better reading experience than twitter threads. In addition to a better reading experience, alternative platforms – e.g. writing text/markdown in a Gist – are a significantly better authoring experience. URLs to better reading experiences can be cross-posted to Twitter to gain access to the “eyeballs” on Twitter.

                                                            1. 3

                                                              Swiftonsecurity used to post on a blog and it was great! I really miss those days.

                                                              1. 3

                                                                People write to be read. For better or worse, there are a lot more readers on Twitter than other, better platforms. Ergo, threads.

                                                                1. 1

                                                                  I made abundantly clear in my comment that I understand the concept of optimizing for one’s readers. Ergo, give readers a better reading experience, not threads.

                                                                  1. 2

                                                                    I do not like the threads, but people seem to like “lots of readers - threads” more than “a well designed reading experience - readers”.

                                                                2. 2

The second one (see the explanation I linked, for example), plus: Twitter is its own kind of medium. Where else can you effortlessly link to / embed other bits of content the same way, continue a post from years ago while bringing only the new part into focus, or post both a one-line comment and a multi-page story without either looking out of place? I think SwiftOnSecurity basically mastered the medium - if that content existed somewhere else, it would change. The form of Twitter threads shapes the content itself.

                                                                  1. 2

                                                                    Foone, for one, has said that the twitter format is what enables them to post. That flow of think, then type in a very short burst, then hit “post” works especially well for their brain, and if they needed to write a blog in order to post they would not do it and would feel bad about all the unfinished blog posts in their backlog.

                                                                    Looking at the number of unfinished blog posts in my own backlog, I understand a little bit where they’re coming from.

                                                                    So when I see a twitter thread, even though I very much don’t enjoy reading it in that format, I just use one of the alternative twitter frontends to roll it together and read it that way. Because I suspect that Foone is not alone, and some stuff only exists because it can go out with very low friction like it can on twitter. And it’s interesting enough that I’d rather see it published in a format I don’t like than see it sit unpublished in someone’s backlog and never learn about it.

                                                                    1. 1

                                                                      I think it’s self evident that if writers aren’t able to write with certain tools, then they can and should use the tools that work for them. If Twitter threads allow them to write most effectively – great; they should use Twitter threads.

                                                                      In my opinion, this discussion began not about those people, but about the readers who have to endure the reading experience that twitter threads mandate. I think it’s important to finish that conversation before having one about author UX.

Obviously the clickbaity title lacks nuance, but if one were inclined to rewrite it, it might read “Please stop writing Twitter threads if you can help it”.

                                                                      1. 2

                                                                        I would actually push a small step further: we should start with the assumption that authors have considered alternatives, and found them lacking for some reason.

It obviously never hurts to ask nicely, but the assumption that people don’t know Twitter sucks to read, and that if we just tell them about it they’ll instantly see the wisdom of maintaining their own site or otherwise altering their workflow, can seem more than a bit condescending.

                                                                        The discussion began about the readers, to be sure. But you mentioned the authoring experience and that made me think of Foone’s comments. The reading experience would be worse if authors stopped publishing.

                                                                        (With that said, I hope everyone who can stand to moves right off twitter. I don’t enjoy reading there, especially since the attempts to make me log in just to read have gotten more and more aggressive lately.)

                                                                1. -1

                                                                  Please stop releasing TUI frameworks for Python. The language is way too slow and makes these tools a pain to use.

                                                                  To whoever is starting to write a reply - no, it cannot be optimized. Python will always take a bit to start the program.

                                                                  1. 10

                                                                    I find this comment very surprising.

                                                                    The stopwatch app from the Textual tutorial starts running for me in less than a quarter of a second - it’s fast enough that I didn’t even notice the delay until I tried to eyeball-measure it just now.

                                                                    The whole point of TUI apps is that you’re going to spend some time in them. Does a quarter of a second to show the initial screen really matter to anyone? That’s way faster than loading a web application in a browser tab.

                                                                    Thinking about it, I don’t think I’ve ever used a TUI written in any language where the startup speed has bothered me.

                                                                    1. 2

Java/Scala usually has a 1-3 second startup time, which is too long for me, but I agree – that’s the only one I can think of.

                                                                    2. 7

                                                                      To whoever is starting to write a reply - no, it cannot be optimized. Python will always take a bit to start the program.

                                                                      It depends. Maybe startup time doesn’t really matter for someone’s particular use case. While there will always be some baseline startup time from Python, there are cases where you can optimize it and possibly bring it down to a level you find acceptable.

                                                                      At a job, I was tasked with figuring out and speeding up slow start of a Python program. Nobody knew why the bloody thing was taking so long to start. Part of it was network delays, of course, but part was Python. I did some profiling.

                                                                      This little Python program was importing a library, and that library imported something called pkg_resources. Turns out that pkg_resources does a bunch of work at import-time (nooo!). After some digging, I found that pkg_resources was actually an optional dependency of the library we were using. It did a try … import … except: …, and could work without this dependency. After digging into the code (both ours and the library’s), I found that we didn’t need the facilities of pkg_resources at all.

                                                                      We didn’t want to uninstall it. Distro packages depended on it, and it was possible that there were other programs on the system that might use it. So I hacked up a module importer for our program that raised ModuleNotFoundError whenever something tried to import pkg_resources.

                                                                      I cut a nearly one-second start time down to an acceptable 300 milliseconds or so, and IIRC a fair portion of the 300 milliseconds was from SSH.

                                                                      Know your dependencies (direct and indirect). Know what you’re calling (directly and indirectly) and when you’re calling it. Profile. And if your Python startup times are slow, look for import-time shenanigans.

                                                                      1. 3

                                                                        Program startup speed is important for some applications but negligible compared to other aspects like usability, accessibility or ease of development, wouldn’t you agree?

                                                                        1. 1

                                                                          Program startup speed and performance is an important part of usability. It’s bad for software users when the software they use is full of latency, or uses so many system resources it bogs down their entire computer.

                                                                          1. 2

                                                                            Agreed, it’s part of usability. But it depends on the numbers. Saying “stop writing TUIs in Python” because of 200ms (out of which something can be shaved off with optimization) sounds extreme.

                                                                        2. 2

                                                                          I completely agree with the unsuitability of Python for TUI / CLI projects! (Especially if these tools are short-lived in their execution.)

                                                                          Long ago (but still using today) I’ve written a simple console editor that has ~3K lines of code (25 Python modules) which imports only 12 core (and built-in) Python modules (without any other dependencies) and mainly uses curses.

                                                                          On any laptop I’ve tried it (even 10 years old) it starts fast enough. However recently I’ve bought an Android device and tried it under Termux. It’s slow as hell, taking more than a second to start… (Afterwards it’s OK-ish to use.)

What’s the issue? The Python VM is slow to bootstrap and load the code (in my case it’s already in .pyo format, all in a zip with zipapp). For example, just calling (on my Lenovo T450) python2 -c True takes ~10ms, while python3.10 -c True takes ~14ms (python3.6 used to take ~20ms). Just adding import json adds another +10ms, while import curses, argparse, subprocess, json (which is perhaps the minimum any current-day project requires) yields a ~40ms startup.

                                                                          With this in mind, this startup latency starts to pile-on and it has no solution in sight (except rewriting it in a compiled language).

Granted, even other languages have their issues. Go, for example, is very eager to initialize any modules you have referenced, even if they will never be used, which easily adds to startup latency.

                                                                          (I’ll not even touch on the deployment model, where zipapp is almost unused for deployment and https://github.com/indygreg/PyOxidizer is the only project out there trying to really make a difference…)

                                                                          1. 2

                                                                            Have you tried the framework?

                                                                            1. 2

Why would I? It only has buttons and checkboxes implemented. And according to the comments in here, it still takes 1/4 of a second to start on a modern CPU.

                                                                              EDIT: In the demo video, the demo takes 34 frames to boot. At 60fps, that’s more than half a second.

                                                                              1. 5

I guess it will never be successful then, like that slow-as-hell Slack thing /s

                                                                                1. 4

                                                                                  The popularity of a chat app - particularly one that most people use because it’s what their workplace standardizes on - is driven much more by network effects than by the quality of the standard client app. It is bad that the Slack client is slow, and this is made worse by the fact that there aren’t a whole lot of alternative clients for the Slack network that a person who is required to use Slack as part of their job can use instead of the official, slow, one.

                                                                            2. 1

                                                                              I think that the problem with your assessment is the assumption that the users of this framework have the knowledge to use a different language or want to use a different language than python. Nobody is forcing you to use it and if folks are releasing tools using it, nobody is forcing you to use those. For those that want to add a TUI front end to a script they made, this seems like a good option.

                                                                              1. 3

                                                                                I think Ink is overall a better TUI framework than this, and let’s face it, Python really is slow, JavaScript is much better.

                                                                            1. 13

                                                                              The title is a bit misleading: the tldr is “we’re moving to swap files”.

                                                                              1. 3

                                                                                I would recommend almost any other source. Steven Levy’s “Hackers”, for example, is dated now but aged much better than this.

                                                                                1. 19

The fact that we’re celebrating hackish extensions on top of TTYs is a sign of the myopia in our industry. We have all these great ways to render any kind of interface (even 1980s bitmaps are an improvement), command line or not, and we just consider the 1977 state of the art to be perfect.

                                                                                  1. 9

                                                                                    While I think it’s valid to criticize “back to the future” efforts to revitalize the command line instead of doing something new, I’d like to see more efforts to replicate the strengths of the command line in new UIs. I like this summary of the command line’s advantages:

                                                                                    • Power of the keyboard — the keyboard allows for faster input than the alternation of keyboard & mouse usage.
                                                                                    • Reproducibility — because all commands are text, they may be more easily reproduced and shared.
                                                                                    • Composable commands — commands on the command line may be arbitrarily composed, using the output of one command as the input of another. This allows for powerful means of expression.
                                                                                    • Scripting — the interactive session may be used as the basis for a script (a computer program), allowing for the automation of certain tasks.

                                                                                    Any GUI that fulfilled these requirements as well as a TTY would be a force to be reckoned with, at least with the vital programmer demographic. You can see some successful efforts to bring these into the GUI world, such as Spotlight harnessing the power of the keyboard for desktop search, but generally this is underexplored territory IMHO.

                                                                                    I’ll also echo @bsandro’s advantages of standardization, portability (between windowing systems and OSes), and remote connection (via SSH).

                                                                                    In the absence of efforts to replicate the command line’s strengths, I will continue to invest time and energy into command line and TUI software.

                                                                                    1. 7

Command lines and GUIs aren't a dichotomy - things from CLIM to Mathematica should make that clear. Even within more commodity software, command palettes, AppleScript/Shortcuts/OSA, and the fact that Windows is fully keyboard navigable show this is already addressed.

                                                                                      And as much as I don’t like Pike, I have to agree with his studies on keyboard and mouse, where the mouse was found to be actually faster, even if it didn’t feel like it. With newer HID, that leads to me swiping on the trackpad to switch workspaces instead of going for the keyboard. With future HID, who knows what’s next?

                                                                                      1. 4

and the fact that Windows is fully keyboard navigable show this is already addressed.

I'm creating apps that enable faster interaction with medical software by using Windows automation. I assure you this is not the case, and I'm trying really hard to use the accessibility/Windows APIs before attempting to use the keyboard, before resorting to "look for this and that field and click between them + about 5px to the right". It's uncommon, but still way more common than it should be.

                                                                                      2. 3

                                                                                        Any GUI that fulfilled these requirements as well as a TTY would be a force to be reckoned with…

                                                                                        What about the Oberon interface, which was a very strong influence on Plan 9 and Acme? I’d say those are both GUIs that meet all of those requirements, with the possible exception of the first one (people have forked Acme and added more keyboard-only control but I’m not sure it caught on, possibly because that advantage isn’t really real).

                                                                                        @calvin also pointed me at Apollo’s Domain/OS a while ago: there are some similar interface ideas in there too.

                                                                                        … at least with the vital programmer demographic.

                                                                                        That’s the problem, really. Within little niches, people are already using alternative interfaces. They just never make it to the mainstream as they aren’t compatible with the lowest common denominator.

                                                                                      3. 7

                                                                                        kinda sucks that rendering proportional fonts in a terminal today is at best a hack, too. I just really dislike reading monospace prose 😔

                                                                                        1. 6

I woke up one day and realised this, so now I run GTK+ Emacs. There are upsides and no real downsides.

                                                                                          1. 5

                                                                                            It’s depressing, isn’t it? Everyone would rather cosplay a 1970s mainframe wizard than consider what a 21st century interface could be.

                                                                                            Well, not everyone. We also have people using GUI IDEs from the ‘90s. I don’t know what would be better but I think it’s a shame that nothing new has appeared for a while now.

                                                                                            I wonder if we’re stuck at a lowest common denominator? I’ve seen some cool programming interfaces for Smalltalk but that’s no good unless you’re using Smalltalk. Are we stuck because everything has to be cross-platform and language-agnostic?

                                                                                            1. 8

One of the recent articles on here really resonated with me: about how desktop GUIs lost their way after the advent of tablets tempted designers into merging the two ways of interacting. It explains why there's so much interest in terminals and 90s GUIs: we're looking for things that fit laptops/desktops better than the current bad tablet-oriented compromises. Maybe we're stuck in a dead end, and by rewinding to GUIs that worked well, we can break out and start improving things again.

                                                                                              1. 4

                                                                                                The “tabletification” tends to be overblown, and the retrofetishism started long before.

                                                                                            2. 4

                                                                                              It sounds like you consider more pixels to be better, or perhaps more arbitrary layouts of information.

While the terminal isn't perfect, it seems a lot of people find it very powerful and useful. I'd love to see your prototype of a better system. Would it be something like a Jupyter notebook for the shell? Or what?

                                                                                              1. 4

Are bitmaps better, though? I can't think of a system that's better than the terminal for everything (I'm excluding actual images, of course). Specific cases, sure, but even though I'm using vscode, half the time I've got a terminal panel open inside it.

                                                                                                When things are outside the terminal, I normally run into this issue: how do I filter / reuse / compose this information? For example, I’ll take lshw / lspci over a Windows device info app because it’s so much more useful. Sure, we could do better and there are ideas like powershell and other shells that try to present structured outputs. I keep trying those and finding out they’re just not convenient enough to be worth switching.
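As a concrete (if contrived) example of the kind of throwaway composition I mean - this is just a sketch, nothing beyond lspci and coreutils:

    # rough breakdown of what's on the PCI bus, by device class,
    # just by slicing lspci's text output and counting duplicates
    lspci | cut -d' ' -f2- | cut -d: -f1 | sort | uniq -c | sort -rn

Try getting that out of a GUI device manager without screenshots and manual counting.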

                                                                                                I disagree that it’s an industry myopia - there’s a new project trying to improve the situation every week. None have really succeeded as far as I can tell. There’s a chance we’re really at the global maximum right now with the idea of “pretty text in a grid for interactive use + a flag for structured output + bitmaps for very specific things that require bitmaps”, yet we’re still able to constantly improve on that with various implementations.

                                                                                                1. 3

                                                                                                  Specific cases, sure, but even though I’m using vscode, half the time I’ve got a terminal panel open inside it.

                                                                                                  What if you could just execute a shell command from within VS Code? For example, suppose you could select any text in any tab or panel, then right-click and choose “Run in shell” or something similar. Then you’d get the result as editable text in a new tab. Or maybe you could select any text and pipe it into stdin for an arbitrary shell command, with an option to replace the selected text with the result. Or the selected text could be the arguments to a shell command. You’d get all the flexibility of textual commands and outputs, with all the ability to filter / reuse / compose results, but integrated into your editor, rather than having to copy & paste things to and from a separate terminal panel.

                                                                                                  Maybe there’s a VS Code extension that does this, I have no idea. I just wonder if it’s really the terminal that you want, or is it the ability to compose commands and munge their output?

                                                                                                  1. 4

                                                                                                    I don’t think that would be helpful (for me). Issues: already too many tabs, how to handle interactive commands, how to handle binary input/output, how to handle command history, how to handle huge/streaming/ephemeral data? Also, previous output / task context wouldn’t be visible. Once in a while I do want the output in a tab… so I redirect to /tmp/foo and ctrl-click it in the terminal panel - solves 99% of what you describe without requiring a redo of the whole system.

                                                                                                    Running the selection as a command already exists as “workbench.action.terminal.runSelectedText”.

                                                                                                    or is it the ability to compose commands and munge their output?

                                                                                                    This, but the new system would have to do it better than the plain terminal solution.

                                                                                                    1. 2

                                                                                                      I like that you call it “the plain terminal solution” - to me, embedding a whole terminal emulator seems much more complicated than the “plain” solution :)

                                                                                                      Regarding the other issues, I’m not sure why the terminal solves any of them better than a normal text editor window - except the interactive commands, I guess. But I’m not trying to convince you of the merit of my entirely hypothetical vscode extension, just trying to work out if it’s really a terminal emulator that you want rather than specifically the ability to run and manipulate shell commands.

                                                                                                      The article is about using terminal escape sequences to build TUIs, which seems like a weird thing to want in a panel inside a real GUI application like VS Code.

                                                                                                      1. 1

I think there is another benefit that makes the terminal more appealing than that hypothetical extension, and I see it rarely mentioned. The command line offers a very strict chronological command/response interaction. A new command is placed at the bottom; older stuff moves up.

                                                                                                        This means that I have a clear overview of history, with specific information of all the steps. This command resulted in that output. It also follows one thread of history per terminal. So while one can do things in parallel, nothing gets “lost” running in the background.

That historical overview is, for instance, what is missing from Jupyter notebooks, and I always find them very confusing to work with when I need to do more than just follow the steps from top to bottom.

                                                                                                        1. 4

                                                                                                          I completely agree! Although I think that’s an argument against this sort of terminal interface…

                                                                                                          The problem is the overloading of the word “terminal”. Although the article is called The Renaissance of the Command Line, it’s really more about TUIs: the ncurses style of interface that uses terminal escape sequences. Likewise, the top comment in this thread is about rendering interfaces using “hackish extensions on top of TTYs”. When you use that sort of interface, you lose your history in the terminal. You can’t scroll up and see exactly what that vim command did.

                                                                                                          Obviously, you can’t “scroll up” and see exactly what that code or subl command did either. But since neither TUIs nor GUIs preserve terminal history, that’s obviously not an advantage for TUIs.

                                                                                                          I sometimes use good old ed for sysadmin work, because it preserves the history. Then I really can scroll up and see a very clear audit log of this command resulted in that output.

                                                                                                          I’m not against having a terminal window into which you can type commands: I still think that’s the best interface for many situations. I’m more questioning the utility of TUIs. Or, to put it another way, a “dumb” terminal is great, but should we really, in the 21st century, be building application interfaces on top of an emulated VT100?

                                                                                              1. 1

The Windows version of this (which didn't work) was on lobsters a few months ago: https://lobste.rs/s/x4jxxn/inside_story_outside_investigation

                                                                                                1. 1

                                                                                                  I have no idea whether the Windows version of this product worked or not, but the story you linked is not about that. It is about a competitive, contemporaneous product from Syncronys called “SoftRAM”, not Connectix RAM Doubler.

                                                                                                1. 6

                                                                                                  I have a 2020 Thelio desktop (AMD; thelio-r2 running GNU/Linux). It isn’t terrible, it’s reasonably quiet, it’s a reasonable size, so far customer support has been great, and i’d buy a Thelio again. But i have three complaints:

                                                                                                  • it doesn’t have any front USB ports
                                                                                                  • insufficient cooling. I wonder why they didn’t just put an additional fan in the side of the case?
• it can't handle the graphics card i purchased preinstalled alongside the Ryzen 9 3900X; when under heavy load for about 20 minutes (for example, when playing a game), heat builds up and causes the rest of the system to shut down. I have to throttle the graphics card by about 30% to prevent this, at which point it's not that great, and one of the supposed attractions of a desktop over a laptop was a great graphics card. I was hoping that if i bought a prebuilt desktop rather than building one myself, i would avoid this sort of problem.
                                                                                                    • my preinstalled internal NVMe SSD drive (Sabrent Rocket) crashed in 2022. I noticed it’s mounted right under the GPU, so given the problems with failing to dissipate the heat from the graphics card, i suspect this got too hot over time too.
                                                                                                    • this is ironic because System76 has a blog post about their careful optimization of airflow for cooling in the Thelios; also because they seem to care about the aesthetics of the case (which i don’t care about, but i do care about cooling)
                                                                                                  • it crashes from time to time (the system freezes and the fan starts running at full speed; probably not their fault; but my previous computers (laptops running GNU/Linux) didn’t have this problem – therefore I suspect it’s some problem with the GNU/Linux drivers for the AMD GPU)

                                                                                                  Any suggestions for my next desktop? I’d like something comparable to the Thelio in terms of power and size and quietness, but with some front USB ports, and a high end graphics card that can run at full power, and a minimum of fuss (ie “it just works”, eg sufficient cooling so that it doesn’t ever overheat and shut down, and my hard drive doesn’t crash after two years). I’d prefer pre-built but i’m willing to build it myself; with the 2020 Thelio i went pre-built because i figured if i did it myself i’d screw it up and buy some component that doesn’t work well with GNU/Linux, or put the thermal paste in the wrong place, or not provide enough cooling, or something. But since I didn’t achieve “it just works” with pre-built anyways, maybe i should just build it myself?

                                                                                                  Come to think of it, i should just ask System76 support if it would be feasible for me to replace the case on my 2020 Thelio with an aftermarket case with a side hole for a fan, and front-facing USB ports.

                                                                                                  1. 3

                                                                                                    insufficient cooling. I wonder why they didn’t just put an additional fan in the side of the case?

it can't handle the graphics card i purchased preinstalled alongside the Ryzen 9 3900X; when under heavy load for about 20 minutes (for example, when playing a game), heat builds up and causes the rest of the system to shut down. I have to throttle the graphics card by about 30% to prevent this, at which point it's not that great, and one of the supposed attractions of a desktop over a laptop was a great graphics card. I was hoping that if i bought a prebuilt desktop rather than building one myself, i would avoid this sort of problem.

                                                                                                    I’m having this exact same problem and have since I bought the unit. This is SUPER sad since I otherwise love the machine but what the hell is the point of buying a monster desktop that you can’t even push to anything like its full potential.

I kinda gave up gaming on the beast because running No Man's Sky at anything but low detail/res settings causes the case to get BLAZING hot to the touch, and then the system shuts down.

                                                                                                    And now I’m stuck for at least another 5-6 years because my desktop budget needs to refill :)

                                                                                                    1. 2

If I were buying a desktop in 2022, I'd probably go for an off-lease business desktop if I didn't care much about graphics (as most are SFF). They're very thick on the ground, fast, cheap, and low-trouble. Whitebox is very tempting, but I've had so many miserable and hard-to-debug issues with them.

Of course, desktop Macs also throw a wrench into things value-wise. Next time it comes time to upgrade, I'm considering a Mac.

                                                                                                      1. 2

                                                                                                        I’m sad to hear this. I bought two of their laptops (over the years) and both have been extremely strange and unreliable beasts, but I was hoping this could be chalked up to their reluctance to design the laptops themselves. (Apparently they are re-branded imports.) Given the freedom of designing a whole desktop PC from components, they should have been able to do a much better job.

                                                                                                        1. 2

                                                                                                          my preinstalled internal NVMe SSD drive (Sabrent Rocket) crashed in 2022. I noticed it’s mounted right under the GPU, so given the problems with failing to dissipate the heat from the graphics card, i suspect this got too hot over time too.

This is an annoying anti-pattern common to many motherboards, I'm afraid. I believe it's because NVMe connects directly to the PCIe bus, so the slot for it tends to take the space that would otherwise be occupied by a PCIe card. A double-width GPU in an adjacent slot will then happily sit right over it. It worked just fine a few years ago, but NVMe drives and GPUs both now tend to run hotter than they used to.

                                                                                                          1. 1

                                                                                                            it crashes from time to time (the system freezes and the fan starts running at full speed; probably not their fault; but my previous computers (laptops running GNU/Linux) didn’t have this problem – therefore I suspect it’s some problem with the GNU/Linux drivers for the AMD GPU)

Oh my god. I've had this exact problem for the entire lifetime of my AMD card. It's not a Linux problem; I've hit (and can deterministically reproduce) this problem on Windows too. The only thing that kinda worked was tuning the fan curves really aggressively, to the point that the fans spin up at the slightest 3D rendering. I've tried a lot of stuff, up to and including re-pasting the card, and nothing helped.

                                                                                                            Not buying an AMD card again.

                                                                                                            1. 1

                                                                                                              What card, out of curiosity? Would be nice to have something to avoid.

                                                                                                              1. 2

                                                                                                                An RX590

                                                                                                            2. 1

A good way of guessing how likely cooling problems are with a given computer is to look at how much ventilation the case has. Small windows and/or grilles only in the corners? Trouble. Yet case manufacturers keep building cases like this for some reason. For example, I love the aesthetics of Fractal Design Define cases, but they run hotter and louder than their Meshify cases, which have a full mesh front panel.

                                                                                                              1. 1

I think the best move is to use a customizable gaming-focused company like iBuyPower, where you can pretty much spec out whatever you want and they put it together for you. Have them source roughly the same hardware as System76 uses and put it in a chassis with better air vents + fans + front ports; then install Pop!_OS (System76's distro, and coincidentally by far my favorite consumer-focused Linux distribution!) yourself when the fully built rig arrives in the mail.

As long as the underlying CPU/GPU combination is the same, and you're using a motherboard with compatible WiFi and Bluetooth, I think you'll end up with very similar Linux/Pop!_OS compatibility, but better thermals, performance, and longevity. System76 seems to optimize for an aesthetically pleasing chassis over thermals, and if you don't care about the former (or enjoy gaming-style aesthetics, where thermals are an important design consideration) you can do a lot better on the latter. You can probably even control any RGB lighting you've had them set up for you, if you're into that sort of thing, via OpenRGB!

One thing I'd stress, though: specifically for the motherboard, make sure you're checking for Ubuntu compat, not just "Linux." WiFi/Bluetooth drivers ship in the kernel, so while the latest kernel may support your chipset, that kernel may not yet be the one used in the latest version of Ubuntu/Pop!_OS. Since Ubuntu is so widely used, checking for Ubuntu compat should be fairly easy via Google, and if it's Ubuntu-compatible it should be Pop-compatible, since they use the same kernel.
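If you can get hands-on time with the exact machine (or the same WiFi/Bluetooth chipset), a quick sanity check from a live Ubuntu/Pop!_OS USB stick looks roughly like this - just a sketch, and the grep patterns are approximate:

    # which kernel does the live image ship?
    uname -r
    # does the WiFi card have a "Kernel driver in use:" line below it?
    lspci -nnk | grep -iA3 network
    # many Bluetooth radios hang off USB and should show up here
    lsusb | grep -i bluetooth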

                                                                                                                And by using something like iBuyPower you have roughly the convenience of a prebuilt, minus having to install the OS yourself and having to do an upfront check to make sure you’re using a motherboard with WiFi and Bluetooth that work with Ubuntu.

                                                                                                                You could also just build a desktop yourself! It’s not thaaaat time-consuming. But if you’d rather not spend a day clicking and screwing parts together, and managing a panoply of hardware orders from various websites, that’s valid and there are other options.

                                                                                                                1. 1

I just took a look at iBuyPower on your suggestion, and it seems like they don't really address the biggest problem of doing a custom build: the research required to pick out all of the components. Snapping the parts together is easy enough; the benefit of a pre-built is not having to select all of the components individually. It does look like iBuyPower has some pre-built machines, but then you're back to the "might not work with Linux" problem.

A lot of gaming-focused companies also, frustratingly, seem to top out at 32 GB of RAM these days. That's still fine for gaming, but increasingly not fine for a lot of other workloads. I know RAM is upgradable later, but you often end up paying for RAM you need to throw out (or deal with reselling) because they do 2x16 or 4x8 configurations.

                                                                                                              1. 3

                                                                                                                Wait, why did they keep the terrible arrow key layout?

                                                                                                                1. 3

                                                                                                                  One thing you may have noticed is that both C files include the square.h header, but we haven’t specified it as an input to the command. You may be surprised to see that we can still change the header and cause both files to recompile to pick up the change […] The trick is that tup instruments all commands that it executes in order to determine what files were actually read from (the inputs) and written to (the outputs).

With what, strace? Automatically determining which files are read could end up causing a lot of unnecessary rebuilds, right? Perhaps I'm missing something.

                                                                                                                  1. 9

                                                                                                                    Tup uses FUSE to discover dependencies. A FUSE filesystem which proxies all requests to the underlying filesystem is mounted, and reads and writes are recorded as dependencies. See https://github.com/gittup/tup/tree/master/src/tup/server for how it is done.
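For the square.h example from the article, a minimal sketch of what this looks like in practice (rule syntax from Tup's docs; the gcc flags and target name are just illustrative):

    $ cat Tupfile
    : foreach *.c |> gcc -Wall -c %f -o %o |> %B.o
    : *.o |> gcc %f -o %o |> program
    $ tup init
    $ tup upd
    # square.h is never named in the Tupfile, but the FUSE layer records that both
    # compiles read it, so editing square.h and re-running tup rebuilds both .o files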

                                                                                                                    1. 6

                                                                                                                      Pretty sure it uses a FUSE filesystem to essentially enforce the dependency graph. If a build process for a target file accesses an asset, that asset is silently marked as a dependency, causing a rebuild of the target if that asset is modified. Such a rebuild is (axiomatically) desirable, or why would the build process have read the file?

                                                                                                                      Similarly, when a build process creates extra files, those are marked as outputs. If a future run of the build process doesn’t touch those outputs, they’re automatically cleaned up (which is desirable, or the build process would have accessed them).

                                                                                                                      Finally, if these undeclared input or output files are also defined as (unrelated!) targets, the build halts and complains that the dependency graph you’ve defined is insufficient: it needs to know which targets to build first, that’s the whole point of the graph. It forbids violations of the graph you’ve defined and demonstrable insufficiencies within that graph, based on material observations of the build process, which it then uses to intelligently keep build outputs clean and up-to-date.

I loved playing with it, had to really bend its rules to get it to parse files for declared imports (it has strong opinions), and am glad to see that it's not as harsh as it used to be about having strictly one Tupfile per directory level of outputs - that was a bit of a pain (re: strongly opinionated). It's still a little confusing to know what $PWD is going to be if you're using a Tupfile.default file. The Lua extension is really cool.

                                                                                                                      1. 2

                                                                                                                        Jeez, this is way more informative than the linked article. I could have used 80% less cheerleading about how great they are and 100% more description of what it actually is and does. Thanks!

                                                                                                                        1. 1

                                                                                                                          If a build process for a target file accesses an asset, that asset is silently marked as a dependency, causing a rebuild of the target if that asset is modified. Such a rebuild is (axiomatically) desirable, or why would the build process have read the file?

Because it needs it for some part of the functionality that doesn't affect the output? Say, translations of the diagnostic messages.

We've actually run into this with environment variables: quite often tools query environment variables that have nothing to do with their output (verbosity, color output, etc.). So if you want to track changes accurately, you not only need to get the list of environment variables (which is often a pain by itself), but you actually need to understand their semantics.

                                                                                                                          Similarly, when a build process creates extra files, those are marked as outputs. If a future run of the build process doesn’t touch those outputs, they’re automatically cleaned up (which is desirable, or the build process would have accessed them).

Again, I can think of a scenario where a tool produces multiple files and on a re-run checks one of them to determine whether anything changed, skipping the rest if not. Say, a sentinel file guarding a directory full of files?

                                                                                                                          The underlying theme here is that in software builds relying on implicit knowledge and assumptions is asking for trouble.

                                                                                                                          1. 1

                                                                                                                            […] for some part of the functionality that doesn’t affect the output? Say translations of the diagnostics messages.

Ah, thanks! I should have mentioned that, because this FUSE FS is mounted over your build directory, it doesn't catch reads of e.g. system locales. Configuration files affecting colors, verbosity, or translations within your repository are of course significant (they're part of your program), and changes to them should naturally cause rebuilds and be manually tested for correctness.

                                                                                                                            I can think of a scenario where a tool produces multiple files and on a re-run checks one of them to determine if anything changes and skips touching the rest

                                                                                                                            Sounds like the kind of caching a build system would do :p
                                                                                                                            One probably shouldn’t nest build systems, but in practice, the way to do this would be to run the secondary build system before or after (probably before) running Tup, which begs the question “what if I need to run it in the middle?”, in which case the answer is to “not nest build systems” and migrate / break-up that build process into Tup rules ¯\_(ツ)_/¯
                                                                                                                            If you’ve got something that does this sort of caching, isn’t a build system, and can’t be broken up into a non-batch process via flags or options, then yeah you got me there.

                                                                                                                        2. 1

                                                                                                                          Sounds like a portability nightmare as well.

                                                                                                                        1. 1

                                                                                                                          I would add one more thing that I worry about from time to time: Now that rust isn’t (as much) financially supported by Mozilla, is the language too complex to manage? Async/await isn’t even done yet but it added a huge new surface area. I think it would be good to focus on simplifying/unifying some parts of the language to let more of it fit into people’s heads at once. For example, making async-vs-sync or macros more transparent (I have no idea how) would go a long way.

                                                                                                                          1. 7

                                                                                                                            To me, ring’s stance on portability and such seems a little silly - relying so much on platform-specific C and assembly, when they could just have a Rust implementation that’d work on almost any platform. You could then derive and test riced out assembly implementations against that. The way they do things now leads to a ton of friction for platforms off the beaten path as soon as they need anything relating to cryptography.

                                                                                                                            1. 2

I concur. I was building my Rust module for a Flutter application running on Android. With ring there were a lot of compilation issues, so I gave up. Luckily, the native-tls-vendored feature in the reqwest crate just made things work.

I know I was probably missing some libraries for cross compilation, but I just don't want all that extra work. I use Rust specifically to have fewer headaches. Sadly, that is not always the case.

                                                                                                                              1. 1

                                                                                                                                Yep. This exact problem made me disqualify ring when hacking on bitbottle, and it caused me a lot of early headaches trying to find native implementations (not always successful). Given how perfectly-suited rust is for doing crypto work, I hope this improves over time.

                                                                                                                                1. 1

Given how the author of ring handled yanking and reacting to external requests, I get the feeling that we will see some kind of breakage in the future. I wouldn't even be surprised if they wanted to get paid for adding another target to their repo, which they would then also have to maintain. But I don't think that will generate much sympathy. A lot of people would probably rather move the ecosystem over to native-tls than deal with this.

                                                                                                                                1. 4

                                                                                                                                  [takes a long puff on a candy cigarette] “Eclipse… Now that’s a name I haven’t heard in a long time…”

                                                                                                                                  Looks pretty cool though! I would use it just for switching to AA fonts.

                                                                                                                                  1. 2

Yes, but this time it's Eclipse without Java, written in TypeScript instead. And it sounds kind of nice that it's basically just a front end for their language server, so with that and the CLI part you can use whatever you feel like for the front end.

I hope that's an approach that will become more widespread, so instead of multiple IDEs/editors whose developers largely spend their time chasing each other on language support, we get one that is more independent of the languages and GUI toolkits that happen to be in fashion at a given point in time. I think Eclipse might have learned that the hard way.