Threads for ntietz

    1. 3

      I see just a blank blue screen in Mull on LineageOS on my phone.

      1. 2

        Hm, sorry to hear that and thanks for letting me know. It’s using JavaScript / WASM to render and uses some localStorage but is vanilla Yew otherwise.

        I probably won’t be able to reproduce it, but debug logs are welcome in case it’s something evident from those.

        1. 1

          Did you change it? Now it looks like a reasonable HTML document.

          1. 1

            I didn’t make any substantial changes, only some copy updates. The mystery deepens.

            1. 1

              My browser isn’t doing anything funny; if I wget it I still see the reasonable HTML source. No sign of JS rendering…

              1. 1

                Are you using the same browser as timthelion, or another one?

                The body of the HTML source should contain a script tag, and there should be a couple of link tags that pull in WASM and a small bit of JS. Are you seeing something different? If so I’d love to have a log of the command you ran to fetch the page, and what was retrieved, if you could email them to me (email is on my website).

                1. 2

                  no offense but I don’t want to be responsible for a site going from reasonable HTML to a JS-rendered monstrosity… sorry; I care about that stuff more than I probably should.

    2. 19

      This is beautiful, thank you for sharing

      1. 5

        Thanks ❤️

    3. 3

      This is a nice “wise old man soundbite.” But I don’t think avoiding new, innovative stuff because it’s new and innovative is actually any wiser than seeking it out.

      This might be easier because of my academic PL background, but I very frequently find myself using some new technology, seeing a need to do something in a way that increases security or software quality, knowing exactly what a good approach should look like, and then discovering that the ecosystem is just missing the features needed to solve the problem, and that everyone just lives with something that, to me, is like walking on a bed of nails.

      There’s an important role in trying to counteract the fad-chasers, and an important niche in trying to be a late adopter, but someone has to actually evaluate when the new approach is just better and start it down the path of becoming the next generation of boring tech, or else we’d all be stuck coding in BASIC and assembly.

      1. 1

        For sure, innovation has its place and we shouldn’t be afraid of using new tech. But we should know precisely why we’re using it and what problem it solves that can’t be solved otherwise (in terms of either being impossible otherwise, or being significantly harder). We need to use new tech but we also have to make sure that we reserve it for the pieces that matter for this use case. If your whole app is built on new shiny, it’s going to be a nightmare.

        I tried to communicate that nuance in the post, but I think it could be clearer.

        1. 3

          I think the trick is to find the right places to use new tech, and IMO it’s when it doesn’t matter. Use it for internal tooling that makes life more convenient, not for important infrastructure. That gives you a chance to explore it, and if it doesn’t work out or nobody wants to maintain it, not really a big problem.

          1. 2

            Oh yeah, that’s also a great place to use it.

            Sometimes internal tooling can become load bearing though, I found a comment today in some internal tooling about something being temporary until X was done, and that was dated… 2019 haha.

    4. 4

      They make it sound so simple. What if there are two existing solutions, and one of them has a large community but brings with it a lot of baggage you don’t need and you dread working with it because everything is so interconnected (cough wordpress), while the other solution has less of a community but has just what you need right now, is a joy to work with, but is possibly missing what you might need in two months?

      1. 6

        There’s another factor to consider: the cost of changing your mind. From your example, I would imagine that the shiny new thing has a fairly simple data model (possibly flat files tied together with some YAML in a git repo, or a SQLite database) that is easy to import into WordPress. Exporting from WordPress may also be easy.

        If migration to WordPress from other-thing is cheap, the risk of going with the other thing is lower. In this case, I’d start by planning a migration, then adopt the shiny thing and have a set of decision points in the plan where we decide whether to implement the migration plan.

      2. 3

        This feels a little orthogonal to me. This is a core question in designing any system and choosing the components: how much do you design for today vs how much do you plan for things that may never come? (Assuming the joy to work with tech is also established, just smaller.)

        I’m not sure there’s a great answer; it depends so heavily on the details of the situation. But I would be asking: is this tech core to our business, or supporting (like a marketing site)? Which of these lays a better foundation for what we need down the road? What are the risks of adopting each?

        If you can add the required features to the tech that lacks them but is great to work with, then that seems like an interesting tradeoff.

      3. 1

        That’s the point of this comment:

        With any given choice, the question is: does this technology fundamentally alter my chances of solving this problem? If the answer is “no”, then just go with the boring choice.

        This includes community as well as ease of use. WordPress isn’t the only option in its world; there are other options that, depending on the context, are more likely to solve the problem at hand. There is a difference between standing up your own online storefront with WordPress+WooCommerce, a tried-and-true option, vs. the ultra-boring route of using something like Shopify: both are “not innovative”, but each has its positives and negatives.

        If your team has a history of working with WordPress, your choice may come down to “Something we have a known history with” vs “Something we’ve never encountered before.” Shopify isn’t new; it’s just new to you in this situation. A Principal Engineer is there to help you make the good long-term decision now, or at least make you aware of the potential pitfalls of making a particular choice.

    5. 17

      Enjoyed the post, thanks.

      The way I describe this sometimes is “chasing the silver bullet”, “innovation disease”, or “crow syndrome” - always looking for something new and shiny. This affliction (or addiction) is particularly prevalent amongst the young and new, and a common symptom is the question “which is better” and mutterings about “best jobs”.

      It is usually accompanied by an ignorance and devaluation of all the lessons that have been learned before in matters relating to computing, other than perhaps “data structures and algorithms”.

      1. 5

        This affliction (or addiction) is particularly prevalent amongst the young and new, and a common symptom is the question “which is better” and mutterings about “best jobs”.

        There might be some truth to that - it’s relatively easy to find people with quite a bit of knowledge of the boring old tech, so getting a higher-paying job as a junior might be easier if you know some shiny new tech that nobody else knows. You’ll be ahead of the competition with more knowledge about that particular tech. Like all the job postings asking for “5 years of Rails experience” when Rails had just come out.

        Of course, you’ll be setting yourself up for failure to go work for a shortsighted company with unproven tech. But at least it’ll be exciting… And then once you burn out on all the bullshit you can go work with more boring tech, with the battle scars to prove that you’re a medior/senior now.

        1. 2

          Definitely the case that this happens for promotion seeking, too. At big tech it’s harder to be promoted for using the existing stuff than for inventing something new.

          1. 1

            That’s a bit unexpected - I would expect people to get promoted for improvements that have a measurable effect on the bottom line, or at least for demonstrating their effectiveness. Using shiny, unstable new things should cause things to be on fire more often and take more time to develop into something solid in general.

            1. 3

              Actually, I would speculate that things that are on fire more often are also more visible, and every time there’s a problem, here you are, (i) saving the day, and (ii) explaining yet again why we really really need this thing.

              The things I code on the other hand are rarely heard from ever again.

              1. 2

                Sounds like we’re doing something wrong ;)

              2. 1

                You can always invent a reason something’s on fire for less-technical audiences: “there was NO way we could know how long a string people would pass to bigcorp_printf()!” And they’ll nod their head, thinking, “ok, sounds reasonable, just make this problem go away.”

                The worst part is: it’s true, technically, but it also completely ignores the fact that you can make it so it won’t crash and burn regardless of input size.

            2. 1

              Yup. One strategy I’ve heard is to time it so that your invented thing is relatively new when you come up for promotion, so it hasn’t had time to be on fire much yet. Big tech promotions are a whole other world and I stay out of it.

              1. 2

                Oh man, that’s almost evil. But hey, when you set rules, people are gonna play the game that emerges from them.

                1. 2

                  It’s amusing to observe how much of the developer community has to believe that FAANG devs are the absolute elite and yet we hear so many horror stories about how many are willing to play the stupidest games in order to angle for promotion, or politick successfully.

    6. 9

      when you store names in the files, you have to rewrite history when someone changes email address, but in another world, you’d use a uuid and a file called .gitauthors that maps one to the other

      This is possible currently with a .mailmap file
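
      For reference, each .mailmap line maps the name and address you want displayed (left) to the ones recorded in existing commits (right); the identities below are made-up examples:

      # Name and address to display       name and address found in old commits
      Jane Doe <jane@example.com>         Jane Doe <jdoe@old-employer.example>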

      1. 2

        .mailmap is so handy, and it is better than nothing. That said, it would definitely be nice if our old names could be expunged from the world rather than hanging around forever… The OP’s suggestion would allow that, rather than mailmap’s papering over of it.

    7. 2

      Working on an alpha version of a chess club management web app. Also working on sticking with Talon part time instead of reverting to the keyboard full time; my body is unhappy with what I’ve been doing of late.

    8. 3

      I noted that you said you have experience with the Keyboardio Model 100; could you speak to how you liked it and how it compares here? It’s my current daily driver (and has made typing much less painful for me), so I’m curious about a comparison.

      1. 4

        I think the Model 100 is one of the best flat keyboards. Quick summary of my findings:

        Pros:

        • The thumb keys and layout make a lot of sense (except the Esc and Enter on the inner columns are awkward).
        • The sculpted keycaps are really nice.
        • I love the use of wood in the build, it feels quite pleasant.
        • Hot swap.
        • Tripod mount by default, not an expensive add-on.
        • RJ45 linking cable, kill TRRS.

        Cons:

        • Pretty thick, so you definitely need a height-adjustable desk.
        • It’s hampered by the firmware (buggy, too limited for advanced features like home row mods).
        • I initially liked the palm keys, but they became painful after a while.
        • The tenting angle of the octofeet is small and they add a lot of height.

        If someone added QMK support like they did with the Model 01, it could perhaps be the best flat keyboard (excluding homebrew options). It has a better thumb cluster and keycap profile than the Moonlander. And it is a large improvement over row-stagger keyboards without thumb keys, like the various Kinesis Freestyle models.

        That said, key wells are such a large improvement for me that there is no way back from the Glove80 or KA2 (I’d love to try a Maltron).

    9. 5

      Until accessibility software works broadly on Wayland, it doesn’t work for me. I use Talon and it doesn’t work on Wayland, and it appears this is broadly true for accessibility tools.

      Wayland seems great in a lot of ways, but lately there seems to be a rise of ideological promotion of Wayland and dismissal of reasons to use X, which is disappointing and doesn’t make me optimistic about the community supporting Wayland.

    10. 4

      I’ve always used wemux for this (it is a pretty thin wrapper around changing socket permissions and tmux new-session -t); it makes all terminal work multi-player.

      Changing editors seems like solving the problem at a more difficult layer. That said, if I can talk people into it, Kakoune has a multi-session mode.

      1. 1

        I read this expecting to say “use GNU Screen”, but a fair point is letting the different participants scroll the file independently, and a shared terminal wouldn’t make that work.

        1. 1

          Tmux can have multiple independent sessions sharing one set of windows (note I said new-session -t, not attach), so you can actually do this. I don’t think screen can.
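
          A rough sketch of what wemux automates, done by hand (the socket path, permissions, and session names here are arbitrary examples):

          # host: create a session on a shared socket and let the group use it
          tmux -S /tmp/pair new-session -s main
          chmod g+rw /tmp/pair
          # guest: join the same set of windows as an independent session
          tmux -S /tmp/pair new-session -t main -s guest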

      2. 1

        That’s a great option to have available, totally.

        Changing editors was for me lower friction than using any sort of tmux tooling because, though I use tmux and vim, none of my collaborators do, so it would be forcing them to switch editors. I think the switch from vim to VS Code is easier than the other way around, so this made for a smoother pairing experience (for me).

      3. 1

        How do you handle NAT/firewall traversal? Or do you just have the repo live on a cloud VM and both ssh into it? I suppose VPN software like Tailscale also works actually.

        1. 1

          Through a VPN with a guest account on my dev box (which is not my desktop). There needs to be some kind of connection.

    11. 3

      I was trying to find a FOSS alternative to VS Code’s Live Share feature but couldn’t. Looking online, it seems Live Share is very flaky with VSCodium, which is to be expected. Another company called CodeTogether has the interesting feature of supporting multiple types of IDEs in the same session but is proprietary and (reasonably) paid. Also possibly bundled with management-oriented snitchware?

      It really seems like a very interesting & challenging FOSS project for someone to take on. You could define an open standard API for editors to interact with the local live share service similar to the LSP, and various editor communities (neovim, emacs, etc.) could implement that client if they wanted. Then the project could have a free lightweight coordination service similar to tailscale that hands off to P2P connections (it would probably even use tailscale code for NAT/firewall traversal). The option to self-host the coordination service would of course be available, probably using one-time codes shared OOB. Don’t have the bandwidth for this myself but would be incredibly fun to work on!
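
      Purely to sketch the shape of such an editor-facing API (nothing here exists; every name is invented for illustration), the messages could be as simple as:

      // Hypothetical message types exchanged between an editor plugin and the
      // local live-share service; invented purely for illustration.
      enum SessionMessage {
          // A participant joined or left the shared session.
          Join { user: String },
          Leave { user: String },
          // Text inserted at, or deleted over, a byte range of a shared file.
          Insert { file: String, offset: usize, text: String },
          Delete { file: String, start: usize, end: usize },
          // Cursor movement, so editors can show remote participants' cursors.
          Cursor { user: String, file: String, offset: usize },
      }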

      1. 3

        I would also love to have a FOSS project to use for this, especially if it could integrate with all sorts of editors. I’d happily help with some development but like you don’t have the bandwidth to start up that project and run it.

        1. 1

          Actually, seeing the sibling comment on this post, the solution might already exist; I just wasn’t thinking in terms of the UNIX philosophy of doing one thing well! You could use Tailscale to have your coworker be able to connect directly to your computer, then use wemux and a terminal editor. The LSP-like API for a shared editing session would still be a fun project though.

      2. 2

        Another example of (proprietary) prior art is Floobits, which launched a decade ago but now seems to be defunct: https://floobits.com/

    12. 1

      Nice! What do you get for π from your real cake?

      Matt Parker does this kind of thing every year; this year he found π from the tyre marks of a skidding car.

      1. 2

        I haven’t counted the sprinkles on the cake yet haha. But based on the simulation, the value of pi would be somewhere over 2 and under 4, which is not a very good range!

        That’s a really interesting video, thanks for sharing it!

    13. 6

      I don’t recommend ever putting lifetime annotations on &mut self. In this case it’s sufficient to only name Token’s lifetime:

      impl<'source> Scanner<'source> {
          pub fn next_token(&mut self) -> Token<'source> { /* ... */ }
      }

      Lifetimes on &mut self are very, very rarely legitimately useful, but can easily lead to an even worse gotcha: when you mix them with traits or make them dependent on other lifetimes with a bigger scope, they can end up meaning the call exclusively borrows the object for its entire existence (you can call one method once, and nothing else with it ever).

      1. 1

        Good point, and thank you for the example of where it would end up causing another confusing error!

        In this case I put on an explicit lifetime just to make all of them explicit, but you’re right that it’s not legitimately useful here. It would probably be better on the method, if I want to leave it for explicitness.

        (For what it’s worth, I also didn’t have that parameter in the code this blog post was inspired by.)

      2. 1

        …they can end up meaning the call exclusively borrows the object for its entire existence (you can call one method once, and nothing else with it ever).

        What would be an example of this, out of curiosity?

        1. 1
          struct Bad<'a>(&'a str);
          
          impl<'a> Bad<'a> {
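              // `&'a mut self` ties the mutable borrow to the struct's own lifetime
              // parameter, so the first call keeps `b` borrowed for the rest of its life.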
              fn gameover(&'a mut self) {}
          }
          
          fn main() {
              let mut b = Bad("don't put references in structs");
              b.gameover();
              b.gameover(); // error: cannot borrow `b` as mutable more than once at a time
          }
          
    14. 4

      I shot myself in the foot with the same gun while also writing a parser a few weeks ago, but there is in fact a lint for this, which one should arguably add to any new code base. https://doc.rust-lang.org/rustc/lints/listing/allowed-by-default.html#elided-lifetimes-in-paths
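
      For anyone else who wants it: the lint is allow-by-default, so it has to be opted into, e.g. with a crate-level attribute:

      // In the crate root (main.rs or lib.rs): warn whenever a lifetime is
      // elided from a path, e.g. writing `Token` instead of `Token<'_>`.
      #![warn(elided_lifetimes_in_paths)]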

      1. 1

        Ooh thanks for this, I’ll look to add it!

    15. 2

      The Google Slicer paper (PDF) is a good read. I believe that many applications benefit greatly from an above-database stateful layer, especially at scale where hot rows and hot entity groups become a real concern, or when you find yourself doing things like polling a database for completion status.

      Stateful services aren’t right for every use, but when used well they greatly simplify your architecture and/or unlock really compelling use-cases.

      1. 1

        Oooh this looks great, adding it to my paper reading list.

    16. 1

      This is a nice piece of work, and clarifies something that people forget about stateless services: the service can have state that requires warmup, just not authoritative state. If you’re using Hack or the JVM, your stateless service already has warmup from the JIT. Having a local read cache is a similar case. If you lose the host, a new host will have worse performance for users for some time until its cache is warm.
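
      To make the “non-authoritative state” point concrete, here is a minimal sketch (all names invented) of a local read cache: losing it only costs warmup time, never correctness, because the backing store stays authoritative.

      use std::collections::HashMap;

      // Minimal read-through cache sketch: `local` is non-authoritative and just
      // warms up over time; `fetch` stands in for the real backing store.
      struct ReadCache<V> {
          local: HashMap<String, V>,
      }

      impl<V: Clone> ReadCache<V> {
          fn get(&mut self, key: &str, fetch: impl FnOnce(&str) -> V) -> V {
              if let Some(v) = self.local.get(key) {
                  return v.clone(); // warm hit: no trip to the backing store
              }
              let v = fetch(key); // cold miss: pay the cost once, then remember it
              self.local.insert(key.to_string(), v.clone());
              v
          }
      }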

      I’d be curious to see a comparison of this approach for them vs trying VoltDB.

      1. 1

        I would also be curious to see usage of VoltDB compared with other options!

    17. 3

      In the spirit of breaking rules and going pretty far with just one machine, I wonder if a single machine that locally ran PostgreSQL and used an in-memory cache directly in the monolith would be even better. Sure, take periodic off-site backups, but a single bare-metal box from a provider like OVH can have pretty good uptime.

      1. 3

        A single machine can definitely take you very far. The biggest instance (pun intended) I know of doing this is Lichess, which runs on one rather beefy machine, but I am sure there are others that are bigger or equally/more well known.

        Unfortunately, that particular bet wasn’t one I could make for us ;)

    18. 2

      The memory space or filesystem of the process can be used as a brief, single-transaction cache. For example, downloading a large file, operating on it, and storing the results of the operation in the database. The twelve-factor app never assumes that anything cached in memory or on disk will be available on a future request or job[.]

      When a participant starts responding to a message, they open a WebSocket connection to the server, which then holds their exercises in the connection handler. These get written out in the background to BigTable so that if the connection dies and the client reconnects to a different instance, that new instance can read their previous writes to fill up the initial local cache and maintain consistency.

      Sounds like they are still following the rules by not relying on the state hehe

      I’m not surprised they had to. You can certainly run stateful things in Kubernetes, but the ease with which you can roll out new versions of containers means restarts are common. And even when running multiple replicas, restarts still kill open connections (terminationGracePeriodSeconds can help but still has limits).

      1. 3

        Well, you’re right, we’re kind of in the middle: we rely on per-connection state, but we don’t rely on it existing for a long time after the connection. We wanted to go there, too, but sticky routing was unfortunately not feasible for us.

    19. 4

      Very cool writeup Ntietz. I think as more and more applications diverge from the old-school request -> DB-work -> render-output webapp model, we’ll find ourselves “breaking the rules” more often.

      This type of architecture makes me happy – Erlang/Elixir programs can very often really capitalize on this pattern (see, for example, caching user-local data in a Phoenix Channel for the duration of a socket’s existence).

      1. 1

        Elixir and the BEAM definitely make this easy to do and can be used to great effect. I’m really excited to see what comes about with Phoenix LiveView (and the similar projects in other languages) leveraging connection state and lots of backend processing.

    20. 2

      Programmer time is more expensive than CPU cycles. Whining about it isn’t going to change anything, and spending more of the expensive thing to buy the cheap thing is silly.

      1. 15

        The article makes a good counterpoint:

        People migrate to faster programs because faster programs allow users to do more. Look at examples from the past: the original Python-based bittorrent client was quickly overtaken by the much faster uTorrent; Subversion lost its status as the premier VCS to Git in large part because every operation was so much faster in Git; the improved grep utility, ack, is written in Perl and waning in popularity to the faster silversurfer and ripgrep; the Electron-based editor Atom has been all but replaced by VSCode, also Electron-based, but which is faster; Chrome became the king of browsers largely because it was much faster than Firefox and Internet Explorer. The fastest option eventually wins. Would your project survive if a competitor came along and was ten times faster?

        1. 7

          That fragment is not great in my opinion. The svn-to-git change is about the whole architecture, not about implementation speed. A lot of the speedup in that case comes from not going to the server for information. Early git was mainly shell and Perl too, so it doesn’t quite mesh with the Python example before. Calling out Python for BitTorrent is not a great example either - it’s an IO-heavy app rather than a processing-heavy one.

          VS Code has way more improvements over Atom, and more available man-hours. If it were about performance, Sublime or some other graphical editor would have taken over from them.

          I get the idea and I see what the author is aiming for, but those examples don’t support the post.

          1. 3

            I was an enthusiastic user of BitTorrent when it was released. uTorrent was absolutely snappier and lighter than other clients, specifically the official Python GUI. It blew the competition out of the water because it was superior in its pragmatism. Perhaps Python vs C is an oversimplification; the point would still hold even in the presence of two programs written in the same language.

            The same applies to git. It feels snappy and reliable. Subversion and CVS, besides being slow and clunky, would gift you a corrupted repo every other Friday afternoon. Git pulverised this nonsense brutally quickly.

            The point is about higher-quality software built with better focus, making reasonable use of resources and resulting in a superior experience for the user. Not so much about one language being better than others.

          2. 2

            BitTorrent might seem IO-heavy these days (ironically, that’s because it has been optimised to death), but you are rewriting history if you think it’s not CPU- and memory-intensive, and doing it in Python would be crushingly slow.

            The point at the end is a good one though, you must agree:

            Would your project survive if a competitor came along and was ten times faster?

            1. 1

              I was talking about the actual process, not the specific implementation. You can make BitTorrent CPU-bound in any language with an inefficient implementation. But the problem itself is IO-bound, so any runtime should be able to get there (modulo the runtime overhead).

        2. 2

          This paragraph popped out at me as historically biased and lacking in citations or evidence. With a bit more context, the examples are hollow:

          • The fastest torrent clients are built on libtorrent (the one powering rtorrent), but rtorrent is not a very common tool
          • Fossil is faster than git
          • grep itself is more popular than any of its newer competitors; it’s the only one shipped as a standard utility
          • Atom? VSCode? vim and emacs are still quite popular! Moreover, the neovim fork is not more popular than classic vim, despite speed improvements
          • There was a period of time when WebKit was fastest, and browsers like uzbl were faster than either Chrome or Firefox at rendering, but never got popular

          I understand the author’s feelings, but they failed to substantiate their argument at this spot.

        3. 2

          This is true, but most programming is done for other employees, either of your company or another if you’re in commercial business software. These employees can’t shop around or (in most cases) switch, and your application only needs to be significantly better than whatever they’re doing now, in the eyes of the person writing the cheques.

          I don’t like it, but I can’t see it changing much until all our tools and processes get shaken up.

      2. 11

        But we shouldn’t ignore the users’ time. If the web app they use all day long takes 2-3 seconds to load every page, that piles up quickly.

        1. 7

          While this is obviously a nuanced issue, personally I think this is the key insight in all of it, and the whole “optimise for developer happiness/productivity, RAM is cheap, buy more RAM (etc)” line totally ignores it, to say nothing of the “rockstar developer” spiel. Serving users’ purposes is what software is for. A very large number of developers lose track of this because of an understandable focus on their own frustrations; tools that make them more productive are obviously valuable, and so is having a less shitty time. But building a development ideology around that doesn’t make this go away. It just makes software worse for users.

          1. 7

            Occasionally I ask end-users in stores, doctor’s offices, etc what they think of the software they’re using, and 99% of the time they say “it’s too slow and crashes too much.”

            1. 2

              Yes, and they’re right to say so. But spending more programming time with our current toolset is unlikely to change that, as the pressures that selected for features and delivery time over artefact quality haven’t gone anywhere. We need to fix our tools.

          2. 5

            In an early draft, I cut out a paragraph about what I am starting to call “trickle-down devenomics”: the idea that if we optimize for the developers, users will get better software. Just like trickle-down economics, it’s snake oil.

            1. 1

              Alternately, you could make it not political.

              Developers use tools and see beauty differently from normal people. Musicians see music differently, architects see buildings differently, and interior designers see rooms differently. That’s OK, but it means you need software people to talk to non-software people to figure out what they actually need.

      3. 3

        Removed because I forgot to reload, and in the meantime multiple others had already made the same argument I did.

      4. 3

        I don’t buy this argument. In some (many?) cases, sure. But once you’re operating at any reasonable scale you’re spending a lot of money on compute resources. At that stage even a modest performance increase can save a lot of money. But if you closed the door on those improvements at the beginning by not thinking about performance at all, then you’re kinda out of luck.

        Not to mention the environmental cost of excessive computing resources.

        It’s not fair to characterize the author as “whining about” performance issues. They made a reasonable and nuanced argument.

      5. 3

        Yes. This is true so long as you are the only option. Once there is a faster option, the faster option wins.

        Why?

        Not for victories in CPU time. The only thing more scarce and expensive than programmer time is… user time. Minimize user time and pin CPU usage at 100%, and nobody will care until it causes user discomfort or loss of user time elsewhere.

        Companies with slow intranets cause employees to become annoyed, and cause people to leave at some rate greater than zero.

        A server costs a few thousand dollars on the high end. A smaller program costs a few tens of thousands to build, maintain, and operate. That program can cost hundreds of thousands or more in management, engineering, sales, marketing, HR, quality, training, and compliance salaries to use over its life.