Newest Comments

  1.  

    In the old days, CPU architectures might define an explicit NOP instruction that was specially recognized by the CPU, such as the 6502’s NOP.

    The 6809 is an interesting bridge between these two paradigms, with an explicit NOP and two instructions that are essentially NOPs by other names: BRN (branch never) and LBRN (long branch never). BRN occupies two bytes and takes three cycles; LBRN occupies four bytes and takes five. In spite of explicitly doing nothing, they come in handy for micro-optimizations. Branching with BRA to a location one byte past the current PC (thereby skipping over a single one-byte instruction) takes two bytes: the BRA opcode and the PC-relative offset. But since BRN still has a one-byte operand that gets ignored, that operand can be the opcode of the instruction being skipped. In practice you would write this the other way around, using the FCB pseudo-op to insert BRN’s opcode, so that you can write the skipped instruction out normally and be able to attach a label to it as a branch target. Same behavior with one byte less of code!
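    As an illustration, here is a toy Python walkthrough of the trick (the BRN and INCA opcode values are the real 6809 ones; the two-instruction interpreter is of course made up for the sketch):

```python
# Toy sketch of the 6809 BRN skip trick (not a real emulator).
# Real opcodes: BRN = 0x21 (2 bytes, does nothing), INCA = 0x4C (1 byte).
BRN, INCA = 0x21, 0x4C

# Address 0 holds BRN's opcode (what you'd emit with FCB); the one-byte
# instruction we may want to skip, INCA, doubles as BRN's ignored operand.
mem = [BRN, INCA]

def run(pc, a=0):
    """Execute from pc until we run off the end of mem; return accumulator A."""
    while pc < len(mem):
        op = mem[pc]
        if op == BRN:            # branch never: consume opcode + operand, do nothing
            pc += 2
        elif op == INCA:         # increment accumulator A
            a = (a + 1) & 0xFF
            pc += 1
        else:
            raise ValueError(f"unknown opcode {op:#04x}")
    return a

print(run(0))  # fall into the BRN: INCA is swallowed as its operand -> A = 0
print(run(1))  # branch straight to the INCA "label" -> A = 1
```

    Entering at address 0 skips the INCA; branching to address 1 executes it, all in two bytes instead of the three a BRA-based skip would need.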

    1.  

      The models were able to get large enough only because sufficient compute resources became available to build them.

      We are in the “computers are the size of an entire room” phase of AI models at the moment. The focus now should be to shrink them while keeping, and improving, their functionality.

      1.  

        Maybe there is a third option. I don’t use it for memory management or performance, but mainly because I want things to be correct. The way Rust forces me to think about error handling and Option leads me, personally, in the right direction. And yes, that also means I’m not as fast at prototyping as I am in other languages, like Dart.

        1.  

          Let’s avoid tying our identity to a single language and embrace practicality first and foremost.

          Upvoted, if only because of this sentence.

          1.  

            Returning to full-time work after some surgery, and continuing to work on my team’s internal esbuild wrapper for our monorepo dev-loop.

            1.  

              I think this is a matter of what you’re working on. If the performance differential (in memory or CPU use) between Go and Rust is not that important, then bothering with lifetimes/the borrow checker seems like a substantial cognitive load cost. If you do care, getting granular control over mem/cpu in Go may be more frustrating than in Rust.

              1.  

                Thanks, I didn’t realize Go has that! Though it’s a thin wrapper over dlopen, and Unix-only, so it’s probably not a good idea to try to build an Emacs out of it.

                1.  

                  Your point might be easier to discuss if you spelled out your assumptions here. My abbreviated history of Rust development is that it began life at Mozilla to underpin their Firefox browser. That seems like the epitome of a program written for typical users. It sounds like maybe you disagree, or you think it has strayed from the original design constraints. If so, why?

                  1.  

                    I think it would be interesting to explore which is easier to iterate on. It seems true that the first stab at any modality is usually wrong (with some wiggle room), but I bet there’s more leverage in iteration.

                    1.  

                      Better not to talk about “plugin”: I don’t know that anyone uses it in prod.

                      1.  

                        There’s a standard that exists for that. I was party to implementations of it, but I don’t think it got much traction on the internet at large. The easiest mainstream way is to certify it using a root that you control and add name constraints, but for that to be secure (in a general way) you need to own both CAs.
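                        For the private-root route, a name-constrained CA can be sketched as an OpenSSL extension section like the following (section and domain names are hypothetical; a minimal sketch, not a hardened CA config):

```
[ v3_constrained_ca ]
# This cert is a CA, but name constraints limit what it may sign for.
basicConstraints = critical, CA:TRUE
keyUsage         = critical, keyCertSign, cRLSign
# RFC 5280 name constraints: only names under this subtree are permitted.
nameConstraints  = critical, permitted;DNS:home.example.net
```

                        Clients that enforce name constraints will then reject any leaf this root signs for names outside that subtree, which limits the blast radius if the private CA is compromised.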

                        1.  

                          From a quick look, it seems HashiCorp plugins are separate processes using IPC for communication. For this bullet, I had “shared memory plugins” in mind. It’s an open question whether shared-memory plugins are a good idea at all. For example, in VS Code plugins communicate with the editor’s core via IPC. At the same time, plugins can communicate amongst themselves via shared memory, as all plugins are in the same process. Fuchsia is heavily IPC-based, and that does seem like a good “from first principles” approach. And WASM with interface types confuses the picture even more, as it really blurs the IPC vs. shared-memory boundary.

                          1.  

                            Completely off topic, but I feel like there’s pressure on tech forums to be happy for one another. Oftentimes, though, I do not feel happy for my fellow commenters, but quite angry at them.

                            1.  

                              if you are building a platform with dynamically loaded plugins (think Eclipse or IntelliJ), you obviously can’t do that in Go, you need a fairly dynamic runtime!

                              Go has plugin support in the standard library: https://pkg.go.dev/plugin.

                              1.  

                                This week I’ll finally have time to work on my little toy project: a web bookmark manager I’m writing in C, so I’m going to spend most of it doing that. Normally I would’ve just said stuff about my job, but this is my last day, so there’s not much to be told about that x)

                                1.  

                                  Working on a new coding side project that intersects with my D&D hobby. :)

                                  1.  

                                    Thanks. The end does mention “NetBSD implemented µUBSan in 2018” but there isn’t much information yet. From https://papers.freebsd.org/2019/BSDCan/turner-Fuzzing_the_Kernel.files/turner-Fuzzing_the_Kernel.pdf it seems that FreeBSD and OpenBSD have imported the implementation.

                                    1.  

                                      It’s more O(your WHERE clause), and it’s difficult to extricate that, but yes: this technique (“keyset pagination”) has roughly constant cost per page, compared to offset pagination, whose cost grows in proportion to page distance.
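                                      A minimal keyset-pagination sketch (SQLite in-memory; the table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"event-{i}",) for i in range(1, 11)])

def page(after_id, size=3):
    # Seek past the last-seen key instead of using OFFSET: the index on id
    # lets the database jump straight to the start of the page, so cost
    # does not grow with the page number.
    return conn.execute(
        "SELECT id, payload FROM events WHERE id > ? ORDER BY id LIMIT ?",
        (after_id, size)).fetchall()

first = page(after_id=0)
second = page(after_id=first[-1][0])  # resume from the last id of page one
print([r[0] for r in first], [r[0] for r in second])  # -> [1, 2, 3] [4, 5, 6]
```

                                      Because each request seeks past the last-seen id via the primary-key index rather than counting and discarding OFFSET rows, page 1000 costs about the same as page 1.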

                                      1.  

                                        I operate an open wifi network for friends to use, but that’s a fair point.

                                        1.  

                                          Not to say this is a bad solution, but what happens when your friends come over and ask to use your wifi? Presumably they haven’t installed your CA’s root cert. (Ignore for a moment the fact that obviously any TRUE friend would install their friend’s root cert.)

                                          Anyway the benefits outweigh the downsides, but it’s something to think about.

                                          A much better solution is to abolish t.co altogether, which is now a lot closer to happening than I would have dared to hope six months ago! I haven’t followed a t.co link in months, and with any luck never will again, but I understand others might not be so fortunate at this time.