Threads for unconed

  1. 3

    Automated code formatting

    • As an individual, you get back all of that mental energy you used to spend thinking about the best way to format your code and can spend it on something more interesting.
    • As a team, your code reviews can entirely skip the pedantic arguments about code formatting. Huge productivity win!

    I will happily die on this hill: No. Autoformat is for noobs.

    Even when they use tools like prettier, I constantly have to point out to juniors ways in which their code is hard to read, repetitive, poorly factored, and so on, on the line level. They pick names that are too long. They don’t extract common subexpressions. They nest way too much. The act of tidying up the code by hand gives you time to think about all of this, it forces you to slow down, and to develop good habits so that there is no mess to clean up in the first place.

    Auto-formatters give you only the illusion that you don’t need to care about any of this. In practice, you still do, only now you’ve taught people that it’s a “waste of time” or “nitpicky” to do so. It is a local maximum that keeps you from getting better.

    The worst symptom is when you have two or three pieces of code that are structurally near-identical, but which auto-format forces to be formatted differently because e.g. one line happens to cross some internal auto-wrapping threshold while another doesn’t. Or because autoformat forbids you from lining it up vertically. Now, code that should look the same looks wildly different because a bot was too dumb to do it right, and somebody convinced you that this is the Rule that must be followed.
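
    To make that concrete, here is a contrived illustration with hypothetical names (ctx, drawQuad, the buffers), assuming prettier’s default 80-column print width: two calls that ought to read identically end up shaped completely differently, because only the second one crosses the wrapping threshold.

    ```typescript
    // Hypothetical drawing context and buffers, declared only so the snippet type-checks.
    declare const ctx: {
      drawQuad: (src: unknown, dst: unknown, opts: object) => void;
      drawQuadMirrored: (src: unknown, dst: unknown, opts: object) => void;
    };
    declare const srcBuffer: unknown, dstBuffer: unknown, sourceBuffer: unknown, destinationBuffer: unknown;

    // Structurally identical calls; only the names are longer in the second.
    // Under an 80-column print width the first stays on one line, while the
    // second crosses the threshold and gets its options exploded vertically.
    ctx.drawQuad(srcBuffer, dstBuffer, { blend: "add", flipY: false });
    ctx.drawQuadMirrored(sourceBuffer, destinationBuffer, {
      blend: "multiply",
      flipY: false,
    });
    ```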

    Given a piece of code that was nicely formatted and lined up by hand, nobody can seriously argue that the auto-formatted version is better. It is obviously less clear, worse at communicating intent, and mangling it through auto-format is just like trampling all over a carefully manicured garden. If you do use it, auto-format should only apply to changed lines, as a tool to get you to cleaner starting point. Forcing everything to match a linter’s preferences only seems like a good trade-off if you’ve never seen code crafted with precision.

    1. 4

      I didn’t realize it until I adopted Black, but I used to spend a material amount of my time typing code - I’d estimate as much as 5-10% - agonizing over my indentation. And I was using Python!

      I used to have very strong opinions about when to break up a long set of function parameters into new lines, or how to indent a complex sequence of chained function calls.

      I resisted Black for a couple of years because of tiny, trivial differences it had from my very opinionated way of formatting code.

      Then one day I decided to drop that… and suddenly I was liberated. All of that mental energy I had spent agonizing over where to put my new lines… it turns out I could put that to good work somewhere else instead.

      As for code review: my goodness, the HOURS if not DAYS of time I’ve seen wasted on back and forth between the office pedants and junior engineers over how best to format code. I don’t regret eliminating those conversations in the slightest.

      1. 3

        I couldn’t disagree with you more.

        Even when they use tools like prettier, I constantly have to point out to juniors ways in which their code is hard to read, repetitive, poorly factored, and so on, on the line level. They pick names that are too long. They don’t extract common subexpressions. They nest way too much. The act of tidying up the code by hand gives you time to think about all of this, it forces you to slow down, and to develop good habits so that there is no mess to clean up in the first place.

        Perhaps I have a different opinion here, but none of this sounds like the job of an auto-formatter. Poor factoring, repetition, overly long variable names, failure to extract common subexpressions, excessive nesting, etc. are the sole responsibility of the author (linters can hint at some of these things, but it’s not their job to fix them), and you should be pointing them out so people can write better and clearer software.

        For me the benefit of an auto-formatter is in consistency of formatting. gofmt for example helps make code written by different authors look consistent. If someone has their editor set to 2 spaces vs 4 spaces I don’t want mixed indentation littered about the code base nor do I want to comment in PRs telling people to change their editor settings. The auto-formatter is there to make it so I don’t have to think about it.

        It sounds to me like maybe you have some poor auto-formatter rules, or perhaps you just don’t like the defaults of whatever auto-formatter you’re using? In most (all?) tools I’ve used, you can turn rules on/off globally or on specific lines if you think your own formatting is clearer. Not having to think about it, or nitpick about it in PRs, is a huge win in my opinion.

        1. 2

          I agree with almost all of your points, yet completely disagree with your conclusion. All of the things that you list (repetition, naming, and so on) are things that require understanding of the code (some common-subexpression elimination is possible, though it requires an understanding of the source language). Auto-formatting allows you to focus on this in code review because all of the trivial things are handled by a tool. If you have three almost identical bits of code that are formatted differently because of line lengths, the problem is not the auto-formatter; the problem is that you didn’t pull this out into a generic function.

          The slavish application of the rules is a huge benefit because it avoids subjective arguments. Either you can unambiguously express a rule about how code is formatted, in which case you can teach the tool to do it, or it is subjective and so you will get disagreements. Any time spent on these disagreements is time not spent on the more important things.

          In my experience, the code review quality is significantly higher on every project that I’ve worked on that uses automated tooling for formatting than ones that don’t.

          I’m still sad that we store text in revision control systems. Some years ago, I had a student implement a project called Code Editing in Local Style, which used a C or Python AST to typeset the code in the user’s preferred style (including capitalisation conventions, variable declaration locations), with the idea that you’d commit code in one representation and edit it in another. I’d love to see modern languages adopt something like this.
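
          You can approximate a crude version of that today by running a formatter in both directions, e.g. as a git clean/smudge filter pair. A rough sketch of the idea only (nothing like the actual project, which worked at the AST level and could handle things prettier can’t, like capitalisation conventions); the repo and user configs here are made up:

          ```typescript
          import * as prettier from "prettier";

          // Made-up style choices: the repository's canonical style vs. one user's
          // local preference. Wired up as git clean/smudge filters, code would be
          // committed in one representation and edited in another.
          const repoStyle: prettier.Options = { parser: "typescript", printWidth: 100, semi: true };
          const userStyle: prettier.Options = { parser: "typescript", printWidth: 80, semi: false, singleQuote: true };

          // "clean": normalize to the repo's canonical representation before committing.
          export async function clean(source: string): Promise<string> {
            return prettier.format(source, repoStyle);
          }

          // "smudge": re-typeset into the user's preferred style on checkout.
          export async function smudge(source: string): Promise<string> {
            return prettier.format(source, userStyle);
          }
          ```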

          1. 1

            I’m sympathetic to the “friction is good” argument, but obviously these things are tradeoffs with limits. In the olden days, people would write out programs on pads of paper and get everything figured out so that by the time you typed up the punchcards, it was just a matter of text entry. Lots of friction! And they managed to create, e.g., C and Unix this way. But I wouldn’t want to work that way…

            In practice, sometimes prettier will format something in a way that I find ugly, but there’s typically a way to “fix” it by introducing a comment or something that it has to format around. The benefits for 99% of lines are worth it for the 1% that it does wrong.
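
            (For what it’s worth, prettier also has an explicit escape hatch for the rare line you want to keep hand-shaped: a // prettier-ignore comment on the node, e.g.:)

            ```typescript
            // prettier-ignore
            const kernel = [
              1, 2, 1,
              2, 4, 2,
              1, 2, 1,
            ]; // hand-aligned 3x3 blur kernel, preserved exactly as written
            ```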

            FWIW, gofmt doesn’t change line breaks, only indentation, so lines that are long stay long. I can’t say I can remember disagreeing with gofmt, although Go itself has strong feelings about comma and brace placement that you have to adhere to, which I do sometimes dislike.

          1. 20

            So what you do is, you keep stepping this number up 1 millisecond at a time while playing a smooth animation, until you’ve eliminated any stuttering in the animation. And now you have done something X11 cannot do: eliminated screen tearing with the absolute minimum latency cost possible. This is fantastic. It feels fantastic.

            Wow. I don’t think I’ve ever seen such a perfect crystallization of the Linux mindset. Just… chef’s kiss. I absolutely hate it.

            1. 5

              macOS is the best UNIX DE.

              1. 1

                You could pick a good base value for end users and give them an automated setup which does basically the same thing, but just asks them to say when it’s getting bad, automatically calibrating this value.

                1. 9

                  You don’t even have to say it, because you can measure how many skips you had, and aim for some default value.
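
                  A rough sketch of that auto-calibration idea (renderFrame is a hypothetical stand-in for whatever frame-timing statistics a compositor actually exposes): render a window of frames at the current latency value, count the skips, and step the value up 1 ms until the skips fit within a small budget.

                  ```typescript
                  // Find the smallest latency value that keeps stutter within budget.
                  function calibrateLeadTime(
                    renderFrame: () => number,      // hypothetical: renders one frame, returns its duration in ms
                    maxLeadMs: number = 1000 / 60,  // never allow more than a full frame of latency
                    samples: number = 120,
                    missBudget: number = 1,
                  ): number {
                    for (let leadMs = 1; leadMs <= maxLeadMs; leadMs += 1) {
                      let skips = 0;
                      for (let i = 0; i < samples; i++) {
                        if (renderFrame() > leadMs) skips++; // frame took longer than the headroom we gave it
                      }
                      if (skips <= missBudget) return leadMs; // smallest value with acceptable stutter
                    }
                    return maxLeadMs;
                  }
                  ```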

                  1. 1

                    That’s even better, probably something that will make it into Wayland compositors eventually (maybe KDE & co. do this already?)

              1. 1

                Where do I learn to create such nice GPU (fluid) sims? Can’t say why, but it always feels like you just have to know.

                  1. 1

                    Thanks! Maybe it’s time to finally read a book again ;)

                1. 2

                  Very nice. Too bad it’s JSX-focused (and leaving out a lot of non-JSX frameworks).

                  1. 5

                    Actually all the JSX is 100% optional and there isn’t any in the core packages. The equivalent of React.createElement in Live was designed to be ergonomic.

                    1. 1

                      Ah, that’s great then! I didn’t dig deep into the docs, and all the example code I managed to find looked like JSX.

                  1. 6

                    This article tries to say a lot but meanders, and is sorely lacking in actual practical lessons from application development. The theory is mostly irrelevant, that’s why REST doesn’t mean REST.

                    As originally conceived, REST is too naive, and it only seemed to be a good idea during that brief period that client-side JS was focused on progressive enhancement: serve the exact same HTML for script and noscript, then sprinkle on extra behavior.

                    The very idea of having all state serialized in HTML implies that no other changes will be made by anyone else. It effectively requires the client to have an indefinite lock, or otherwise the HTML could become stale and lead to 404s or 403s, or result in silent last-write-wins after completing an action. So it is impractical for a connected world.

                    The more important question with REST is whether the API actually works by passing state objects wholesale back and forth, or whether most of the work is done via POST requests which perform specific mutations. The only truly RESTful JSON API is really a key-value store like CouchDB, which is theoretically pure but practically not sufficient.

                    APIs are really about enforcing policy, something which is usually done in the boilerplate of writing handlers for individual server methods or mutations. A good solution for APIs should treat policies as first class things. GraphQL has the same issue, it solves the reading-data part, but leaves the writing-data part up to individual implementations.

                    1. 6

                      The theory is mostly irrelevant

                      Maybe this helps with a bit of context: this is from Fielding’s original thesis:

                      This classification is used to identify a set of architectural constraints that could be used to improve the architecture of the early World Wide Web – Architectural Styles and the Design of Network-based Software Architectures

                      It wasn’t really aimed at what we now call APIs, but at ways of extending the web as it then existed (so indeed, lobste.rs is a fully functioning REST application, if you’ll excuse tunneling all commands over POST). As another example, WebDAV was very commonly used as a method of doing online data sync, and was based on the same principles described in the thesis.

                      And ultimately, each of the constraints enables certain abilities – for example, the focus on caching makes certain kinds of disconnected operation easier.

                      Ultimately, I half remember Fielding (or someone similar) describing REST as being designed for applications that last on the scale of decades, as the focus on document exchange rather than RPC style leans towards interactions with less coupling.

                      Conversely, the vast majority of end-user applications built today have fairly tight control of both the client and the server (think web or mobile apps). In that case, you can get away with supporting older clients for a far shorter span of time. For example, for a web app, you can relatively easily force the entire page to reload, and voila, you have your new client version.

                      Ultimately, I think the discrepancy comes about because the original REST style solves problems that most current developers don’t care about – whether for economic or other reasons.

                      1. 5

                        The theory is mostly irrelevant

                        Kind of the opposite. The theory is so ubiquitously relevant that we only have a handful of systems that have been designed that way because it works so well. When an alternative to the web shows up, like Gemini, the first question everyone asks is, “Why not the web?”

                        implies that no other changes will be made by anyone else. It effectively requires the client to have an indefinite lock, or otherwise the HTML could become stale and lead to 404s or 403s, or result in silent last-write-wins after completing an action.

                        No, it implies that the client will receive a full description of the possible actions to take from where it is now, in the last payload it received. If that payload is different from last week, that’s not the client’s problem. Stable URIs are a different issue, and not necessary for REST.

                        Nor is last-write-wins the only option for REST. You can also have semantics which are “there’s a token in the URI for writing, and if there’s been an update since that token was provided, your request fails but returns the new result and an updated URI+token that will let you write.” Or potentially a lot more URIs, if there are now new options available. You could also have any other conflict resolution scheme that you want.
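
                        A hedged sketch of that scheme from the client side (the endpoints and payload shape are made up): the representation carries a write link that embeds the token, and a stale token yields a failure plus the fresh state and a fresh link, rather than a silent overwrite.

                        ```typescript
                        type Doc = { body: string; links: { write: string } }; // the write URI embeds the token

                        async function save(doc: Doc, newBody: string): Promise<{ conflict: boolean; doc: Doc }> {
                          const res = await fetch(doc.links.write, {
                            method: "PUT",
                            headers: { "content-type": "application/json" },
                            body: JSON.stringify({ body: newBody }),
                          });
                          // On conflict (someone wrote since our token was issued) the server does not
                          // silently overwrite: it returns the current state plus a fresh URI+token,
                          // and the caller decides how to merge or whether to retry.
                          return { conflict: res.status === 409, doc: await res.json() };
                        }
                        ```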

                        APIs are really about enforcing policy

                        One aspect that some people want from APIs is enforcing policy. Or maybe it’s better to drop “API” and speak of RESTful interfaces, since they’re not just for programming applications against. The difference here is that an RPC API expects a policy to be set from outside and obeyed by all involved. A RESTful interface expects that policy is dictated by what’s available while traversing hypermedia. If I provide a link to do something, it’s allowed in the policy. If I don’t, it’s not. The client does not know for certain ahead of time. If you are talking about large, long-lived systems with different parts controlled by a huge number of unrelated entities, it is a fact of life that you won’t know ahead of time.

                        If that seems useless to the problem of making a client and server you both control stay in sync with each other, it’s because it is. But like people complaining about the complexities of relational databases when they only need to save some data in a file, that’s not what it’s for.

                        1. 1

                          The theory is relevant because RPC is usually worse than alternatives.

                        1. 1

                          I noted the lack of an “examples of famous developers” section for the third tribe. Miguel de Icaza, perhaps?

                          1. 3

                            IMO Bret Victor should not be in tribe 1, but is an archetypal tribe 3. His body of work and the talks he’s given all point in that direction. The difference is he’s also concerned about what the tool says about the toolmaker.

                          1. 7

                            This is an aside, but:

                            Apple made a fateful decision that mobile-phone internet should be app-centric, not browser/website centric. Then Android copied their mistake.

                            The irony is that this too is a rewriting of history, one which most people seem to have forgotten. When the iPhone was revealed, there was no app store, and Steve Jobs explicitly said they believed that the web was going to be the platform of the future for mobile.

                            What happened was very simple: the web couldn’t hold a candle to native back then. The iPhone browser was only usable because of numerous WebKit-only extensions, which website builders duly incorporated when they started making mobile-compatible and then mobile-first websites.

                            Naturally there was also a huge incentive for mobile apps in the form of paid app stores. But it’s crucial to remember that this came after the initial release, and that it was highly welcomed. Even the mobile web-app-shells like Cordova that emerged to bridge the gap were poor knock-offs compared to native, and never felt right.

                            The death of Flash is a similar tale. Android actually had a fully functional, working version of mobile Flash at one point. It was terrible, because unlike a web page, a Flash app was an arbitrary canvas, and it was pretty much impossible to substitute e.g. text fields and dropdowns with touch-friendly alternatives, or make buttons tolerant to fat-finger presses, or add sensible touch scrolling. The iPhone 2G needed a first-class YouTube app because YouTube used Flash to play video at the time, and trying to use that player on an Android Nexus One was completely ridiculous.

                            It is very clear to me that Apple had learned a lot of lessons about touch phones before they revealed them to the public, and Android, in typical Google arrogance, took until version 3 / Honeycomb before they had even started to catch up, and version 4 to become a serious competitor.

                            1. 1

                              Scattering the shader compiles + resource allocation around does sound like it will result in a lot of stutter around startup. Do you have a way to deal with that?

                              Also, and this is a more subjective comment, I have the general feeling that using standard OpenGL/DX/Vulkan can actually be more maintainable than a custom system like this, because GPU programmers are familiar with it. A heavyweight wrapper like this will be very unfamiliar to any new people joining the project, and while it might make things easier for people who don’t know GPU programming, I think it might make things harder for people who do.

                              Basically, I have a vague sense that it might be falling into the trap of making easy things easier at the expense of making hard things harder.

                              1. 2

                                Actually the framework is designed to let revealed preference solve that. Your vague sense is misplaced.

                                The most basic drawing abstraction is literally just a handful of lines of code that gather functions to call, and then calls them, with no idea what they do.

                                Within a render pass, the same applies: it just passes a normal WebGPU render pass encoder to a lambda, which can do anything it wants.
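
                                Schematically, the shape is just this (a simplified sketch, not the literal source; standard WebGPU types assumed):

                                ```typescript
                                // Gather opaque lambdas, then hand each one a plain GPURenderPassEncoder.
                                type DrawCall = (pass: GPURenderPassEncoder) => void;

                                function drawPass(
                                  encoder: GPUCommandEncoder,
                                  descriptor: GPURenderPassDescriptor,
                                  calls: DrawCall[],
                                ) {
                                  const pass = encoder.beginRenderPass(descriptor);
                                  for (const call of calls) call(pass); // each lambda issues whatever commands it wants
                                  pass.end();
                                }
                                ```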

                                Everything beyond that is opt-in. If you want to construct naked draw calls from pure, uncomposed shaders, you can.

                                There is no overarching scene abstraction, and the extensions to WGSL are extremely minimal, unlike almost every other engine out there. Specifically, what I wanted to avoid is exactly what you describe, which you run into in e.g. three.js: if you wish to render something that doesn’t fit into three’s scene model, you still need to pretend it is a scene, just to render e.g. a full screen quad.

                                Furthermore, the abstractions Use.GPU does have rely as much as possible on native WebGPU types, which are not wrapped in any way. I call this “No API” design.

                                In short: I recommend you actually look at its code before judging. It may surprise you. Most of the work has not gone into towering abstractions, but rather into decomposing the existing practices along saner lines that allow for à la carte, opt-in composition.

                                As for the startup problem: I compile shaders async, and hence it loads similarly to a webpage, with different elements popping in when available. If you don’t want this, you can use a Suspense-like mechanism to render fallback content/code, or to keep rendering the previous content until the new content is ready.
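
                                Outside the framework, the gist of that last part is roughly (simplified, not the actual code):

                                ```typescript
                                // Kick off an async pipeline compile and only swap it in once it resolves;
                                // until then, keep drawing the previous pipeline or some fallback.
                                let currentPipeline: GPURenderPipeline | null = null;

                                function swapWhenReady(device: GPUDevice, descriptor: GPURenderPipelineDescriptor) {
                                  device.createRenderPipelineAsync(descriptor).then((pipeline) => {
                                    currentPipeline = pipeline; // pops in when available, like an image on a page
                                  });
                                }

                                function draw(pass: GPURenderPassEncoder, emit: (p: GPURenderPassEncoder) => void) {
                                  if (!currentPipeline) return; // or draw fallback content here instead
                                  pass.setPipeline(currentPipeline);
                                  emit(pass);
                                }
                                ```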

                              1. 1

                                This seems very interesting after a quick skim, but I didn’t understand what exactly you are caching. Shader programs?

                                1. 2

                                  No, it’s caching everything. The entire point is minimal recomputation. It is so thorough that a normal interactive program doesn’t even have a render loop. It simply rewinds and reruns non-looping code instead.

                                  You only need a render loop to do continuous animation, and even then, the purpose of the loop is to just schedule a rewind using requestAnimationFrame, and make sure animations are keyed off a global time stamp.
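
                                  In sketch form (not the actual runtime code; rerun is a stand-in for the rewind-and-re-run entry point):

                                  ```typescript
                                  // The "render loop" does nothing but schedule the next rewind and expose a
                                  // global timestamp for animations to key off.
                                  let globalTime = 0;

                                  function animationLoop(rerun: () => void) {
                                    const tick = (timestamp: number) => {
                                      globalTime = timestamp; // animations read this instead of owning their own clocks
                                      rerun();                // rewind + re-run the non-looping program
                                      requestAnimationFrame(tick);
                                    };
                                    requestAnimationFrame(tick);
                                  }
                                  ```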

                                1. 3

                                  One thing I’d like to tentatively raise, because I suspect you know it already but it might be useful to know the vocabulary if you don’t already, is that for making things go fast and parallel in an automatic way in Haskell, people tend to get much better results by implementing Applicative rather than Monad. The implementation can see all the steps that are going to happen up front when it’s Applicative, so there’s a lot more freedom to work with, unlike Monad where it doesn’t know what’s going to happen next until the user function actually runs. I suspect that your “monad-ish” thing is probably already more similar to Applicative than Monad.

                                  e.g. there are some write-ups out of (ick) Facebook about how they have DSLs for making calls to lots of backend APIs in parallel, with the code still looking somewhat imperative-ish (at least by Haskell standards anyway), by implementing an Applicative instance and not using Monad.
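
                                  A loose TypeScript analogy of the difference (not how those DSLs actually work; fetchUser is hypothetical): when the whole set of requests is visible up front it can be issued in parallel or batched, whereas the sequential style hides the next step inside a continuation.

                                  ```typescript
                                  declare function fetchUser(id: string): Promise<string>; // hypothetical backend call

                                  // Applicative-ish: all requests are known before any of them run,
                                  // so the runtime is free to issue them in parallel (or batch them).
                                  async function upFront(): Promise<string[]> {
                                    return Promise.all([fetchUser("1"), fetchUser("2"), fetchUser("3")]);
                                  }

                                  // Monadic: the second request depends on the first result, so nothing
                                  // about it can be scheduled until that result exists.
                                  async function sequential(): Promise<string> {
                                    const a = await fetchUser("1");
                                    return fetchUser(a);
                                  }
                                  ```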

                                  1. 4

                                    I was hoping that the burrito references would set the appropriate tone here, which is that I’m not particularly concerned with exactly what it maps to in FP terms, because it’s not haskell-like at all.

                                    I think from the point of view of the code doing the composing, it is monad-like in that you can only bind like to like. From the point of view of the shader linker, it does have a full picture of all the code, but at that point it’s not going to do anything with it other than glue things together with some minor local polyfilling.

                                    The fact that I’m using one language (JS) to compose code in another (WGSL) means a lot of conceptual and literal purity goes right out the window.

                                    1. 1

                                      I was hoping that the burrito references

                                      I get that, but this is a technical discussion forum, so I wanted to try to be helpful on the admittedly unlikely off chance that you might not have seen some of these papers.

                                      because it’s not haskell-like at all

                                      On some level, batching-oriented graphics API code reads to me a bit like Haskell code viewed at the level of the ABI / raw machine code output. You’ve got all the manual poking of pointers and values into registers. :)

                                  1. 3

                                    Welcome to Lobsters! I’m excited to look at the runtime orchestration code. I think it’s interesting to apply the hooks abstraction to arrange a computation (here rendering), but then delegate the computation to a different execution model. This means the hooks API only memos things that are actually meaningful to memo. What I mean by that is that it’s obvious that these GPU primitives benefit from memoization.

                                    The issue I have with memo in React is that most computations in a React component are not expensive, but the component must memo them anyway (spending precious memory?) in order to avoid React doing a lot of expensive “re-render” work. Like, sorting an array of 30 items is “free”, but I need to memo it to avoid React re-diffing a big HTML tree, which is not “free”. The nice thing about the GPU model is - the render work will happen no matter what at 60hz, and is presumed “cheap” compared to the memoizable setup work.
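
                                    Concretely, the pattern I mean looks like this (hypothetical component): the sort is trivial, but without the memo the array identity changes on every render, so the memo’d child re-diffs its whole tree anyway.

                                    ```tsx
                                    import { memo, useMemo } from "react";

                                    const BigTree = memo(function BigTree({ items }: { items: string[] }) {
                                      return <ul>{items.map((i) => <li key={i}>{i}</li>)}</ul>;
                                    });

                                    function Panel({ items }: { items: string[] }) {
                                      // Sorting 30 strings is "free"; the memo exists only to keep the reference
                                      // stable so <BigTree> skips its not-so-free re-render.
                                      const sorted = useMemo(() => [...items].sort(), [items]);
                                      return <BigTree items={sorted} />;
                                    }
                                    ```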

                                    1. 5

                                      I have other posts that describe the runtime in more detail (e.g. React - The Missing Parts) which shed light on this.

                                      There are a few crucial things to note:

                                      • Live/Use.GPU does not render an HTML VDOM so there is nothing to diff with. Components merely execute code and yield values. This even includes a Suspense-like mechanism where you can yeet a suspend symbol to pause updates in a map-reduce tree. Unlike Suspense, this occurs in the forward direction only, and parents don’t need to be re-rendered if a child unsuspends. This is because the tail of a mapreduce is not actually part of the original parent, but is mounted as a separate Resume(..) continuation.

                                      • By wrapping entire components in memo(), re-rendering can be halted midstream… the same goes for passing the same immutable value to a context. In fact I’d wager most people’s mental model of when React stops re-rendering is actually wrong. The Use.GPU inspector specifically highlights re-renders in green, so you can see how few components actually need to re-evaluate. Observing this live was extremely useful to verify memoization of the resulting component tree.

                                      • GPU rendering is cheap, but there is always a lot of extra work that needs to happen that is not 60fps, such as building an atlas of SDF glyphs for text rendering. This sort of code runs only when necessary.

                                      While the API is a very close carbon copy of React, it is more appropriate to think of it as a ZIO-like effect system instead of a DOM tree.

                                      (I do have a very nice <HTML> wrapper so you can switch from Live to React mid stream, and the inspector will even show you the react tree underneath, although it is not fully inspectable)

                                      1. 1

                                        So you have a thing that kinda looks like a VDOM but it really gets used more like a scene graph? :)

                                        1. 3

                                          It gets used as a scene graph by the GLTF module, by passing a matrix down from parent to child and doing the matrix multiplications along the way. But that’s a relatively new addition, as there is no actual scene model like in three.js. There are also no “isDirty” flags, because that would just replicate what the memoization is already doing.

                                          In contrast, the plot module instead passes down a shader function using a React-like context, so it’s not limited to affine transforms and can compose e.g. a polar coordinate transform with literally anything else.
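
                                          A generic CPU-side illustration of why that matters (in Use.GPU the composition happens at the shader level, so this is only the shape of the idea): composing arbitrary position functions instead of multiplying matrices means a polar transform composes with anything else.

                                          ```typescript
                                          type Vec3 = [number, number, number];
                                          type Transform = (p: Vec3) => Vec3;

                                          // Function composition replaces matrix multiplication.
                                          const compose = (outer: Transform, inner: Transform): Transform =>
                                            (p) => outer(inner(p));

                                          // A non-affine transform that no 4x4 matrix can express.
                                          const polar: Transform = ([r, theta, z]) =>
                                            [r * Math.cos(theta), r * Math.sin(theta), z];

                                          const translate = (dx: number, dy: number, dz: number): Transform =>
                                            ([x, y, z]) => [x + dx, y + dy, z + dz];

                                          // A child passed down through context just wraps its parent's transform.
                                          const childTransform = compose(translate(1, 0, 0), polar);
                                          ```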