Threads for Sietsebb

  1. 17

    I like the idea, but this part really bothers me:

    you can craft a stylish modern site that will run faster than greased lightning even on mobile thanks to Google AMP technology

    How about saying no to Google’s attempts to monopolize the web?

    1. 6

      For what it’s worth, AMP support is off by default. The switch to enable it has this warning:

      AMP (accelerated mobile pages) creates mobile-optimized pages for your static content that render fast.

      Please note: when this option is enabled your website will load third-party scripts provided by Google’s AMP CDN.

      1. 16

        That first paragraph is still misleading, though. You know what else creates mobile-optimized pages that render fast? Publii with AMP turned off.

        Google’s AMP project is not benign. It sells itself as ‘faster pages’, but you don’t need AMP-the-technology for that; and it does all sorts of things that are tenuously connected to speed but clearly increase Google’s power over the web and its users. Especially: when a user goes from Google dot com to an AMP page, the site never sees that traffic, because Google serves the page itself (all the better to track you with, my dear).

        So Google’s purpose for AMP is pretty clear, and dmbaturin makes a good point that it should not be voluntarily included in any project that cares about the web.

        1. 2

          I largely agree, but this is just a technicality. I wouldn’t say the Publii project cares about the web; they care about their users. And their users care about their own visibility. And someone told them they need AMP. So they got AMP.

          …or something close to that. Again, not saying that this is proper, just that this is how it probably goes down, for a lot of software projects.

      2. 1

        Why stop at Google AMP? I admire that Sourcehut Pages’ stated limitations include:

        Connections from CloudFlare’s reverse proxy are dropped. Do not help one private company expand its control over all internet traffic.

        1. 1

          What does this mean in practice? Can I be blocked from viewing content hosted on SH pages in some contexts?

          1. 2

            No, the primary goal (I’m assuming) is to prevent people from following ‘guru’ advice and putting Cloudflare as a caching proxy in front of their site. If Pages isn’t already a distributed CDN for static content, I’m sure the goal would be to at least have a couple of mirrors distributed globally if that level of performance is needed in the future, which makes Cloudflare redundant as well. Using Cloudflare in any capacity at this point centralizes the internet, because of how many people use it: it’s ‘free’, and SEO’d advice always recommends it. We saw Cloudflare go down not long ago, and a massive portion of the clearnet went down with it. Cloudflare also tends to throw up hCAPTCHAs for users on the Tor network, using VPNs, or just trying to use WiFi in a non-Western country, putting an unnecessary burden on users seeking privacy.

      1. 9

        A thoughtful little essay! As it opens by discussing an essay that proposes removing the URL bar from browsers, I’d like to share this from the ever-thoughtful Dorian Taylor:

        it’s pretty well-established that certain vested interests want to kill the url, which is actually an insanely powerful and individually-empowering invention

        no urls == you need an intermediary to locate shit.

        —source: (Imagine if Twitter toots didn’t have URLs, only a share button..)

        To be clear: Noncombatant themself comes down for URLs, not against them. Wouldn’t want to accidentally imply the opposite.

        1. 4

          it’s pretty well-established that certain vested interests want to kill the url, which is actually an insanely powerful and individually-empowering invention

          no urls == you need an intermediary to locate shit.

          This right here is a compelling case for wrenching Chromium and Android out of the hands of Google. And finding a funding stream for Mozilla that doesn’t tie them to the world’s biggest surveillance companies.

        1. 1

          Dangit, I need to change the message on my last commit!

          git commit --amend

          …but staging your next commit loads the footgun. If you then try to amend the last message, the footgun goes off and blends what you wanted to be a new commit into the last commit.
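          A sketch of the safe path, using git commit’s --only flag to amend the message while leaving the staged changes alone (the throwaway repo and filenames here are just for demonstration):

```shell
# Demo in a throwaway repo (assumes git is installed)
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo one > file.txt
git add file.txt
git commit -q -m "first commit"

# Stage something new, then amend ONLY the message:
echo two > other.txt
git add other.txt
git commit --amend --only -q -m "corrected message"

# other.txt is still staged as a separate, future commit;
# it was not blended into the amended one.
```

          Without --only, that last amend would have swallowed other.txt into the first commit.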

          1. 2

            Anyway, leaving aside Git’s Mutually-Supporting Network of Fortified Footgun Emplacements (which the site author clearly resents no less than I do): Oh Shit, Git!?! / Dangit, Git!?! is just really nice to have at your shoulder when things inevitably fall apart. Big help when learning a system: knowing that screwing up can be undone, and that YOU can undo it. Which is what Oh Shit, Git!?! gives you.

          1. 6

            Just a correction: that composability is only possible through a classless language design is simply false. For example, NumPy has the array interface protocol. You can call numpy.sin(array) (sine of the variable array), where array could be a NumPy array, a CuPy array in GPU memory, a Dask array distributed on a cluster, a combination of both, or any other object which supports the array interface. Those libraries are developed by different people and organizations, and besides talking about the standardization of the array interface they don’t communicate much in order to make it work.

            Having said that, I would be very keen on learning the advantages and additions of Julia’s multiple dispatch beyond that.
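            For the curious, a minimal sketch of how an object opts into that protocol; the MyArray class is invented for illustration, and only plain NumPy is assumed:

```python
import numpy as np

class MyArray:
    """Toy container that opts into NumPy's conversion protocol via __array__."""
    def __init__(self, data):
        self.data = data

    def __array__(self, dtype=None, copy=None):
        # NumPy calls this whenever it needs a plain ndarray view of the object
        return np.asarray(self.data, dtype=dtype)

# numpy.sin knows nothing about MyArray, yet the call works,
# because the ufunc machinery coerces its input through __array__:
result = np.sin(MyArray([0.0, np.pi / 2]))
```

            Libraries like CuPy and Dask implement richer versions of this idea (e.g. __array_ufunc__) so that computation stays on the GPU or cluster instead of being converted.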

            1. 3

              Having said that, I would be very keen on learning the advantages and additions of Julia’s multiple dispatch beyond that.

              I haven’t used Julia, but I know function dispatch from R, and object-dot-method dispatch from Python/pandas, and I can tell you a few advantages of function dispatch. They boil down to this: you can start a new package to expand what an existing class can do.

              I’m going to use “method” to mean a (function, class) combination, regardless of whether it’s done R-style (define a specialized function) or Python-style (define a method in a class’s namespace).

              Very brief description of R’s S3 system for generic function dispatch / object-oriented-programming-before-Java-narrowed-what-that-means:

              • specialized functions have names like summary.glm, summary.lm, etc. (NB: these are ordinary functions with a dot in the name.)

              • User calls the generic function summary(mymodel), which dispatches by looking at the object’s class(es) and then calling an appropriately-named specialized function.

              • Whether you’re starting with a generic function or with a class, you can list the available methods:

                  methods(summary)  # See the different classes that `summary` supports
                  methods(class="lm")  # See the different functions that support `lm`

                (Try here:

              • Further reading in the docs and Wickham’s Advanced R chapter on S3
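              For readers coming from Python: the standard library’s functools.singledispatch gives a rough feel for this style. It dispatches a generic function on its first argument’s class, and the specialized functions can be defined anywhere (the summary generic below is a made-up analogue of R’s, not a real library function):

```python
from functools import singledispatch

@singledispatch
def summary(obj):
    # Fallback, analogous to R's summary.default
    return f"object of type {type(obj).__name__}"

@summary.register
def _(obj: list):
    # Specialized implementation; could live in a different module entirely
    return f"list of {len(obj)} items"

@summary.register
def _(obj: dict):
    return f"dict with keys: {', '.join(obj)}"

print(summary([1, 2, 3]))   # list of 3 items
print(summary(42))          # object of type int
```

              It is single dispatch only, so it is closer to R’s S3 than to Julia’s multiple dispatch, but it shows the same separation of generic from implementation.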

              Alright, on to the stories.

              First story: Data Frames Forever, or, how R’s Core Data Type Supported Every New Way Of Working

              The data frame is a fundamental object type in R. It is defined in the base library, I think.

              Here’s how the split-apply-combine work pattern changed over the years – let’s assume a data frame (player, year, runs) of baseballers-or-cricketers-or-joggers, and we want to define a new column ‘career_year’ based on when the sporter first started.

              • It used to require some rather manual coding that I won’t reproduce here.

              • Then came ddply:

                df1 <- ddply(df, "player",
                    function(chunk) {
                        chunk$career_year <- chunk$year - min(chunk$year) + 1
                        chunk
                    })
              • and then came dplyr (and the %>% pipe syntax innovation):

                df1 <- df %>%
                    group_by(player) %>%
                    mutate(career_year = year - min(year) + 1)

              All this time, data.frame did not need to change. ‘But that’s just defining new functions,’ I hear you holler. Yes, it is; but because these were functions, you could define new ones in your own package without touching data.frame.

              You see this a lot in R: an old package’s interface isn’t much used anymore, but its data structures are still used as a lingua franca while the interfaces evolve.

              You don’t see this a lot in Python, because if you want to add new methods .argh(), .grr(), and .hmm() to a class you’ll have to extend the class itself. Pandas, as a result, is a huge library. At some point Pandas sprouted df.pipe(myfunction) syntax, which allows you to pass your own function; but that has not stopped the Pandas package from growing.
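              As a sketch of that df.pipe(myfunction) escape hatch, here is the career_year example redone in pandas; the helper function and column names are hypothetical, mirroring the R example above:

```python
import pandas as pd

def career_year(df, group_col="player", year_col="year"):
    # Hypothetical helper: lives in "your own package", adds a column
    # without pandas itself needing a new DataFrame method
    out = df.copy()
    out["career_year"] = (
        out[year_col] - out.groupby(group_col)[year_col].transform("min") + 1
    )
    return out

df = pd.DataFrame({"player": ["a", "a", "b"], "year": [2001, 2003, 2002]})
result = df.pipe(career_year)  # equivalent to career_year(df)
```

              .pipe only forwards the call, so the extension point is a plain function, much like the R pattern, just without dispatch on class.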

              Second story: Broom, or, How a Whole Bunch of Classes All Got A New Method.

              broom is an R package that extracts information from a variety of fitted models into a tidy data frame.

              It does this by defining 3 generic functions: tidy() (per-component), glance() (per-model), and augment() (per-observation); and a whole host of specialized variants to support models from many different packages.

              It started smallish, and its obvious usefulness and extensibility made it really take off. It now supports a whole host of models from a host of different packages: some unilaterally implemented in the Broom package, some contributed to Broom by package authors.

              And I seem to recall, but cannot currently search, that some packages include functions like tidy.ourmodeltype themselves in order to support Broom’s interface. And that is only possible if it doesn’t matter where the specialized myfunction.myclass lives.

              NB: this is different from giving a new class a few of the same methods (same interface) as an existing class; this is about thinking up a few new methods (an interface), and then supporting a whole bunch of already-existing classes.
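              The Broom pattern, sketched in Python’s singledispatch terms (every name below is invented for illustration): one package owns the generic, and support for an existing class can be registered from wherever that class lives:

```python
from functools import singledispatch

# The "Broom" side: one package defines the generic interface.
@singledispatch
def tidy(model):
    raise NotImplementedError(f"no tidy() method for {type(model).__name__}")

# A completely separate "modeling" package defines its own class...
class LinearFit:
    def __init__(self, coefs):
        self.coefs = coefs

# ...and registers support for the generic itself, without touching "Broom":
@tidy.register
def _(model: LinearFit):
    return [{"term": t, "estimate": e} for t, e in model.coefs.items()]

rows = tidy(LinearFit({"intercept": 1.5, "slope": 0.3}))
```

              The registration can live in the generic’s package, the class’s package, or a third one; the dispatch table does not care.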

              … I am fully aware that the above is neither rigorous argumentation nor inspiring rhetoric, but I hope it nonetheless captures some of the positive effects of namespace-agnostic method dispatch that I’ve seen in the R ecosystem. Which probably also apply to Julia.

              1. 5

                you can start a new package to expand what an existing class can do.

                You can also do this in a number of OO languages that support extending classes. These include Objective-C (which calls these “categories”), Swift, and … damn, I’m drawing a blank on the others. (I’m sure Ruby, because you can do anything to a class at runtime in Ruby.)

                The common functionality is that you can declare an extension of a class, regardless of its provenance, and within that scope add methods or declare conformance with protocols/interfaces.

                Nim is interesting in that it blurs together functions and methods, so f(x, y) is semantically the same as x.f(y). Since it also has C++-style overloading by parameter type, you get a lot of the same functionality as Julia/R, with the caveat that it’s (mostly) earlier-bound, i.e. the binding is done by declared type not runtime type. (The exception is that you can declare polymorphic methods on classes, and calls to those are late-bound, “virtual” in C++ parlance.)

            1. 28

              Could we stop using 0-information titles like this, please? There is no way for me to have any idea what this is about without following the link.

              1. 2

                There’s a text box after the ‘tags’ section that can be used to add more information while submitting.

                1. 19

                  And that text box says “do not use this […] summarize the story or explain why you posted it”. ;)

                  This isn’t the first situation where following the submission guidelines seems counter-productive, and “editorializing” the title, e.g., by adding “[using fzf for bash history search]”, would be helpful. Myself, I believe a requirement to keep the original title in the story title field should be the only hard requirement, especially considering that most people here comply with the spirit of that rule at all times anyway.

                  1. 6

                    Fun fact: confusion between “edit” and “editorialize” is part of how we ended up here.

                    • ‘Edit’ merely means to alter
                    • ‘Editorialize’ means to inject an opinion, as in a newspaper’s editorial opinion column.
                    • The original submission guidelines asked submitters not to editorialize their submission titles
                    • This got misinterpreted as a ban on any edits, the community started reminding each other of this rule-as-they-understood-it, and eventually the old intent was forgotten to the point that the guidelines were reworded.

                    And yes, I agree that brief title clarifications, summaries, and ‘this is why I think this is interesting’ introductions would make the forum a nicer place, and help keep us from ever becoming a linkdump.

                    1. 1

                      I didn’t know any of that. And I am not sure I’d touch stuff, because I’m afraid to get it wrong.

                    2. 3

                      Yeah I wasn’t sure whether to add that or not, it would probably have been a good idea since we’re missing a shell tag. ‘programming’ and ‘unix’ are rather vague. I erred on the side of caution here.

                    3. 1

                      flag as off topic

                  1. 6

                    Repost prompted by

                    Were I to survey the programming blog posts I’ve read in the last ten years, I would file this little blog post among my favourite ten. Every project I started this way got a better design out of it. But then, I like writing docs, so maybe this works because it gels with my usual design process.

                    1. 3

                      In rare flow moments at night these past few weeks, I wrote READMEs for a couple of projects. While exhausting, it was therapeutic to update these constantly. A usable, good-enough breakdown of work is much better than staring at a mountain of thoughts.

                      This and the larger ‘writing is thinking’ theme is certainly a solid mental model. Everyone can verify independently and derive benefit from doing it.

                    1. 10

                      Joe needed to transfer a number of files between two computers, but didn’t have an FTP server. “But I did have distributed Erlang running on both machines” (quote from the book, not the blogpost), and he then proceeded to implement a file transfer program in, what? 5-10 lines of Erlang, depending on how you count? Beautiful.

                      1. 6

                        It’s certainly impressive. As someone used to corporate environments 15 years after Joe wrote this, I’m even more amazed at the network access he enjoyed.

                        1. 2

                          Ye-ess, the network access and ability to start/access a server program is key, isn’t it?

                          If it were SSH instead of Erlang, one could write

                          ssh joe@server sh -c "base64 < myfile" | base64 --decode > myfile.copy

                          to much the same effect. This is not downplaying Erlang at all; it’s a second illustration of how powerful a remote procedure call system can be. (And SSH can only return text; imagine having Erlang, where your RPC can return any data structure a local call can. (In part because Erlang deliberately limits its data structures to ones that can be passed as messages.))

                          1. 3

                            If it were SSH instead of Erlang, one could write

                            The funny thing is that iirc SCP works like this (or has fallbacks to work like this)

                            And SSH can only return text, imagine having Erlang where your RPC can return any data structure a local call can

                            SSH can transmit arbitrary data. That’s how terminal control characters still work.

                            1. 2

                              Heh, I’ve used ssh cat a few times when scp and sftp weren’t available or didn’t function.

                      1. 2

                        I tried to get the IBIS webapp mentioned in the first video running. I apparently succeeded, despite really not knowing anything about perl. This is more or less what I was able to muddle out (I may have forgotten a step):

                        git clone
                        cd p5-app-ibis
                        # This adds 'cpanm' which something said to use.
                        cpan App::cpanminus
                        cpanm inc::Module::Install
                        cpanm Module::Install::Catalyst
                        perl ./Makefile.PL
                        cpanm Convert::Color
                        cpanm RDF::Trine
                        # At this point I noticed that ./Makefile.PL made a Makefile, but it sure didn't help with the dependency hell.
                        cpanm Data::GUID::Any
                        cpanm Encode::Base58::BigInt Data::UUID::NCName
                        # This sure didn't do it.
                        cpanm --installdeps .
                        # This package has a test failure.
                        cpanm --force Data::UUID::NCName
                        cpanm MooseX::Types::Moose
                        vi app_ibis.conf
                            change line 66
                                dsn       dbi:Pg:dbname=trine
                                dsn       dbi:SQLite:dbname=trine
                        1. 2

                          Brave of you to try to fire it up. I probably haven’t updated the Makefile.PL in a while, thus the missing dependencies.

                          I made that thing in 2013 for the express purpose of testing RDF::KV. It turned out to be marginally useful, so I kept poking at it for a year or two afterward, but I consider it way too sclerotic for what I actually need out of a tool like that. I’m in the process of rewriting it (including a completely different visualization).

                          1. 1

                            Replying real quick before (the time switch of) my modem turns itself off for the night (v. personal form of regulating an ADHD-related thing):

                            Extremely cool that you’re actually trying it out, you are a webizen after my own heart. I’m following your instructions right now, see if I can get Ibis running, too.

                            While make runs, some initial thoughts on playing around with the prototype at :

                            • the non-labeled, non-spatial circle graph gives me no sense of place/structure/what connects to what. Really, the only thing it tells me is how many/few relations (incoming/outgoing/looping) a node (Issue/Position/Argument) has.
                            • the compact list on the front page of every Issue, Position, and Argument in the reasoning, even without information? That is awesome to get a feel of ‘what sort of stuff is in here’. It’s like a word cloud but good.
                            • okay I have more to type but only minutes left, sending now. Busy few days ahead, might get back to you in a few days? [turned out I was just too slow, briefly turning on mobile Internet to send this]
                            1. 2

                              Yeah the visualization is trash; I just implemented it because it was easy. I have a new one in mind for the rewrite.

                              1. 1

                                Eh, it may be trash for conveying the graph structure, but it was pretty good at conveying a ‘vibe’. It made a very striking image, made me want to understand it, made me want to know more about this IBIS thing. And I enjoyed trying to puzzle it out while reading through the issues/positions/etc.

                                Also, I am amused to note that after a day or so of not thinking about the image, when I recalled it just now my brain went “Ah, the one that was like a Star Trek UI, but in a circle!” Make of that what you will. (Maybe because of the round(ed) bits + the colour scheme?)

                                1. 1

                                  Ha, thanks for being so charitable!

                                  I just cribbed the design (though I wrote my own implementation) from Krzywinski’s Circos plot. My original design tried to use his hive plot, but I found the aspect ratio of the hive plot unpredictable, so I swapped it out for the Circos, which stays put in that regard.

                                  I’d say the drawback to either is that they aren’t the best for representing the IBIS structure which ends up being pretty (but rarely strictly) hierarchical, so a Sugiyama (GraphViz) style treatment would ultimately be better, which is the general direction I’m headed (with some pretty hefty customizations).

                                  Another experiment I had going on with this tool was to use the embedded RDFa to marshal the CSS: I use Sass (I can’t remember if it’s computed on the fly or not) to match the palette to RDF classes and properties, which, being embedded in the markup hierarchy, turns out to be a pretty powerful way to style web pages. (Not to mention SVG; I use this technique pretty much everywhere now.)

                                  I did a more expansive writeup which talks about the history of IBIS and the direction I plan to (eventually) take the tool.

                          1. 12

                            The lesson here sounds more like “bad protocols will make your client/server system slow and clumsy”, not “move all of your system’s code to the server.” The OP even acknowledges that GraphQL would have helped a lot. (Or alternatively something like CouchDB’s map/reduce query API.)

                            I don’t really get the desire to avoid doing work on the client side. Your system includes a lot of generally-quite-fast CPUs provided for free by users, and the number of these scales 1:1 with the number of users. Why not offload work onto them from your limited and costly servers? Obviously you’re already using them for rendering, but you can move a lot of app logic there too.

                            I’m guessing that the importance of network protocol/API design has been underappreciated by web devs. REST is great architecturally but if you use it as a cookie-cutter approach it’s non-optimal for app use. GraphQL seems a big improvement.

                            1. 17

                              Your system includes a lot of generally-quite-fast CPUs provided for free by users

                              Yes, and if every site I’m visiting assumes that, then pretty quickly, I no longer have quite-fast CPUs to provide for free, as my laptop is slowly turning to slag due to the heat.

                              1. 8

                                Um, no. How many pages are you rendering simultaneously?

                                1. 3

                                  I usually have over 100 tabs open at any one time, so a lot.

                                  1. 5

                                    If your browser actually keeps all those tabs live and running, and those pages are using CPU cycles while idling in the background and the browser doesn’t throttle them, I can’t help you… ¯\_(ツ)_/¯

                                    (Me, I use Safari.)

                                    1. 3

                                      Yes, but assuming three monitors you likely have three or four windows open. That’s four active tabs; Chrome puts the rest of them to sleep.

                                      And even if you only use apps like the one from the article, and not the well-developed ones like the comment above suggests, it’s maybe five of them at the same time. And you’re probably not clicking frantically all over them at once.

                                      1. 2

                                        All I know is that when my computer slows to a crawl the fix that usually works is to go through and close a bunch of Firefox tabs and windows.

                                        1. 4

                                          There is often one specific tab which for some reason is doing background work and ends up eating a lot of resources. When I find that one tab and close it my system goes back to normal. Like @zladuric says, browsers these days don’t let inactive tabs munch resources.

                                2. 8

                                  I don’t really get the desire to avoid doing work on the client side.

                                  My understanding is that it’s the desire to avoid some work entirely. If you chop up the processing so that the client can do part of it, that carries its own overhead. How do you feel about this list?

                                  Building a page server-side:

                                  • Server: Receive page request
                                  • Server: Query db
                                  • Server: Render template
                                  • Server: Send page
                                  • Client: Receive page, render HTML

                                  Building a page client-side:

                                  • Server: Receive page request
                                  • Server: Send page (assuming JS is in-page. If it isn’t, add ‘client requests & server sends the JS’ to this list.)
                                  • Client: Receive page, render HTML (skeleton), interpret JS
                                  • Client: Request data
                                  • Server: Receive data request, query db
                                  • Server: Serialize data (usu. to JSON)
                                  • Server: Send data
                                  • Client: Receive data, deserialize data
                                  • Client: Build HTML
                                  • Client: Render HTML (content)

                                  Compare the paper Scalability! But at what COST?, which found that the overhead of many parallel processing systems gave them a high “Configuration that Outperforms a Single Thread”.
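                                  The two flows above can be sketched as a toy pair of handlers (every name and payload here is hypothetical); the client-side flow needs the extra data endpoint plus the serialize/deserialize/build steps:

```python
import json

DB = {"title": "Hello", "body": "World"}  # stand-in for a database query

def server_side_page():
    # Server-side flow: query + template + full HTML in one response
    return f"<html><h1>{DB['title']}</h1><p>{DB['body']}</p></html>"

def data_endpoint():
    # Client-side flow needs this extra endpoint (serialize, usu. to JSON)...
    return json.dumps(DB)

def client_side_render(payload):
    # ...plus this browser-side step (deserialize, build HTML)
    data = json.loads(payload)
    return f"<h1>{data['title']}</h1><p>{data['body']}</p>"
```

                                  Each hop is cheap on its own; the point is only that the second flow has more of them per first paint.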

                                  1. 4

                                    That’s an accurate list… for the first load! One attraction of doing a lot more client-side is that after the first load, the server has the same list of actions for everything you might want to do, while the client side looks more like:

                                    • fetch some data
                                    • deserialize it
                                    • do an in-place rerender, often much smaller than a full page load

                                    (Edit: on rereading your post your summary actually covers all requests, but missed how the request and response and client-side rerender can be much smaller this way. But credit where due!)

                                    That’s not even getting at how much easier it is to do slick transitions or to maintain application state correctly across page transitions. Client side JS state management takes a lot of crap and people claim solutions like these are simpler but… in practice many of the sites which use them have very annoying client side state weirdness because it’s actually hard to keep things in sync unless you do the full page reload. (Looking at you, GitHub.)

                                    1. 6

                                      When I’m browsing on mobile devices I rarely spend enough time on any single site for the performance benefits of a heavy initial load to kick in.

                                        Most of my visits are one page long, so I often end up loading heavy SPAs when a lighter single page, optimized to load fast from an uncached blank state, would have served me much better.

                                      1. 4

                                        I would acknowledge that this is possible.

                                         But that’s almost exactly what the top comment said. People use the framework of the day for a blog, not flattening it, or remixing it, or whatever.

                                        SPAs that I use are things like Twitter, the tab is likely always there. (And on desktop i have those CPU cores.)

                                        It’s like saying, I only ride on trains to work, and they’re always crowded, so trains are bad. Don’t use trains if your work is 10 minutes away.

                                        But as said, I acknowledge that people are building apps where they should be building sites. And we suffer as the result.

                                        What still irks me the most are sites with a ton of JavaScript. So it’s server-rendered, it just has a bunch of client-side JavaScript that’s unused, or loading images or ads or something.

                                    2. 4

                                      You’re ignoring a bunch of constant factors. The amount of rendering to create a small change on the page is vastly smaller than that to render a whole new page. The most optimal approach is to send only the necessary data over the network to create an incremental change. That’s how native client/server apps work.

                                      1. 5

                                        In theory, yes. But in practice, if the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, and executed through four levels of interpreters, culminating in JIT-compiling the code to native machine code, all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer, then maybe sending a “whole new page” consisting of 200 kB of static HTML upon submitting a form would be more optimal.

                                        1. 4

                                          In theory yes but if in practice if the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, executed through 4 levels of interpreters culminating in JIT compiling the code to native machine code all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer

                                          This is hyperbole. Sending a “whole new page” of 200 kB of static HTML has your userspace program block on the kernel as bytes are written into some socket buffer, the NIC interrupts the OS to grab those bytes, the NIC generates packets containing the data, userspace control is then handed back to the app, which waits until the OS notifies it that there’s data to read, and on and on. I can tell a story like this about anything on a non-embedded computer made in the last decade.

                                          Going into detail for dramatic effect doesn’t engage with the original argument, nor does it elucidate the situation. Client-side rendering makes you pay a one-time cost of more CPU time and potentially more network bandwidth in exchange for less incremental CPU and bandwidth. That’s all. Making the tradeoff wisely is what matters. If I’m loading a huge Reddit or HN thread, for example, it might make more sense to load some JS on the page and have it adaptively load comments as I scroll or request more content. I’ve fetched large threads on these sites from their APIs before, and they can get as large as 3-4 MB when rendered as a static HTML page. Grab four of these threads and you’re looking at 12-16 MB. If I can pay a bit more on page load, then I can end up transferring a lot less bandwidth through adaptive content fetching.

                                          If, on the other hand, I’m viewing a small thread with a few comments, then there’s no point paying that cost. Weighing this tradeoff is key. On a mostly-text blog where you’re generating kB of content, client-side rendering is probably silly and adds more complexity, CPU, and bandwidth for little gain. If I’m viewing a Jupyter-style notebook with many plots, it probably makes more sense for me to be able to choose which pieces of content I fetch to not fetch multiple MB of content. Most cases will probably fit between these two.

                                          Exploring the tradeoffs in this space (full React-style SPA, HTMX, full SSR) can help you come to a clean solution for your usecase.

                                          1. 1

                                            I was talking about the additional overhead required to achieve “sending only the necessary data over the network”.

                                    3. 4

                                      I don’t really get the desire to avoid doing work on the client side.

                                      My impression is that it is largely (1) to avoid the JavaScript ecosystem and/or (2) to avoid splitting app logic in half/duplicating app logic. Ultimately, your validation needs to exist on the server too, because you can’t trust clients. As a rule of thumb, SSR then makes more sense when you have lower interactivity and not much more logic than validation; CSR makes sense when you have high interactivity and substantial app logic beyond validation.

                                      But I’m a thoroughly backend guy so take everything that I say with a grain of salt.

                                      Edit: added a /or. Thought about making the change right after I posted the comment, but was lazy.

                                      1. 8

                                        (2) avoid splitting app logic in half/duplicating app logic.

                                        This is really the core issue.

                                        For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it. GraphQL is an attempt to cut down on how much work this is, but it’s always going to be some amount of work compared to just creating a context dictionary in your controller that you pass to the HTML renderer.

                                        However, for a team that is big enough to have separate frontend and backend teams, using a SPA decreases the amount of communication necessary between the frontend and backend teams (especially if using GraphQL), so even though there’s more work overall, it can be done at a higher throughput since there’s less stalling during cross team communication.

                                        There’s a problem with MPAs that they end up duplicating logic if something can be done either on the frontend or the backend (say you’ve got some element that can either be loaded upfront or dynamically, and you need templates to cover both scenarios). If the site is mostly static (a “page”) then the duplication cost might be fairly low, but if the page is mostly dynamic (an “app”), the duplication cost can be huge. The next generation of MPAs try to solve the duplication problem by using websockets to send the rendered partials over the wire as HTML, but this has the problem that you have to talk to the server to do anything, and that round trip isn’t free.

                                        The next generation of JS frameworks are trying to reduce the amount of duplication necessary to write code that works on either the backend or the frontend, but I’m not sure they’ve cracked the nut yet.

                                        1. 4

                                          For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it

                                          Whether this is true depends on whether the web app is a client for your service or the client for your service. The big advantage of the split architecture is that it gives you a UI-agnostic web service where your web app is a single front end for that service.

                                          If you never anticipate needing to provide any non-web clients to your service then this abstraction has a cost but little benefit. If you are a small team with short timelines that doesn’t need other clients for the service yet then it is cost now for benefit later, where the cost may end up being larger than the cost of refactoring to add abstractions later once the design is more stable.

                                          1. 1

                                            If you have an app and a website as a small team, lol, why do you hate yourself?

                                            1. 4

                                              The second client might not be an app, it may be some other service that is consuming your API.

                                        2. 4

                                          (2) avoid splitting app logic in half/duplicating app logic.

                                          The other thing is to avoid duplicating application state. I’m also thoroughly a backend guy, but I’m led to understand that the difficulty of maintaining client-side application state was what led to the huge proliferation of SPA frameworks. But maintaining server-side application state is easy, and if you’re doing a pure server-side app, you expose state to the client through hypertext (HATEOAS). What these low-JS frameworks do is let you keep that principle — that the server state is always delivered to the client as hypertext — while providing more interactivity than a traditional server-side app.

                                          (I agree that there are use-cases where a more thoroughly client-side implementation is needed, like games or graphics editors, or what have you.)

                                          1. 1

                                            Well, there’s a difference between controller-level validation and model-level validation. One is about not fucking up by sending invalid data, the other is about not fucking up by receiving invalid data. Both are important.

                                          2. 4

                                            Spot on.

                                            this turns out to be tens (sometimes hundreds!) of requests because the general API is very normalized (yes we were discussing GraphQL at this point)

                                            There’s nothing about REST I’ve ever heard of that says that resources have to be represented as separate, highly normalized SQL records, just as GraphQL is not uniquely qualified to stitch together multiple database records into the same JSON objects. GraphQL is great at other things like allowing clients to cherry-pick a single query that returns a lot of data, but even that requires that the resolver be optimized so that it doesn’t have to join or query tables for data that wasn’t requested.

                                            The conclusion, which can be summed up as, “Shell art is over,” is an overgeneralized aesthetic statement that doesn’t follow from the premises. Even if the trade-offs between design choices were weighed fully (which they weren’t), a fundamentally flawed implementation of one makes it a straw man argument.

                                            1. 1

                                              The Twitter app used to lag like hell on my old Thinkpad T450. At the very least, it’d kick my fan into overdrive.

                                              1. 1

                                                Yay for badly written apps :-p

                                                Safari will notice when a page in the background is hogging the CPU, and either throttle or pause it after a while. It puts up a modal dialog on the tab telling you and letting you resume it. Hopefully it sends an email to the developer too (ha!)

                                                1. 1

                                                  It would spin up on load, not in the background, because loading all that JS and initializing the page is what would cause the CPU usage. And then, after I closed the page, 20 seconds later someone else would send me another Twitter link and I’d get to hear the jet engine again.

                                            1. 7

                                              I especially love the second half. ‘Haskell books that don’t exist but should’ is a nifty format to discuss an ecosystem; the little blurbs the author wrote do a good job of making you want those books; and you can even discover some existing gems, because every imagined book has some links to real books that cover the topic, just not in Haskell.

                                              1. 9

                                                “If it compiles, it works” is a common sentiment expressed by Elm users (as well as Rust, Haskell, and OCaml users). But this episode is more than its title suggests: the two hosts thoughtfully examine their own experiences and feelings to figure out

                                                • what they mean when they say ‘if it compiles it works’ – do they feel it with every program? No? What sort of programs do they feel it with?
                                                • and where does that feeling come from? How does the language/compiler contribute?
                                                • is it just programs that give you ‘yay, the compiler has my back’ feelings? No, it’s also certain changes; but what kind of changes?
                                                • and which of their Elm projects did not give them ‘if it compiles it works’ feelings?
                                                • and what was it about those programs that made them feel the compiler’s assurances were not enough?
                                                • which programming choices make it likely that you will still confuse yourself despite the compiler’s assurances?

                                                Super interesting stuff, discussed in a rigorous, experiential way that I haven’t seen before in a programming podcast or article.

                                                1. 6

                                                  Very impressive! I particularly admire your Summary of what kinds of schema changes are safe, and how asymmetric fields (required for the writer, but optional for the reader) allow safe schema evolution.

                                                  I would love to hear more about the inception of this project. What combination of research / colleagues / experiences / thoughts inspired you to build this? Did it require a lot of refining to put into practice?

                                                  1. 4

                                                    At the companies I’ve worked at, required fields were always treated with suspicion or banned completely. This meant you never knew which fields you had to set in order to use an API (at least not from the types), and the owner of the API could not rely on deserialized messages actually having their fields populated. There are analogous but less appreciated problems with enums and more generally with sum types: when pattern matching, you’re always supposed to have a fallback case for when the input isn’t recognized, even when it’s often not clear what to do in that situation. Everyone just gets used to this hand-waviness, and software quality suffers as a result.

                                                    I started investigating why organizations are averse to having stronger type safety guarantees, and (after reading many internal debates at Google) I concluded it’s because the technology didn’t support them enough. So about a year ago I set out to rethink how we design and evolve APIs from first principles.

                                                    The idea of asymmetric fields can be thought of as an application of Postel’s law to APIs. The concept isn’t new, but I think encoding it in the type system this way is. Perhaps one of the reasons why asymmetric fields weren’t invented sooner is that we don’t teach category theory to computer science students, which is a shame, as I relied heavily on intuitions from category theory (especially duality) when designing Typical.
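For readers unfamiliar with the idea, here is a rough Python sketch of an asymmetric field (my own illustration of the concept, not Typical's actual generated code — the type and function names are invented): the writer is forced to set the field, while the reader tolerates its absence, so messages written before the field existed keep deserializing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    email: Optional[str]  # asymmetric: required when writing, optional when reading

def serialize(user: User) -> dict:
    # Writer-side guarantee: refuse to emit a message without the field.
    if user.email is None:
        raise ValueError("email must be set before writing")
    return {"name": user.name, "email": user.email}

def deserialize(payload: dict) -> User:
    # Reader-side tolerance: accept old messages that predate `email`.
    return User(name=payload["name"], email=payload.get("email"))
```

Once every reader in the fleet handles the optional side, the field can later be promoted to required (or demoted and deleted) without breaking anyone mid-rollout.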

                                                  1. 3

                                                    […] until the users want to use the table with 1000 rows. […] when 3000 rows came up it was game over. Too many bindings for all rows, pure HTML file has about 3 MB […] JavaScript Client-Side application would be a much better choice.

                                                    My experience with large tables of data in client-side JS frameworks is the exact same as the description of LiveView: you can’t bind or have interactivity without it getting out of hand. Do folks have good strategies for that? My go-to is to avoid interfaces which might need that.

                                                    Sorta funny because plain old html has no problem with that style of data.

                                                    1. 3

                                                      Usually the strategy is to only put content into the visible subset of rows, plus some buffer, and then drop rows as they scroll off screen. It’s a pain to do though and you need to fake scrollbars and whatnot. It’s something you do out of necessity, not choice.
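The index arithmetic behind that windowing strategy is simple; here is a rough sketch in Python (the function name, buffer size, and pixel values are all made up for illustration, not from any particular framework):

```python
def visible_window(scroll_top, viewport_height, row_height, total_rows, buffer=5):
    """Return (first, last) indices of the rows worth rendering, plus the
    spacer heights above and below that fake the full-length scrollbar."""
    first = max(0, scroll_top // row_height - buffer)
    last = min(total_rows, (scroll_top + viewport_height) // row_height + 1 + buffer)
    top_spacer = first * row_height                     # empty space above the window
    bottom_spacer = (total_rows - last) * row_height    # ...and below it
    return first, last, top_spacer, bottom_spacer

# 3000 rows of 30px in a 600px viewport: only ~30 rows ever exist in the DOM.
print(visible_window(scroll_top=1500, viewport_height=600, row_height=30, total_rows=3000))
```

On every scroll event you recompute the window, render only rows `first..last`, and let the two spacers keep the scrollbar honest.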

                                                      1. 1

                                                        This suggests that the web frontend library world might enjoy some sort of r1k challenge/yardstick: can your framework sort and redisplay one thousand table rows in under 100ms?

                                                      1. 6

                                                        Pretty! What was it like to make, did the medium cooperate, or resist?

                                                        To the spam-voters: this isn’t spam, this is somebody showing something they made. The front page will take care of itself, let’s keep some room to hang out and express our humanity, eh?

                                                        1. 3

                                                          I found the code of the verifier-in-the-middle! It’s here, in a codebase of the Ocean Observatories Initiative’s cyber infrastructure:


                                                          It took me some work to find this; the artefact link in the paper’s References section went to a Confluence page that is both dead and not in the Wayback Machine. Thanks be to SciHub; without them I wouldn’t have been able to access the paper with a better link.

                                                          Side note 1: dr. Neykova and prof. Yoshida both have super interesting publication lists.

                                                          Side note 2: there is a recent (2018) talk The Do’s and Don’ts of Error Handling, by Joe Armstrong, that also mentions MITM protocol verifiers/enforcers. (Link goes directly to that bit; it runs for about 2 minutes.) His argument runs ‘protocols are contracts; contracts assign blame, which encourages both parties to fulfil the contract. What is sent on the wire is rarely observed; a contract checker in the middle lets you know which component is out of spec’. The paper “How to Verify Your Python Conversations” is an (earlier!) practical instance of such a contract checker.

                                                          1. 1

                                                            Further notes: this is not, in fact, a verifier-in-the-middle, but something even cooler: local verifiers that are guaranteed to enforce global correctness.

                                                            The authors (Rumyana Neykova did the implementing in the OOI codebase; Nobuko Yoshida is a Very Important Researcher in the field of Multi-Party Session Types, calculus, and verification):

                                                          1. 2

                                                            Anything for C++? I have a project for which I want to hack together a quick and dirty interactive cross-platform UI as a prototype, before I go to the trouble of building a GUI.

                                                            Right now I’m just using stdout and readline, but I’m about to hit a wall where I need to be able to report incoming data asynchronously while accepting input (think IRC or a MUD client), which is beyond what I know how to do in a terminal.

                                                            1. 3

                                                              I have never programmed with it (or C++), but there is a modern C++ port of Turbo Vision:

                                                              I have used a Turbo Vision project: Borland’s Turbo Pascal IDE had clean, clear looks, and using it was very pleasant. Outlines, drop shadows to distinguish pressed from unpressed, scrollbars, the whole kit and caboodle.

                                                              P.s. You can also experience Turbo Vision on the web, because somebody ported its looks to a CSS file.

                                                              1. 3

                                                                The output of the modern Turbo Vision looks nice, but I was slightly terrified of the example, which includes a load of bare operator new calls in a single expression, which is then not exception safe (if any of them throws, the others leak). I’d love to see that evolve into something in modern C++, where all of these calls took a std::unique_ptr, so you could create them with std::make_unique and have explicit ownership transfer.

                                                                The other one I’ve been looking at intermittently (after seeing it linked from here) was ImTui. This is mostly interesting because it’s actually a back end for Dear ImGui, and so it should be relatively easy to write code that runs in a terminal or a GUI. Unfortunately, the documentation for Dear ImGui puts me off in a few ways: their explanation of what an immediate-mode GUI framework is spends all of its time telling me what it isn’t, and then stops; they use raw char* for strings (not great even for small strings, because the ownership is unclear, and awful if you want a more complex text representation); and they claim that it’s a feature that they don’t use modern C++ features (i.e. the things that make C++ a not-so-terrible language).

                                                                1. 1

                                                                  ImTui is IMHO visually a lot more appealing than the other C++ libraries suggested, which seem to be clumsily aping GUI windows (with “shadows”, even) in a way that reminds me of DOS. The lack of modern-day C++ features in the API is off-putting, though…

                                                              2. 2

                                                                This is one of the libs I am taking a look at:


                                                                1. 2

                                                                  cwidget, which is used by aptitude, still works. (Necessarily so, since aptitude works.) It uses libsigc++ for callbacks.

                                                                1. 2

                                                                  This is just a beautiful little blog post – a practical problem personal to the author (but shared by many), a practical solution, concisely written, entertainingly told, I loved it. It feels like one of those many small knowledge-building and -disseminating posts from the heyday of programming weblogs dropped through time to be here, with us, in the year 2021.

                                                                  1. 2

                                                                    Thanks 😊

                                                                  1. 7

                                                                    The title doesn’t do the interview justice, though. The page has a more representative summary of the many topics discussed:

                                                                    • The latest edition of Real World OCaml
                                                                    • The MirageOS library operating system
                                                                    • Docker for Mac and Windows, which is based on MirageOS
                                                                    • Cambridge University’s OCaml Labs
                                                                    • NASA’s Mars Polar Lander
                                                                    • The Xen Project, which made extensive use of OCaml in their control stack.
                                                                    • The Multicore branch of OCaml, and the multicore monthly updates.
                                                                    1. 4

                                                                      It’s a great episode, even for those of us who haven’t seen a single line of ML code in our lives!

                                                                    1. 15

                                                                      This isn’t “modern” or “ergonomic” so much as it is just bloated. Look at what the default starship prompt includes. Who can keep track of what that is currently showing? In a minimal configuration (e.g. just status, host name, pwd, etc), a standard/scripted fish prompt will be faster because fish is stateful and caches all these so there’s no need to launch an external process. Fisher is included but I don’t see any use of fisher in the config file - also, fish doesn’t need fisher, it’s modular enough as it is.

                                                                      This isn’t Unix-y because exa deliberately breaks compatibility with ls flags/switches even when they wouldn’t conflict, and maintainers have refused to add an option to fall back to ls arg handling if the binary is aliased to ls (I reported both issues years and years ago).

                                                                      zoxide I can admittedly get behind - I used autojump for years but it’s no longer maintained. But you don’t need to use z instead of cd - it pulls the CWD from the prompt integration, so you can keep using cd and reach for z only when you need to fuzzy-jump, which makes it more Unix-y and more compatible.

                                                                      1. 5

                                                                        Look at what the default starship prompt includes. Who can keep track of what that is currently showing?

                                                                        Me, because 90% of those items are inactive at any point in time! I rarely set my python_env and lua_env simultaneously. But I’d love to have a prompt that shows neither or either or both, as appropriate.

                                                                        NB: this is me endorsing only the idea, I haven’t used this linked prompt specifically. Perhaps we are both opining naïvely, but I certainly am.

                                                                        1. 3

                                                                          I gave it a try. The text above the prompt gives you info about the “state” or “type” of project you’re currently cd’d into. This is what it shows after I cd ~:

                                                                          ~ via 🐘 v7.2.24 via 🐍 v3.6.9 

                                                                          WTF does that even mean? Why do I need to see this each time I press enter, before and after every command? Why do I need emoji (note, not glyphs like those in nerd font but actual, childish emoji)? Why does my home directory tell me what I now understand after significant head banging to be a postgres version (postgres isn’t even installed on this laptop! Not even just the client tools!) and a python version? I have four different python3 versions installed and one python2 version. Why show me 3.6.9 when python3.10 is the most recent I’ve used? Why do I need to see this again? And most importantly, how do I make this all go away?

                                                                          What a stupid execution of an OK idea (using an external process written in a fast language like rust to process things like git info faster). I’m fine with the default fish prompt, thanks.

                                                                          Edit: ahahaha it’s so slow in an actual (tiny!) git repository that this has now gone from ridiculous to funny. fish’s default git prompt with branch/status integration and all that is faster than the default starship git integration that shows you nothing more than the name of the branch you’re in. This is hilarious.

                                                                        2. 2

                                                                          Thanks for the input! I’ll consider your thoughts and feedback with great care.

                                                                          1. 2

                                                                            Sorry for being so harsh. I was triggered once I saw exa, because for me that’s personal since I contributed to the project and had high hopes for it but was completely turned off when they refused to support ls compatibility for a tool they self-declare as a modern ls replacement. You shouldn’t have to think twice before being able to ls -ahltr something.

                                                                            Please see my other reply in a sibling comment about starship. You may want to reconsider, I think you’ll find that if you’re using the latest version of fish, this will slow down your prompt rather than speed it up. The newest fish releases have a much-improved and very snappy git prompt.

                                                                            1. 2

                                                                              All good, friend! Sorry you had that experience with Exa and I really appreciate the tip about the latest Fish version. I’ll look into it right away!

                                                                        1. 3

                                                                          When I saw Python was gaining pattern matching, the first use case I thought of was Option¹ & Result² types, but I haven’t seen them mentioned yet in the discussions on What We Can Use Pattern Matching in Python For. Which surprised me, because I thought a use case so common in other languages (my personal experience is with Rust and Elm) would have been mentioned by now.
                                                                          ¹ Option[MyType] = Some(my_value) / None
                                                                          ² Result[MyType, MyErrorType] = Ok(my_value) / Err(my_error_value)

                                                                          Both have a usage pattern that you see in most pattern-matching use cases: choose a code path based on which variant you’ve got (e.g. Ok/Err); subsequently work with the value wrapped by the variant. Pattern-matching lets you do both at once, so, yay ergonomics – until you realize that a match statement does in ~five lines what if x is None / return / continue processing does in ~three.

                                                                          # traditional
                                                                          # name: Optional[str], i.e. 'my name' or None
                                                                          if name is None:  # Can't omit this, mypy will complain
                                                                              return None
                                                                          return name.upper()
                                                                          # modern cover
                                                                          # name: Option[str], i.e. Some('my name') or Empty
                                                                          match name:
                                                                              case Some(x):
                                                                                  return Some(x.upper())
                                                                              case Empty():
                                                                                  return Empty()

                                                                          So maybe what I like about Option&Result is not that they enable pattern-matching usage – it’s that I know them from Rust, where the type system is not optional, and so type-correctness guarantees are pervasive. Move the pattern to Python, and maintaining pervasive type-correctness starts looking a lot more like work.

                                                                          But then I hit an example of something Option[T] can do, and Optional[T] very much can not: calling a method (like map) on the empty case. Empty.map(…) exists just fine and returns Empty; None.upper() could never, because None does not have methods.

                                                                          # name: Option[str]
                                                                          # Works whether name is Some('my name') or Empty
                                                                          name.map(str.upper)
                                                                          ^ Nice, no? So now I’m just about convinced that bringing Option & Result to Python could actually be useful, and is not just noodling in the margins. More of Patina’s Option API here. And that you can end with pattern-matching on case Some(x): is just a bonus.