Threads for hashemi

  1. 12

    The lesson here sounds more like “bad protocols will make your client/server system slow and clumsy”, not “move all of your system’s code to the server.” The OP even acknowledges that GraphQL would have helped a lot. (Or alternatively something like CouchDB’s map/reduce query API.)

I don’t really get the desire to avoid doing work on the client side. Your system includes a lot of generally-quite-fast CPUs provided for free by users, and the number of these scales 1:1 with the number of users. Why not offload work onto them from your limited and costly servers? Obviously you’re already using them for rendering, but you can move a lot of app logic there too.

    I’m guessing that the importance of network protocol/API design has been underappreciated by web devs. REST is great architecturally, but used as a cookie-cutter approach it’s suboptimal for app use. GraphQL seems a big improvement.

    1. 16

      Your system includes a lot of generally-quite-fast CPUs provided for free by users

      Yes, and if every site I’m visiting assumes that, then pretty quickly, I no longer have quite-fast CPUs to provide for free, as my laptop is slowly turning to slag due to the heat.

      1. 8

        Um, no. How many pages are you rendering simultaneously?

        1. 3

          I usually have over 100 tabs open at any one time, so a lot.

          1. 5

            If your browser actually keeps all those tabs live and running, and those pages are using CPU cycles while idling in the background and the browser doesn’t throttle them, I can’t help you… ¯\_(ツ)_/¯

            (Me, I use Safari.)

            1. 3

              Yes, but assuming three monitors you likely have three or four windows open. That’s four active tabs; Chrome puts the rest of them to sleep.

              And even if you only use apps like the one from the article, and not the well-developed ones like the comment above suggests, it’s maybe five of them at the same time. And you’re probably not clicking frantically all over them at once.

              1. 2

                All I know is that when my computer slows to a crawl the fix that usually works is to go through and close a bunch of Firefox tabs and windows.

                1. 4

                  There is often one specific tab which for some reason is doing background work and ends up eating a lot of resources. When I find that one tab and close it my system goes back to normal. Like @zladuric says, browsers these days don’t let inactive tabs munch resources.

        2. 8

          I don’t really get the desire to avoid doing work on the client side.

          My understanding is that it’s the desire to avoid some work entirely. If you chop up the processing so that the client can do part of it, that carries its own overhead. How do you feel about this list?

          Building a page server-side:

          • Server: Receive page request
          • Server: Query db
          • Server: Render template
          • Server: Send page
          • Client: Receive page, render HTML

          Building a page client-side:

          • Server: Receive page request
          • Server: Send page (assuming JS is in-page. If it isn’t, add ‘client requests & server sends the JS’ to this list.)
          • Client: Receive page, render HTML (skeleton), interpret JS
          • Client: Request data
          • Server: Receive data request, query db
          • Server: Serialize data (usu. to JSON)
          • Server: Send data
          • Client: Receive data, deserialize data
          • Client: Build HTML
          • Client: Render HTML (content)
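To make the lists concrete, here’s a toy sketch of both flows in JavaScript. The `db` object, handler names, and template are all invented for illustration; real frameworks differ.

```javascript
// Server-side flow: query, render template, send finished HTML.
function renderPageServerSide(db, postId) {
  const post = db.getPost(postId); // Server: query db
  // Server: render template
  return `<article><h1>${post.title}</h1><p>${post.body}</p></article>`;
}

// Client-side flow, part 1: the server only queries and serializes.
function apiHandler(db, postId) {
  const post = db.getPost(postId); // Server: query db
  return JSON.stringify(post);     // Server: serialize (usu. to JSON)
}

// Client-side flow, part 2: the client deserializes and builds the HTML.
function buildHtmlClientSide(json) {
  const post = JSON.parse(json);   // Client: deserialize
  // Client: build HTML
  return `<article><h1>${post.title}</h1><p>${post.body}</p></article>`;
}
```

Both flows end in identical markup; the difference is where the template runs and how many request/response hops happen before the user sees content.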

          Compare the paper Scalability! But at what COST?, which found that the overhead of many parallel-processing systems gave them a high “Configuration that Outperforms a Single Thread” (COST).

          1. 4

            That’s an accurate list… for the first load! One attraction of doing a lot more client-side is that after the first load, the server has the same list of actions for everything you might want to do, while the client side looks more like:

            • fetch some data
            • deserialize it
            • do an in-place rerender, often much smaller than a full page load

            (Edit: on rereading your post, your summary actually covers all requests, but misses how the request, the response, and the client-side rerender can be much smaller this way. But credit where due!)
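A toy sketch of why that post-first-load path is cheaper (all names invented): the full page carries every row plus the chrome around it, while the incremental path only ships and rebuilds the changed fragment.

```javascript
// Invented sketch: full-page render vs. the fragment an in-place rerender needs.
function renderFullPage(items) {
  const rows = items.map((item) => `<li>${item}</li>`).join("");
  return `<html><body><header>site chrome</header><ul id="list">${rows}</ul></body></html>`;
}

function renderOneRow(item) {
  // The in-place rerender only builds (and the server only sends) this fragment.
  return `<li>${item}</li>`;
}

const items = Array.from({ length: 100 }, (_, n) => `item ${n}`);
const fullPage = renderFullPage(items);
const fragment = renderOneRow("item 100");
```

On this toy data the fragment is a tiny fraction of the full page, which is the “often much smaller than a full page load” point.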

            That’s not even getting at how much easier it is to do slick transitions or to maintain application state correctly across page transitions. Client side JS state management takes a lot of crap and people claim solutions like these are simpler but… in practice many of the sites which use them have very annoying client side state weirdness because it’s actually hard to keep things in sync unless you do the full page reload. (Looking at you, GitHub.)

            1. 6

              When I’m browsing on mobile devices I rarely spend enough time on any single site for the performance benefits of a heavy initial load to kick in.

              Most of my visits are one page long - so I often end up loading heavy SPAs when a lighter single page, optimized to load fast from an uncached blank state, would have served me much better.

              1. 4

                I would acknowledge that this is possible.

                But that’s almost exactly what the top comment said. People use the framework of the day for a blog. Not flattening it, or remixing it, or whatever.

                SPAs that I use are things like Twitter, where the tab is likely always there. (And on desktop I have those CPU cores.)

                It’s like saying, I only ride on trains to work, and they’re always crowded, so trains are bad. Don’t use trains if your work is 10 minutes away.

                But as said, I acknowledge that people are building apps where they should be building sites. And we suffer as the result.

                What still irks me the most are sites with a ton of JavaScript. So it’s server-rendered, it just has a bunch of client-side JavaScript that’s unused, or loading images or ads or something.

            2. 4

              You’re ignoring a bunch of constant factors. The amount of rendering to create a small change on the page is vastly smaller than that to render a whole new page. The most optimal approach is to send only the necessary data over the network to create an incremental change. That’s how native client/server apps work.

              1. 5

                In theory yes, but if in practice the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, executed through 4 levels of interpreters culminating in JIT compiling the code to native machine code all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer, then maybe sending a “whole new page” consisting of 200 kB of static HTML upon submitting a form would be more optimal.

                1. 4

                  In theory yes, but if in practice the “most optimal approach” requires megabytes of JS code to be transmitted, parsed, executed through 4 levels of interpreters culminating in JIT compiling the code to native machine code all the while performing millions of checks to make sure that this complex system doesn’t result in a weird machine that can take over your computer

                  This is hyperbole. Sending a ‘“whole new page” of 200 kb of static HTML’ has your userspace program block on the kernel as bytes are written into some socket buffer, the NIC grabs these bytes and generates packets containing the data, userspace control is then handed back to the app, which waits until the OS notifies it that there’s data to read, and on and on. I can write this kind of dramatic description for anything on a non-embedded computer made in the last decade.

                  Going into detail for dramatic effect doesn’t engage with the original argument nor does it elucidate the situation. Client-side rendering makes you pay a one-time cost (more CPU time and potentially more network bandwidth up front) in exchange for less incremental CPU and bandwidth later. That’s all. Making the tradeoff wisely is what matters. If I’m loading a huge Reddit or HN thread, for example, it might make more sense to load some JS on the page and have it adaptively load comments as I scroll or request more content. I’ve fetched large threads on these sites from their APIs before and they can get as large as 3-4 MB when rendered as a static HTML page. Grab four of these threads and you’re looking at 12-16 MB. If I can pay a bit more on page load then I can end up transiting a lot less bandwidth through adaptive content fetching.

                  If, on the other hand, I’m viewing a small thread with a few comments, then there’s no point paying that cost. Weighing this tradeoff is key. On a mostly-text blog where you’re generating kB of content, client-side rendering is probably silly and adds more complexity, CPU, and bandwidth for little gain. If I’m viewing a Jupyter-style notebook with many plots, it probably makes more sense for me to be able to choose which pieces of content I fetch to not fetch multiple MB of content. Most cases will probably fit between these two.

                  Exploring the tradeoffs in this space (full React-style SPA, HTMX, full SSR) can help you come to a clean solution for your usecase.

                  1. 1

                    I was talking about the additional overhead required to achieve “sending only the necessary data over the network”.

            3. 4

              I don’t really get the desire to avoid doing work on the client side.

              My impression is that it is largely (1) to avoid the JavaScript ecosystem and/or* (2) to avoid splitting app logic in half/duplicating app logic. Ultimately, your validation needs to exist on the server too, because you can’t trust clients. As a rule of thumb, SSR then makes more sense when you have lower interactivity and not much more logic than validation; CSR makes sense when you have high interactivity and substantial app logic beyond validation.
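One way to blunt problem (2) is to write validation once and run the identical function on both the client (for fast feedback) and the server (because clients can’t be trusted). A sketch with invented rules:

```javascript
// Sketch: one validator, bundled for the browser and imported on the server.
// The rules and field names are invented for illustration.
function validateSignup(form) {
  const errors = [];
  // Deliberately loose email check, just for the example.
  if (!/^[^@\s]+@[^@\s]+$/.test(form.email || "")) errors.push("invalid email");
  if ((form.password || "").length < 8) errors.push("password too short");
  return errors;
}
```

Of course, sharing the function only works when both halves run JavaScript, which is itself one of the arguments people make for Node backends.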

              But I’m a thoroughly backend guy so take everything that I say with a grain of salt.


              Edit: added a /or. Thought about making the change right after I posted the comment, but was lazy.

              1. 8

                (2) avoid splitting app logic in half/duplicating app logic.

                This is really the core issue.

                For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it. GraphQL is an attempt to cut down on how much work this is, but it’s always going to be some amount of work compared to just creating a context dictionary in your controller that you pass to the HTML renderer.

                However, for a team that is big enough to have separate frontend and backend teams, using a SPA decreases the amount of communication necessary between the frontend and backend teams (especially if using GraphQL), so even though there’s more work overall, it can be done at a higher throughput since there’s less stalling during cross team communication.

                There’s a problem with MPAs that they end up duplicating logic if something can be done either on the frontend or the backend (say you’ve got some element that can either be loaded upfront or dynamically, and you need templates to cover both scenarios). If the site is mostly static (a “page”) then the duplication cost might be fairly low, but if the page is mostly dynamic (an “app”), the duplication cost can be huge. The next generation of MPAs try to solve the duplication problem by using websockets to send the rendered partials over the wire as HTML, but this has the problem that you have to talk to the server to do anything, and that round trip isn’t free.

                The next generation of JS frameworks are trying to reduce the amount of duplication necessary to write code that works on either the backend or the frontend, but I’m not sure they’ve cracked the nut yet.

                1. 4

                  For a small team, a SPA increases the amount of work because you have a backend with whatever models and then the frontend has to connect to that backend and redo the models in a way that makes sense to it

                  Whether this is true depends on whether the web app is a client for your service or the client for your service. The big advantage of the split architecture is that it gives you a UI-agnostic web service where your web app is a single front end for that service.

                  If you never anticipate needing to provide any non-web clients to your service then this abstraction has a cost but little benefit. If you are a small team with short timelines that doesn’t need other clients for the service yet then it is cost now for benefit later, where the cost may end up being larger than the cost of refactoring to add abstractions later once the design is more stable.

                  1. 1

                    If you have an app and a website as a small team, lol, why do you hate yourself?

                    1. 4

                      The second client might not be an app, it may be some other service that is consuming your API.

                2. 4

                  (2) avoid splitting app logic in half/duplicating app logic.

                    The other thing is to avoid duplicating application state. I’m also thoroughly a backend guy, but I’m led to understand that the difficulty of maintaining client-side application state was what led to the huge proliferation of SPA frameworks. But maintaining server-side application state is easy, and if you’re doing a pure server-side app, you expose state to the client through hypertext (HATEOAS). What these low-JS frameworks do is let you keep that principle — that the server state is always delivered to the client as hypertext — while providing more interactivity than a traditional server-side app.

                  (I agree that there are use-cases where a more thoroughly client-side implementation is needed, like games or graphics editors, or what have you.)

                  1. 1

                    Well, there’s a difference between controller-level validation and model-level validation. One is about not fucking up by sending invalid data, the other is about not fucking up by receiving invalid data. Both are important.

                  2. 4

                    Spot on.

                    this turns out to be tens (sometimes hundreds!) of requests because the general API is very normalized (yes we were discussing GraphQL at this point)

                    There’s nothing about REST I’ve ever heard of that says that resources have to be represented as separate, highly normalized SQL records, just as GraphQL is not uniquely qualified to stitch together multiple database records into the same JSON objects. GraphQL is great at other things like allowing clients to cherry-pick a single query that returns a lot of data, but even that requires that the resolver be optimized so that it doesn’t have to join or query tables for data that wasn’t requested.
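The cherry-picking point can be sketched in a few lines: a resolver that only computes the fields the query asked for, so unrequested joins never run. All names and the `db` shape here are invented.

```javascript
// Minimal sketch of GraphQL-style field selection (names invented).
function resolvePost(db, postId, requestedFields) {
  const resolvers = {
    title:    () => db.posts[postId].title,
    body:     () => db.posts[postId].body,
    // The "join" to the comments table only happens if it was requested.
    comments: () => db.comments.filter((c) => c.postId === postId),
  };
  const result = {};
  for (const field of requestedFields) {
    if (field in resolvers) result[field] = resolvers[field]();
  }
  return result;
}
```

A query for just `["title"]` never touches the comments data at all, which is exactly the optimization the comment above says resolvers need.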

                    The conclusion, which can be summed up as, “Shell art is over,” is an overgeneralized aesthetic statement that doesn’t follow from the premises. Even if the trade-offs between design choices were weighed fully (which they weren’t), a fundamentally flawed implementation of one makes it a straw man argument.

                    1. 1

                      The Twitter app used to lag like hell on my old Thinkpad T450. At the very least, it’d kick my fan into overdrive.

                      1. 1

                        Yay for badly written apps :-p

                        Safari will notice when a page in the background is hogging the CPU, and either throttle or pause it after a while. It puts up a modal dialog on the tab telling you and letting you resume it. Hopefully it sends an email to the developer too (ha!)

                    1. 23

                      Is a language good because it has many features? My current thesis is that adding features to languages can open up new ways to encode entire classes of bugs, but adding features cannot remove buggy possibilities.

                      1. 23

                        If you have a foot-gun in your arsenal and you add a new safe-gun, sure, technically that’s just one more way you can shoot yourself in the foot, but that’s missing the point of having a safe-gun.

                        Many features can be used as less bug prone alternatives to old constructs. E.g., match expression instead of a switch statement where you could forget the assignment or forget a break and get unintentional fall-through. Same way features like unique_ptr in C++ can help reduce bugs compared to using bare pointers.
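The fallthrough footgun is easy to demonstrate. The thread is about PHP, but JavaScript’s switch shares the same behavior, so here is a sketch in JS (names invented); the lookup-table version plays the role of a match-style single-value-per-arm construct.

```javascript
// The classic footgun: a forgotten `break` silently falls through.
function classifyBuggy(code) {
  let label;
  switch (code) {
    case 200:
      label = "ok"; // missing `break` here...
    case 404:
      label = "not found"; // ...so 200 falls through and gets this label
      break;
    default:
      label = "unknown";
  }
  return label;
}

// A map-based lookup can't fall through: one value per arm, like match.
function classifySafe(code) {
  const labels = { 200: "ok", 404: "not found" };
  return labels[code] ?? "unknown";
}
```

The buggy version mislabels a 200 response as “not found”, and nothing in the language flags it.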

                        1. 12

                          Another thing worth mentioning is that PHP has also grown some good linters that keep you away from the unsafe footguns. I believe it’s gotten really good over the years.

                          1. 7

                            Just to fill this out:

                            Psalm

                            PHPStan

                            EA Inspections Extended

                            Sonar

                            I actually run all of these. Obviously no linter is perfect and you can still have bugs, but if you’re passing all of these with strict types enabled, you’re not writing the bad amateur code that got PHP its reputation from the “bad old days”. PHP’s not perfect but it’s no more ridiculous than, say, JavaScript, which curiously doesn’t suffer from the same street cred problems.

                            1. 6

                              …JavaScript, which curiously doesn’t suffer from the same street cred problems.

                              I see what you’re saying, but JS actually does kinda have serious street cred problems. I mean, there are a ton of people who basically view JS programmers as second-class or less “talented”. And JS as a language is constantly mocked. I think the difference is that JS just happens to be the built-in language for the most widely deployed application delivery mechanism of all time: the web browser.

                          2. 1

                            It’s not as if match replaced switch; and why did switch have default fallthrough to begin with, whilst match doesn’t?

                            1. 2

                              It’s probably just taken verbatim from C. It’s funny because PHP seems to have taken some things from Perl, which curiously does not have this flaw (it does allow a fallthrough with the next keyword, so you get the best of both worlds).

                              1. 1

                                Switch has been in PHP since at least version 3.0 which is from the 1990s. Match doesn’t replace switch in the language but it can replace switch in your own code, making it better.

                            2. 15

                              I disagree. People saying this usually have C++ on their mind, but I’d say C++ is an unusual exception in a class of its own. Every other language I’ve seen evolving has got substantially better over time: Java, C#, PHP, JS, Rust. Apart from Rust, these are old languages, that kept adding features for decades, and still haven’t jumped the shark.

                              PHP has actually completely removed many of its worst footguns like magic quotes or include over HTTP, and established patterns/frameworks that keep people away from the bad parts. They haven’t removed issues like inconsistent naming of functions, because frankly that’s a cosmetic issue that doesn’t get in the way of writing software. It’s very objectionable to people who don’t use PHP. PHP users have higher-priority higher-impact wishes for the language, and PHP keeps addressing these.

                              1. 2

                                removed many of its worst footguns

                                or the infamous mysql API (that was replaced by mysqli)

                                edit: Also I like that the OOP vs functional interfaces keep existing. My old code just runs and I get the choice between OOP and functional stuff (and I can switch as I like)

                                1. 1

                                  I liked the original mysql API. It was the easiest to use, with proper documentation, back then. A footgun is a good analogy. A gun can be used in a perfectly safe manner. Of course, if you eyeball the barrel or have no regard for basic safety rules about it being loaded or where it is pointed at any time, then yeah, things are going to go south sooner or later.

                                  Likewise, the old functional mysql API was perfectly usable and I never felt any worry about being hacked through SQL injection. If you are going to pass numbers as string parameters or rely on things like auto-escape, then just like in the gun example, things are not going to end well. But let’s all be honest: at that point, it is expected to be hacked.

                                  1. 1

                                    I haven’t been around the PHP community in any serious capacity for probably 17 years now, but “with proper documentation” was a double edged sword. The main php.net website was a fantastic documentation reference, except for the part where lots of people posted really terrible solutions to problems on the same page as the official documentation. As I grew as a developer, I learned where a lot of the footguns were, but starting out the easy path was to just grab the solution in the comments on the page and use it, with all of the accompanying downfalls.

                                    1. 1

                                      Already back in the day, it baffled me that the site even had comments, let alone people relying on them. I would never blindly trust anything in the comments.

                              2. 8

                                There is only one way of modifying a language that works in practice: add new features. As one of my colleagues likes to say, you can’t take piss out of a swimming pool. Once a feature is in a language, you can’t remove it without breaking things. You can, however, follow this sequence:

                                1. Add new feature.
                                2. Recommend against using old feature.
                                3. Refactor your codebase to avoid the old feature.
                                4. Add static analysis checks to CI that you aren’t using the old feature.
                                5. Provide compiler options to make use of the old features a hard error.

                                At this point, the old feature technically exists in the language, but not in your codebase and not in new code. I’ve seen this sequence (1-4, at least) used a lot in C++, where unsafe things from C++98 were gradually refactored into modern C++ (C++11 and later), things like the C++ Core Guidelines were written to recommend against the older idioms, then integrated into static analysers and used in CI, so the old usages gradually fade.

                                If you manage to get to step 5, then you can completely ignore the fact that the language still has the old warts.
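Step 4 of the sequence can be as simple as a textual check wired into CI. A toy JavaScript sketch; the banned names and sample code are invented, and a real check would use the language’s own static analyser rather than regexes:

```javascript
// Toy static check: find call sites of deprecated functions in source text.
function findDeprecatedUses(source, bannedNames) {
  const hits = [];
  for (const name of bannedNames) {
    const re = new RegExp(`\\b${name}\\s*\\(`, "g");
    let m;
    while ((m = re.exec(source)) !== null) {
      hits.push({ name, index: m.index });
    }
  }
  return hits;
}

// In CI you'd run this over the codebase and fail the build on any hit (step 5's
// "hard error" in spirit, enforced outside the compiler).
function ciGate(sources, bannedNames) {
  return sources.every((src) => findDeprecatedUses(src, bannedNames).length === 0);
}
```

The point is only that once the check exists and gates merges, the old feature stops appearing in new code, even though the language still has it.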

                                1. 6

                                  I thought I was going crazy. Needed validation as no one would state the obvious.

                                  None of these features is a game changer for PHP. And even less so is all the Composer and Laravel craze, which pretty much boils down to a silly explosion of Javaesque boilerplate code.

                                  Heck, even the introduction of a new object model back in PHP 5 had marginal impact on the language at best.

                                  PHP’s killer features were:

                                  • Place script in location to deploy and map a URL to it
                                  • Out-of-the-box support for MySQL. Easy-to-use alternatives were paid back then, and connecting to MySQL or PostgreSQL was a PITA in most languages.
                                  • A robust template engine. It still is among the best and most intuitive to use out there. Although alternatives exist for every language.
                                  • Affordable availability on shared hosting with proper performance. This blew the options out of the water, with alternatives costing up to three orders of magnitude more for a minimum setup.

                                  These things are not killer features anymore. Writing a simple webapp with a Sinatra-like framework is easier than setting up PHP. The whole drop-a-file-to-deploy model only made sense in the days of expensive shared servers. It is counterproductive in the $3 VPS era.

                                  I would prefer if the language would:

                                  1. Ship a robust production-grade HTTP server to use with the language, instead of the whole mess it requires to be used via third-party web servers

                                  2. Even better: drop the whole HTTP request and response as default input/output. It makes no sense nowadays. It is just a cute relic from past decades, which is more a source of trouble than a nicety.

                                  1. 1

                                    Place script in location to deploy and map a URL to it

                                    Which was possible for years before PHP via CGI and is no longer possible for PHP in many setups. PHP != mod_php

                                    1. 6

                                      Which was possible for years before PHP via CGI

                                      mod_php did this better than CGI did at the time.

                                      1. From what I remember from trying out this stuff at the time, the .htaccess boilerplate for mod_cgi was more hassle and harder to understand.
                                      2. CGI got a rep for being slow. fork/exec on every request costs a little, starting a new Perl interpreter or whatever on every request cost a lot. (and CGI in C was a productivity disaster)
                                      3. PHP had features like parsing query strings and form bodies for you right out of the box. No need to even write import cgi.

                                      Overall the barrier to entry to start getting something interactive happening in PHP was much lower.

                                      From what I remember the documentation you could find online was much more tutorial shaped for PHP than what you could find online for CGI.

                                      PHP != mod_php

                                      Sure now, but pm is discussing the past. PHP == mod_php was de facto true during the period of time in which PHP’s ubiquity was skyrocketing. Where pm above describes what PHP’s killer features “were”, this is the time period they are describing.

                                      1. 4

                                        mod_php did this better than CGI did at the time.

                                        It also did it much worse. With CGI, the web server would fork, setuid to the owner of the public_html directory, and then execve the script. This had some overhead. In contrast, mod_php would run the PHP interpreter in-process. This meant that it had read access to all of the files that the web server had access to. If you had database passwords in your PHP scripts, then you’d better make sure that you trust all of the other users on the system, because they can write a PHP script that reads files from your ~/public_html and sends them to the requesting client. A lot of PHP scripts had vulnerabilities that let them dump the contents of any file that the PHP interpreter could read and this became any file the web server could read when deployed with mod_php. I recall one system I was using being compromised because the web server could read the shadow password file, someone was able to dump it, and then they were able to do an offline attack (back then, passwords were hashed with MD5 and an MD5 rainbow table for a particular salt was something that was plausible to generate) and find the root password. They then had root access on the system.

                                        This is part of where the PHP hate came from: ‘PHP is fast’ was the claim, and the small print was ‘as long as you don’t want any security’.

                                        1. 1

                                          This is completely irrelevant to the onboarding experience.

                                          Either way, empirically, people didn’t actually care all that much about the fact that their php webhosts were getting broken into.

                                          1. 1

                                            This is completely irrelevant to the onboarding experience.

                                            It mattered for the people who had their database credentials stolen because mod_php gave everyone else on their shared host read access to the file containing them. You’re right that it didn’t seem to harm PHP adoption though.

                                      2. 2

                                        Not to the same extent at all. CGI would spawn a process on the operating system per request. It was practically impossible to keep safe. PHP outsourced the request lifecycle out of the developer’s concern, and did so with a huge performance gain compared to CGI. While in theory you could do “the same” with CGI, in practice it was just not viable. When PHP4 arrived, CGI was already in a downward spiral, with most hosting providers disabling access to it. Meanwhile, Microsoft and Sun Microsystems followed the PHP philosophy by offering ASP and JSP, which had their own share of popularity.

                                        PHP is, by and large, mod_php and nowadays fpm. The manual introductory tutorial even assumes such usage. Had they packaged it early on as a regular programming language, with its primary default interpreter hooked up to standard streams, it might have been forgotten today. Although personally I think they should have made that switch long ago.

                                    2. 4

                                      “Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.”

                                      https://schemers.org/Documents/Standards/R5RS/HTML/

                                      1. 1

                                        I think it’s the same principle as with source code: you want as little as possible while keeping things readable and correct

                                      1. 2

                                        Beautiful. Seems similar to Apple’s SF Mono.

                                        1. 8

                                          Wait till he connects to wifi

                                          1. 52

                                            I’m always relieved when I see a doctor look things up instead of just relying on what they remember from 20 years past.

                                            1. 24

                                              Doctors also don’t typically use Google/Wikipedia. They use a service called UpToDate*, which is commercial and has its articles written by verified MDs and researchers, but with Google-like search and Wikipedia-like organization. There’s an interesting profile/obituary of the company’s founder here:

                                              https://www.statnews.com/2020/04/25/remembering-uptodate-creator-burton-bud-rose/

                                              *: I know because my wife is an MD.

                                              1. 9

                                                I assume this varies by region, but some surveys suggest that a lot of doctors (especially junior doctors) end up using Wikipedia or other free resources in practice. E.g. a 2013 study of European doctors found that:

                                                In line with previous research [6], general-purpose search engines (eg, Google), medical research databases (eg, PubMed), and Wikipedia were popular resources, while specialized search engines were unpopular. […] Even though point-of-care databases (eg, UpToDate) provide reliable, evidence-based clinical information, the reported use among physicians was shown to be limited. A possible explanation could be that most physicians are not willing to pay high subscription fees to access medical information.

                                                1. 3

                                                  Yea, perhaps my word choice on “typically” is doing too much work there, or, assuming too much. I revise my statement to say, “Doctors have the option not to use Google/Wikipedia, but still be able to look up quality information with simple search queries.”

                                                  I’m sure you are right that, as the study suggests, many doctors don’t use a service like UpToDate. It is definitely a cost (although a trivial one compared to other healthcare costs, and ridiculously easy to justify on an ROI basis for a practice or hospital). Many of my wife’s friends who changed careers out of medicine even keep their UpToDate subscription (on a personal basis), simply to be able to guide their own (or their family’s) care a little when they are seen by other doctors. IMO, UpToDate is a great service and every MD should have access.

                                                  Also, I should mention that my wife is quite young, as far as doctors go, and there is a generational divide here. Many doctors who came of age before the information era were forced to “search their brain” for all the answers, so I imagine many of those doctors haven’t adapted to the internet age merely out of habit.

                                                2. 9

                                                  If you have it. I do a lot of Google still. No institutional site license here and I’m not wild about the per cost.

                                                  (source: also an MD)

                                                  1. 3

                                                    I use it to the tune of 100+ CME points a year so to me it’s worth the $52/mo. I know colleagues who split a subscription too.

                                                  2. 4

                                                    I’ve observed my GP doctor type stuff into Google and click on a few links. Usually whatever page they land on looks like an official source of some kind, rather than an SEO optimised opinion piece (so, they probably don’t click on the first result).

                                                    I’m generally fine with that - I trust my Dr to have enough background understanding to gauge whether an article is factual or not.

                                                1. 1

                                                  In oral medical fellowship exams, when a resident is asked a question they don’t know the answer to, the correct response is “I’d look it up”.

                                                  1. 6

                                                    I rediscover the importance of strong typing every time I switch from Swift to Python.

                                                    1. 19

                                                      When Medium launched I was genuinely impressed by the quality of their content. Ditto Quora. Hard to believe, now that the VC vultures have turned both of those sites into a punchline.

                                                      1. 8

                                                        2000s- about.com

                                                        2010s- quora.com

                                                        2020s- medium.com

                                                      1. 1

                                                        Reminds me of the SPIN operating system, which used Modula-3 to enforce isolation at the programming language level but had all of the kernel’s subsystems running together in the same address space. It’s described in this paper and I wrote a summary of the paper as part of a course I was taking.

                                                        1. 4

                                                          I think we’ve all been through a phase like that, it’ll pass.

                                                          1. 4

                                                            This is so cool yet so sad. Why isn’t access to SQLite a web standard? It’s already included in every OS and browser under the sun.

                                                            1. 8

                                                              It once was, but the problem is that there was no proper spec written – it was basically “whatever SQLite supports”, which in turn was implemented by just linking against SQLite, which meant that there was only one actual implementation of the feature; but in order to become properly standardised, two independent implementations need to exist.

                                                              This issue came up in the middle of the NoSQL craze and so it was decided rather than to fully specify SQLite and rewrite it at least two times, it was easier and more with the times to just offer IndexedDB as its replacement.

                                                              1. 3

                                                                Indeed. Nobody wanted to write an actual spec of all SQL query features and semantics of a database in a way that pretends to be a real independent standard.

                                                                “Just do whatever SQLite does” had a high risk of sites relying on every quirk and bug of a specific version of SQLite, so soon it would be impossible to upgrade it. At that time sites being “bug-compatible” with IE6 and IE6 only were still a thing. And SQLite has lots of quirky features (very lax type parser, dynamically typed storage, rowid, and so on).

                                                            1. 4

                                                              Very cool project! Looking forward to reading your posts about the experience of building it.

                                                              Given your comments on other languages, you might want to give Swift a try. It’s strongly typed, supports ad-hoc polymorphism, has enums with associated values very similar to the ML family of languages, immutable value types, automatic reference counting for reference types, generics, protocols (similar to traits), a very lightweight syntax with type inference, and in the 5.5 beta they added concurrency language features like async/await and actors.

                                                              1. 4

                                                                Ah yeah I’ve been meaning to give Swift a shot. I think that when I took a look, the compile times were bad and Linux support wasn’t great. I’m also curious how the library ecosystem is now.

                                                              1. 3

                                                                I actually chuckled. This is seriously a self aware wolf moment. This guy is so very, very close to realizing how to fix the problem but is skipping probably the most important step.

                                                                He mentioned single-core performance at least 5 times in the article but completely left out multi-core performance. Even the Moto E, the low end phone of 2020, has 8 cores to play with. Granted, some of them are going to be efficiency/low performance cores but 8 cores, nonetheless. Utilize them. WebWorkers exist. Please use them. Here’s a library that makes it really easy to use them as well.

                                                                ComLink

                                                                Here’s a video that probably not enough people have watched.

                                                                The main thread is overworked and underpaid
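
                                                                As a minimal sketch (browser APIs only; `worker.js` and `expensiveTransform` are hypothetical names, not from any real app), offloading work to a worker looks like:

                                                                ```javascript
                                                                // Browser-only sketch: ship the heavy work to a Web Worker so the
                                                                // main thread stays free for DOM updates. `worker.js` is a
                                                                // hypothetical separate file containing the second half of this sketch.

                                                                // main.js
                                                                if (typeof window !== 'undefined' && typeof Worker !== 'undefined') {
                                                                  const worker = new Worker('worker.js');
                                                                  worker.onmessage = (e) => console.log('computed off-thread:', e.data);
                                                                  worker.postMessage([1, 2, 3]);
                                                                }

                                                                // worker.js — runs in its own thread; it cannot touch the DOM, but it
                                                                // can do the expensive computation and post the result back.
                                                                function expensiveTransform(x) {
                                                                  return x * x; // stand-in for whatever the app actually computes
                                                                }
                                                                globalThis.onmessage = (e) => {
                                                                  globalThis.postMessage(e.data.map(expensiveTransform));
                                                                };
                                                                ```

                                                                Comlink wraps this message-passing boilerplate in ordinary async function calls, which is why it makes workers so much easier to adopt.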

                                                                1. 7

                                                                  The article claims the main performance cost is in DOM manipulation and Workers do not have access to the DOM.

                                                                  1. 1

                                                                    if you’re referring to this:

                                                                    Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent. React-based apps are tied to the DOM and only the main thread can touch the DOM therefore React-based apps are tied to single-core performance.

                                                                    That’s pretty weak. Any javascript application that modifies the DOM is tied to the DOM. It doesn’t mean the logic is tied to the DOM. If it is then at least in react’s case it means that developers thought rendering then re-rendering then rendering again was a good application of user’s computing resources.

                                                                    I haven’t seen their code and I don’t know what kinds of constraints they’re being forced to program under but react isn’t their bottleneck. Wasteful logic is.

                                                                    1. 2

                                                                      The author’s point is that a top of the line iPhone can mask this “wasteful logic”. Unless developers test their websites on other, less expensive, devices they may not realize that they need to implement some of your suggested fixes to achieve acceptable performance.

                                                                      1. 1

                                                                        You’re right. I missed the point when I read into how he was framing the problem. Excuse me.

                                                                  2. 3
                                                                    1. iPhones also have many cores, so that’s not going to bridge the gap.

                                                                    2. From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                                    3. See also: Amdahl’s Law

                                                                    1. 1

                                                                      Gonna fight you on all of these points because they’re a bunch of malarkey.

                                                                      iPhones also have many cores, so that’s not going to bridge the gap.

                                                                      If you shift the entire performance window up then everyone benefits.

                                                                      From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                                      This shouldn’t be the case. If it is, then people are screwing around and running computations in render() when everything should be handled before that. Async components should alleviate this and React Suspense should help a bit with this, but right now I use Redux Saga to move any significant computation to a webworker. React should only be hit when you’re hydrating and diffing. React is not your bottleneck. If anything it should have a near-constant overhead for each operation. You should also note that the exact quote you chose does not mention React but all of JavaScript. Come on.

                                                                      See also: Amdahl’s Law

                                                                      I did. Did you see how much performance you gain by going to 8 identical cores? It’s 6x. Would you consider that to be better than only having 1x performance? I would.

                                                                      1. 1

                                                                        Hmm… if you’re going to call what I write “malarkey”, it would help if you actually had a point. You do not.

                                                                        If you shift the entire performance window up then everyone benefits.

                                                                        Yep, that’s what I said. If everyone benefits, it doesn’t close the gap. You seem to be arguing against something that nobody said.

                                                                        Amdahl’s law … 8 identical cores? 6x speedup

                                                                        Er, you seem not to understand Amdahl’s Law: it is parameterised, and does not yield a number without that parameter, which is the portion of the work that is parallelizable. So saying Amdahl’s Law gives you a 6x speedup from 8 cores is not just wrong, it is nonsensical.
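
                                                                        To see why the parameter matters, here is a quick calculation (the values of p are chosen for illustration, not taken from the thread):

                                                                        ```javascript
                                                                        // Amdahl's Law: speedup on n cores when a fraction p of the
                                                                        // work is parallelizable.
                                                                        function amdahlSpeedup(p, n) {
                                                                          return 1 / ((1 - p) + p / n);
                                                                        }

                                                                        // Only if ~95% of the work parallelizes do 8 cores approach 6x:
                                                                        console.log(amdahlSpeedup(0.95, 8).toFixed(2)); // ≈ 5.93
                                                                        // A workload that is half parallelizable caps out far lower:
                                                                        console.log(amdahlSpeedup(0.5, 8).toFixed(2)); // ≈ 1.78
                                                                        ```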

                                                                        Second, you now write “8 identical cores”. I think we already covered that phones do not have 8 high performance cores, but at most something like 4/4 high/efficiency cores.

                                                                        Finally, even with a near-perfectly parallelisable task, that kind of speedup compared to a non-parallel implementation is exceedingly rare, because parallelising has overhead, and on a phone other resources such as memory bandwidth typically can’t handle many cores going full tilt.

                                                                        but the main thread is still where the majority of the execution time is spent

                                                                        This shouldn’t be the case … React …

                                                                        The article doesn’t talk about what you think should be the case, but about what is the case, and it’s not exclusively about React.

                                                                  1. 1

                                                                    I’ve always wanted to write two tweets quoting one another.

                                                                    1. 4

                                                                      OT: This is the most beautiful blog I’ve seen in years.

                                                                      1. 2

                                                                        Thanks so much! I’m glad you like it.

                                                                        1. 1

                                                                          It looks like you’re using Jekyll? Would you be willing to share your template/theme?

                                                                          1. 2

                                                                            I built it entirely myself, and at the moment don’t have any plans to open source it.

                                                                      1. 7

                                                                        Great read but the section on “third-world country” and “non-English” speaking was disappointing. We’ve seen major “first-world” websites get hacked by kids in “third-world” countries.

                                                                        1. 1

                                                                          Yeah, there are probably plenty of resources on Host header injection in the contractors’ native language – they just didn’t care, or weren’t very good.

                                                                        1. 2

                                                                          This is really cool, but also doesn’t answer the question of why in the world you care about pretty-printing the output of your compiler at all…?

                                                                          1. 5

                                                                            Not following – have you ever worked with a code generator that outputs say everything on one line?

                                                                            Those code generators quickly get fixed, or don’t get used at all.


                                                                            edit: Looking more closely, I think the issue is that they wrote the original compiler assuming that its output would be run through a separate pretty printer. That is, they wrote it without regard to formatting.

                                                                            So it’s reasonable to ask why they wouldn’t just fix the original code to output indented text.

                                                                            In other words, I think that code generators should always output readable code (within the constraints of the problem, e.g. parse tables). But that doesn’t mean they should be pretty-printed with a separate tool!

                                                                            1. 2

                                                                              Maybe a better question is why you would care about the performance of your compiler in “pretty-print mode” when the default should be “fast mode”, and pretty mode is only used when you’re sending the compiler output to something other than another compiler?

                                                                              1. 2

                                                                                Having two modes is overkill – who is going to bother to set that up in their build system?

                                                                                That said, I agree the architecture described in this post is kinda weird:

                                                                                The ‘unformatted’ C code, by construction, already has line breaks in sensible places. The formatter only needs to fix up the horizontal formatting (i.e. indentation) due to nested {} braces and () parentheses.

                                                                                I don’t think it’s that hard to output correctly indented code in the first place. I’ve done this in several code generators. It might be annoying if your meta-language doesn’t have multiline strings… I think theirs is Go, and it might not?

                                                                                Wrapping code is a little trickier, but also not hard to do a reasonable job of in the tool itself. I found a nice language-independent heuristic in a code generator I borrowed.
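
                                                                                For what it’s worth, tracking indentation inside the generator itself can be just a few lines. A minimal illustrative sketch (not taken from any of the tools discussed here):

                                                                                ```javascript
                                                                                // Indentation-tracking emitter: the generator keeps a depth counter
                                                                                // and prefixes each line, so the output is correctly indented up
                                                                                // front and needs no separate pretty-printing pass.
                                                                                class Emitter {
                                                                                  constructor() { this.depth = 0; this.lines = []; }
                                                                                  line(s) { this.lines.push('  '.repeat(this.depth) + s); }
                                                                                  block(open, body, close) {
                                                                                    this.line(open);
                                                                                    this.depth++;
                                                                                    body();
                                                                                    this.depth--;
                                                                                    this.line(close);
                                                                                  }
                                                                                  toString() { return this.lines.join('\n'); }
                                                                                }

                                                                                const e = new Emitter();
                                                                                e.block('int main(void) {', () => {
                                                                                  e.line('return 0;');
                                                                                }, '}');
                                                                                console.log(e.toString());
                                                                                // int main(void) {
                                                                                //   return 0;
                                                                                // }
                                                                                ```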

                                                                                1. 2

                                                                                  Go definitely has multiline strings, via backticks.

                                                                                2. 2

                                                                                  If the user wants the output pretty printed during debugging for example they can just pipe it through their favourite formatter. No modes necessary.

                                                                                3. 2

                                                                                  Not following – have you ever worked with a code generator that outputs say everything on one line?

                                                                                  One line? No, but the output of most code generators isn’t particularly pretty.

                                                                                  Most code generators only have output to assist debugging serious issues, and most users don’t care too much about the output of code generators – to the point that they often never even materialize the generated code, and just feed it directly into the compiler. (This is, essentially, what a macro system like Lisp’s, Rust’s, or even the C preprocessor does)

                                                                                4. 4

                                                                                  I assume to make debugging the intermediate output easier.

                                                                                1. 2

                                                                                  I have downloaded their repositories and they seem OK. Were some source code or commits deleted? Or were just some tags deleted or redirected to nowhere?

                                                                                  As long as they share the complete source code under a free software license, we could call it free software. But „free“ as in „freedom“ does not equal „free“ as in „free beer“. Free software does not mean that the author is obligated to provide you services (like packages or support) at zero price.

                                                                                  If they have deleted some commits/branches (I see the last commit was one year ago), it is not nice, but they can do it. This is one of the reasons why I run my own Mercurial and Git servers and why I back up/clone interesting software into them. I do not use Mac OS (proprietary software), so I have not backed this up, but you will probably find someone who has a complete backup.

                                                                                  I would recommend using a free operating system like GNU/Linux. It has a much friendlier culture than proprietary systems, and besides „free as in freedom“ you can usually also get more „free as in free beer“ here.

                                                                                  1. 10

                                                                                    friendly culture

                                                                                    I think this is very much up for debate.

                                                                                    Also, this doesn’t deal with the core problem – are the economics of free as in beer, enforced by free as in freedom, sustainable? It seems this closing is a reluctant last-ditch move.

                                                                                    1. 1

                                                                                      I think this is very much up for debate.

                                                                                      In GNU/Linux I can e.g. install my own kernel module without asking for permission (a digital signature). And while it does not forbid you from using proprietary software, free software is the norm here, which gives you all the rights (to study, modify, distribute, run for any purpose…).

                                                                                      free as in beer enforced by free as in freedom

                                                                                      No, „free as in freedom“ does not imply „free as in free beer“ and it does not force you to provide your services or distribute the software at zero price.

                                                                                      The „free as in freedom“ and copyleft just require you to do business in an ethical way and be respectful and kind to others (regarding the software).

                                                                                      1. 2

                                                                                        In GNU/Linux I can e.g. install my own kernel module without asking for permission (a digital signature). And while it does not forbid you from using proprietary software, free software is the norm here, which gives you all the rights (to study, modify, distribute, run for any purpose…).

                                                                                        This is not what is meant by culture. There have been known controversies when it comes to “friendliness” in the free software community, involving toxic working environments.

                                                                                        No, „free as in freedom“ does not imply „free as in free beer“ and it does not force you to provide your services or distribute the software at zero price.

                                                                                        The „free as in freedom“ and copyleft just require you to do business in an ethical way and be respectful and kind to others (regarding the software).

                                                                                        The problem is that if I have the freedom to distribute and modify, I have the freedom to distribute it for free (that is, if I sell people a CD with GPLed source, then they have the freedom to distribute it and not give me money). It’s easy to say “ethical way” without specifying one that can sustain a developer.

                                                                                        1. 1

                                                                                          known controversies when it comes to “friendliness” in the free software community

                                                                                          Good for you. It’s just not what franta was talking about, and he made that clear.

                                                                                        2. 1

                                                                                          In GNU/Linux I can e.g. install my own kernel module without asking for a permission (a digital signature).

                                                                                          Yes but often times you need a piece of information to get this to work and many times you have to put up with some verbal abuse and hazing to get to that piece of information.

                                                                                    1. 2

                                                                                      This would have been really easy to miss without your helpful title for the submission!

                                                                                      1. 2

                                                                                        The submission title is David’s description of the talk when he tweeted it.

                                                                                        1. 2

                                                                                          It more-or-less sank without a trace when I posted it with its own title a few weeks ago:

                                                                                          https://lobste.rs/s/qtaeir/talk_near_future_python_live_coding

                                                                                          1. 2

                                                                                            Hmm. Yeah these days a new post will show up near the bottom of page 1 and I keep finding missed gems in page 2.

                                                                                        1. 3

                                                                                          This is a fantastic talk, it makes me want to rewatch and implement alongside him.

                                                                                          1. 3

                                                                                            Yeah I’m doing that in Swift.