1. 2

    I use Feedly as well and have for years. Recently upgraded to Pro because of the email subscription and integration with Reddit.

    I agree that their UI isn’t always great. Fortunately, there seem to be many apps that support Feedly’s API. I use Reeder, personally.

    1. 3

      You can buy gigabit Ethernet C-mount cameras for around the same price as a nice webcam, and for less than a DSLR.

      Or alternatively Niklas Fauth is building one from scratch. https://twitter.com/FauthNiklas/status/1265017260575465474

      1. 2

        Can you easily use the video from those cameras with Zoom/Google Meet? (i.e. does it act like a local webcam?)

        1. 2

          Yes, on Linux at least. UV4L has an IP-stream-to-v4l2 converter that makes them available as standard camera sources. So it’s not total plug and play, but very achievable.

        2. 1

          What’s the latency for these ethernet cameras? Can they be used for video calling?

          1. 2

            Yes, plenty good enough for video calling. Way less lag than the call itself has.

        1. 4

          The benefit of sticking to RC is much-reduced memory consumption. It turns out that for a tracing GC to achieve performance comparable with manual allocation, it needs several times the memory (different studies find different overheads, but at least 4x is a conservative lower bound). While I haven’t seen a study comparing RC, my personal experience is that the overhead is much lower, much more predictable, and can usually be driven down with little additional effort if needed.

          This is highly questionable: yes, RC requires less memory, but its baseline is much slower than GC.

          Plus, if one created new GCed systems these days, one certainly wouldn’t go with a language (e.g. Java) in which ~everything is a reference to be traced and collected.

          GC is fine, but if references make up more than 20% of your memory consumption, you are doing it wrong.

          1. 8

            I wonder if the “much slower” part applies when you have some measure of control over how long the operations take. Retaining and releasing an NSObject on the M1 is almost 5 times faster than on Intel, and even twice as fast when emulating Intel.

            Certainly makes it harder to make performance comparisons when the CPUs behave so differently.

            1. 2

              I’d expect these performance improvements to also benefit GC, though not as much, and depending on the GC algorithm used.

          1. 2

            Cool! Thanks for sharing!

            For anyone who doesn’t want to take on building such a thing, I can recommend the Stream Deck which gives you programmable buttons with color LCD keycaps.

            1. 1

              I agree that this is a nice bit of UX, but I’m not sure how much the typical user cares. Even so, if the tooling is there to do it, why not?

              Relatedly, the Go team recently accepted a proposal for the core go tool to support file embedding.

              1. 2

                He starts out by explaining that the typical user does care:

                Passing these around to friends and seeing some of them try to share the apps by copying the exes then wonder why they break made me realize something: to a lot of users, the app is the icon they click and everything else is noise. And the user is right.

                As he goes on to say, Mac apps have always been like this. In fact, before Mac OS X (2001), an app really was a single file. This worked because the old Mac APIs had a “Resource Manager”, which was conceptually similar to an embedded archive, and apps made calls like GetResource('PICT',128) to load associated data, instead of going through the filesystem.

                During Mac OS X development (1998 IIRC; I was at Apple then but not in the exact area this was happening) there was internal debate about whether to keep this or whether to use bundles (directories that look like files in the GUI) as NeXTSTEP did. The Cocoa (OpenStep) APIs all assumed bundles, and it would have been hard to change them all away from the filesystem API.

                Apparently some people built a quick prototype that did exactly what the blog post describes — mounted a Zip archive in the filesystem so Cocoa could run unmodified, while still having a single-file app. I heard that the app’s launch time regressed a lot, so the idea was dropped (performance was already bad enough in 10.0.)

                But I wish they’d persevered on that approach. They could probably have optimized it a lot. As a bonus, apps would have been smaller (files in them wouldn’t be padded to 4K sector boundaries), copying them would have been a lot faster, and reading a bundled file into memory would be super fast if the entire app file were mmaped.

              1. 13

                It has become difficult for me to avoid the suspicion that this class of complaint is another way of saying that semver often doesn’t work very well in practice.

                1. 18

                  I think it does work well in practice, for packages that practice it. I think a lot of people still have this “only want to increment the major version for big stuff” mindset as opposed to “major version just denotes breaking changes and it’s okay if we’re releasing version 105.2”.

                  1. 4

                    And for packages which can practice it. Many packages can’t change anything without altering previous behavior. It’s hard to think “people might depend on this bug, so it’s a breaking change.”

                    1. 2

                      I was thinking about this recently too… normally you would think of adding a new function as a minor change - not breaking compatibility, but not just an internal fix either.

                      But, on the other hand, if you add a new function, it might conflict with an existing name in some third party library the user also imports and then boom they have a name conflict they must resolve.

                      So you could fairly argue that all changes are potentially breaking changes…

                      1. 5

                        Isn’t this why namespaces are a thing?

                        1. 3

                          Not in languages like C, which still does have libraries.

                          1. 2

                            They’re a thing, but there are frequently ways to have this problem anyway:


                             from dependency import *

                            in Python. “Don’t do that” is fair to say, but if somebody downstream already has, they have to deal with fixing the ambiguity.

                            You can have subtler versions of this for example in C++ ADL can bite you:

                            int foo(tpnamespace::Type v) { ... }

                            if your dependency later adds a function named foo in their namespace, the meaning of an unqualified call like

                            foo(v)

                            in your program may change.

                            A world where every identifier is fully qualified to avoid running into this after an upgrade starts to look similar to a world with no namespaces at all.
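The Python star-import case in this thread is easy to demonstrate end to end. A minimal runnable sketch (the module names `liba` and `libb` are made up; fake modules are registered in `sys.modules` to simulate two versions of a dependency):

```python
import sys
import types

# Simulated third-party modules; 'liba' and 'libb' are hypothetical names.
liba = types.ModuleType("liba")
liba.foo = lambda: "liba.foo"

libb_v1 = types.ModuleType("libb")   # v1.0 of the dependency: no 'foo' yet
libb_v1.bar = lambda: "libb.bar"

sys.modules["liba"] = liba
sys.modules["libb"] = libb_v1

from liba import *  # noqa: F403
from libb import *  # noqa: F403
print(foo())        # -> liba.foo: everything works

# v1.1 of 'libb' adds its own 'foo' in a "minor" release...
libb_v2 = types.ModuleType("libb")
libb_v2.bar = libb_v1.bar
libb_v2.foo = lambda: "libb.foo"
sys.modules["libb"] = libb_v2

from liba import *  # noqa: F403
from libb import *  # noqa: F403
print(foo())        # -> libb.foo: silently rebound, no error at all
```

Nothing breaks loudly; the unqualified name just starts resolving to the new symbol, which is exactly why "all changes are potentially breaking changes" under star imports.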

                            1. 1

                              This is precisely it: you can import all and get conflicts. In the D language I use, you can do it with decent confidence, because the compiler automatically detects conflicts and offers very easy ways to resolve them (you can just write alias foo = mything.foo; to give it priority in this scope, among other things).

                              But nevertheless, if the conflict happens in a dependency of a dependency, because one of its dependencies added something… it can be a pain. Some unrelated change causes a name-conflict compile error that halts your build.

                              (of course personally I say the real wtf is using dependencies with dependencies on dependencies. but meh)

                      2. 3

                        I think a lot of people still have this “only want to increment the major version for big stuff”…

                        This has been my experience as well. Forcing a major increment for breaking changes has a secondary benefit in that it encourages developers to think hard about whether they actually need to break an API, or can judiciously deprecate to provide a smooth path forward.

                      3. 11

                        I would like to point out that you’re far less likely to come across a blog post that says “I’ve been using SemVer for the past several years and it’s been working very well in practice”. SemVer is probably one of those things that, when it works, you get to not think about it much and carry on with whatever it was you were actually trying to do.

                        1. 4

                          This class of complaint is part of how the system works in practice.

                          Semver is basically a way of managing expectations between API producers and consumers. Complaining when API producers don’t follow the stated guidelines is part of the feedback loop that maintains consensus about what changes are allowed.

                          1. 2

                            Exactly. The only other thing I would add is something about scale in terms of the number of independent people working on independent projects that can be combined together in any one of a number of ways. Or in other words, a lack of centralization.

                            If the scale problem didn’t exist, then I absolutely would not want to deal with semver. But to some extent, the problems with semver are the problems with communication itself.

                        1. 7

                          In addition to the complaint about not following the breaking change requirement, I also dislike when packages spend years with tons of production users but refuse to reach “1.0” because they don’t want to commit to the semantic versioning requirement (lookin’ at you Hugo and Buffalo).

                          By leaving things at 0.x.y, users have to assume that every single 0.x change could break them, and that’s annoying.
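That "every 0.x change could break you" assumption is in fact baked into the caret-style upgrade rules of tools like Cargo and npm: below 1.0, the minor version is treated as the breaking-change slot. A minimal sketch of that convention (not any specific tool's implementation):

```python
def safe_upgrade(installed, candidate):
    """Caret-style semver check: is candidate a non-breaking upgrade?

    Versions are (major, minor, patch) tuples. Below 1.0, the minor
    version acts as the 'major' slot, so every 0.x minor bump is
    assumed to be breaking.
    """
    if candidate < installed:
        return False                          # downgrades aren't upgrades
    if installed[0] == 0:
        return candidate[:2] == installed[:2]  # 0.x: only patch bumps are safe
    return candidate[0] == installed[0]        # >=1.0: same major is safe

print(safe_upgrade((1, 2, 3), (1, 9, 0)))  # True: same major version
print(safe_upgrade((0, 7, 0), (0, 8, 0)))  # False: 0.x minor bump is breaking
print(safe_upgrade((0, 7, 0), (0, 7, 4)))  # True: patch-level change only
```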

                          1. 2

                            I’ll add Terraform to this list. It’s otherwise a great tool, but version upgrades from 0.n to 0.n+1 have been a pain. That said, I believe the developers think this is the best way to maintain the project at the moment.

                            1. 1

                              I think it’s less annoying than companies locking into something arbitrarily. I prefer this honesty in projects because, hey, maybe they will break stuff whenever they want. I want to know that a project might do this.

                              I usually interpret this as the company not taking time to commit to not breaking. With projects like Hugo that’s perfectly fine as I get what I pay for. I’d much rather them take this approach than releasing a new major version every month and not actually breaking anything (lookin’ at you Firefox). Functionally, it’s the same as 0.x.y, but it’s hard or even impossible to tell when they really release breaking stuff.

                            1. 4

                              I agree with the ideas presented here, though I don’t care for the particulars around Gemini. That said, imperfect things have been known to take off :)

                              The author talks about possibly wanting something that’s weekly or monthly. This is actually pretty common and popular today: the email newsletter. The podcast is another example of this. It’s possible to go overboard even with this slower-to-update content.

                              I still think the idea of more of a push toward content that has a chance to go beyond hot takes is worthwhile. Even commenting systems like this one kind of encourage quick responses … some of those responses are quite helpful and interesting. Others, especially depending on the platform and moderators, can be toxic. It’s much more labor intensive, but “letters to the editor” style of comments seem potentially more valuable.

                              So far, I think Lobsters strikes a nice balance. The front page doesn’t move that quickly. Comment threads don’t tend to get long and out of control, or filled with vitriol and noise (which I’m sure is due to good moderation and a community that supports better discourse).

                              In summary, I think there are already ways for people to jump off of the endless scrolling treadmill if they wish, at least when it comes to “news”.

                              1. 1

                                ^ This. I subscribe to Stack Overflow weekly newsletters and read LWN only once a week on Thursdays when the weekly edition is released. I browse SlowerNews.com. I much prefer Lobsters over Hacker News for this reason. Currently, I’m thinking of unsubscribing from most NYTimes newsletters due to news overload.

                              1. 3

                                I really like this, and can see myself using this in some form.

                                One feature that’s missing here that would push it over the line to “killer app” for me, would be the ability to share data as well as applications. That way, you could sync your tasks from a todo app from your laptop to your phone, for example.

                                Anyway, thanks for sharing!

                                1. 3

                                  You could probably get jlongster’s CRDT implementation running in here.

                                  1. 4

                                    schism would probably be a better option since it’s also in ClojureScript :)

                                    1. 2

                                      Oh very cool I’ll check these out. Thanks!

                                1. 4

                                  Wouldn’t it make more sense to have some kind of HTTP header and/or meta tag that turns off javascript, cookies and maybe selected parts of css?

                                  If we could get browser vendors to treat that a bit like the https padlock indicators, some kind of visual indicator that this is “tracking free”

                                  Link tracking will be a harder nut to crack. First we turn off redirects. Only direct links to resources. Then we make a cryptographic proof of the contents of a page - something a bit fuzzy like image watermarking. Finally we demand that site owners publish some kind of list of proofs so we can verify the page is not being individually tailored to the current user.

                                  1. 11

                                    The CSP header already allows this to an extent. You can just add script-src 'none' and no JavaScript can run on your web page.
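A runnable sketch of that header in action, using Python's stdlib HTTP server (the handler name is made up; note that CSP keyword sources like 'none' must be single-quoted):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoScriptHandler(BaseHTTPRequestHandler):
    """Hypothetical handler that serves HTML with all scripts blocked."""

    def do_GET(self):
        body = b"<h1>Scripts are blocked here</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # CSP keyword sources such as 'none' must be quoted in the header.
        self.send_header("Content-Security-Policy", "script-src 'none'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), NoScriptHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
print(resp.headers["Content-Security-Policy"])  # script-src 'none'
server.shutdown()
```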

                                    1. 1

                                      Very true. Not visible to the user, though!

                                    2. 5

                                      Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to adding, say, text/markdown into the mix. Or application/ria (see below), for that matter. We could start by disabling everything which already requires permission, that is, audio/video capture, location, notifications, etc. Since application/ria would be a compat hazard, it probably should continue to be text/html, and what-ideally-should-be-text/html would be something like text/html-without-ria. This clearly works. The question is one of market, that is, whether there is enough demand for this.

                                      1. 5

                                        Someone probably should implement this as, say, a Firefox extension. PDF rendering in Firefox is already done with PDF.js. Do the exact same thing for Markdown: take a GitHub-compatible JS Markdown implementation with GitHub’s default styling. Have a “prefer Markdown” preference. When the preference is set, send Accept: text/markdown, text/html. Using normal HTTP content negotiation, if the server has a text/markdown version and sends it, it is rendered just like PDF. Otherwise it works the same as today. Before server support arrives, the extension could probably intercept well-known URLs and replace the content with Markdown for, say, Discourse forums. Sounds like an interesting side project to try.
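The server side of that content negotiation could be just a few lines. A sketch (it deliberately ignores q-values and other Accept-header subtleties, so it's not a full RFC-compliant negotiator):

```python
def negotiate(accept_header, available):
    """Return the first media type in the client's Accept list we can serve."""
    for part in accept_header.split(","):
        media_type = part.split(";")[0].strip()  # drop parameters like q=0.9
        if media_type in available:
            return media_type
    return "text/html"  # fall back to the HTML rendering

# Browser with the "prefer Markdown" preference set, server has both:
print(negotiate("text/markdown, text/html", {"text/markdown", "text/html"}))
# -> text/markdown

# Same browser, but the server only has an HTML version:
print(negotiate("text/markdown, text/html", {"text/html"}))
# -> text/html
```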

                                        1. 8

                                          Browsers already render both text/html and application/pdf, and hyperlinking works. There is no technical barrier to add, say, text/markdown into mix.

                                          Someone probably should implement this as, say, Firefox extension.

                                          Historical note: this is how Konqueror (the KDE browser) started. Konqueror was not meant to be a browser, but a universal document viewer. Documents would flow through a transport protocol (implemented by a KIO library) and be interpreted by the appropriate component (called a KPart). (See https://docs.kde.org/trunk5/en/applications/konqueror/introduction.html)

                                          In the end Konqueror focused on being mostly a browser, or an ad-hoc shell around KIO::HTTP and KHTML (the parent of WebKit), and Okular (the app + the KPart) took care of all the main “document formats” (PDF, DjVu, etc.).

                                          1. 2

                                            Not saying it’s a bad idea, but there are important details to consider. E.g. you’d need to agree on which flavor of Markdown to use, there are… many.

                                              1. 2

                                                Eh, that’s why I specified GitHub flavor?

                                                1. 1

                                                  Oops, my brain seems to have skipped that part when I read your comment, sorry.

                                                  The “variant” parameter added in RFC 7763 (linked by spc476), to indicate which of the various Markdowns you used when writing the content, seems like a good idea. No need to make GitHub the owner of the specification, IMHO.

                                                2. 1

                                                  What’s wrong with Standard Markdown?

                                              2. 2


                                                Markdown is a superset of HTML. I’ve seen this notion put forward a few times (e.g., in this thread, which prompted me to submit this article), so it seems like this is a common misconception.

                                              3. 4

                                                Why would web authors use it? I can imagine some small reasons (a hosting site might mandate static pages only), but they seem niche.

                                                Or is your hope that users will configure their browsers to reject pages that don’t have the header? There are already significant improvements on the tracking/advertising/bloat front when you block javascript, but users overwhelmingly don’t do it, because they’d rather have the functionality.

                                                1. 2

                                                  I think the idea is that it is a way for web authors to verifiably prove to users that the content is tracking-free. A Markdown renderer would be tracking-free unless buggy. (It would be an XSS bug.) The difference with noscript is that script-y sites still transparently work.

                                                  In the envisioned implementation, like HTTPS sites getting a padlock, document-only sites would get a cute document icon to give users a warm fuzzy feeling. If the icon is as visible as the padlock, I think many web authors will use it when the page is in fact a document, since it can be easily done.

                                                  Note that Markdown renderer could still use JavaScript to provide interactive features: say collapsible sections. It is okay because JavaScript comes from browser, which is a trusted source.

                                                2. 3

                                                  Another HTTP header that maybe some browsers will support shoddily, and the rest will ignore?

                                                  1. 2

                                                    I found the HTTP Accept header to be well supported by all current relevant software. That’s why I think a separate MIME type is the way to go.

                                                  2. 2
                                                    1. 2

                                                      I think link tracking is essentially impossible to avoid, as are redirects. The web already has a huge problem with dead links and redirects at least make it possible to maintain more of the web over time.

                                                    1. 4

                                                      This (at least judging from the examples and the introduction) makes a mess of progressive enhancement and accessibility. We do not want other elements to become interactive; on the contrary, we want to erase from collective memory the fact that we ever used divs and anchors as buttons in the first place. There’s not a single keyboard event listener in the source code…

                                                      1. 4

                                                        At least the “quick start” example on the site’s front page uses a <button>. I don’t think this is incompatible with a11y, but the examples certainly don’t do a good job of promoting good habits there.

                                                        1. 1

                                                          I asked the author on HN about no-JS support, and they said: when possible.

                                                          1. 1


                                                            Note that when you are using htmx, on the server side you respond with HTML, not JSON. This keeps you firmly within the original web programming model, using Hypertext As The Engine Of Application State without even needing to really understand that concept.

                                                            If you’re returning HTML, I’m assuming you’ll be returning the whole page (i.e. the way no-JS websites usually work), and to truly make the JS optional, you’d want to:

                                                            <form method='post' action='clicked/' hx-swap='outerHTML' hx-select='#myWidget'>
                                                              <button>Click me</button>
                                                            </form>

                                                            That being said, I’m sure the HTML-attribute-driven approach has merits (also of interest, from the bookmark archive: https://mavo.io/).

                                                            1. 1

                                                              There’s no specific need to return the whole page.

                                                              Detect the request as coming via XHR, and omit the parts you don’t need.

                                                              1. 1

                                                                Yes, you could have the library add a header you can pick up on the server, and respond more succinctly. I was thinking about the minimal amount of markup / extra work that still maintains compatibility with no-JS.

                                                                (Also, I guess I was a bit in disbelief that the meaning of that quoted paragraph was that you’re supposed to respond with HTML fragments, but the other examples make clear that this is in fact the intention)

                                                                1. 1

                                                                  X-Requested-With has been a de facto standard for JS libs that wrap XHR for years, and a bunch of backend frameworks already support it (i.e. just rendering the view specific to an action, without any global layout or header+footer templating): https://en.wikipedia.org/wiki/List_of_HTTP_header_fields#Requested-With

                                                                  This approach works very well for maintaining no-JS compatibility: the header is only set by JS, so for no-js requests it’s just a regular request, and the ‘full page’ render is completed as per usual.
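That server-side branch can be sketched in a few lines (the function and layout names here are hypothetical, not from any particular framework):

```python
FULL_LAYOUT = "<html><body><header>...</header>{body}</body></html>"

def render(headers, fragment):
    """Serve only the fragment for XHR requests, the full page otherwise.

    The X-Requested-With header is only set by JS wrappers around XHR,
    so a no-JS request falls through to the regular full-page render.
    """
    if headers.get("X-Requested-With") == "XMLHttpRequest":
        return fragment                            # succinct partial response
    return FULL_LAYOUT.format(body=fragment)       # full page, as per usual

# JS-initiated request gets just the fragment:
print(render({"X-Requested-With": "XMLHttpRequest"}, "<p>clicked</p>"))
# No-JS form submission gets the whole page:
print(render({}, "<p>clicked</p>"))
```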

                                                        1. 2

                                                          I was inspired a bit by Roam. I’ve recently started using a frankensystem… I use Bear for notes that are only useful to me.

                                                          For notes that could potentially be useful to others, I’ve started using nvALT + a small bit of Go code + Hugo + GitHub + Netlify to generate my website, complete with backlinks between notes.

                                                          1. 1

                                                            I appreciate the teaching you’re doing here to help people get going with a clean, simple static site. I also like the suggestion to put a checklist in the template.

                                                            I don’t know about your plans for the end of the series, but I wonder if you’d want to recommend a simple static site generator to automate the checklist pieces. I like Hugo, but I don’t think it qualifies as “simple”… I wonder if there’s a generator that basically does just the bare minimum above a static site like this? (I assume there’s gotta be.) You could point people at staticgen, but that seems overwhelming given that you’ve created a resource which works well for those starting out.

                                                            Edited to add: I’d hate to see RSS go away, and maintaining index pages and RSS feeds is something that static site generators do well.

                                                            Also: you’re certainly entitled to any license you want to choose, but by choosing a ShareAlike license, you’re essentially requiring the folks who build on your light site to release all of their content under a ShareAlike license. While I’m happy to share lots of things, I don’t know that I’d want all of my writing to be endlessly reproducible.

                                                            1. 1

                                                              I’ve been thinking about how I can wrap it all up at the end. Integrating it with something like Hugo or Jekyll is what I’m thinking about doing, but as you said, that’s a whole different ballgame.

                                                              I’ve seen some scripts on GitHub that automate blog post generation, so I may look into adapting one of those. TL;DR I’m not sure yet, but I agree there needs to be something to improve this workflow.

                                                              Great point on the license - I need to either change it, or clarify that it relates to the website’s code, not any blog posts produced. Thanks!

                                                            1. 11

                                                              I like Go’s version of this: they declared it 1.0 when they were ready to guarantee backwards compatibility and really work for it. They didn’t have a lot of things on your list, and it clearly wasn’t a problem.

                                                              What Go had was that compatibility guarantee and features people wanted.

                                                              1. 3

                                                                I think this is key. Dependency management years or decades down the line is a much bigger concern in production systems than any fleeting technical problem (it’s always possible to roll your own solution, in any language, for any problem the language doesn’t solve for you) & the biggest factor in dependency management is whether or not new code and old code can coexist. Fear of old code getting broken by upgrades is why a lot of big companies are still running Linux 2.4 kernels, standardizing on Python 1.x, or maintaining OS/360 machines decades after nobody in their right minds would use that tech for new projects.

                                                                Where I work, there was a push for experiments in new features to be done in Julia because Julia was a lot better suited to our problems (doing simple statistics on large data sets in close to real time, where plenty of hardware was potentially available) than the languages we were using (a mix of Java and Perl). When Julia announced that they were about to decide on a 1.0 standard, this was very exciting, because it meant we could write production code in Julia that wasn’t tied to a particular minor version of the language & theoretically there could be independent implementations of the compiler that adhere to the spec.

                                                                On that subject – the number of independent implementations is sometimes a concern. Occasionally, with a language where there’s only one usable implementation, the developers will make a change that makes existing code infeasible (either broken or too slow for your use case) and you’ll need to decide between rewriting your code & keeping an old version of the language (which, over enough time, eventually becomes maintaining a fork of the old version, as builds eventually break against newer versions of dependency libraries and such). When there are multiple independent implementations, it’s less likely that they will all break your code in the same way at the same time, so you have the additional option of switching to a different implementation. This is less common than one would hope, though – there aren’t even very many serious C compilers anymore, there never were very many serious fully-independent implementations of JavaScript or Perl, the attempts at alternative implementations of C# and Java have fallen out of date with the flagship implementations, and in a strange twist of fate, Python leads the pack here (with the odd company of Prolog, Smalltalk, and APL in tow)!

                                                                1. 1

                                                                  Yeah I’m pretty sure Rust was the same way. They declared 1.0 when they felt like they weren’t going to break anything. I’m pretty sure they knew at the time that major features like async/await would be added after 1.0.

                                                                  I think Go had:

                                                                  • editor support but no IDE support.
                                                                  • no package manager. Did the “go” tool even exist at 1.0?
                                                                  • no package index
                                                                  • good testing tools
                                                                  • no debugger
                                                                  • some integration with Google monitoring tools but probably nothing external

                                                                  And very importantly they had good docs.

                                                                  I think a good test of 1.0 is you can write good docs with a straight face. Explaining a language helps debug the corners.

                                                                  Despite all those missing things, people started using it anyway. So there are two different questions: 1.0 and “production ready”. But I would say there will be early adopters who use a 1.0 in production (and some even before that!)

                                                                1. 2

                                                                  PRs are only a thing because devs can’t manage branches and personal workflow, and would rather lean on the tool to solve their social problems.

                                                                  Look, if you work in a team and are well organized, and the team is communicating well within itself, you don’t have to fret about PRs. You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

                                                                  Alas, not all projects can attain this level of competence. PRs are there to enable community, even if folks can’t organise well enough to know each other and how to contribute to each other’s work in a positive flow.

                                                                  For many projects, a rejected PR is just as valuable as an auto-merge. It gets people communicating.

                                                                  1. 2

                                                                    One potential problem with giving each dev their own branch and merging all the branches at once to make a build is that the build contains a lot of changes, and when something inevitably goes wrong it isn’t always clear what exactly caused the break. Good communication and code quality can mitigate integration issues, but I don’t think they completely eliminate them. If the alternative is releasing a build with multiple PRs anyway, then this might not be a problem; but if your alternative is releasing a build with every single change, then it’s a distinct disadvantage.

                                                                    1. 2

                                                                      Why would you want to have a merge party where you merge in a whole bunch of stuff at once rather than reviewing and merging smaller changes one at a time?

                                                                      1. 1

                                                                        Maybe because your team is productive.

                                                                        Because you’re pushing features forward, trust your fellow devs, and everyone is working well enough that it doesn’t matter - and it means that features can be tested in isolation. Plus, it’s very rewarding to get a branch merge done: you suddenly get a much bigger and better app for the next round of work ..

                                                                        1. 1

                                                                          Not in the long run, and not if your code is in production. One of the goals of the review process is to get the others on the team acquainted with the changes, so they can support them in the future, when the author goes on vacation or leaves the company. Merge parties cram in way too much information for purposeful comprehension. Remember, coding is mainly a joint activity of solving business problems, and its product will require support and maintenance.

                                                                          1. 1

                                                                            Our Merge parties include team review, so .. not really encountering the issues you mention.

                                                                            1. 1

                                                                              How big are your change requests, how long do these reviews last?

                                                                              1. 1

                                                                                Weekly, takes a day for the team, and then we start the 3-day tests ..

                                                                                But look, whatever, not everyone’s use case is the same. The point is, scale according to your needs, but don’t ignore the beauty of PRs as a mechanism when you need them. If you don’t need them, do something else that works.

                                                                    2. 1

                                                                      You can just let the devs have their own branches, and when necessary, have a merge party to produce a build for testing.

                                                                      Is that really a thing? I’ve never been on a team that did that, and it sounds like a mess (to put it lightly). I can’t imagine that it would scale well.

                                                                      1. 1

                                                                        For sure. I do it with the 3 other devs in my office space. We just tell each other ‘hey, I’m working on branch’, then when the time is right, we all sit together and merge.

                                                                        I mean, it’s probably the easiest flow ever.

                                                                        If you don’t do this, I wonder why? I guess it’s communication.

                                                                        1. 3

                                                                          I do it with the 3 other devs in my office space.

                                                                          That makes sense: the “3” and “in my office space” are a particular set of constraints. If you’ve got a system worked on by more people, and distributed across locations, that doesn’t scale in quite the same way, I think.

                                                                          1. 1

                                                                            Get back to me when you’ve tried it with more than 10 people on the team or a project with more than 500k lines of code…

                                                                            1. 1

                                                                              Yeah, works just as fine at that scale too. Key thing is: devs communicating properly.

                                                                      1. 2

                                                                        Bold move, I hope that their reasons for it are justified and they’ll come out OK on the other end as I really love what Khan Academy is doing.

                                                                        How will you handle the server-side rendering for React if the services are in Go?

                                                                        1. 1

                                                                          Thanks, and I agree that it’s a challenge.

                                                                          We’ve been doing server-side React rendering for quite a while already. We’re changing the flow a bit, because it used to be that requests would go to Python which would then call out to Node for server side rendering. Going forward, our CDN will talk to the Node React render server directly to get complete pages.

                                                                        1. 2

                                                                          I’d be happy to answer any questions you have. We’ll definitely follow up with more blog posts as we go along.

                                                                          1. 3

                                                                            Thank you for writing the article.

                                                                            I have two questions:

                                                                            1. Why did you decide to migrate all APIs to GraphQL?
                                                                            2. How do you generate optimal database queries from a GraphQL request? ¹

                                                                            ¹ I only have superficial knowledge about GraphQL, so my questions could be a bit naïve.

                                                                            1. 3

                                                                              Good questions!

                                                                              First of all, I’ll give a shoutout to Michael Nygard for his documenting architecture decisions blog post. I didn’t have to create my own answer to the question of migrating to GraphQL. I was able to just look up the architecture decision record (from September 2017).

                                                                              We saw the benefits as:

                                                                              • Data from GraphQL queries is typed, which can help the front-end operate more reliably on it. Our REST endpoints also have type information, but we aren’t currently exporting this information to the front-end.
                                                                              • GraphQL queries can collect a variety of data in one request. This should improve performance since the same data might take multiple REST API calls to fetch. Also GraphQL queries can omit unneeded data.
                                                              • Performance can also be improved since our web front-end client, Apollo, does caching, which can help avoid repeated calls to the backend for the same data.
                                                                              • Manually testing GraphQL queries is easier than testing REST APIs because of the built-in query explorer.
                                                                              • Front-end developers can make some changes to queries without requiring any backend work to provide extra data.

                                                                              and the drawbacks as:

                                                                              • It can be harder to debug GraphQL queries when implementing them, since by default errors are handled rather than causing a traceback. We may want to explore improving the debugging story.
                                                              • For one-off queries, it may be easier to implement a REST endpoint, because implementing a GraphQL query involves building classes that describe the types of the different parts of the response rather than just describing the signature.
                                                                              • Our current implementation of GraphQL makes us reliant on Apollo and Graphene. Both libraries have quirks that have to be learned and worked around.

                                                                              As far as generating optimal database queries, GraphQL server implementations generally have the idea of writing a “resolver” which knows how to look up entities in the database. Since GraphQL’s schema is a graph, that means you could have an entity that’s an “invoice” and you could ask for a specific invoice by ID and request its line items (which are each distinct entities as well). In an RDBMS, there’s a normal 1:many relationship between invoice and line items, and the resolver would be able to collect up all of the line items with a single query.
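                                                              To make the resolver idea concrete, here’s a plain-Python sketch of that invoice example (this is not Graphene’s actual API; the in-memory “tables” and names here are made up for illustration):

```python
# Hypothetical in-memory stand-ins for RDBMS tables.
INVOICES = {"inv-1": {"id": "inv-1", "customer": "Acme"}}
LINE_ITEMS = [
    {"invoice_id": "inv-1", "description": "Widget", "amount": 9.99},
    {"invoice_id": "inv-1", "description": "Gadget", "amount": 4.50},
]

def resolve_invoice(invoice_id):
    # Resolver for the top-level entity: look up one invoice by ID.
    return INVOICES[invoice_id]

def resolve_line_items(invoice):
    # Resolver for the 1:many edge: gather every line item belonging
    # to the invoice in one pass (one SQL query in a real backend).
    return [li for li in LINE_ITEMS if li["invoice_id"] == invoice["id"]]

invoice = resolve_invoice("inv-1")
items = resolve_line_items(invoice)
print(len(items))  # 2
```

                                                              A real server walks the incoming query and calls the matching resolver for each requested field, which is why the schema design determines how many database lookups a single request fans out into.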

                                                                              Generally speaking, you can design your GraphQL schema to make lookups reasonably efficient for whatever kind of database tech you’re using. (If you’re using a graph database, it becomes really natural, I should think. We’re not, though.)

                                                                              I’ll note that our new GraphQL setup is more complex because of federation. Looking up entities in our datastore doesn’t change, but the queries themselves go through a query planner which distributes parts of the query to different services.

                                                                          1. 7

                                                                            Rik Arends has been working on Makepad, which he describes as “a Rust IDE for VR that compiles to wasm/webGL, osx/metal, windows/dx11, and linux/opengl”.

                                                                            I have an Oculus Quest, which is also 1600x1440 per eye. It’s a completely wireless, standalone unit which is great, and Makepad is being built to run on it. I quite like it for games and experiences like that, but text wouldn’t be super sharp. That said, Rik has been actively working on this stuff, and I’m just guessing what the experience might be like.

                                                                            If I’m actually coding something for VR, I think it could be awesome. But I don’t think I’d find much value in it over a 2D interface for normal coding.

                                                                            1. 3

                                                                              Thanks for the link to Makepad! I’ll be excitedly following along.

                                                                              The main advantage I see to using VR for coding (in a traditional setup with a text editor, not some awesome visualisation of your software) is “real estate”. I use two 4k 27” monitors, but I’ve worked with three before in an office (before the company ran out of money, go figure) and found it even better. I like being able to see different parts of my code side by side.

                                                                              I’m envisioning coding in VR to be like having a whole bunch of seamless monitors floating around my head.

                                                                              I work from home now, but if I worked in an open plan office the distraction-elimination aspect of VR would be a big plus. Imagine working in a Japanese Zen garden instead of being surrounded by other people under fluorescent lighting!

                                                                              1. 1

                                                                                Sorry for a super delayed reply…

                                                                                I think that VR needs a whole lot more pixels before we’ll get to that sort of feeling. I like the image you’re projecting, and would love to see that happen but the hardware definitely has a ways to go.

                                                                                1. 1

                                                                                  For sure. After doing more research I found that the useful metric here is “pixels per degree”. Retina devices do ~60. Anything higher isn’t really noticeable by humans. Mainstream VR devices don’t even crack 20.

                                                                                  Worse, Carmack said in his Oculus keynote last year that up until now they’ve been riding the coattails of the phone industry, but from now on will have to foot the bill for developing higher pixel density screens.
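                                                                                  The pixels-per-degree comparison above, as a back-of-the-envelope calculation (the FOV figure is approximate, and this crude linear estimate ignores lens distortion, which concentrates pixels toward the center of the view):

```python
def pixels_per_degree(horizontal_pixels, horizontal_fov_degrees):
    # Crude linear estimate: panel pixels spread evenly across the FOV.
    return horizontal_pixels / horizontal_fov_degrees

# Oculus Quest: 1440 horizontal pixels per eye over roughly a 94-degree FOV.
quest_ppd = pixels_per_degree(1440, 94)
print(round(quest_ppd, 1))  # 15.3 -- well under the ~60 ppd "retina" threshold
```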

                                                                            1. 7

                                                                              The devil is in the details, though. The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size. The problem is that under a certain size, you start sacrificing very real things like non-Western-language support and accessibility for disabled users.

                                                                              But man…the web-technology-based toolkits are huge.

                                                                              1. 1

                                                                                But man…the web-technology-based toolkits are huge.

                                                                                Well, yeah.. using a full-featured web browser to draw a UI for a ‘native’ app is a silly idea. That’s the trade-off for lowering the “UI toolkit” bar.

                                                                                1. 1

                                                                                  The web-technology-based toolkits of course are huge, but the native toolkits vary widely in size.

                                                                                  Sciter is interesting because it’s in between. They basically made their own HTML+CSS engine specifically for making desktop apps, and apparently that approach has worked for memory consumption at least.

                                                                                1. 9

                                                                                  Additionally, the technology landscape had shifted away from the tools we chose in late 2012 (jQuery, Signals, and direct DOM manipulation) and toward a paradigm of composable interfaces and clean application abstractions.

                                                                                  I always find statements like this amusing. Composable interfaces and clean application abstractions are what I always heard was what programs were supposed to be. Did people in 2012 not care about writing good programs?

                                                                                  Are they going to look back in another seven years and say “we shifted away from react, redux, and virtual DOM manipulation toward a paradigm of composable interfaces and clean application abstractions” as the winds shift again? Or will it shift to “we shifted away from X toward performant interfaces and clean application implementations?”

                                                                                  Just silly.

                                                                                  1. 5

                                                                                    Judging by [other] tech companies’ blog posts, there always seems to be enough time to move to $CURRENT_JS_ZEITGEIST_FRAMEWORK versus actually writing clean code to make your choice of libraries/frameworks irrelevant in the long run.

                                                                                    To their credit, it looks like Slack did the less sexy thing here while also upgrading to the current hotness.

                                                                                    1. 7

                                                                                      FWIW, React is 5 years old now. Where I work, we’ve been using it for all of that time and don’t have any plans to switch. Sure, maybe we’d use Preact to save some bytes on the wire sometimes, but it’s still fundamentally the same.

                                                                                      I’m not saying there will never be a thing that replaces React or that there aren’t people out there using some new hotness. My point is more that React is fundamentally easier to reason about than jQuery DOM manipulation, and until someone offers a real step change, I’d expect a lot of folks to stick with it.

                                                                                      1. 3

                                                                                        Related to this, I’m always surprised when a game company is able to switch rendering libraries (OpenGL -> Vulkan) in a few man-weeks but then I remember they usually abstract over it in their framework/engine.

                                                                                      2. 7

                                                                                        Did people in 2012 not care about writing good programs?

                                                                                        No they did not and they don’t now.

                                                                                        1. 3

                                                                                          A more accurate way of phrasing it would be “Additionally, as the original team got expanded and replaced, the Slack team had shifted away from the rapid prototyping tools we used in late 2012 and toward a paradigm of composable interfaces and clean application abstractions.” Apparently, they think their company is the whole universe.

                                                                                          1. 1

                                                                                            It was a lot more work and discipline to develop and maintain a nicely organized codebase with their old tools than with their new tools, partly because composability and clean abstractions weren’t explicit design goals for those tools. React/redux really was a major improvement over previous front-end arrangements.