1. 5

    Yet the new fs API doesn’t support contexts at all, so you have to embed them in your fs struct.

    1. 8

      The io package in general doesn’t support contexts (io.Reader and io.Writer are examples of this), and they’re a bit of a pain to adapt. In particular, file IO on Linux is blocking, so there’s no true way to interrupt it with a context. Most of the adapters I’ve seen use ctx.Deadline() to get the deadline, but even that isn’t good enough because a context can be cancelled directly. I’d imagine that’s why it’s not in the fs interfaces.

      For every Reader/Writer that doesn’t support Context directly, you need a goroutine running to adapt it, which is not ideal. There is some magic you can do with net.Conn (or rather *net.TCPConn) because of SetDeadline, but even those would need a goroutine, and other types like fs.File (and *os.File) would leave the Read or Write running in the background until it completes, which opens you up to all sorts of issues.
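
      The goroutine-based adapter described above can be sketched roughly like this (ctxReader is a made-up name, not a standard library type); the comment inside flags the known hazard of the inner Read outliving a cancelled call:

      ```go
      package main

      import (
          "context"
          "fmt"
          "io"
          "strings"
      )

      // ctxReader wraps an io.Reader so Read returns early on context
      // cancellation. Caveat, as noted above: on cancellation the inner
      // Read keeps running in its goroutine and may still write into p
      // later, which is exactly the kind of issue that makes these
      // adapters risky.
      type ctxReader struct {
          ctx context.Context
          r   io.Reader
      }

      type readResult struct {
          n   int
          err error
      }

      func (cr *ctxReader) Read(p []byte) (int, error) {
          done := make(chan readResult, 1) // buffered so the goroutine never leaks a send
          go func() {
              n, err := cr.r.Read(p)
              done <- readResult{n, err}
          }()
          select {
          case res := <-done:
              return res.n, res.err
          case <-cr.ctx.Done():
              return 0, cr.ctx.Err() // inner Read may still be running
          }
      }

      func main() {
          r := &ctxReader{ctx: context.Background(), r: strings.NewReader("hello")}
          b, err := io.ReadAll(r)
          fmt.Println(string(b), err)
      }
      ```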

      1. 1

        You can use the fs.FS interface to implement something like a frontend for WebDAV, an “SSH filesystem”, and so forth, where the need for contexts and timeouts is a bit more acute than for plain local filesystem access. This actually also applies to io.Reader etc.

      2. 1

        I’m not entirely sure why the new fs package API doesn’t support contexts, but you could potentially write a simple wrapper for that same API which does, exposing a new API with methods like WithContext, maybe?

        Especially considering what the documentation for context states:

        Contexts should not be stored inside a struct type, but instead passed to each function that needs it.

        But ideally I’d like to see context supported in the new fs package too.
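
        A minimal sketch of that wrapper idea might look like this. WithContext and ctxFS are names I’m making up, and note the trade-offs: it stores the context in a struct (the very pattern the docs advise against, but unavoidable when the wrapped interface has no context parameter), and it only checks the context at Open time, so it does nothing about blocking reads:

        ```go
        package main

        import (
            "context"
            "fmt"
            "io/fs"
            "testing/fstest"
        )

        // ctxFS stores a context in a struct, which is exactly what the
        // context documentation advises against, but there is no other
        // place to put it when wrapping fs.FS.
        type ctxFS struct {
            ctx  context.Context
            fsys fs.FS
        }

        // WithContext is a hypothetical helper returning an fs.FS that
        // refuses new Opens once ctx is cancelled or past its deadline.
        func WithContext(ctx context.Context, fsys fs.FS) fs.FS {
            return &ctxFS{ctx: ctx, fsys: fsys}
        }

        func (c *ctxFS) Open(name string) (fs.File, error) {
            if err := c.ctx.Err(); err != nil {
                return nil, &fs.PathError{Op: "open", Path: name, Err: err}
            }
            return c.fsys.Open(name)
        }

        func main() {
            mem := fstest.MapFS{"hello.txt": &fstest.MapFile{Data: []byte("hi")}}
            ctx, cancel := context.WithCancel(context.Background())
            fsys := WithContext(ctx, mem)

            if f, err := fsys.Open("hello.txt"); err == nil {
                f.Close()
                fmt.Println("open before cancel: ok")
            }
            cancel()
            _, err := fsys.Open("hello.txt")
            fmt.Println("open after cancel:", err)
        }
        ```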

      1. 4

        Resisting the urge to rewrite my blog with Next.js

        Trying to find a good way to use my computers without my hands (occupied by baby) e.g. scrolling, focusing, writing

        (Maybe) creating a Hacker News comment graph (with comments linking to other comments counted as links) as a Roam Research(-esque) database

        1. 6

          Do it. It’s worth experimenting with stuff. I’ve rewritten my blog site 4 times and I’ve learned so much each time.

          1. 3

            A foot pedal actually sounds perfect for such a scenario, maybe combined with eye tracking!

            1. 2

              I rewrote my website in Next.js and it was worth it

              1. 1

                Perhaps you are already aware of this, but did you look into baby slings? They can be very comforting for your child and they free your hands.

              1. 2

                Let’s switch Gentoo et al to using a modern build system like Bazel that describes the complete dependency graph in a scalable way and be done with it.

                Most of these complaints boil down to “90s era perl we use for packaging tooling doesn’t work with 2020s software”.

                EDIT: my perspective is mostly about working on/around Debian, I haven’t tried to maintain Gentoo packages. I think the most promising ideas here are Nix/Guix and Distri.

                1. 4

                  Rust and Go are still brittle in nixpkgs, too. All of the problems described in the article are applicable. The fact is that we had to tame their build toolchains, including dependency management, in order to make them compose nicely with packages written in other languages.

                  As a relevant example to the article, if one wants to build a Rust extension module for CPython, then one must use buildRustPackage with a custom call to the Python build system (example). This is partially due to Rust and Go not defaulting to C linkage, and also partially due to not using standard GCC-style frontends which would allow for Rust or Go code to be transparently mixed with C code.

                  That last point might sound strange, but compare and contrast with C++ or D for older languages, or Nim or Zig for newer languages. When two languages have roughly similar views of the same low-level abstract machine, then compiling their modules together into a single linked application becomes much easier.

                  1. 2

                    FWIW I believe Gentoo’s portage uses Python.

                    Perl up until version 5 has a very strong commitment to backwards compatibility.

                    1. 3

                      “90s era perl” was intended as a pejorative about an era of programming thinking, not a specific dig at Perl. If you remember Perl fondly, we probably have very different ideas ¯\_(ツ)_/¯

                      1. 2

                        Thanks for the clarification!

                        I do believe OpenBSD uses Perl for its ports/packaging system, and isn’t interested in changing it right now. While Gentoo probably made the right choice in using Python instead, it’s come back to bite them a bit. A big part of why Gentoo specifically is pissed about the cryptography component/module/library/package[1] introducing a Rust dependency is that Python is a core dependency of portage.

                        Maybe it’s time for Linux distros to take a step back and consider whether offering “everything but the kitchen sink” to end users is really a good idea anymore.

                        [1] I’m not hip to the Python lingo here

                  1. 2

                    It seems like Litestream and DQLite/RQLite are perfectly complementary. You can shard your data into lots of Litestream databases, and use a central DQLite/RQLite replicated cluster to keep track of which data goes in which shard.
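
                    A toy version of the routing layer that idea implies: here a plain map stands in for the dqlite/rqlite assignment cluster, and the shard paths stand in for Litestream-replicated SQLite files (all names are hypothetical):

                    ```go
                    package main

                    import "fmt"

                    // shardDirectory plays the role of the small, strongly
                    // consistent dqlite/rqlite cluster: it only records which
                    // shard holds which tenant. The heavy data lives in the
                    // per-shard SQLite files, each replicated independently
                    // by Litestream.
                    type shardDirectory struct {
                        assignments map[string]string // tenant ID -> shard DB path
                    }

                    func (d *shardDirectory) shardFor(tenant string) (string, bool) {
                        path, ok := d.assignments[tenant]
                        return path, ok
                    }

                    func main() {
                        dir := &shardDirectory{assignments: map[string]string{
                            "tenant-a": "/data/shard-01.db",
                            "tenant-b": "/data/shard-02.db",
                        }}
                        if path, ok := dir.shardFor("tenant-a"); ok {
                            fmt.Println("tenant-a lives in", path)
                        }
                    }
                    ```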

                    1. 1

                      This would be very interesting to build. It’s a type of architecture that I’ve been considering for Notion’s user-facing database feature, but unfortunately these systems don’t seem to support heterogeneous database schema management.

                    1. 2

                      And this is why you don’t let buzzword kids/scientists push new language features into your language.

                      They will be gone, and you’ll be stuck doing maintenance of these features forever.

                      1. 4

                        It is still in Pitch stage: https://forums.swift.org/t/differentiable-programming-for-gradient-based-machine-learning/42147

                        The things that got in, such as DynamicMemberLookup, dynamicCallable, and callable, are generally useful. The PythonKit binding is actually pretty brilliant. The swift-jupyter notebook integration is passable and generally just wraps the main Swift REPL.

                        Overall, I don’t think Swift as a language carried much baggage due to the ambition of S4TF.

                        1. 4

                          buzzword kids/scientists

                          Are you referring to Chris Lattner as a buzzword kid? He created LLVM and Swift.

                          1. 2

                            Remember how Guido joined Dropbox and Dropbox started working on some SOOPER FAST Python version which they later killed after they couldn’t make it work? And unless I’m outdated they still haven’t completely migrated from py2 -> 3? Chris Lattner isn’t the buzzword kid, it’s the fans.

                            1. 2

                              I believe pyston is developed outside of Dropbox now, and it seems to be a python3 compatible implementation.

                              source: https://blog.pyston.org/2020/10/28/pyston-v2-20-faster-python/

                              1. 2

                                But it’s also closed-source, so it might as well be dead. They’re no longer contributing to the rest of the community.

                                I think that the parent post’s point was that corporate development of Python JITs seems to have a lot of conceptual overlap with corporate development of automatic differentiation; it’s useful technology in a particular climate, but requires a long-standing community tradition in order to stay maintained, and involves deep knowledge of how to write interpreters for the complete language under study.

                                1. 1

                                  Right, they last sponsored the project in 2017. My point was that the project probably gained steam because the language designer was there.

                              2. 1

                                No, why?

                              3. 1

                                It wasn’t ever mainlined, right? Differentiable Swift was a first-party sanctioned fork afaik.

                              1. 13

                                I’m baffled that none of the frustrated developers use a lock file to pin transitive dependencies. Some, including a Nokia developer, attempt to build and run a staging environment containing “latest” code from the internet in their transitive dependencies. This hardly seems like the first time a transitive dependency might break you — and at least in this instance, it’s a VERY clearly worded build-time breakage! What if there’s a runtime issue caused by a silent transitive-dependency version update?

                                1. 2

                                  Is anyone else holding off from using the new lua hotness because they’re hesitant to move their neovim config further away from what vim supports? I have no intention (right now) of moving off neovim and back to vim, but the loss of compatibility is leaving me with second thoughts about whether it’s worth it.

                                  1. 1

                                    I moved to neovim fulltime a couple years ago, and so far have never opened plain vim except by accident. A completely seamless transition, in my anecdotal experience. I do mostly web application programming with it, YMMV.

                                    1. 1

                                      Initially I had this same hesitation regarding many Neovim things, such as Neovim only plugins. But what’s the point of having nice things if you don’t actually use them? At this point I’ve been on Neovim for three or four years and haven’t had any regrets.

                                      If I go back to vim, or any other editor, I’ll view that as an opportunity to rebuild my config from scratch or near scratch - I find it a good way to clean up unused plugins, settings, themes etc and also a forcing function to discover new plugins and workflows.

                                    1. 13

                                      Let me be the first to say “finally!”. I’ve always struggled with Vimscript’s various weird quirks and all the different types of things. I’d much rather use Lua even if it’s mostly to string-concat together Vimscript snippets — at least I’ll be able to feel confident about how variables and functions work. The augroup example function delivered great inner peace. Thanks for the clear write-up. I also enjoyed the fun tone.

                                      1. 3

                                        Thank you! You’ve described my sentiments quite precisely. Exactly why I jumped ship to Lua even though it’s kind of half-baked right now.

                                        1. 1

                                          I absolutely agree; Lua (even in its incomplete form) has been a game changer for me in writing plugins and customizing Neovim. Yes, you can do everything that you can do in Lua in Vimscript, but the performance and ergonomics improvements make a substantial difference (in my experience).

                                        1. 5

                                          I’m excited for languages like Zig that bring a more sane general model for compile time computing.

                                          1. 1

                                            I literally switched Notion’s docker build to DOCKER_BUILDKIT=1 last night. I didn’t know about this magic # syntax=docker/dockerfile:1.2 comment - I’ll be adding that to my next PR. I’m very happy with the build parallelism - it cuts our cold build time in half.

                                            1. 41

                                              Boring prediction: TypeScript. By the end of 2021 TypeScript will broadly be seen as an essential tool for front-end teams that want to move faster without breaking stuff.

                                              The alternatives which launched around the same time (Reason, Elm, Flow, etc…) have all fallen by the wayside, and at this point TS is the clear winner. The investment is there, the ecosystem is there, the engineers (increasingly) are there. The broad consensus isn’t quite there in the wider world of all Javascript engineers, but I think it’s coming.

                                              Eventually good engineers will leave teams that won’t switch to TypeScript, ultimately hobbling those companies. Their lunch will be eaten by the competitors who’re using TypeScript. But there’ll be money to be made dealing with legacy messes of JS / Coffeescript, Backbone, jQuery etc, for the people who’re willing to do it. It’ll be a long-lived niche.

                                              Knock-on effects will include decreased use of Python in an application / API server role (I know there’s MyPy, but I think TypeScript is ahead) except where it’s coupled to data-sciency stuff. I think something similar will be seen with Go. I don’t know how big these effects will be.


                                              Unrelated prediction: Mongo will make a comeback. I’ve really disliked working with Mongo, but I was completely wrong about the price of Bitcoin this year so I assume Mongo’s comeback is inevitable.

                                              1. 9

                                                Eventually good engineers will leave teams that won’t switch to TypeScript, ultimately hobbling those companies. Their lunch will be eaten by the competitors who’re using TypeScript. But there’ll be money to be made dealing with legacy messes of JS / Coffeescript, Backbone, jQuery etc, for the people who’re willing to do it. It’ll be a long-lived niche.

                                                This is quite the prediction. I think I can see it happening. Working with a large, untyped JS codebase is a nightmare and can eat through morale quickly.

                                                1. 9

                                                  I work in a place with a lot of Node JS, but it’s not my day to day. I quickly went from enjoying javascript to hating it. Recently I have been enjoying doing some small stuff on my own again. I think I’ve decided that hell is other people’s javascript.

                                                  1. 6

                                                    I’ve decided that most new JS code I write actually should be TypeScript these days. The tooling around the language is too nice.

                                                    1. 5

                                                      As a hobby I write Ableton plugins in Max MSP. It has JS support, but only ES5 - that’s from 2009, and gross. Turns out the best modern transpiler to target that is TS! I was so happy when I found out.

                                                      1. 2

                                                        I should probably give TypeScript a go. I was pretty annoyed with JS for a while so I didn’t want to spend my spare time learning TypeScript. Unfortunately, the bits of Javascript that I tend to touch are largely agreed to be the worst in the org and there isn’t a lot of energy to move them over to TypeScript. C’est la vie.

                                                    2. 1

                                                      yep. I’m in the midst of a large feature-add to an angular 1.5.5 site. I’m a bit envious of other teams working with new Angular and TypeScript.

                                                      I think vosper is right - there’s going to be plenty of work available to those who are willing to maintain older frontend tech.

                                                    3. 7

                                                      Eventually good engineers will leave teams that won’t switch to TypeScript

                                                      Any engineer who will leave purely because the company doesn’t switch to $my_favourite_tech_choice is, by definition, not a good engineer. We’re supposed to be professionals for crying out loud, not children insisting on our favourite flavour of ice cream.

                                                      1. 14

                                                        I’d argue switching companies in order to use better tech is what a professional engineer does. Strong type systems are equivalent to constraints in CAD for mechanical engineering or ECAD for electrical engineering. They are absolutely crucial for proper engineering.

                                                        A mechanical engineer that wants to use something like Solidworks or Onshape at a company using TinkerCAD would not be looked down upon. Engineers need to use the right tools to actually engineer things.

                                                        So yes, switching companies to use and practice with proper tooling is what a damn good engineer does.

                                                        1. 3

                                                          I interviewed a bunch of professional engineers a couple years back. Most of them were stuck on using Excel spreadsheets. One was yelled at by his boss for using python scripts.

                                                          1. 1

                                                            I mean, Excel is a battle-tested functional programming environment which has intermediate states easily visualized and debugged! It has its faults, but I’d imagine it was being used for something like BOM inventory management? In that case it is definitely the right tool compared to Python.

                                                            In any case, yes there are many engineering jobs like that in other engineering fields, but there are also many software engineering jobs which deal exclusively with k8s yaml files, which I’d argue is similar but worse.

                                                            1. 2

                                                              In that specific interview it was for finite element analysis.

                                                          2. 2

                                                            They [Strong type systems] are absolutely crucial for proper engineering.

                                                            Says you. The rigor and engineering approaches “crucial” to systems depends HEAVILY on the domain, but I’m not sure I can identify ANY domain that “requires” strong typing (which is how I interpret “crucial”). Plenty of critical software has been and is being built outside of strong type guarantees. I’m not prepared to dismiss them all as bad or improper engineering. There may be a weak correlation, or just plain orthogonal - hard to say, but your staked position seems to leave no room for nuance.

                                                            I think your analogy muddles the conversation, as most software engineers are not familiar enough with those fields to be able to evaluate and understand the comparison beyond taking your statement as true (maybe it is, maybe it isn’t).

                                                            Defining “a good engineer” and “proper engineering” seems heavily rooted in opinion and personal experience here. That’s not to say it’s fundamentally un-study-able or anything, but I’m not sure how to make any headway in a conversation like this as it stands.

                                                            1. 1

                                                              Says you. The rigor and engineering approaches “crucial” to systems depends HEAVILY on the domain, but I’m not sure I can identify ANY domain that “requires” strong typing (which is how I interpret “crucial”). Plenty of critical software has been and is being built outside of strong type guarantees. I’m not prepared to dismiss them all as bad or improper engineering. There may be a weak correlation, or just plain orthogonal - hard to say, but your staked position seems to leave no room for nuance.

                                                              Sure, no domain “requires” strong typing. No domain “requires” a language above raw assembly either. We use higher level languages because it makes it significantly easier to write correct software, compared to assembly. A strong type system is a significant step above that, making it much easier to write correct software compared to languages without it.

                                                              Having worked at companies using languages such as Python, Javascript and Ruby for their backends, and having worked at companies that have used C++, Java, and Rust for their backends (even given the faults and issues with Java and C++’s type systems) the difference between the two types of companies is night and day. Drastic differences in quality, speed of development, and the types of bugs that occur. Strong type systems, especially the ones derived from ML languages, make a massive difference in software quality. And sure, few domains really “need” good software, but shouldn’t we strive for it anyway?

                                                              I think your analogy muddles the conversation, as most software engineers are not familiar enough with those fields to be able to evaluate and understand the comparison beyond taking your statement as true (maybe it is, maybe it isn’t).

                                                              I mean, if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? I don’t really know how to respond to this point, because clearly we have different opinions on this topic, each derived from our past experiences. My point is no less strong because you or others are unfamiliar with a domain.

                                                              I encourage you to try out various CAD tools; TinkerCAD and Onshape are both free to use and available online, so there isn’t even a need to download software. I think you will very quickly see the difference between the two: TinkerCAD you will master in a few minutes, and Onshape will likely be unapproachable without tutorials. But if you look at the examples, Onshape is used to produce production-grade, intricately designed mechanical parts. And without the tools and constraints that Onshape provides, you simply can’t do proper mechanical engineering.

                                                              And I really want to highlight that I don’t mean to say that TinkerCAD is bad, or worse in any way. It is incredible for what it is and for opening the door to the world of mechanical design to those who aren’t familiar with it. It is simply not an engineering tool, while Onshape is.

                                                              My analogy really stems from how each tool is used. TinkerCAD you just build the thing you want, using the shapes and manipulation tools it provides. Onshape is different, everything has to be specified. You have to set lengths of everything. You have to specify what the angles of things are. Certain things are really hard to do, especially organic shapes, because everything has to be parameterized. I hope you’ll agree this is nearly identical to programming languages with and without strong type systems!

                                                              Defining “a good engineer” and “proper engineering” seems heavily rooted in opinion and personal experience here. That’s not to say it’s fundamentally un-study-able or anything, but I’m not sure how to make any headway in a conversation like this as it stands.

                                                              Of course it is rooted in opinion, at least in the US. I believe some other countries have licensed engineering which makes it more of a distinction. Personally I think it is extremely unfortunate that engineering is not a protected term, because I genuinely really think it should be. The state of software engineering feels very similar to the situation of doctors before modern medicine, where the majority were hacks and frauds besmirching the industry as a whole.

                                                              1. 2

                                                                I mean, if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? I don’t really know how to respond to this point, because clearly we have different opinions on this topic, each derived from our past experiences. My point is no less strong because you or others are unfamiliar with a domain.

                                                                That’s the thing: we shouldn’t be pulling lessons until we learn what those lessons are. I had very strong opinions about what software engineering was supposed to look like, then I started interviewing other engineers. Most of what we think “engineering” looks like is really just an ultra-idealized version that doesn’t match the reality. In some ways they are better, true, but in some ways they are much worse.

                                                                EDIT: I just checked and it looks like you used to be a professional electrical engineer? If so then my point about “most people talk about engineering don’t understand engineering” doesn’t apply here.

                                                                Of course it is rooted in opinion, at least in the US. I believe some other countries have licensed engineering which makes it more of a distinction. Personally I think it is extremely unfortunate that engineering is not a protected term, because I genuinely really think it should be. The state of software engineering feels very similar to the situation of doctors before modern medicine, where the majority were hacks and frauds besmirching the industry as a whole.

                                                                Several of the people I interviewed called out the US system as being much better than the European, a sort of “grass is always greener” thing. The only country I know of with ultra-strict requirements on who can be called an engineer is Canada. In pretty much all other countries, you can call yourself whatever you want, but only a licensed engineer can sign off on an engineering project. This means that most professional engineers don’t need to be licensed, and many aren’t.

                                                                1. 1

                                                                  Do you have a write-up of your interviews anywhere? I would be really interested in reading it! You are right I used to be an electrical engineer, and still tinker on the side, but went into software simply because the jobs in electrical engineering are not very interesting. I think there is a lot that software does better than other engineering fields (although I wonder if that is simply due to how new the field is, and that it will inevitably end up with the more bureaucratic rules eventually), but the extreme resistance to strong typing by so many software engineers feels extremely misguided in my opinion and not something you see in other engineering fields.

                                                                  Canada was the primary place I was thinking of, but I’m not particularly familiar with the details of it all. I know the US has some licensing requirements for civil engineering projects (and most of those requirements are written in blood unfortunately). It is almost certainly a “grass is greener,” and I should certainly educate myself on some of the downsides of other countries systems. If you have any resources here I’d appreciate it.

                                                                  1. 2

                                                                    Working on it! I’m in the middle of draft 3, which will hopefully be the final draft. Aiming to have this done by end of February.

                                                                2. 1

                                                                  if we can’t pull lessons from other engineering fields what is the point of calling it software engineering? […] My point is no less strong because you or others are unfamiliar with a domain.

                                                                  Fair. Your expanded explanation here helped clarify a lot, I appreciate that. One point of interest in your explanation was

                                                                  Certain things are really hard to do, especially organic shapes, because everything has to parameterized

                                                                  Doesn’t this seem an admission of there existing domains where - according to analogy - strong type systems might be less suitable? Or would you consider this a breakdown in the analogy?

                                                                  As far as the meaning of “engineer” I take that as a very different conversation. I do think that’s murky territory and personally would be in favor of that being a more-regulated term.

                                                                  I’ve been doing software development myself for 20 years, and have worked in both strongly typed and duck-typed languages. I think they’re both good - there are trade-offs. Rather than wholesale dismissing either, I think discussing those trade-offs is more interesting.

                                                                  Drastic differences in quality, speed of development, and the types of bugs that occur

                                                                  I agree with this statement. I don’t think it implies a clear winner for all development though. While a strong type system guarantees a certain kind of error is not possible, in my practice those kinds of errors are rarely a significant factor, and in fact the “looseness” is a feature that can be leveraged at times. I don’t think that’s always the right trade off to make either, but sometimes creating imperfect organic shapes is ok.

                                                                  I would also offer that there can be approaches that give us “both ways” - ruby now has the beginnings of an optional type system.

                                                                  I’m tapped out for this thread.

                                                                  1. 2

                                                                    Fair. Your expanded explanation here helped clarify a lot, I appreciate that.

                                                                    Yes, sorry about not expanding in my original comment. As you can probably tell, I’m extremely passionate about this idea, and my original comment was made in a bit of haste. I’m glad my expansion was helpful in clarifying my points!

                                                                    Doesn’t this seem an admission of there existing domains where - according to analogy - strong type systems might be less suitable? Or would you consider this a breakdown in the analogy?

                                                                    I completely agree there are domains where strong type systems are less suitable. Especially the more exploratory areas, such as data exploration and data science (although Julia is showing that types can be valuable there as well). I think my distinction is that there is a difference between domains where strong types are not as suitable, like prototyping, and production engineering systems where I think strong types are extremely important.

                                                                    As far as the meaning of “engineer” I take that as a very different conversation. I do think that’s murky territory and personally would be in favor of that being a more-regulated term.

                                                                    I’ve been doing software development myself for 20 years, and have worked in both strong typed and duck typed. I think they’re both good - there’s trade offs. Rather than wholesale dismissing either, I think discussing those trade offs is more interesting.

                                                                    It is truly a murky term, which I think leads to a lot of the communication issues in this area of discussion. I should probably start future conversations like this with a clear definition of what I mean in terms of “engineering,” because I mean it more in terms of production engineering, rather than prototyping engineering. (Unfortunately not a distinction in software, but it is in all of the other engineering fields. Prototyping engineering values malleable materials, like 3D printing, while production engineering is what is shipped to users and it’s very particular about the materials used, which minimize cost and maximize strength. Different fields, different requirements, and oftentimes different tools used to even CAD/design).

                                                                    So using the production engineering vs prototyping engineering distinction, I would completely agree that there are trade-offs between strong typed and duck typed languages. I think one is fantastic for prototyping, while the other is fantastic for production. And similar to, say, 3D printing and injection-molding processes, the two can bleed into each other’s fields given the right opportunity. I should not have used “proper engineering” in my original comment, and I should have clarified I meant “production engineering.”

                                                                    However, the number of companies that I’ve been at that have actually treated the two types of languages as different parts of the development life-cycle is exactly one, my current company, and that was after fighting to use Python for a prototype project. (Of course, when the prototype was successful, there was then reluctance to rewrite it in a strongly typed language because “it’s already built!”)

                                                                    Similar to how you would feel frustrated if the new computer you bought had all of its internals built with 3D printed parts and breadboards, even if it worked identically to a computer with injection molded parts and PCBs, that is how I feel about using duck typed languages in production systems used by real users. Sure it works, but they are inherently less reliable and I think it’s poor engineering to ship a system like that (although, of course, there are situations where it is applicable, but not to the extent of the software world today).

                                                                    I agree with this statement. I don’t think it implies a clear winner for all development though. While a strong type system guarantees a certain kind of error is not possible, in my practice those kinds of errors are rarely a significant factor, and in fact the “looseness” is a feature that can be leveraged at times. I don’t think that’s always the right trade off to make either, but sometimes creating imperfect organic shapes is ok.

                                                                    I would also offer that there can be approaches that give us “both ways” - ruby now has the beginnings of an optional type system.

                                                                    Fair enough, although my experience with errors in duck typed languages is certainly different! The jury is still out on the optional type systems, and I’m certainly curious how that affects things. I have a hard time believing they will make a significant difference, simply because the style of coding varies so drastically between the two. With strong types I design type-first, whereas with optional/incremental type systems it’s the other way (unless started from the get-go, but at that point what is the purpose of using a duck-typed language?).

                                                                    I’m tapped out for this thread.

                                                                    I completely agree here. This is an exhausting topic to talk about, probably because everyone has strong opinions on it and so many of the arguments are based on anecdotal data (of which I am definitely guilty sometimes!). In any case, I appreciate you taking the time to go back and forth with me here for a bit. I’ve certainly learned that in order to properly discuss this topic it’s important to be very careful with language and definitions, otherwise it’s just a mess. I guess the English language could use some strong typing, but then poetry would suck huh? :)

                                                            2. 4

                                                              Start caring about that when everything else about the company makes you happy. Your happiness is more important than being a “good engineer”.

                                                              TS has evolved from a flavor to simply being a better version. The switch to TS is so easy there’s no reason not to. There’s a difference between a challenge and intentionally handicapping yourself.

                                                              1. 5

                                                                I was under the impression that my job is to solve problems for customers, not to “be happy”. And of all the things that make me happy or unhappy at the workplace, something like this is pretty far down the list; using JavaScript is hardly some sort of insufferable burden.

                                                                TypeScript may very well be better; but the last time I used it I found it came with some downsides too, such as the code in your browser not being the same as what you’re writing, so the REPL/debugger is a lot less useful; it’s hard to inspect TypeScript-specific attributes (such as types) since the browser has no knowledge of any of that; a vastly more complicated build pipeline; and I found working with it somewhat cumbersome with all the type casting you need to do for the DOM API (i.e. this kind of stuff). And while the wind has been blowing in the direction of statically typed languages these last few years, let’s not pretend the good ol’ “dynamic vs. static languages” debate is done and settled. I like typing, but dynamic languages do come with advantages too (and TypeScript’s typing may be optional, but if you’re not using it much then there’s little reason to use it at all).

                                                                Perhaps there are some solutions to these kinds of issues now (or will be soon), but about a year and a half ago I found it all pretty convoluted and decided to just stick with JS for the time being.

                                                                In the end, I understand why it exists and why it’s popular, but like much of today’s frontend I find it hacky, kludgy, and a very suboptimal solution.

                                                                1. 5

                                                                  You’re more than your job, you are a human being first. As engineers we have the huge privilege of being able to quit a job and find a new one easily. Happiness is a completely valid reason to do this.

                                                                  JS vs TS is a minor point, too small compared to other unknowns in switching companies. But OP said “leave teams”. I actually did just that, we have multiple teams at my company and one started using TS, so I switched. A year later the JS team now solely does bug fixes and no-one is willing to write new JS code. At first management thought it was like when Coffeescript happened but dev response was so much bigger that even they understand it is different this time.

                                                                  1. 4

                                                                    You’re more than your job, you are a human being first. As engineers we have the huge privilege of being able to quit a job and find a new one easily. Happiness is a completely valid reason to do this.

                                                                    Sure, but if you’re made deeply unhappy because you have to write JavaScript instead of TypeScript then you are either in a position of extreme luxury if that’s a problem that even rates in your life, or there seem to be some rather curious priorities going on. I think all of this is classic programming self-indulgent navel-gazing.

                                                                    Is TypeScript a better tool? Probably. Would it be a good idea to start a reasonable percentage of new projects in TypeScript? Also probably (depending on various factors). Should we drop all JavaScript projects ASAP just so that we rewrite everything to TypeScript? Ehh, probably not.

                                                                    I don’t have any predictions for 2021, but I’ll have one for 2026: we’ll be having the same discussion about WhatNotScript replacing boring old TypeScript and we should all switch to that. And thus the cycle continues.

                                                                    1. 2

                                                                      “no-one is willing to write new JS code.”

                                                                      until something goes very wrong. Not taking a side here, just “no-one is willing” struck me as an odd statement. Who has the unhappy chore of taking care of all that old boring JS?

                                                                      1. 2

                                                                        We still do bug fixes. But even then it’s often an opportunity to write TS. The nature of TS makes it so that you gradually port your application. Lots of JS is valid TS and if it isn’t then that’s usually easy to split up or refactor to become valid TS, and TS will help with that refactoring even. JS IDEs provide hints by pretending it’s TS anyways.

                                                                    2. 4

                                                                      We’re supposed to be professionals for crying out loud, not children insisting on our favourite flavour of ice cream.

                                                                      I was under the impression that my job is to solve problems for customers, not to “be happy”.

                                                                      This is a red herring. You are an economic agent making decisions about your employment to maximize your own utility function, and trying to get what you can out of the market. Some try simply to maximize earnings. Some maximize some combo of earnings and hours worked/stress. Others care about finding meaning in their work. And others care about working with specific technologies, or at least not working with ones they hate.

                                                                      All of these things are orthogonal to the concept of “being a professional.”

                                                                  2. 2

                                                                    I generally strongly agree with your sentiment, and am not a fan of the degree to which developers place their identity in one particular technology over another.

                                                                    But in this specific case I imagine OP is associating untyped JS projects with tech debt and difficulty of maintenance that all too often can contribute to low morale. I’ve definitely been there before when it comes to large codebases with foundational technical debt and no type system to help one find their way around. There’s something uniquely frustrating about e.g. a poorly formatted stack trace coming back from a bug monitoring tool due to broken or buggily-implemented sourcemaps. We can definitely debate whether language choice vs other factors (bad processes, low technical budget) contribute more to system quality. My guess is that language is not one of the largest factors, but it’s probably nonetheless significant. Otherwise we wouldn’t hear about stories where people left their jobs due to being sick of the shop tooling.

                                                                  3. 4

                                                                    Agreed. Although TypeScript is a bit OO for my taste, JavaScript libraries have grown sufficiently complex as to warrant strong typing. The adoption rate is undeniable. Vue, Deno, Babylon… When your stack is written in TypeScript, the cost/benefit scale tips in favor of adopting it downstream.

                                                                    Also, Cosmos is heating up, so you could make a case for Mongo’s revival by extension.

                                                                    1. 6

                                                                      At Notion we have some OO-as-state-encapsulation Typescript on the front end, but we have even more functional Typescript, and plenty of code somewhere in-between. We use the advanced Typescript types like as const inference, mapped types, and conditional types much more than we use inheritance or implements interface.

                                                                      Honestly writing a large from-scratch codebase in Typescript focused on type correctness and make-invalid-states-unrepresentable has been very fun and productive. Our biggest issue with the language is error handling - dealing with all the errors from external libraries, the DOM, exceptions vs Result<S, F>, etc is the most annoying and error-prone aspect of our codebase. Shoehorning optionals into our style has left me pining for Rust’s try features… and I’ve never really written Rust either…

                                                                      1. 6

                                                                        Some stuff we’ve done:

                                                                        • write third-party types that basically force you to pass values through validators by saying “this actually returns an opaque InvalidatedResult style thing”
                                                                        • remove functions we deem “bad” from the type signatures
                                                                        • codegen definition files
                                                                        • heavy usage of stuff like never

                                                                        I think it’s actually pretty easy to wrap third-party libs for the most part, and it’s basically the “real way” to do most of this. Too many people hem at this idea but it resolves a lot of stuff come “oh no this lib is actually totally busted” o’clock.

                                                                        1. 1

                                                                          That sounds amazing! Have you or Notion written any articles describing this setup in more detail? Are there any by others you recommend?

                                                                          1. 1

                                                                            Unfortunately when I go looking for Typescript advice on the internet, I find mostly shallow blogspam tutorials. I have an idea to take notes whenever I use an advanced TS feature and write an article called “Applying Advanced Typescript Types” — but that’s remained just an idea for a couple of years.

                                                                            1. 1

                                                                              I’ll keep an eye out for your article in the lobste.rs feed 😉.

                                                                        2. 2

                                                                          Although TypeScript is a bit OO for my taste

                                                                          My limited experience with TypeScript is that it’s only as OO as it would be if you were writing plain JavaScript. Not sure if that makes sense - another way of saying it would be: JS has adopted some OO trappings, like class, but if you aren’t using them in your JS then TypeScript isn’t going to push you in that direction - you can write functional TS to the extent that you could write functional JS; and OO TS to the extent that you could write OO JS.

                                                                          Unless you’re referring more to the naming of new keywords, like interface? I see how those could be associated with popular OO languages, but really there’s nothing making you write actual OO code.

                                                                          1. 3

                                                                            My limited experience with TypeScript is that it’s only as OO as it would be if you were writing plain JavaScript.

                                                                            Anecdotally, after working at a Java and C# shop that picked up TypeScript, everyone’s happy having things work more like those languages (well, mostly C# ;) than like JS. I just wish TypeScript would get typed exceptions already.

                                                                            1. 2

                                                                              Yes, it is possible to write non-OO TypeScript. And yes, I’m pointing out its emphasis on interfaces and other OO features like class property modifiers.

                                                                              I realize that the choice to make TypeScript a superset of JavaScript means that its roots in Scheme are still present. I also realize that typing a Scheme-ish language makes it (if one squints hard enough) an ML-ish language. Nevertheless, we should not be surprised if most TypeScript in the wild looks a lot more like C# and a lot less like fp-ts.

                                                                              1. 1

                                                                                Nevertheless, we should not be surprised if most TypeScript in the wild looks a lot more like C# and a lot less like fp-ts.

                                                                                Makes sense. Perhaps some of this is also due to TypeScript being palatable to people who are comfortable in languages like C# and Java; maybe they’d have stayed away from vanilla JS before (especially if they were exposed in the pre-ES6 days) but might be willing to write TypeScript today? That’s total speculation, though, and I’ve no idea how many people like that there are.

                                                                        1. 3

                                                                          Why generate the id from the body instead of just having an INTEGER PRIMARY KEY? You effectively have two indexes on nodes now: the implicit index on rowid, and id_idx. From your examples, you are only using numeric ids (and none of them are part of the original JSON). In addition, SQLite can automatically generate values for rowid (or rowid alias) columns. And what do you do if two nodes have duplicate ids? Shouldn’t id_idx be UNIQUE? I would also suggest giving edges a composite primary key and making it WITHOUT ROWID.
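
                                                                          As a runnable sketch of what I mean (hypothetical table names, Python’s stdlib sqlite3; assumes a build with the JSON1 functions, which CPython ships with):

                                                                          ```python
                                                                          import sqlite3

                                                                          # Sketch: id as a rowid alias (auto-assigned, no second index)
                                                                          # and edges with a composite primary key, WITHOUT ROWID.
                                                                          conn = sqlite3.connect(":memory:")
                                                                          conn.executescript("""
                                                                          CREATE TABLE nodes (
                                                                              id   INTEGER PRIMARY KEY,
                                                                              body TEXT NOT NULL CHECK (body = json(body))
                                                                          );
                                                                          CREATE TABLE edges (
                                                                              source     INTEGER NOT NULL REFERENCES nodes (id),
                                                                              target     INTEGER NOT NULL REFERENCES nodes (id),
                                                                              properties TEXT,
                                                                              PRIMARY KEY (source, target)
                                                                          ) WITHOUT ROWID;
                                                                          """)
                                                                          # SQLite fills in id for free; no generated column needed.
                                                                          conn.execute("INSERT INTO nodes (body) VALUES (json(?))", ('{"name": "a"}',))
                                                                          print(conn.execute("SELECT id FROM nodes").fetchone()[0])  # -> 1
                                                                          ```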

                                                                          1. 1

                                                                            Agree that WITHOUT ROWID sounds like a good idea for the edge table, but perhaps the schema allows for multiple edges between the same nodes? E.g. one edge is “parent - child”, another edge is “taxpayer - dependent”, versus forcing a single large row for all the relationships between two nodes.

                                                                            As for IDs, it’s really useful to allow clients to come up with ids, using e.g. UUIDs. Centralizing ID allocation leads to headaches where code needs temporary IDs or some equivalent surrogate, like a unique ActiveRecord in-memory object, when first creating data, or anti-patterns like persisting data in multiple phases to create relationships. For example, if I want to create an edge between two nodes I have yet to create, instead of two simple insert statements, I need a more complex write node 1, write node 2, read back row IDs, then create edge. Forget it when it comes to distributed / peer-to-peer, etc.
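
                                                                            A sketch of the client-generated-ID approach (hypothetical schema, Python’s stdlib sqlite3):

                                                                            ```python
                                                                            import sqlite3
                                                                            import uuid

                                                                            # Clients mint UUIDs up front, so both nodes and the edge can be
                                                                            # written in one transaction with no read-back of generated ids.
                                                                            conn = sqlite3.connect(":memory:")
                                                                            conn.executescript("""
                                                                            CREATE TABLE nodes (id TEXT PRIMARY KEY, body TEXT NOT NULL);
                                                                            CREATE TABLE edges (source TEXT NOT NULL REFERENCES nodes (id),
                                                                                                target TEXT NOT NULL REFERENCES nodes (id),
                                                                                                properties TEXT);
                                                                            """)
                                                                            bill, msft = str(uuid.uuid4()), str(uuid.uuid4())
                                                                            with conn:  # one transaction: two simple inserts, then the edge
                                                                                conn.execute("INSERT INTO nodes VALUES (?, ?)", (bill, '{"name": "Bill Gates"}'))
                                                                                conn.execute("INSERT INTO nodes VALUES (?, ?)", (msft, '{"name": "Microsoft"}'))
                                                                                conn.execute("INSERT INTO edges VALUES (?, ?, ?)",
                                                                                             (bill, msft, '{"action": "founded"}'))
                                                                            ```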

                                                                            1. 2

                                                                              Agree that WITHOUT ROWID sounds like a good idea for the edge table, but perhaps the schema allows for multiple edges between the same nodes? E.g. one edge is “parent - child”, another edge is “taxpayer - dependent”, versus forcing a single large row for all the relationships between two nodes.

                                                                              Well, since SQLite is still a relational database and not a graph database, I think you would have to add a column for that. For example, the schema could look like

                                                                              CREATE TABLE IF NOT EXISTS edges (
                                                                                  source INT NOT NULL REFERENCES nodes (id),
                                                                                  target INT NOT NULL REFERENCES nodes (id),
                                                                                  action TEXT NOT NULL GENERATED ALWAYS AS (json_extract(properties, '$.action')) STORED,
                                                                                  properties TEXT NOT NULL CHECK (properties = json(properties)),
                                                                                  PRIMARY KEY (source, target, action)
                                                                              ) WITHOUT ROWID;
                                                                              

                                                                              Which is a pretty good use for a generated column (unlike what was shown in the OP).

                                                                              edit:

                                                                              Generated columns may not be used as part of the PRIMARY KEY. (Future versions of SQLite might relax this constraint for STORED columns.)

                                                                              Looks like I overlooked this. In this case, I would just add an action column and not store it in json at all. If you already know that all your rows will need to have an action, there is no point in json for that data.

                                                                              For example, if I want to create an edge between two nodes I have yet to create, instead of two simple insert statements, I need a more complex write node 1, write node 2, reading back row IDs, then create edge.

                                                                              The easiest way is to have some kind of identifying information about the data already. For example, if names are unique, then you could do

                                                                              INSERT INTO nodes (body) VALUES (json('{"name": "Bill Gates"}')), (json('{"name": "Microsoft"}'));
                                                                              INSERT INTO edges (source, target, properties) VALUES (
                                                                                  (SELECT id FROM nodes WHERE json_extract(body, '$.name') = 'Bill Gates'),
                                                                                  (SELECT id FROM nodes WHERE json_extract(body, '$.name') = 'Microsoft'),
                                                                                  json('{"action": "founded"}')
                                                                              );
                                                                              

                                                                              Now this looks bad because of the two sub-selects. However, because you just inserted those very values, it’s likely that the necessary rows are still in cache. If you have the appropriate indexes (e.g. CREATE INDEX nodes_name_idx ON nodes (json_extract(body, '$.name'));) this can be pretty fast.

                                                                              Of course, if you have no such identifying information, then you will need to fall back on multiple inserts. All is not lost, though: you can use last_insert_rowid to get the rowid (or rowid alias) of the last inserted row without an additional query. For example, if you are using python (like OP), then you could do

                                                                              cur = conn.cursor()
                                                                              cur.execute("INSERT INTO nodes (body) VALUES (json(?));", ('{"name": "Bill Gates"}',))
                                                                              source = cur.lastrowid
                                                                              cur.execute("INSERT INTO nodes (body) VALUES (json(?));", ('{"name": "Miscrosoft"}',))
                                                                              target = cur.lastrowid
                                                                              cur.execute("INSERT INTO edges (source, target, properties) VALUES (?, ?, json(?));", (source, target, '{"action": "founded"}'))
                                                                              

                                                                              Which is only one query more than the default implementation. Unfortunately, SQLite has no RETURNING clause like PostgreSQL, so you need to be careful with operations like INSERT OR IGNORE INTO. If nothing ends up being inserted because of a UNIQUE constraint, then lastrowid will still have the value of the last successful insert. However, given that you have a UNIQUE column, hopefully you can fall back on the first strategy and look the row up by that column.
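
                                                                              The caveat can be reproduced like this (hypothetical one-column table; last_insert_rowid() is the SQL-level counterpart of lastrowid):

                                                                              ```python
                                                                              import sqlite3

                                                                              # An ignored INSERT OR IGNORE leaves last_insert_rowid() pointing
                                                                              # at the previous successful insert, not at any new row.
                                                                              conn = sqlite3.connect(":memory:")
                                                                              cur = conn.cursor()
                                                                              cur.execute("CREATE TABLE t (name TEXT UNIQUE)")
                                                                              cur.execute("INSERT INTO t (name) VALUES ('a')")            # rowid 1
                                                                              cur.execute("INSERT OR IGNORE INTO t (name) VALUES ('a')")  # duplicate, ignored
                                                                              stale = cur.execute("SELECT last_insert_rowid()").fetchone()[0]
                                                                              print(stale)  # -> 1, the earlier insert
                                                                              ```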

                                                                              Forget it when it comes to distributed / peer-to-peer, etc.

                                                                              Fortunately this is SQLite. If you need distributed anything, you should really be looking at a Real Database (TM).

                                                                          1. 6

                                                                            You don’t need generated columns to do this; they’re just syntactic sugar. All you have to do is use json_extract(...) in place of a column name, in any query or CREATE INDEX command. SQLite has supported indexes on expressions for years.
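
                                                                              A quick sketch (hypothetical table/index names; assumes the JSON1 functions, which CPython’s sqlite3 includes):

                                                                              ```python
                                                                              import sqlite3

                                                                              # Index directly on a json_extract() expression -- no generated column.
                                                                              conn = sqlite3.connect(":memory:")
                                                                              conn.execute("CREATE TABLE nodes (body TEXT)")
                                                                              conn.execute(
                                                                                  "CREATE INDEX nodes_name_idx ON nodes (json_extract(body, '$.name'))")
                                                                              conn.execute("""INSERT INTO nodes (body) VALUES ('{"name": "Bill Gates"}')""")
                                                                              # Queries using the same expression can use the index.
                                                                              plan = conn.execute(
                                                                                  "EXPLAIN QUERY PLAN SELECT * FROM nodes"
                                                                                  " WHERE json_extract(body, '$.name') = ?", ("Bill Gates",)).fetchall()
                                                                              print(plan)  # the plan should mention nodes_name_idx
                                                                              ```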

                                                                            1. 2

                                                                              What is the behavior of inserts into a table where a row causes the index expression to error? The nice thing about the column method in the OP is that it can enforce constraints.

                                                                              1. 3

                                                                                IIRC json_extract returns NULL on a parse error. I agree, the syntax checking constraint is nice. You could do it with a TRIGGER too, probably. I’m just pointing out that you don’t need the latest SQLite to do this stuff.
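
                                                                                  The TRIGGER idea might look like this (a sketch with hypothetical names, not a drop-in for the OP’s schema):

                                                                                  ```python
                                                                                  import sqlite3

                                                                                  # A BEFORE INSERT trigger that rejects bodies that aren't valid
                                                                                  # JSON, as an alternative to a CHECK constraint.
                                                                                  conn = sqlite3.connect(":memory:")
                                                                                  conn.executescript("""
                                                                                  CREATE TABLE nodes (body TEXT);
                                                                                  CREATE TRIGGER nodes_body_valid
                                                                                  BEFORE INSERT ON nodes
                                                                                  WHEN json_valid(NEW.body) = 0
                                                                                  BEGIN
                                                                                      SELECT RAISE(ABORT, 'body is not valid JSON');
                                                                                  END;
                                                                                  """)
                                                                                  conn.execute("""INSERT INTO nodes (body) VALUES ('{"ok": true}')""")  # accepted
                                                                                  try:
                                                                                      conn.execute("INSERT INTO nodes (body) VALUES ('not json')")
                                                                                  except sqlite3.IntegrityError as exc:
                                                                                      print(exc)  # body is not valid JSON
                                                                                  ```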

                                                                            1. 5

                                                                              That may be the simplest practical use for the new string literal type features. The most mind blowing use I’ve seen is this type-level SQL engine: https://github.com/codemix/ts-sql

                                                                              1. 2

                                                                                WebP/WebP2—Baidu image format with lossy compression based on Baidu VPx codec (VP8 or VP10) and lossless compression from the French division of Baidu.

                                                                                Did the author s/Google/Baidu/g in this article?

                                                                                1. 3

                                                                                  BaidUTube

                                                                                  I think they’re just grinding an axe.

                                                                                1. 11

                                                                                  Aside: please don’t use a fixed-width font with justified spacing. This is the worst of both worlds!

                                                                                  1. 1

                                                                                      The Linux man page utilities do this by default.

                                                                                  1. 14

                                                                                    I’m wondering what this means for Mozilla’s priorities. Are they no longer interested in Servo’s goals, or has the project just reached enough maturity that it can live on outside of Mozilla?

                                                                                    1. 26

                                                                                      Mozilla laid off the Servo team. Some of the libraries that came out of Servo like the CSS engine are also used in Firefox, and those parts will likely see continued development support from Mozilla. Mozilla is no longer working on Servo itself - the completely packaged browser engine.

                                                                                      1. 9

                                                                                        Probably the latter. Servo was always a test bed for new implementations of web technologies, which they slowly merge into the main browser, library by library.

                                                                                        The Linux Foundation announcement (while otherwise worse), also lists a number of additional stakeholders: “Industry support for this move is coming from Futurewei, Let’s Encrypt, Mozilla, Samsung, and Three.js, among others.”

                                                                                      1. 2

                                                                                        When the Geekbench scores were posted, some wanted to see “real” performance, and I suppose this is it. Seems pretty decent for an entry-level laptop.

                                                                                        1. 2

                                                                                          Anandtech has run at least part of the SPEC2017 suite on it and the numbers track there, as well. It’s really astonishing.

                                                                                          1. 1

                                                                                            Most comparable to the score of an Intel Core i7-10850H, with six cores (twelve threads) and a TDP of 45W.

                                                                                            1. 2

                                                                                              Seems very impressive considering the power draw.

                                                                                              Edit: My desktop computer (6700k) is much slower in the same benchmark, 1124 single-core and 5640 multi-core.

                                                                                              Sure, it’s a few years old but it’s a 91 W TDP desktop processor!

                                                                                              1. 2

                                                                                                Yeah, my big machine is a 6950x, and it remains faster in multicore, by a fair amount, but good lord the M1 is like 40% faster single threaded.

                                                                                                1. 1

                                                                                                  Indeed. They’ve got a bit left to match the multi performance of the best Threadripper CPUs, so it’ll be interesting to see what they come up with for the Mac Pro and iMac Pro replacements. “Just” adding 60 more cores is probably not a workable solution…

                                                                                                  1. 6

                                                                                                    This is a 24w SoC. That they’re even within spitting distance of a 250w desktop chip is just stunning.

                                                                                                    1. 4

                                                                                                      HEDT CPUs are very different animals compared to these tablet CPUs. We’ll probably have to wait out the entire transition period until we know how their desktop CPUs will perform, but I could probably be convinced to go back to Mac if this level of performance is maintained. :)

                                                                                                      1. 5

                                                                                                        Note that there’s no requirement for them to build their own cores for desktops. They can easily license the latest Neoverse core from Arm and produce 128-core SoCs with all of their other stuff (secure element, ML coprocessor, and so on). This is one of the advantages of Apple going with Arm: they can license things for any market segments that aren’t their core business. Given how small Apple’s workstation business is in comparison to mobile, it may not be worth their while investing effort in producing a custom desktop CPU core. The N1 scales to 128 cores per socket and, judging from the benchmarks, the versions Amazon is buying are nice and fast. The V1 is announced, is faster, and supports dual-socket machines.

                                                                                                        A 128-core Apple SoC using an Arm IP core would be pretty competitive with their Xeon offerings, and Apple marketing would have a lot of fun selling a 256-core desktop.

                                                                                                        1. 2

                                                                                                          If Apple doesn’t make their own desktop cores, you know they don’t care about that product line at all.

                                                                                                          1. 2

                                                                                                            If Apple doesn’t make their own desktop cores, you know they don’t care about that product line at all.

                                                                                                            You know only that they decided not to build a desktop core. CPU design resources are limited and it doesn’t make sense to target every price point. Apple’s desktops compete in the market against commodity CPUs with much larger volume; they may not be able to make the numbers work.

                                                                                                            John Mashey, a founder of MIPS, wrote a great post that explains the economics of CPU design. Search for “Microprocessor economics 101” on this page.

                                                                                                            1. 2

                                                                                                              No, it means that the profit from that line doesn’t justify investment in a custom core. It makes sense for Apple to design custom low-power cores because they can tune them for exactly the thermal / power envelope that they want, to get the best performance per Watt on Apple-specific workloads. What’s their motivation for building a custom desktop core? High-end desktops / workstations are closer to servers than to mobile devices. They depend heavily on NoC topology, cache coherence protocol, cache and memory controller design and so on - things that are much less important when you’re only building 4-8 or so cores. Power saving on mobile is about turning off as much as possible and getting into suspend states to maximise battery life. Power saving on the desktop is more about staying within your thermal envelope. These are very different design constraints and Apple benefits a lot in their mobile cores from not having to try to address both.

                                                                                                              The Neoverse line has a very good set of IP cores that scale to 128 cores on a single SoC. The question for Apple is whether their in-house design team could beat that performance (and, for a Mac Pro, perf is the only thing that matters: no one buys a Mac Pro because of power efficiency) by a sufficiently large margin that it would increase sales by enough to pay for the cost of the core design. They can get the vertical integration benefits by building an SoC with their own accelerators, secure element, and so on with an off-the-shelf core.

                                                                                                              I would be absolutely shocked if the Mac Pro line sold enough to justify the existence of a custom core.

                                                                                                              1. 1

                                                                                                                I think that’s a given.

                                                                                                                The question is largely “outsource or abandon?”.

                                                                                                                1. 1

                                                                                                                  I am sure that they will have at least a 40-70w part, for the big iMacs. If they don’t replace the Mac Pro, that’s one thing, but I would be very surprised if they don’t roll something out in a year or so.

                                                                                                                2. 1

                                                                                                                  I really doubt they’re going to license a core given how far ahead of the rest of the ARM market they are.

                                                                                                                  1. 2

                                                                                                                    They are doing better than the rest of the Arm ecosystem for performance within a small power and thermal envelope. This is very different from the 200-300W envelope that a high-end Xeon operates in. Don’t assume that they could take their existing core designs and just scale them up. They have a great CPU design team, so I don’t doubt that they could make a custom core that would do well in this space, but I do doubt that the Mac Pro generates enough profit to justify having the team work on such a core.

                                                                                                      1. 4

                                                                                                        Hi, I work at Notion, a collaborative note-taking and project management company. We use many similar ideas.

                                                                                                        • we use “operations” for the majority of the changes in our app; same concept as “mutations”.
                                                                                                        • we use an IndexedDB k/v store or SQLite store to cache records on the client device, in a similar way as described.
                                                                                                        • we apply changes locally first on the cache and then store the operation queue for the next time we’re online, but we don’t yet replay changes on the client when the server sends new versions - the client sends the mutations, and the server pushes those updates back down again. As you might imagine, consistency leaves something to be desired.
                                                                                                        • We’re in the research phase of improving our caching system. Your approach is interesting, but we have far more data than 30MB of JSON to sync per client (easily upwards of 10x more, depending on the user data) that we want to maintain on the client; it’d be prohibitively expensive to use your fetch-all-then-diff for every one of our users. We may need to take the “subscribe-to-query” architecture all the way to the backend source-of-truth data stores.
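                                                                                                        The apply-locally-then-queue flow above could be sketched roughly like this (hypothetical names, not Notion’s actual code; a real client would persist both the cache and the queue in IndexedDB or SQLite rather than in memory):

                                                                                                        ```typescript
                                                                                                        // Toy offline-first client: apply each operation to the local cache
                                                                                                        // immediately, and queue it for the server for the next time we're online.
                                                                                                        type Mutation = { name: string; args: Record<string, unknown> };

                                                                                                        class OfflineStore {
                                                                                                          private cache = new Map<string, unknown>(); // stands in for IndexedDB/SQLite
                                                                                                          private queue: Mutation[] = [];             // pending operations for next sync

                                                                                                          // Apply locally first, then remember the operation for the server.
                                                                                                          apply(m: Mutation): void {
                                                                                                            if (m.name === "setTitle") {
                                                                                                              this.cache.set(m.args.blockId as string, m.args.title);
                                                                                                            }
                                                                                                            this.queue.push(m);
                                                                                                          }

                                                                                                          get(key: string): unknown {
                                                                                                            return this.cache.get(key);
                                                                                                          }

                                                                                                          pending(): number {
                                                                                                            return this.queue.length;
                                                                                                          }

                                                                                                          // On reconnect, drain the queue to the server in order; an operation is
                                                                                                          // only dropped locally after the server acknowledges it.
                                                                                                          async sync(send: (m: Mutation) => Promise<void>): Promise<void> {
                                                                                                            while (this.queue.length > 0) {
                                                                                                              await send(this.queue[0]);
                                                                                                              this.queue.shift();
                                                                                                            }
                                                                                                          }
                                                                                                        }
                                                                                                        ```

                                                                                                        Keeping each operation queued until the server acks it is what makes the queue safe to retry after a crash or dropped connection.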

                                                                                                        It’s a very interesting space to look at because offline-first and collaboration are now critical table-stakes features, and many of the vendors out there right now are quite rudimentary. For example, Firebase seems like a joke due to scale limits, and Firestore doubly so because of the lack of serious tooling. It’s heartening to see something with a similar overall model to our system; hopefully it means we’re both on the right track.

                                                                                                        1. 1

                                                                                                          Thank you for the substantive reply Jake.

                                                                                                          I spoke with Chet Corcos and a few others (maybe you? if so, apologies for forgetting) at Notion early on in the development of Replicache about the system used there. I am not sure if it has changed since then. It was encouraging then as now to find that Replicache is basically a generalization of what you are doing.

                                                                                                          Your approach is interesting, but we have far more data than 30MB of JSON to sync per client (easily upwards of 10x more, depending on the user data) that we want to maintain on the client; it’d be prohibitively expensive to use your fetch-all-then-diff for every one of our users. We may need to take the “subscribe-to-query” architecture all the way to the backend source-of-truth data stores.

                                                                                                          A few thoughts:

                                                                                                          1. It is possible to have the client view return a “coarse diff” to the diff server, rather than a full snapshot. The application can progressively increase the granularity of diff it returns to the diff server as it wants to trade complexity for performance. In an application like Notion, a nice place to draw boundaries would be the document level: when a document is updated, return a diff that contains the entire state of that document, but no other documents.

                                                                                                          2. I agree that subscribe-to-query at the backend is the dream that really makes a design like this complete. We think of the diff server as a sort of sledgehammer that makes the overall design of Replicache possible for many customers today, without dramatic server-side rearchitecture. However, if you have a more principled way to get those diffs, that is better. I’m hopeful that databases like FaunaDB (or maybe Materialize) will enable this functionality over time, and Replicache can become a purely client-side technology.
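                                                                                                          For point 1, a document-level coarse diff might look something like this sketch (illustrative types only, not Replicache’s or Notion’s actual API): the client view compares each document’s version against the client’s last-seen checkpoint and returns whole documents rather than per-key deltas.

                                                                                                          ```typescript
                                                                                                          // A "coarse diff" at document granularity: any document touched since the
                                                                                                          // client's checkpoint is returned in full; untouched documents are omitted,
                                                                                                          // so response size tracks the edit rate rather than the total data set.
                                                                                                          type Doc = { id: string; version: number; blocks: string[] };

                                                                                                          function coarseDiff(docs: Doc[], sinceVersion: number): Doc[] {
                                                                                                            return docs.filter((d) => d.version > sinceVersion);
                                                                                                          }
                                                                                                          ```

                                                                                                          The granularity can then be tightened over time (per page, per block) without changing the shape of the sync protocol.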

                                                                                                          1. 1

                                                                                                            You’re totally right about coarse-grained sync; we’re discussing it as our next step.

                                                                                                            I think Cloudflare Durable Objects are also very interesting here, although it remains to be seen what the practical limits are.

                                                                                                        1. 1

                                                                                                          TL;DR: The conflict resolution algorithm seems to be app-specific code that runs in the database. I didn’t get more details; the documentation is unclear.

                                                                                                          Pricing is ridiculous.

                                                                                                          1. 3

                                                                                                            What would you price this at? It looks high for my company’s current scale [and at this point we want to own the whole stack anyways], but an earlier Notion might have found this offering attractive.

                                                                                                            1. 2

                                                                                                              I realize you’re being derisive, but in a sense, yeah:

                                                                                                              The fact that conflict resolution is handled by running normal, arbitrary functions serially against the database on client and server is the point. Other systems either restrict you to specialized data structures that can always merge (e.g., Realm), or force you to write out-of-band conflict resolution code that is difficult to reason about (e.g., Couchbase). In Replicache you use a normal transactional database on the server, and a plain old key/value store on the client. You modify these stores by writing code that feels basically the same as what you’d write if you weren’t in an offline-first system. It’s a feature.

                                                                                                              ===

                                                                                                              TL;DR: Replicache is a versioned cache you embed on the client side. Conflict resolution happens by forking the cache and replaying transactions against newer versions.

                                                                                                              When a transaction commits on the client, Replicache adds a new entry to the history, and the entry is annotated with the name of the mutation and its arguments (as JSON).

                                                                                                              During sync, Replicache forks the cache and sends pending requests to your server, where they are handled basically like normal REST requests by your backend. You have to defensively handle mutations server-side (but you were probably already doing that!). Replicache then fetches the latest canonical state of the data from your server, computes a delta from the fork point, applies the delta, and replays any still pending mutations atop the new canonical state. Then the fork is atomically revealed, the UI re-renders, and the no-longer needed data is collected.

                                                                                                              It is not rocket science, but it is a very practical approach to this problem informed by years of experience.
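                                                                                                              The fork-and-replay step can be sketched in a few lines (a toy model with hypothetical types, not Replicache’s actual API):

                                                                                                              ```typescript
                                                                                                              // Rebase: fork the new canonical state and replay still-pending local
                                                                                                              // mutations against it. The previous cache is left untouched until the
                                                                                                              // fork is revealed atomically.
                                                                                                              type KV = Map<string, number>;
                                                                                                              type Mutator = (kv: KV) => void;

                                                                                                              function rebase(canonical: KV, pending: Mutator[]): KV {
                                                                                                                const fork = new Map(canonical); // fork: copy, don't mutate in place
                                                                                                                for (const m of pending) m(fork); // replay queued mutations in order
                                                                                                                return fork; // revealed atomically; the UI re-renders from this
                                                                                                              }
                                                                                                              ```

                                                                                                              Because mutations are plain functions replayed against newer state, an increment queued offline still lands as an increment even if the canonical value moved underneath it.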

                                                                                                              As for the price, it’s weird. Teams that have struggled with this have basically no problem at all with the price; if anything, they seem to think it’s too low.