I’ve reached the conclusion that the only way out of this insanity is to continually develop viable web applications without builds and frameworks. Every month or so I’ve been picking a project that most might find impossible to do without all of that overhead and doing it. That seems to be the only thing that might get people’s attention. Even then, there are a lot of devs that just don’t want to know or believe these things are possible.
There’s the usual “but you can’t scale up like this” argument, which this approach does not address. I think the underlying problem is that we don’t do a very good job at scaling in general, no matter what tools we’re using. (If we did, we wouldn’t still be creating so many tools to help us scale). A much more important question than “will it scale”, however, is “how do we only scale as much as we need to?” Still, these projects are a good start on opening much-needed lines of conversation.
What, exactly, do you mean by scale here? I doubt this approach will compare negatively at all when it comes to number of users. So I suppose code size… but like I kinda doubt it there too, normal programming techniques should work equally well here as with library/framework X (though don’t they say true elegance is when there’s nothing left to take away? lol). I suppose this is what the critics mean when they say you just reinvent your own framework but… meh, code has structure that fits your project, this shouldn’t be a negative.
What developers usually mean when they say this, and therefore what I’m responding to, is scaling out the number of programmers. As you pointed out, there’s no scaling issue here at all when it comes to users. I’m all for teaching folks how to wisely decide how much complexity to add to their projects. I love this example because it starts conversations that help us do that. There’s a lot of hand-waving around scaling users and scaling devs that goes on in the community. It’s killing productivity and making the job of a developer miserable for many.
Sadly it seems the two concerns, users and developers, play off of one another. “We’re going to have to use X, Y, and Z! Look at the number of users we might have!” Then they head down that route, only to follow up a few weeks or months later with “Look at all of the coders we have/might-have! We’re going to have to add A, B, and C to our frameworks and build in order to be able to coordinate all of that!”
Works the other way too.
Besides the weaknesses I already pointed out (which are somewhat solvable), I actually believe this codebase could scale well with more developers. I just think the barrier to entry is a lot higher: I lay out a set of practices that need to be learned (and that’s only after having good knowledge of DOM APIs). Practices may be harder to learn, teach, and enforce (no code completion, no particular “version” of a technique you can read from a package.json) and require some discipline. And great documentation. But there’s also no framework to learn, no dependencies to manage, no build step to run, etc.
More importantly, scaling a codebase could also be about the number of features you can add before it turns into a big ball of mud and/or spaghetti.
This! I hoped to achieve this by thinking in components/behaviors, idempotent rendering, state separated from the DOM, and one-way data flow (not intentionally trying to sound like React). I suspect if we allowed a reconciliation helper and ES6, the codebase would accumulate a little verbosity as it grows, but not end up spaghetti.
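(To make that concrete, here is a minimal sketch of the idea – hypothetical names, not code from the actual study: state lives outside the DOM, and re-rendering with the same state always produces the same markup.)

```js
// Hypothetical sketch: state is a plain object, rendering is idempotent.
// Calling renderApp() any number of times with the same state yields the same DOM.
var state = { todos: [{ title: 'write case study', done: true }] };

function renderApp(root) {
  // Rebuild from state; a reconciliation helper could avoid the full rebuild.
  root.innerHTML = '';
  state.todos.forEach(function (todo) {
    var li = document.createElement('li');
    li.textContent = todo.title;
    li.className = todo.done ? 'done' : '';
    root.appendChild(li);
  });
}

// One-way data flow: user actions mutate state, then trigger a re-render.
function addTodo(title) {
  state.todos.push({ title: title, done: false });
  renderApp(document.querySelector('#app')); // '#app' is a made-up mount point
}
```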
In particular, having drag & drop and FLIP animations without serious impact on the remaining codebase indicates that cross-cutting concerns may be implemented orthogonally with this architecture.
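(For readers unfamiliar with FLIP – First, Last, Invert, Play – a rough sketch of the technique; the helper and duration are made up, not the study’s code.)

```js
// FLIP sketch: measure before and after a DOM change, then animate the difference.
function flipMove(el, applyDomChange) {
  var first = el.getBoundingClientRect();        // First: position before the change
  applyDomChange();                              // e.g. re-insert el at its new position
  var last = el.getBoundingClientRect();         // Last: position after the change
  var dx = first.left - last.left;
  var dy = first.top - last.top;
  el.style.transition = 'none';                  // Invert: visually jump back to the old spot
  el.style.transform = 'translate(' + dx + 'px, ' + dy + 'px)';
  el.getBoundingClientRect();                    // force a reflow so the inverted transform takes effect
  el.style.transition = 'transform 200ms ease';  // Play: animate to the natural position
  el.style.transform = '';
}
```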
I’m not objective anymore, of course; I’ve been working on it for a couple months now ;) if you see any problems that would lead to (or solutions to prevent) future spaghetti, let me know!
Having seen a lot of code over the years, I see zero correlation between ugly code and framework use… people manage to write bad code in any system.
I think it means adding more devs to the project.
I’ve reached the conclusion that the only way out of the insanity of systems programming is to continually develop viable programs without GCC or clang. Every month or so I’ve been picking a project that most might find impossible to do without all of that overhead and writing it in pure assembly.
There’s the usual “but you can’t scale up writing programs in pure assembly” argument, which I don’t address. But to also address it, I think the underlying problem is that “scaling” is hard, period, hence it’s equally as easy to scale a program written in assembly as a program written in C, no matter the toolchain (if we were good at scaling our programs, we wouldn’t need tools like GDB or valgrind).
👹
For context - I am being tongue-in-cheek and playful here, and am not trying to make fun of anyone. On Lobsters especially, I see a lot of users pushing a narrative that it’s rarely worth it to use JS libraries, frameworks, compilers, or other advanced toolchains. There seems to be an overarching disdain for not only the web, but the progression of web technology over time. As someone who has worked in this space, I have no lack of criticism for the multitudinous problems of the web ecosystem, both technical and non-technical.
But I really do feel that it’s time to examine this idea that “vanilla JS is good enough” in the context of building an interactive web application. No doubt many of us can build a TODO app in vanilla JS, and it would be fun and terse. The problem I often see with these arguments is that they don’t really engage with the actual complexities and constraints of UI programming, especially over time, which gave birth to these toolchains, compilers, frameworks, and libraries to begin with. As requirements change and features are added, and contributors onboard and offboard, it becomes really useful to have things like:

A common, idiomatic, and predictable approach to updating the DOM that code doesn’t constantly need to be aware of
A structuring of this approach that is testable
The ability to support many browsers without manually writing code in the patchwork of deprecated JS variants that nobody knows how to write anymore
A type system to help with correctness, especially as the system grows
A module system for organizing the team’s code
A bundling strategy for these modules to ensure that the app works on the first page load and to minimize the chance of HTTP failures preventing the app from functioning
Minification and optimization strategies for making sure that the page loads and is parsed as quickly as possible
A package manager and CLI toolchain for managing the installation of these various tools
Tooling for debugging the app that doesn’t require the user to constantly reload the entire app
Many of these things have been very useful in building out rich interactive apps on the web. I have experience on teams which idealized the concept of vanilla JS, and all of their projects were bug-ridden and repeatedly re-written. There’s a reason that things like babel and react exist. The modern JS ecosystem is certainly confusing, sometimes unstable, and lacking basic comforts of other languages like a standard library or common documentation format. But there’s also a reason it has evolved in the way that it has, and as is generally the case with big complex systems, one has to be thoughtful when asking “why does this exist the way it does? why can’t we just get rid of this complicated bit over here?” because those structures more often than not exist to solve real problems. Just because someone isn’t familiar with a particular problem, or hasn’t yet run into it, doesn’t mean that the problem doesn’t exist. It just means that person has yet to require a solution for it.
Thanks for this, as I agree with most of your reply. I want to make clear that the case study is by no means intended to be normative in any way. All of it is a huge may and zero should (maybe I need to revisit my wording to make this clearer). As I’ve said in other replies I use toolchains, build steps, and frameworks professionally on a daily basis and will continue to do so, for many of the reasons you laid out. On many occasions in the study I point out what I dearly miss from my regular toolchain.
The case study somewhat feeds the narrative you mentioned, but only coincidentally; it was a more or less formal experiment for which I didn’t have a particular outcome in mind. It was only designed to find shortest paths to solve things with raw vanilla tech. Any result in the range between “vanilla is good enough, forget everything else” and “vanilla is terrible and should never be used for anything” was acceptable to me. And if the results, especially the bad, convince anyone that toolchains/frameworks are the right tool for the job, then that’s a good outcome. In a way it validates frameworks, just from a different angle that I haven’t seen before.
(The subject may be small but I don’t think it’s trivial (unlike e.g. TodoMVC). The challenges it presents are probably interesting to get right with whatever technology you use.)
As for your list of problems:
I do propose a common (mount functions have repeatable structure), predictable (idempotent updates) way of updating the DOM with vanilla; however it’s super verbose and not idiomatic at all.
I still need to deliver on testing.
I repeatedly note that browser testing will require a lot more effort with vanilla. I believe frameworks/libraries are deprecated far more often than standard APIs.
I repeatedly wish for TypeScript (and never init a professional project without TS, for that matter)…
… or at least ES6 (with modules).
Bundling, minification, and optimization can be done with zero impact on the codebase (concat, terser).
Nothing wrong with NPM.
Debugging with hot reloading should be toolchain agnostic (not sure if there is a vanilla way, will check).

Finally, the possible UX benefit should not be ignored. A to-do app might not need this level of optimization, but performance matters (more than most people think and in unexpected ways). GitHub, for example, made the switch in 2018. Of course they use build steps, which I clearly recommend as well. As I said earlier, the case study should be understood as an experiment that yielded a couple of interesting (IMO) insights - nothing more.
Your last thought about unnecessarily challenging the status quo somehow validates the study as well. I didn’t convince a professional team to build or rewrite an actual product in vanilla JS for my sake just to “see what happens” and used neutral ground instead.
Thanks a lot for your feedback!! I will make some changes to reflect our discussion :)
I will simplify: there is a complexity dial, from 1 to 10. Sure, you might need to turn the dial to 10 sometimes. And sometimes you can just use 1.
My argument is that I see the dial set far too often at 10 when it doesn’t need to be. That’s not an argument for never using tools! All of your points have merit.
Don’t pick 1 by default and don’t pick 10. Pick 1 and scale up as needed. That’s all.
I’ve never seen any shops with too simple of a build/deploy system. But I’ve seen dozens with nightmare systems that only a couple of leads really know inside and out. In each of those shops, there were people with lists like yours. These reasons can no longer be sufficient to justify this kind of dysfunction. Nobody is arguing that you should code your site in Notepad. The only thing I know that people are arguing is that nobody seems to know how the hell to scale up a system, instead just doing a lot of hand-waving about problems that might arise and then dumping a crap ton of stuff into the process path and architecture in order to protect themselves from these possible future dangers.
Elephants might destroy my team room next week as well. I can assure you I’m not digging any elephant traps this week. You gotta have a reproducible and reasonable process to scale complexity. That’s far more important than any of the other stuff.
I agree with much of what you’ve written, with a few points of departure.
Don’t pick 1 by default and don’t pick 10. Pick 1 and scale up as needed. That’s all.

IMO I’d substitute “pick 1 and scale up as needed” with “design a solution that addresses the problem’s particular details within its constraints right now without shutting the door on future modification”. To take an example, perhaps I’m working on a tool for performing a common task on a bunch of hosts. I could write this tool in bash, but I believe it will be easier, more legible, and more testable if I do so in python, so I choose python. I also know that I’ll need to make network requests, and so I make the reasonable choice of using the well-known requests library as opposed to the builtin http.client module because I and my team are more familiar with it. At this point, we need a reproducible way of installing the dependencies, and we decide to go with pipenv because it’s easy and gets us reproducible builds.

We could have “started at 1 and scaled up”, which could have meant writing a bash script with cURL. But if requirements change at all in the future, we’d be continually updating POSIX shell code, and maybe installing extra CLI tools, and at some point we’re likely to wish we had chosen a higher-level language. And by setting ourselves up with a package manager/virtual environment, it will be trivial to add dependencies in the future and lock them to specific versions. I don’t think this is over-engineering, but to be clear, I’m not claiming you do – I just wanted to illustrate what I’m getting at. There’s clearly a balance between simplicity – tightly solving a problem with minimal waste – and leaving the door open to future enhancement and maintainability.

I’ve never seen any shops with too simple of a build/deploy system. But I’ve seen dozens with nightmare systems that only a couple of leads really know inside and out. In each of those shops, there were people with lists like yours.

This is interesting to me, because we have very different experiences here. The worst build/deploy systems I’ve ever seen were ad-hoc collections of shell scripts that were not checked into version control and lived on hardware servers in a closet, some of which could not be accessed because long-departed sysadmins had left without documenting any of their work. So I think it’s fair to say that one can produce a poorly engineered system both by “over”-doing it and by “under”-doing it. I think the common thread is just a lack of critical thinking and engagement in well-accepted engineering practices.

Elephants might destroy my team room next week as well. I can assure you I’m not digging any elephant traps this week. You gotta have a reproducible and reasonable process to scale complexity. That’s far more important than any of the other stuff.
I’m 100% with you here. To be clear, I have had to talk people out of solving for problems that we don’t and will not have. I’m just objecting to a certain aesthetic of tech minimalism which is so obsessed with avoiding “complexity” that at best, it takes a circuitous path to solving the problem at hand, and at worst it implicitly ignores failure modes and constraints of the actual problem itself.
I like some fun criticism :) Assembly (many different flavors) vs. C (standard) is an entirely different situation than vanilla DOM (standard) vs. frameworks (no standards, many different flavors, lots of fluctuation).
Not super serious, but a fun parallel: It’s like we were building our own version of the tower of babel (AKA web standards) for decades but then somehow willingly, without divine intervention, chose to speak a myriad of ever-changing languages which scatter our focus and productivity. We may feel productive as individuals but collectively, if we all spoke just web standards, software would probably eat the world a little less…
Also, the level of abstraction that C gives over assembly is huge compared to what React does. It’s almost a direct result of the case study that React’s level of abstraction is low, as only reconciliation is actually hard to do vanilla.
Hey, big props (no pun intended) on the project, it’s really impressive. To be clear, I definitely think I stretched the analogy w/ C. I mostly wanted to push back on the above implication (if I’m reading it correctly) that modern web tech is “insanity”. In my own experience, I’ve worked on apps in both vanilla JS and react/TS/babel/etc., and in all instances I just found the react projects easier to hack on.
I think that having some structure or framing for UI patterns is useful, and clearly you came up with one in your vanilla JS project! Something that’s interesting to me here, that I don’t feel we discuss enough, is the idea that there’s a ton of grey area in between “vanilla JS” and “react project”. If you have a set of helper functions somewhere that comprises an API for rendering UI, at what point does that become a “library”? Or how about “framework”? Anyone remember backbone.js? You could read the entire source code in a reasonably short amount of time. I wouldn’t really consider backbone to be much different from “vanilla JS”. I think that good engineering is all about solving specific problems, but also knowing when to identify patterns that can be factored out in order to solve common problems, and clearly one can do that well (or poorly) in both “vanilla JS” and in react.
Thank you, very much appreciated. I feel we have very similar perspectives, I just think my project came out as too “normative” (see my other reply, took some time :D) as I would never build a professional product in the exact way I describe there. And yeah, I share the idea that a small library like backbone where you can actually read the entire code in one sitting cannot hurt any project. Thanks again!
I’m reading your post as satire and enjoying it that way, but I’m not sure I take your intended point?
Where do you spend your time? Do you spend it solving problems or honking around with your tools? How much time have you spent in the last year taking secondary education to solve users’ problems better, versus how much time have you spent learning about your toolkit?
I don’t care what the tool is, and you shouldn’t either; the question is whether or not it is stealing your most precious resource, your brain, from you. Whether that’s GCC or Haskell doesn’t matter at all. You’re looking at the question the wrong way, as if every increase in abstraction level is automatically an improvement in the result. That’s a wonderful thing to believe, it’s simply not true. Not even close.
If you can write 3 lines in assembly and solve a problem, never to come back to that code again? Who the hell cares, and why should the fact it’s assembly be important? Am I trying to somehow look good for the other coders? Should I be ashamed to have simply solved a problem and moved on to the next one?
Focus on what’s important. As long as you’re picking the right tool for the job, the tool you’re using isn’t what’s important.
I’m not the person you’re replying to, but: I do that.
I use (and have been a contributor to) the Django web framework for backend web development. I don’t do this because it’s trendy or cool (increasingly, it isn’t). I do it because I remember what it was like writing database-backed web apps back before the Django/Rails generation of frameworks came along. There’s so much tedium and repetitive work in writing these kinds of applications that it’s next to impossible to justify essentially building your own framework to deal with it, which is what “frameworkless” development always leads to (that, or just writing all that repetitive boilerplate code over and over and over again, which also isn’t great). So I don’t do that; I use the framework off the shelf, because it has the common patterns all already handled for me, freeing me up to spend my time, thought, and effort on the uncommon aspects of whatever project I’m working on.
Admittedly I haven’t really written JavaScript professionally since 2006 or so, and I’m only vaguely familiar with the modern frontend ecosystem (a couple years back I did a tiny bit of stuff with React, on a codebase where I had help from colleagues who knew it well), but it strikes me that for all the criticism, that’s basically the same thing the modern JS world is trying to do: solve the repetitive/tedious parts of frontend development, so people are free to focus on the things that can’t easily be handled by a library or framework.
And as a result I find myself nodding in perfect agreement with the parent comment’s point. It simply makes no sense to me to insist on doing everything “by hand” as anything other than an initial learning exercise to understand what the library/framework will be doing for you, and seems to me to be a huge amount of wasted effort to re-invent those wheels yet again in the name of not using whatever’s cool/trendy/popular.
Couldn’t agree more. I think the “right tool for the job” question is where the center of discussion is at.
Thanks! I certainly agree that scaling is a core problem that should be addressed. As it stands, scaling up the study’s subject (e.g. add features like a backend connection, user management, etc.) would accumulate verbosity and considerable duplication. Do you think scaling up would produce additional problems, other than these two?
As outlined, verbosity could be reduced with a simple build step (allowing ES6 or even TypeScript). If we drop the artificial rule of “no general-purpose helpers”, introducing a (very) limited set of helpers (e.g. reconciliation) would likely eliminate most of the duplication. Do you see other concrete areas that should be addressed with respect to scaling?
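(For illustration, such a reconciliation helper could be as small as the following sketch – the API and names are made up, not part of the study.)

```js
// Hypothetical keyed reconciliation helper: reuse existing children by data-key
// instead of rebuilding the whole list on every render.
function reconcileChildren(parent, items, getKey, create, update) {
  var existing = {};
  Array.prototype.forEach.call(parent.children, function (child) {
    existing[child.getAttribute('data-key')] = child;
  });
  items.forEach(function (item) {
    var key = String(getKey(item));
    var el = existing[key];
    if (el) {
      update(el, item);            // update a reused element in place
      delete existing[key];
    } else {
      el = create(item);           // create an element for a new key
      el.setAttribute('data-key', key);
    }
    parent.appendChild(el);        // appendChild also moves the element into item order
  });
  Object.keys(existing).forEach(function (key) {
    parent.removeChild(existing[key]);  // remove elements whose keys disappeared
  });
}
```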
I was thinking of creating GitHub issues for missing features with each describing a clear path to its implementation. I suppose this could further validate maintainability and scalability of the codebase.
Scanning through this, you definitely want to move to ES6. This thing screams templates, and there’s plenty of dupes and extra code. (Betcha didn’t think I’d say that!)
I don’t know if avoiding all external libraries is overall a good move. I use Vue.js when necessary, but that’s about it. Although the standards bodies are going the way of Vue and other libraries, there are still some things there I like, like a universal templated object model. But YMMV. Even then, I just take the minified version and stick it in the code somewhere. It’s pretty small.
The rest of this is really more of a conversation than a text-based code review. Until I know what you want the app to do, there’s no advice to give. If it’s just “show how complex apps can be built simply”, that’s cool. If it’s “show how apps that scale-up massively can be easily built”, that’s cool too, but it leads to a different set of design choices. We also need to clarify terms. For instance, “scaling up” can mean more users, or it can mean more developers, or both.
The problem with my goal, showing how we over-engineer everything, is that if I only take one example, many devs will say “But that’s nothing like what I’m working on” and move on, forgetting about it. So I’m forced to take example-after-example, hoping to find some combination of features and code that will resonate. That’s a much different goal than yours, I think, which is building out this one particular app. (Of course, neither goal is better or worse than the other)
If you’d like to have a conversation, ping me and we can do some video. I’d love a walkthrough from the person who coded it. (I see you’re new here. You can/should use the messaging system for stuff like that)
Thanks again. I love reading other people’s code when it’s clear and easy-to-read. This is a cool project.
Thanks a lot for the kind words, much appreciated. I agree that avoiding libraries isn’t effective in an actual project. The goal of the study was not to show that reinventing the wheel is a good idea. I just needed a formal method to enforce finding viable (and scalable) patterns during the process, and that’s how I came up with the rather hard set of rules (which prevent using escape hatches like libraries or a hidden custom framework). I would never recommend following the study’s ruleset/method religiously for a professional project. But it did help me distill what’s bad and what’s manageable with vanilla (for the given subject).
I find your thoughts regarding example-based discussion interesting, as my motivation was literally the contrary. Things like https://htmldom.dev/ and http://youmightnotneedjquery.com/ show general-purpose snippets in isolation, and have attracted quite some attention, but in my opinion fail to describe viable end-to-end practices. That’s why I wanted to build a more complete example and follow through up to a working product.
(I will ping you some time this week, thanks for your interest!)
Apologies. You’ve misunderstood me on a couple of counts.
It’s not “reinventing the wheel” vs. “toy projects”. It’s using just as much as you need, i.e., being able to tell how much you need for whatever project you’re on, large or small.
“Example based discussion” does not mean coding examples, it means example projects. Find a problem, write an app to solve that problem. I am specifically not into taking code out of context and showing off concepts. E-gads no!
I’ve already used this to create several projects: time-tracking, static-code analysis, modular blogging, chess. I’m considering doing email as well. There should be no code examples in what I do, only examples of solving problems end-to-end. My goal is to show that what we normally think of as large problems that need large, complex tools might not be that way at all. In many cases, we may be buying a platoon of howitzers and a deep-water navy only to kill a pesky gnat in the office. We need to be able to reason better about scaling complexity.
Look forward to the chat.
I don’t know if avoiding all external libraries is overall a good move.

So I wouldn’t say “all”, but I probably would say most libraries are indeed not worth the hassle, and this is more true today than ever with more and more features working reliably in the browser API itself.
No library comes free - it has a learning curve for devs, build hassles and/or load time to deploy, and likely its own bugs and update/versioning management. Not to mention evaluating and selecting appropriate libraries up front. So adding a library automatically starts with a -100 score that its benefits need to outweigh. (And I would even ask whether the feature is worth it anyway before diving into this, but assuming it is, it still leaves you with debt that needs to be justified.)
The automatic assumption that library = win needs to be questioned.
Very well put! You should write about this some time, if you haven’t already ;)
I ranted about it a while ago in my little blog but it was kinda an angry rant short on real substance lol
http://dpldocs.info/this-week-in-d/Blog.Posted_2019_01_28.html#my-rant-of-the-week
I like this for its purity, however for my sanity I would still opt for utilising third party libraries rather than writing everything myself.
Definitely, see my other answer :)
I think it’s interesting that you create custom events and let them bubble up–it seems pretty convenient. My impression is that in most frameworks, people prefer passing callbacks in through props. How did you decide on the bubbling approach?
And since you’re using the DOM for events… could it make sense to use the DOM for state, too? If you stored each component’s state in some data- attributes, maybe it would be easier to debug (in the dev tools / Elements panel) without losing the top-down data flow.

Great questions. I chose bubbling because:

it works similarly to standard elements (for instance, you don’t pass a change callback to an <input>),
triggering some action from a deeply nested component should not require you to pass callbacks through all its ancestors (events yield lower coupling IMO),
and events resulted in less code (which I know because I tried callback-passing in the early stages of the study).
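(A tiny sketch of the bubbling pattern – the selectors and event name are hypothetical, not taken from the codebase.)

```js
// A nested component dispatches a bubbling CustomEvent instead of receiving a callback.
var item = document.querySelector('.todo-item'); // made-up selector

item.addEventListener('change', function (e) {
  item.dispatchEvent(new CustomEvent('todo-toggled', {
    bubbles: true,
    detail: { id: item.getAttribute('data-key'), done: e.target.checked }
  }));
});

// An ancestor (here the app root) listens once; no callbacks are threaded through the tree.
document.querySelector('#app').addEventListener('todo-toggled', function (e) {
  console.log('toggled', e.detail.id, e.detail.done);
});
```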
I think using data attributes for state is an interesting alternative to look into. I suspect problems regarding types, though; data attributes are always strings, so you’ll need some extra work when reading/writing state.
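(For example, assuming state were stored on the element itself, the string-only nature of data attributes means something like JSON (de)serialization for anything that isn’t a string.)

```js
// Data attributes only hold strings, so typed state needs (de)serialization.
var el = document.querySelector('.todo-item');   // made-up selector

el.dataset.done = JSON.stringify(true);          // stored as the string "true"
el.dataset.count = JSON.stringify(3);            // stored as the string "3"

var done = JSON.parse(el.dataset.done);          // back to boolean true
var count = JSON.parse(el.dataset.count);        // back to number 3
```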
As a side note, I’m using getAttribute('data-key') here and there, and since dataset is well supported I should probably revisit that.

This is very useful, thanks! I am looking at this as someone who doesn’t do frontend, and who only understands vanilla web.
I have a couple of beginner questions, which I’ve googled some answers to myself, but would still appreciate an authoritative answer, if you want to give one:

Why ES5? I was under the impression that ES6 has been pretty universally supported for quite some time now.
My understanding is that there’s no CSS reset/normalization. How does this work in practice? For me, browsers rendering elements differently is one of the biggest problems when I try to write HTML/CSS. I wish I could just write a minimal amount of reset myself which I understand.
I see that the code uses a mixture of em and px for sizing (for example, padding is specified using both in various places). Is there some nice rule to determine which one is pedantically correct in any given case?
Definitely not authoritative answers, but here we go ;)
Not yet, unfortunately: https://caniuse.com/?search=es6 - depends on how you define “universally”. There’s also always the question of minimum required web APIs for a project. If you’re building a 3D or webcam-reliant app, then any browser that supports that will support ES6 as well.

To be honest, I didn’t put too much effort into the design for this study. Professionally I do it by testing and optimizing for actual browsers, though. I never use resets, but sometimes use normalize.css.

(See above, the sizing probably needs some improvement.) I don’t think there’s a general rule. I found it most effective to use a root font size and mostly rem instead of px in professional projects. Also, SCSS variables (and math) are very nice for these things. Sometimes I really want pixels, sometimes I actually want a spacing relative to the current element’s font size. For example, line-height will almost always be in em for me.

I too try to stay close to vanilla. But there are libraries that have such a huge impact on productivity that I end up using them. There should be some sort of way to rank libraries by usefulness (productivity gain? efficiency gain?) per cost (size? complexity? potential security problems?).
I used jQuery before. That time has thankfully passed.
I currently use Vue.js. Seems to offer quite a lot for a small price.
I agree! If the study only validates the value of frameworks, that’s not a bad thing at all and for sure one of the possible outcomes I had in mind when starting it. I use React professionally all the time and it provides great ergonomics and productivity.
A particular story I like to share in these discussions: I once helped build a full-stack team from developers with varying backgrounds. One of them had 10 years of experience as a Java backend engineer, but had never done frontend professionally.
He was productive and implementing bugfixes/features after 2 or 3 days and said to me: “I never wanted to touch frontends, I hate jQuery, but this setup is so easy to work with and makes frontend actually fun.” All of this, of course, was after setting up a solid/complete project structure and build pipeline, but nonetheless I’m sure a key to this success story was React’s ergonomics.
Thank you for helping restore my faith in online forums and developer communities through this discussion here. So welcoming and yet thoughtful (and a very interesting original post / write up).
Thanks :)! I’m new to Lobsters and very happy about the discussion and thoughts by others in this thread as well, very sensible and constructive.
It’s always interesting to see a project you work on used as a case study :) Great work. There’s lots of thought and effort that you put into this study, and it’s a great experiment!
Very much appreciated, thank you! I can only praise the original TeuxDeux as well, it has the best concept of all the to-do apps out there :)
We’ve always said that our competition for TeuxDeux is pencil & paper, and I like that.
The app has also been running for over 8 years now, which is an eternity in the world of web apps. We’re slowly modernizing the various moving pieces that inevitably become out-of-date as the wheel of technological progress continues rolling forward. And as you’ve noticed, there are some improvements being made – we’re going to have some really nifty new features and enhancements in the near future :)
As an aside, I passed around your repo to the rest of the TeuxDeux team, and everyone loves it!

Excellent. Thanks.
That’s awesome! Very glad you all like it :)