1. 9

    After 2 years my experience with Vue has been positive. I’m still confident it was a good decision to move my team from React to Vue. Not because Vue is better, but because it is a better fit for us.

    Why though? These comparisons of libraries/frameworks that amount to “differences of technical details” are not worth writing. Did the author have some problem that couldn’t be solved with React and could be solved with Vue? That would be worth hearing about. I’m talking real business-value-added problem solving. Not “the syntax is nicer”. If I was employing this author and read this article, I would feel like the migration from React to Vue had been a waste of company resources.

    1. 1

      Well, ultimately they are both JavaScript frameworks, so what you can do in one you can do in another; the whole idea of a framework is the syntax through which you interact with the base language. As such, some things will be easier in framework X and others will be easier in framework Y. In my opinion there is the Angular way of solving JS “framework projects”, then the React way; Vue sits a bit in the middle (if not closer to React).

      1. 2

        what you can do in one you can do in another

        If I’m already using a tool that can do what I need it to do, why would I invest in a new tool? And I don’t mean to reduce things to “it’s all JavaScript so you don’t need a library”. React, Angular, Ember, Vue, etc., are compelling to me to the degree that they enable a developer/organization to deliver useful software. But if React and Vue are compelling to the same degree, I’m not sure there’s anything to talk about.

      2. 1

        Author here.

        The article was getting pretty long already and I didn’t want to get into those points.

        Did the author have some problem that couldn’t be solved with React and could be solved with Vue?

        There were three major pain points for us with React. That was in 2015-2016; at the end of 2016 we moved to Vue.

        1. JSX is ok for JS devs, but terrible for designers who work with HTML and CSS. We tried to solve that using a Wix library called react-templates, but it introduced its own set of problems.

        2. React Router was cumbersome compared to Vue Router. For example, if you needed to access the router from a component, you had to wrap it in a HOC (a rough sketch of the difference follows this list). We also hit a bug that could only be worked around by using setTimeout() with 0 delay. We quickly found that the React Router team was not very welcoming to feature requests or bug reports, to put it mildly. Vue Router in comparison made a lot more sense and was much easier to deal with.

        3. Managing local state in React is tedious compared to Vue. We considered moving to MobX, which is what a lot of React developers are doing, but at that point I gave Vue a try and it was immediately clear that it was a better fit for us.
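
        To make point 2 concrete, here is a rough, hypothetical sketch (not our actual code) of the difference; withRouter and this.$router are the real APIs, although the exact props React Router injects depend on its version:

        // React + React Router of that era: wrap the component in a HOC so the
        // router is injected as a prop (shown with the v3-style router prop).
        import React from 'react';
        import { withRouter } from 'react-router';

        function BackButton({ router }) {
          return <button onClick={() => router.goBack()}>Back</button>;
        }
        export default withRouter(BackButton);

        // Vue + Vue Router (separate file): any component can reach the injected
        // router directly, no wrapper needed.
        export default {
          methods: {
            goBack() {
              this.$router.back();
            }
          }
        };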

        If I was employing this author and read this article, I would feel like the migration from React to Vue had been a waste of company resources.

        We didn’t rewrite our React projects to Vue, if that’s what you are implying.

      1. 1

        Always useful to see how real problems can be solved, particularly in this case through the use of the Observable pattern. Found the link to the tc39 observable discussion invaluable - many thanks.

        1. 1

          I’m glad. Thanks for reading!

        1. 8

          Things like this are why I laugh whenever people claim javascript is “taking over”.

          Even fucking php has consts that are actually constant.

          1. 3

            Honestly the only people that believe JavaScript is taking over are people who haven’t used anything else.

            1. 2

              And yet it’s everywhere. If any language is really “taking over”, it’s JS. It’s mandatory in the browser, and it’s optional almost everywhere else.

              1. 1

                That’s why it’s so big. If you want to learn only one language it has to be JS.

                1. 1

                  You think javascript is mandatory to make a website or web app?

              2. 2

                But this feature (preventing mutation of mutable structures recursively) is uncommon. So far I’ve seen it only in PHP and Rust. What other languages have it?

                1. 3

                  Recursive mutation is not even the issue; const in JavaScript just means “I will not reassign this reference”, so even the top-level value is still mutable. You can do const a = []; a.push({});
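
                  A minimal sketch of that distinction, using only standard JS (Object.freeze being the usual, shallow escape hatch):

                  const a = [];
                  a.push({});        // allowed: const only freezes the binding, not the value
                  // a = [1];        // would throw TypeError: Assignment to constant variable

                  const b = Object.freeze([]);
                  // b.push(1);      // would throw TypeError: the frozen array rejects new
                  //                 // elements, though Object.freeze is only one level deep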

                  1. 1

                    If top-level mutation prevention can be implemented, then extending it all the way down is not that hard, I think. But I can’t imagine how to do it in languages with pass-everything-by-reference semantics, and most languages have such semantics (JS, Java, Python, Ruby, etc.).

                    Some of these languages have a “constant binding” feature too, e.g. Java’s final, which lots of users find useless, and still I see lots of final in Java code. Clojure has let, which is similar to having only const in JS, and it also can’t prevent mutation of underlying objects:

                    => (let [a (java.util.LinkedList.)] (.add a "foo") a)
                    ("foo")
                    

                    So while mutation by calling methods can’t be prevented at all in JS, and that’s not surprising behavior, mutation of captured var bindings can be prevented, and I think that’s useful. Lots of gotchas in JS happen when closures accidentally mutate captured vars (or when the captured environment mutates them).

                    Languages with multiple passing semantics, such as PHP, which has both by-value and by-reference function args, or Rust, which has & and &mut, can have mutation prevention. In Rust you can tell that a method does not mutate its object when it takes &self and not &mut self, for example. BTW, I have written lots of PHP code and still don’t understand its semantics; it feels on the same level of complexity as Rust.
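
                    For what it’s worth, at runtime JS can get the “all levels down” behavior by recursing with Object.freeze; a toy sketch (it gives no compile-time guarantees and is not a replacement for real immutable data structures):

                    // Toy deep freeze: recursively Object.freeze own properties.
                    // The isFrozen check also stops it from looping on cyclic objects.
                    function deepFreeze(value) {
                      if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
                        Object.freeze(value);
                        for (const key of Object.getOwnPropertyNames(value)) {
                          deepFreeze(value[key]);
                        }
                      }
                      return value;
                    }

                    const config = deepFreeze({ db: { host: "localhost" } });
                    // config.db.host = "elsewhere";  // throws in strict mode, silently ignored otherwise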

                  2. 1

                    It’s not so much about whether you can have an immutable structure; it’s whether you think you can.

                    Before PHP’s const or define() supported e.g. constant arrays, you couldn’t assign them - it would error at compile time (which is effectively runtime for php).

                  3. 1

                    It’s not uncommon for things with undesirable qualities to emerge as the dominant entity in their domain. The flaws of JavaScript should not be assumed to correlate with its rate of adoption, whether or not they ought to.

                    1. 0

                      Do you laugh because, despite all of this, Javascript is arguably actually taking over (whatever that means), while PHP, with its real consts, is slowly fading away?

                      1. -1

                        would this be the same javascript community that was just recently ass fucked in public because they needed a third party module to pad strings, or the one that needed a third party module to determine if a number is even?

                        I use php but I’m not attached to it. I also write shell regularly and I’ve used some Lua, a bit of java even. I’d like to try D soon.

                        Even then I’ll use php or shell or lua or whatever where it’s appropriate.

                        The javascript community has no such concept. That’s why you end up with ridiculous nodejs “alternatives” for things that could be achieved out of the box with a little shell on most *nix systems.

                    1. 1

                      If every binding that is never reassigned is declared const, then let becomes a strong signal to expect reassignment. I don’t see a difference between assignments that “should” never change and assignments that “just happen to never change”. From the perspective of someone reading/debugging code, the question is simply “did the assignment change?” If I use const everywhere I can, this question can be quickly and confidently answered. If I use let for assignments that don’t actually change, then I lose the certainty that let signals a reassignment somewhere further in the execution.

                      “Const isn’t useful because the assigned value is still mutable” is a confusion about the semantics of const. Mutability and side effects create many traps. Mutable assignment is one trap. Mutable values are a second trap. Strict use of const and let is an escape from the first trap. The author must look somewhere else for an escape from the second trap. In general, s/he might be better served by a compile-to-JavaScript language.
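
                      A small sketch of that discipline (the names items, taxRate, and total are made up just to show how let becomes a signal):

                      const taxRate = 0.2;          // never reassigned; the reader can rely on that
                      let total = 0;                // let alone warns: expect reassignment below
                      for (const item of items) {   // items: some array of { price } defined elsewhere
                        total += item.price * (1 + taxRate);
                      }
                      // Reassigning taxRate anywhere would throw "Assignment to constant variable".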

                      1. 7

                        I have a few things that I’ve been working on for the past 8 weeks:

                        • data visualization project
                        • ActivityPub implementation
                        • working through Haskell Programming from First Principles

                        I’m in a rut where I’ve lost conviction about all three of these projects. I’m enjoying Haskell Programming from First Principles the most but I’ve been demotivated lately by the thought that I should be focused on applying technology in useful ways more than learning new technologies. I don’t have any immediate or practical need to learn Haskell. I don’t expect to use it professionally. I just think the ideas are pretty interesting. But I can’t escape the feeling that there’s something more important I could/should be working on. So I’m in a rut where I have some things I could work on this week but little motivation to work on them, and I’m not certain how to resolve this stalemate.

                        1. 8

                          Would recommend to any developer who has a few months of mostly-free-time ahead to dive deep into the Haskell world without regard to such concerns. Just a few months, then step back. Chances are high, judging from numerous anecdotes including my own, that you’ll be a whole-new-developer in whatever “real-world languages” you come back to. Not in the ivory-tower over-abstracting sense either. Just in the sense of almost-deeply-instinctively circumventing the more subtle pitfalls of all non-purely-functional languages, and devising more-principled, less-convoluted designs. Maybe you won’t have that and maybe you’re already there anyhow — just saying, “chances are high”. In any event such time won’t be wasted, and in your future real-world work you will thank yourself for it, maybe even others (slightly less likely, nobody quantifies or detects infinite-numbers-of-troubles-avoided ;)

                          1. 1

                            I really appreciate the recommendation. I’m not totally new to the concepts in Haskell. For example, I know the basics of and am comfortable working with algebraic data types. I too have found that learning these concepts strengthens and clarifies my thinking in other areas of programming. That’s a big part of what I enjoy about Haskell and functional programming in general.

                          2. 2

                            FWIW, I don’t think I’ve ever really learned a language from a book. The examples / exercises usually feel too isolated and far-removed from the domains I work on.

                            I do have a ton of language books, but I use them more as references while I apply the language to some problem. That focuses what you need to know. Usually what I do is to find a bunch of 500-line code snippets in the language, and choose one to add a feature to.

                            I went through Real World OCaml a few years ago, and it was useful. But then I found an ocamlscheme project that I hacked on a bit, and that was a more useful learning experience. It wasn’t exactly practical, but I think it focused me on a few things rather than trying to ingest every concept in a book. There are still a bunch of things about OCaml I don’t know (and I think that is true for most people).

                            The Pareto rule generally applies – 20% of the language allows you to solve 80% of problems. There is usually a long tail of features for library authors (e.g. in C++).

                            1. 2

                              I agree and support the principle here, which I think is that abstractions are best learned through application and experience. I like focusing on the language first and then using the language to solve a non-trivial problem. I get overwhelmed if I try to learn the syntax and core concepts while also trying to think of how to best express myself in that language. But I would never consider the language learned until the second step was taken and I had used the language to complete a real-world task.

                          1. 2

                            If this method is more effective than others, I’d guess the mechanism of action abstracts to “solve a real world problem in the language to learn.” Open source is just a convenient source for real problems with the added benefit that your solution is code reviewed.

                            1. 3

                              It is the synthesis of “solves a real world problem”, “read code”, and “get someone who’s more experienced to review your code.”

                              1. 1

                                Some other benefits of open source over personal side projects (even though personal projects are a great way to learn a new concept/language) are:

                                • Your code is tested against a large test suite.
                                • Code quality guidelines help you pick up best practices specific to that language much faster than you usually would while working on your own projects.
                                • You get to interact with developers who have a lot of experience in that language, which will also help you learn faster than a personal project where you code on your own, isolated from other developers.

                                I agree with you that open source is just one of the ways of “solving a real-world problem”, but for amateur developers with little or no work experience it can prove beneficial and can get them up to speed with the best practices related to writing production-quality code in a specific language.

                                1. 2

                                  Oh, and your code is run against lots of machines and configurations. Personal projects can reflect the quirks of your machine if they don’t get out into the world.

                              1. 1

                                It’s okay if the answer doesn’t mean a lot to you. It won’t map onto concrete coding intuitions very well. I don’t like this meme at all.

                                1. 4

                                  I recently read the chapter on the lambda calculus in Haskell Programming from First Principles, along with the papers suggested for further reading. It was a great surprise to find that a topic I assumed would be hard to access without a math background was plenty accessible when explained well. Thank you for that!

                                  What are your thoughts on the most effective way to explain monads? I thought @hwayne’s post https://hillelwayne.com/post/monad-tutorials/ was a good elucidation of the imperfection of all familiar monad explainers. I haven’t read the chapter on monads in your book, which might be the answer to my question.

                                  I think explanations like this SO answer are often regarded as unhelpful to beginners. But the most fruitful insights I’ve had come from the learning I do in an attempt to understand statements like “a monad is just a monoid in the category of endofunctors”. The best intuitions I’ve gained in concrete applications of these topics have come from struggling to understand these concepts abstractly, in terms of algebraic law and not in terms of metaphor or patterns of use.

                                  1. 2

                                    What are your thoughts on the most effective way to explain monads?

                                    You already mentioned how I think people should learn Monad: Haskell Programming from First Principles.

                                    But more seriously, there’s no point explaining Monad to someone unless they’ve already learned enough Haskell to learn Monad in Haskell. The word-stuff and thought-stuff that Haskell equips you with gives you an automatically verified, mechanical way to work through exercises that will teach you Monad.

                                    1. 3

                                      I really have to agree with this. I was just reading “Real World Haskell” today (refreshing my Haskell for programming competitions), and its chapter on Monads was very clear, since the book had been building up to it, not by introducing the maths (that was in fact the last part of the chapter) but by showing its use and background (idea) and slowly sneaking up to the actual implementation.

                                      To do this properly, blog posts will rarely find the right words. It’s a longer process, where at the end you realize that you’ve already understood it for a while, and all that was missing was the name.

                                1. 11

                                  This problem is largely solved by “Jump To” in an IDE (or fancy editor). This sort of thing is why I no longer do real work in languages without these niceties. I just don’t have the patience for it any more.

                                  1. 5

                                    Code reviews and online examples can suffer though - I have a very hard time reading unfamiliar Haskell and Agda code on Github where definitions aren’t either explicitly imported in an import list or given a qualified name. But perhaps that’s an argument for better online tooling…

                                    1. 2

                                      That’s a good point, although I agree that better tooling is probably the answer, particularly since fully-qualified imports still mean you’re stuck tracking down docs and such in code review with most of the existing tools.

                                      1. 1

                                        I have to admit, I fully agree with brendan here. Fully qualified imports really do increase the readability of any new, or even old, code.

                                        I don’t think better tooling is the best approach; between explicit and implicit, I find explicit generally ends up being clearer.

                                        A possible middle ground is to allow only ONE unqualified import, as (if I remember right, I only skimmed the docs) PureScript does. That would at least remove ambiguity as to where something could be coming from.

                                      2. 1

                                        Haskell’s Haddock supports hyperlinked source and so does Agda.

                                      3. 4

                                      You don’t even need that much; I find vim’s split-window feature is perfectly usable if I want to read the code where something is defined, or look at both the current code and the top of a file simultaneously. Whereas on the flip side, I know of no good way to eliminate the visual clutter caused by fully qualified names everywhere.

                                        1. 4

                                          You also can generate ctags and use them in vim ;)

                                          1. 1

                                            true :) I used to do that more often in my c++ days; somehow I lost the habit now that I’m doing python at work and ocaml at home, even though ctags would probably help with both of those.

                                          2. 3

                                            this doesn’t solve the “import all names” problem that you hit in languages like Python where some people do import * or you are importing something that was already re-exported from another location. You end up with busy work that an IDE could handle with a name lookup

                                            Though I agree that once you find the definition, split windows is a pretty nice way to operate

                                            1. 1

                                              I too find the result to be cluttered. But I also find new programming languages/syntaxes to be strange and chaotic in the same way. Once I use the language long enough, I am no longer overwhelmed. My hypothesis is that the eye will adapt to fully qualified names everywhere in the same way.

                                            2. 1

                                              I came here to say just this: with a sufficiently smart editor (vim, ide, or otherwise) this problem goes away almost entirely.

                                        That said, I think there are some arguments to be made for always-qualified imports.
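
                                        For a concrete JavaScript (ES modules) rendering of the two styles being argued about here, using Node’s built-in path module:

                                        // Unqualified: shorter call sites, but the reader has to check the import
                                        // list (or lean on the editor) to learn where join comes from.
                                        import { join } from 'path';
                                        console.log(join('static', 'css'));

                                        // Fully qualified: every use names its module, at the cost of some noise.
                                        import * as path from 'path';
                                        console.log(path.join('static', 'css'));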

                                              1. 6

                                          I think it can be a cultural thing as well. I never enjoy typing datetime.datetime but don’t mind collections.namedtuple. itertools.ifilter is annoying though. Redundant words or information look and read badly.

                                                When the culture is to assume qualified imports, then the library will always be used to provide context, and that can be quite nice.

                                          When resolving a qualified name uses the same syntax as a method call, that can look bad quickly. Python very much suffers from this problem. Think of Clojure-style namespace syntax as an alternative.

                                              2. 1

                                                Does “Jump To” actually jump you to the import declaration or the function definition? I’ve never used an IDE. My guess is that an IDE would largely eliminate manual identification of the function’s origin. So that’s useful! But I’m not convinced that this would be faster than reading the module name inline in plain text. No keystroke or mouse click required to get the information. I guess the argument for using an IDE to solve this problem is something like the IDE places the information close at hand while also enabling a less verbose code style. That’s a reasonable argument. At some point the conversation becomes purely a debate about the pros and cons of IDEs. Then I would say that it’s nice to have code that doesn’t incur a dependency on a code editor.

                                                1. 2

                                            You can jump to the declaration in most IDEs (provided there is support for the language). In many you can also view the documentation for a symbol inline, so there is no need to go searching in most cases. I agree with you that this really just changes the debate to one about tooling. However, since many people (myself included) prefer the readability of unqualified imports, tooling support is important to at least think about. For example, I work in Dart a lot at work; the Dart community tends toward unqualified imports because, at least in part I think, pretty much everyone who writes Dart code uses either an IDE or an editor plugin with IDE-like features.

                                              1. 5

                                                The file is 649 lines because large files are encouraged.

                                                A 649-line file is not a large file… ?

                                                1. 2

                                                  The same project has several files that are thousands of lines long. I took a shorter example so it wouldn’t seem hyperbolic. The point is that this file fills more than a single screen so checking the import declaration means jumping to a new context.

                                                  1. 1

                                                    Depending on the situation it might be. Though considering that I’ve seen a fair share of 1KLoc and 10KLoc files, that doesn’t really seem too large to me either.

                                                  1. -4

                                                    The language is called Go :)

                                                    1. 10

                                                      I have found it useful to use “golang” when doing google searches.

                                                      1. 13

                                                      There are two languages called Go as far as I can tell :-) I thought using Golang is a nice way to differentiate.

                                                        1. 2

                                                          Rob disagrees :)

                                                          1. 9

                                                            You make a pedantic comment that ignores the common use of “Golang” to refer to the Go language. Your defense of this comment is the appeal to authority fallacy. None of this has been constructive, insightful, or even particularly correct.

                                                            1. 0

                                                              Your defense of this comment is the appeal to authority fallacy

                                                              Is Rob Pike not a trusted authority with a relevant opinion here?

                                                              1. 0

                                                                Rob Pike is an authority and his opinion is relevant. But if Rob Pike’s opinion is correct, it is not because he is Rob Pike. Thus, “because Rob Pike said so” is not a winning argument. The fallacy of the appeal to authority is not that authorities aren’t authorities. It’s just that what authorities say, like anything anyone says, must be proven.

                                                            2. 13

                                                              Rob is wrong.

                                                              1. 0

                                                                Yeah, some random internet commenter knows better what the name of the language is than its creator.

                                                                1. 6

                                                                  Creating something does not mean that one automatically has a good idea of what a good name, method or workflow is for other people.

                                                                  1. -3

                                                                    Creating something most certainly gives you the exclusive right of giving it a name, regardless of what someone else thinks about that name. Otherwise I’m just going to call you Martin Shkreli from now on.

                                                        1. -2

                                                          Here’s my issue: if you’re asking me for an estimate, you’re communicating that what I’m doing isn’t that important. If it were important, you’d get out of my way. A deadline is a resource limit; you’re saying, “This isn’t important enough to merit X, but if you can do it with <X, I guess you can go ahead”. If you ask me for an estimate, I have to guess your X (and slip under it if I want to continue the project, or exceed it if I want to do something else). If that seems self-serving or even dishonest, well let’s be honest about the dishonesty, at least here: estimates are bullshit anyway, so why not use the nonsense game for personal edge? Of course, if you’re the one being asked for an estimate, the system and odds are against you in nearly all ways, and you probably made some career-planning mistakes if you’re my age and still have to give estimates, but never mind that for now….

                                                          There are projects that are nice-to-have but not important and might be worth doing, but that aren’t worth very much and therefore should be given resource/time limits from on high. I just don’t want to work on those. If it’s not worth doing with an open deadline, then assign someone at a lower skill level, who can still learn something from low-grade work. This isn’t me being a prima donna; this is me being realistic and thinking about job security. If having it done cheaply is more important than having it done well, I can (and should) be replaced by someone else.

                                                          1. 3

                                                            Businesses regularly put resource limits on investments, I don’t see why software engineering salaries are exempt from this.

                                                            1. 0

                                                              I don’t see why software engineering salaries are exempt from this.

                                                              It might have something to do with the fact that the top 5% of us, at least, are smart enough that we ought to be calling the shots, rather than being pawns on someone else’s board.

                                                              1. 2

                                                                Unless you are literally on the board of a privately held company, you are a pawn on someone else’s board. This isn’t hopeless; it’s just being honest about where the actual final financial votes are cast.

                                                                How “smart” you are doesn’t mean you deserve to call any shots, just as owning the company doesn’t automatically mean someone deserves to either. Building relationships, managing expectations, cost analysis and collecting requirements are all part of making engineering estimates, and they are tools for you to exert influence over someone who has ownership/authority.

                                                            2. 2

                                                              What if all work is estimated? These inferences depend on selective estimation.

                                                              1. 1

                                                                I’ll disagree with you here a bit–I agree with your last paragraph’s approach, but I think you are leaving out a little bit.

                                                                It’s worth it to send overqualified engineers into certain projects exactly because they are more likely to know how to fix problems preemptively and because they are more likely to have a narrower distribution on the time taken to achieve the task. If you want something with a known problemspace done correctly and to a guaranteed standard and in a timely fashion, you shouldn’t send people who are still learning.

                                                                “This isn’t important enough to merit X, but if you can do it with <X, I guess you can go ahead”.

                                                                Unfortunately, this is a lot of business, right? Like, scheduling and organizing coverage and resources for projects often means that, say, a full rewrite would take too many engineers off of customer-facing work, but incremental cleanups are possible.

                                                                From the employee side, it is arbitrary, but there is at least a chance of method to the madness.

                                                              1. 1

                                                                Of the several Brainfuck implementations I have read, only one wasn’t purely interpreted. Some don’t use ASTs. Brainfuck is “popular enough” but you probably meant “popular enough and practical enough” :)

                                                                1. 2

                                                                  “You don’t need a blockchain” is the new “You need a block chain” article.

                                                                  1. 3

                                                                    I’ve lost track of the number of organizations and people I’ve talked out of a blockchain solution in the last ~6 years that I’ve seen Bitcoin as more than just a cryptocurrency.

                                                                    My entry-level question when someone tells me that they want to start their own cryptocurrency:

                                                                    Are you OK with people buying drugs with it?

                                                                    If the answer is No, then a cryptocurrency is not their solution. Not because people will buy drugs with it, but because if the creators can control who buys what with their currency, then it’s not a cryptocurrency.

                                                                  1. 12

                                                                    The Go project is absolutely fascinating to me.

                                                                     How they managed not to solve many of the hard problems of a language, its tooling, or production workflow, but still solve the right set of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

                                                                    I used Go professionally for ~2+ years, and so much of it was frustrating to me, but large swaths of our team found it largely pleasant.

                                                                    1. 12

                                                                       I’d guess there is a factor depending on what you want from a language. Sure, it doesn’t have generics and its versioning system leaves a lot to be wished for. But personally, if I have to write anything with networking and concurrency, usually my first choice is Go, because of its very nice standard library and a certain sense of being thought-through when it comes to concurrency/parallelism - at least so it appears to be when comparing it to other imperative languages like Java, C or Python. Another popular point is how the language, as compared to C-ish languages, doesn’t give you too much freedom when it comes to formatting – there isn’t a constant drive to use as few characters as possible (something I’m very prone to doing), or any debates like tabs vs. spaces, where to place the opening braces, etc. There’s really something relieving about this to me, that makes the language, as you put it, “pleasant” to use (even if you might not agree with it).

                                                                       And regarding the standard library, one thing I always find interesting is how far you can get by just using what’s already packaged in Go itself. Now I haven’t really worked on anything with more than 1500 LOC (which really isn’t much for Go), and most of the external packages I used were for the sake of convenience. Maybe this totally changes when you work in big teams or on big projects, but it is something I could understand people liking. Especially considering that the Go team has this Go 1.x compatibility promise, so that you don’t have to worry that much about versioning when it comes to the standard lib packages.

                                                                       I guess the worst mistake one can make is wanting to treat it like Haskell or Python, forcing a different paradigm onto it. Just as one might miss macros when one changes from C to Java, or currying when one switches from Haskell to Python, but learns to accept these things and think differently, so, I believe, one should approach Go, using its strengths, which it has, instead of lamenting its weaknesses (which undoubtedly exist too).

                                                                      1. 7

                                                                        I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice. You sometimes go to wrong paths following this, but I’d say that in general this is a winning strategy. Complexity can always be bolted on later, but removing it is much more difficult.

                                                                        The whole IT industry would be a happier place if it followed this, but seems to me that we usually do the exact opposite.

                                                                        1. 1

                                                                          I think their driving philosophy is that if you’re uncertain of something, always make the simpler choice.

                                                                          Nah - versioning & dependency management is not some new thing they couldn’t possibly understand until they waited 8 years. Same with generics.

                                                                           Whereas for generics I can understand a complexity argument for sure, versioning and dependency management are complexities everyone needed to deal with either way.

                                                                          1. 3

                                                                             If you understand the complexity argument for generics, then I think you could accept it for dependency management too. For example, Python, Ruby and JavaScript have a chaotic history in terms of the solutions they adopted for dependency management, and even nowadays the ecosystem is not fully stabilized. For example, in the JavaScript community, Facebook released yarn in October 2016 because the existing tooling was not adequate, and more and more developers have been adopting it since then. I would not say that dependency management is a fully solved problem.

                                                                            1. 1

                                                                              I would not say that dependency management is a fully solved problem.

                                                                              Yes it is, the answer is pinning all dependencies, including transitive dependencies. All this other stuff is just heuristics that end up failing later on and people end up pinning anyways.

                                                                              1. 1

                                                                                I agree about pinning. By the way, this is what vgo does. But what about the resolution algorithm used to add/upgrade/downgrade dependencies? Pinning doesn’t help with this. This is what makes Minimal Version Selection, the strategy adopted by vgo, original and interesting.

                                                                                1. 1

                                                                                  I’m not sure I understand what the selection algorithm is doing then. From my experience: you change the pin, run your tests, if it passes, you’re good, if not, you fix code or decide not to change the version. What is MVS doing for this process?

                                                                                  1. 1

                                                                                     When you upgrade a dependency that has transitive dependencies, then changing the pin of the upgraded dependency is not enough. Quite often, you also have to update the pins of the transitive dependencies, which can have an impact on the whole program. When your project is large, it can be difficult to do manually. The Minimal Version Selection algorithm offers a new solution to this problem. The algorithm selects the oldest allowed version, which eliminates the redundancy of having two different files (manifest and lock) that both specify which module versions to use.
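
                                                                                     A toy sketch of just that selection rule (not vgo’s actual implementation, which walks the transitive requirement graph; the module names and version numbers below are made up):

                                                                                     // Minimal Version Selection's core rule: every requirement is a minimum,
                                                                                     // and the build uses the oldest version that satisfies all of them,
                                                                                     // i.e. the maximum of the stated minimums for each module.
                                                                                     function selectVersions(requirements) {
                                                                                       // requirements: { module: [minimumVersion, ...] }, versions as plain numbers
                                                                                       const build = {};
                                                                                       for (const [mod, minimums] of Object.entries(requirements)) {
                                                                                         build[mod] = Math.max(...minimums);
                                                                                       }
                                                                                       return build;
                                                                                     }

                                                                                     // A needs D >= 1.2 and B needs D >= 1.4, so D 1.4 is selected,
                                                                                     // even if D 1.5 has already been released.
                                                                                     console.log(selectVersions({ D: [1.2, 1.4] })); // { D: 1.4 }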

                                                                                    1. 1

                                                                                      Unless it wasn’t clear in my original comment, when I say pin dependencies I am referring to pinning all dependencies, including transitive dependencies. So is MVS applied during build or is it a curation tool to help discover the correct pin?

                                                                                      1. 1

                                                                                        I’m not sure I understand your question. MVS is an algorithm that selects a version for each dependency in a project, according to a given set of constraints. The vgo tool runs the MVS algorithm before a build, when a dependency has been added/upgraded/downgraded/removed. If you have the time, I suggest you read Russ Cox article because it’s difficult to summarize in a comment ;-)

                                                                                        1. 1

                                                                                           I am saying that with pinned dependencies, no algorithm is needed at build time, as there is nothing to compute; every dependency version is known a priori.

                                                                                          1. 1

                                                                                            I agree with this.

                                                                        2. 4

                                                                          I had a similar experience with Elm. In my case, it seemed like some people weren’t in the habit of questioning the language or thinking critically about their experience. For example, debugging in Elm is very limited. Some people I worked with came to like the language less for this reason. Others simply discounted their need for better debugging. I guess this made the reality easier to accept. It seemed easiest for people whose identities were tied to the language, who identified as elm programmers or elm community members. Denying personal needs was an act of loyalty.

                                                                          1. 2

                                                                             How they managed not to solve many of the hard problems of a language, its tooling, or production workflow, but still solve the right set of them to win a huge amount of developer mindshare, is something I think we should get historians to look into.

                                                                            I think you’ll find they already have!

                                                                          1. 7

                                                                            This is a mess.

                                                                            • Much of the technical complexity of the web has been generated by web designers who refuse to understand and accept the constraints of the medium. Overhauling the design when the implementation becomes intolerably complex is only an option when you are the designer. This luxury is unavailable to many people who build websites.
                                                                            • Suggesting that CSS grid is somehow the reincarnation of table-based layout is astonishingly simple-minded. Yes, both enable grid-based design. CSS grid achieves this without corrupting the semantic quality of the document. They’re both solutions to the same problem. But there are obvious and significant differences between how they solve that problem. It’s hard to fathom how the author misses that point.
                                                                             • The fetishization of unminified code distribution is really bizarre. The notion that developers should ship uncompressed code so that other developers can read that code is bewildering. Developers should make technical choices that benefit the user. Code compression, by reducing the bandwidth and time required to load the webpage, is very easily understood as a choice for the user. The author seems to prioritize reliving a romanticized moment in his adolescence when he learned to build websites by reading the code of websites he visited. It’s hard not to feel contempt for someone who would prioritize nostalgia over the needs of someone trying to load a page from their phone over a poor connection so they can access essential information like a business address or phone number.
                                                                             • New information always appears more complex than old information when it requires updates to a mental model. This doesn’t mean that the updated model is objectively more complex. It might be more complex. It might not be more complex. The author offers no data that quantifies an increased complexity. What he does offer is a description of the distress felt by people who resist updating their mental model in response to new information. Whether or not his conclusions are correct, I find here more bias than observation.
                                                                            1. 8

                                                                              CSS grid achieves this without corrupting the semantic quality of the document.

                                                                               When was the last time you saw a page that follows semantic guidelines? Pages are so full of crap and dynamically generated tags that hope was lost a long time ago. Developers seem to have taken “don’t use tables” so far that they will put tabular data in floating divs. Are you kidding me?! Don’t even get me started about SPAs.

                                                                              The fetishization of unminified code distribution is really bizarre.

                                                                              The point is, I think, that the code should not require minifying and only contain the bare minimum to get the functionality required. The point is to have 1kbyte unminified JS instead of 800kbyte minified crap.

                                                                              1. 4

                                                                                New information always appears more complex than old information when it requires updates to a mental model.

                                                                                 I feel like you completely missed his point here. He isn’t just talking about how complex the new stuff is. He even said flexbox was significantly better and simpler to use than “float”. What he is resisting is the continual reinvention that goes on in webdev. A new build tool every week. A new flavor of framework every month. An entire book written about loading fonts on the web. Sometimes you legitimately need that new framework or a detailed font-loading library for your site. But frankly, even if you are a large company, you probably don’t need most of the new fad of the week that happens in web dev. Flexbox is probably still good enough for your needs. React is a genuine improvement for the state of SPA development. But 3-4 different build pipelines? No, you probably don’t need that.

                                                                                And while we are on the subject

                                                                                CSS grid achieves this without corrupting the semantic quality of the document.

                                                                                Nobody cares about the semantic quality of the document. It doesn’t really help you with anything. HTML is about presentation and it always has been. CSS allows you to modify the presentation based on what is presenting it. But you still can’t get away from the fact that how you lay things out in the html has an effect on the css you write. The semantic web has gone nowhere and it will continue to go nowhere because it’s built on a foundation that fundamentally doesn’t care about it. If we wanted semantic content we would have gone with xhtml and xslt. We didn’t because at heart html is about designing and presenting web pages not a semantic document.

                                                                                1. 3

                                                                                  Nobody cares about the semantic quality of the document.

                                                                                  Anybody who uses assistive technology cares about its semantic quality.

                                                                                   Anybody who chooses to use styles in Word documents understands why they’d want to write documents with good semantic quality.

                                                                                  You still can’t get away from the fact that how you lay things out in the html has an effect on the css you write.

                                                                                  That’s… the opposite of the point.

                                                                                  All of the cycles in web design – first using CSS at all (instead of tables in the HTML) and then making CSS progressively more powerful – have been about the opposite:

                                                                                  How you lay things out on the screen should not determine how the HTML is written.

                                                                                  Of course the CSS depends on the HTML, as you say. The presentation code depends on the content! But the content should not depend on the presentation code. That’s the direction CSS has been headed. And with CSS Grid, we’re very close to the point where content does not have to have a certain structure in order to permit a desired presentation.

                                                                                  And that’s my main issue with the essay: it presents this forward evolution in CSS as cyclical.

                                                                                   (The other issue is that the experience that compelled the author to write the article in the first place – the frenetic wheel reinvention that has taken hold of the Javascript world – is wholly separate from the phases of CSS. As far as that is concerned, I agree with him: a lot of that reinvention is cyclical and essentially fashion-driven, it is optional for anyone who isn’t planning on pushing around megabytes of Javascript, and anyone who is planning on doing that ought to pause and reconsider their plan.)

                                                                                  If we wanted semantic content we would have gone with xhtml and xslt.

                                                                                  Uh… what? XHTML is absolutely no different from HTML in terms of semantics and XSLT is completely orthogonal. XML is syntax, not semantics. It’s an implementation detail at most.

                                                                                  1. 3

                                                                                    If you are a building websites, please do more research and reconsider your attitude about semantic markup. Semantic markup is important for accessibility technologies like screen readers. RSS readers and search indexes also benefit from semantic markup. In short, there are clear and easily understood necessities for the semantic web. People do care about it. All front end developers I work with review the semantic quality of a document during code reviews and the reason they care is because it has a real impact on the user.

                                                                                    1. 2

                                                                                     Having built and relied on a lot of semantic web (lowercase) tech, this is just untrue. Yes, many devs don’t care to use even basic semantics (h1/section instead of div/div), but that doesn’t mean there isn’t enough good stuff out there to be useful, or that you can’t convince them to fix something for a purpose.

                                                                                      1. 1

                                                                                       I don’t know what you worked on, but I’m guessing it was niche. If not, then you spent a lot of time dealing with sites that most emphatically didn’t care about the semantic web. The fact is that a few sites caring doesn’t mean the industry cares. The majority don’t care. They just need the web page to look just so on both desktop and mobile. Everything else is secondary.

                                                                                  1. 57

                                                                                    Meaningful is…overrated, perhaps.

                                                                                    A survey of last four jobs (not counting contracting and consulting gigs, because I think the mindset is very different)

                                                                                    • Engineer at small CAD software startup, 50K/yr, working on AEC design and project management software comfortably 10 years ahead of whatever Autodesk and others were offering at the time. Was exciting and felt very important, turned out not to matter.
                                                                                    • Cofounder at productivity startup, no income, felt tremendously important and exciting. We bootstrapped and ran out of cash, and even though the problems were exciting they weren’t super important. Felt meaningful because it was our baby, and because we’d used shitty tools before. We imploded after running out of runway, very bad time in life, stress and burnout.
                                                                                    • Engineering lead at medical startup, 60K/yr, working on health tech comfortably 20 years ahead of the curve of Epic, Cerner, Allscripts, a bunch of other folks. Literally saving babies, saving lives. I found the work very interesting and meaningful, but the internal and external politics of the company and marketplace soured me and burned me out after two years.
                                                                                    • Senior engineer at a packaging company, 120K/yr, working on better packaging. The importance of our product is not large, but hey, everybody needs it. Probably the best job I’ve ever had after DJing in highschool. Great team, fun tech, straightforward problem space.

                                                                                    The “meaningful” stuff that happened in the rest of life:

                                                                                    • 3 relationships with wonderful partners, lots of other dating with great folks
                                                                                    • rather broken family starting to knit together slowly, first of a new generation of socks has been brought into the world
                                                                                    • exciting and fun contracting gigs with friends
                                                                                    • two papers coauthored in robotics with some pals in academia on a whim
                                                                                    • some successful hackathons
                                                                                    • interesting reflections on online communities and myself
                                                                                    • weddings of close friends
                                                                                    • a lot of really rewarding personal technical growth through side projects
                                                                                    • a decent amount of teaching, mentoring, and community involvement in technology and entrepreneurship
                                                                                    • various other things

                                                                                    I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

                                                                                    Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included). The work, though, can free up resources for you to go and do things locally to help. Meaningful things, like:

• Paying for friends' healthcare
                                                                                    • Buying extra tech gear and donating the balance to friends’ siblings or local teaching organizations
                                                                                    • Giving extra food or meals to local homeless
                                                                                    • Patronizing local shops and artisans to help them stay in business
                                                                                    • Supporting local artists by going to their shows or buying their art
                                                                                    • Paying taxes

                                                                                    Those are the things I find meaningful…my job is just a way of giving me fuckaround money while I pursue them.

                                                                                    1. 14

                                                                                      I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

Same (in the sense that I have the same mindset as you, but I'm not sure there is anything right or wrong about it). I sometimes think it is counter-culture to say this out loud. But as far as I can tell, despite what anyone says, most people's actions seem to be consistent with this mindset.

                                                                                      There was an interesting House episode on this phenomenon. A patient seemingly believed and acted as if locality wasn’t significant. He valued his own child about the same as any other child (for example).

                                                                                      1. 9

                                                                                        I pretty much agree with this. Very few people have the privilege of making their living doing something “meaningful” because we live within a system where financial gains do not correspond to “meaningful” productivity. That’s not to say you shouldn’t seek out jobs that are more helpful to the world at large, but not having one of those rare jobs shouldn’t be too discouraging.

                                                                                        1. 4

                                                                                          Meaningful is…overrated, perhaps.

                                                                                          I think specifically the reason I asked is because I find it so thoroughly dissatisfying to be doing truly meaningless work. It would be nice to be in a situation where I wake up and don’t wonder if the work I spend 1/3rd of my life on is contributing to people’s well-being in the world or actively harming them.

                                                                                          Even ignoring “the world,” it would be nice to optimize for the kind of fulfillment I get out of automating the worst parts of my wife’s job, mentoring people in tech, or the foundational tech that @cflewis talks about here.

                                                                                          Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

                                                                                          I think about this a lot.

                                                                                          1. 10

In general, I find capitalism, and being trapped inside of it, to be antithetical to meaningful work, in the sense that you'll rarely win at capitalism if you want to do good for the world, no matter what portion of the world you're interested in helping.

A solution I found for this is to reach a point where, financially, I don't have to work anymore to maintain my standard of living. It's a project in the making, but essentially, passive income needs to surpass recurring costs and you're pretty much good to go. To achieve that, you can increase the passive income, diminish the recurring costs, or both (which you probably want to be doing, and which I want to be doing, anyway).

                                                                                            As your passive income increases, you (potentially) get to diminish your working hours until you don’t have to do it anymore (or you use all the extra money to make that happen faster). Freedom is far away. Between now and then, there won’t be a lot of “meaningful” work going on, at least, not software related.

                                                                                            [Edit: whoever marked me as incorrect, would you mind telling me where? I’m genuinely interested in this; I thought I was careful in exposing this in a very “this is an opinion” voice, but if my judgement is fundamentally flawed somehow, knowing how and why will help me correct it. Thanks.]

                                                                                            1. 8

Agree re. "get out of capitalism any way you can", but I don't agree with passive income. One aspect of capitalism is maximum extraction for minimum effort, and this is what passive income is. If you plan to consciously bleed the old system dry while you do something which is better and compensates, passive income would be reasonable; if you want to create social structures that are as healthy as possible for as many people as possible, passive income is a hypocrisy.

                                                                                              I prefer getting as much resource (social capital, extreme low cost of living) as fast as possible so you can exit capitalism as quickly as possible.

                                                                                              1. 1

                                                                                                Are you talking about the difference between, say, rental income (passive income) and owning equities (stockpile)? Or do you mean just having a lot of cash?

                                                                                                1. 1

Yes. If you want to live outside capitalism, you need assets that are as conceptually distant from capitalism as possible, with the fewest dependencies on it, while still supporting your wellbeing. Cash is good. Social capital, access to land, and the resources to sustain yourself without needing cash would be lovely, but that's pretty hard right now while the nation state and capitalism are hard to separate.

                                                                                                  1. 1

                                                                                                    Do you ever worry about 70’s (or worse) style inflation eroding the value of cash? In this day and age, you can’t even live off the land without money for property taxes.

                                                                                          2. 3

                                                                                            Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

                                                                                            This 100%. A for-profit company can’t make decisions that benefit humanity as their entire goal is to take more than they give (AKA profit).

                                                                                            1. 2

Sure they can. They just have to charge for a beneficial product at a rate higher than the cost. Food, utilities, housing, entertainment products, safety products… these come to mind.

From there, a for-profit company selling a wasteful or damaging product might still invest profits into good products/services or just charity. So they can be beneficial as well, just more selectively.

                                                                                            2. 2

I think you're hitting at a similar truth that I was poking at in my response, but from perhaps a different angle. I would bet my bottom dollar that you found meaning in the jobs you said you most enjoyed, but perhaps not "for humanity" as the OP indicated.

                                                                                              1. 1

                                                                                                What is the exact meaning of “run arbitrage on efficiencies of scale”? I like the phrase and want to make sure I understand it correctly.

                                                                                                1. 5

                                                                                                  So, arbitrage is “taking advantage of the price difference in two or more markets”.

                                                                                                  As technologists, we’re in the business of efficiency, and more importantly, efficiency of scale. Given how annoying it is to write software, and how software is duplicated effortlessly (mostly, sorta, if your ansible scripts are good or if you can pay the Dread Pirate Bezos for AWS), we find that our talents yield the best result when applied to large-scale problems.

                                                                                                  That being the case, our work naturally tends towards creating things that are used to help create vast price differences by way of reducing the costs of operating at scale. The difference between, for example, having a loose federation of call centers and taxis versus having a phone app that contractors use. Or, the difference between having to place classified ads in multiple papers with a phone call and a mailed check versus having a site where people just put up ads in the appropriate section and email servers with autogenerated forwarding rules handle most of the rest.

                                                                                                  The systems we build, almost by definition, are required to:

                                                                                                  • remove as many humans from the equation as possible (along with their jobs)
                                                                                                  • encode specialist knowledge into expert systems and self-tuning intelligences, none of which are humans
                                                                                                  • reduce variety and special-cases in economic and creative transactions
• recast human labor, where it still exists, into a simple unskilled transactional model with interchangeable parties (every laborer is interchangeable, every task is as simple as possible because the expertise is in the systems)
                                                                                                  • pass on the savings at scale to the people who pay us (not even the shareholding public, as companies are staying private longer)

                                                                                                  It is almost unthinkable that anything we do is going to benefit humanity as a whole on a long-enough timescale–at least, given the last requirement.

                                                                                                2. 1

                                                                                                  Care about your tribe, but also care about other tribes. Don’t get so into this small scope thinking that you can’t see outside of it. Otherwise your tribe will lack the social connections to survive.

                                                                                                  Edit: it’s likely my mental frame is tainted by being angry at LibertarianLlama, so please take this comment as generously as possible :).

                                                                                                  1. 1

                                                                                                    Speaking of that, is there any democratic process that we could go through such that someone gets banned from the community? Also what are the limits of discussion in this community?

                                                                                                1. 4

                                                                                                  How are Java exceptions monadic? The examples contrast two styles of exception handling. I don’t see a clear connection to the monadic laws in any of the examples. I don’t know Java, so I might be overlooking something. I do know JavaScript and I’m also confused by that section. A Promise is said to be a monad because Promise.resolve and Promise.prototype.then are equivalent to unit and bind in the very specific sense that these operators can be used to satisfy the monadic laws of identity and associativity. The monadic nature of a Promise is not related to control flow or asynchronous code execution, except perhaps incidentally. I hope I’m not being pedantic here. What the article says about Promises is much more essential for practitioners than the data structure’s theoretical context. But if the topic is Promise as monad, it seems like the relevant information is missing.
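For readers who want to see what that mapping looks like concretely, here is a rough sketch in TypeScript. It reads Promise.resolve as unit and .then as bind, and checks the identity and associativity laws informally for plain, non-thenable values; the helper names (unit, f, g, check) are mine, purely for illustration.

    // Sketch: reading Promise.resolve as `unit` and .then as `bind`.
    // The checks are informal and only cover plain (non-thenable) values.
    const unit = <A>(a: A): Promise<A> => Promise.resolve(a);

    const f = (n: number): Promise<string> => Promise.resolve(`n = ${n}`);
    const g = (s: string): Promise<number> => Promise.resolve(s.length);

    async function check(): Promise<void> {
      // Left identity: unit(a).then(f) should equal f(a)
      console.log(await unit(42).then(f), await f(42));

      // Right identity: m.then(unit) should equal m
      const m = unit(42);
      console.log(await m.then(unit), await m);

      // Associativity: m.then(f).then(g) should equal m.then(x => f(x).then(g))
      console.log(await m.then(f).then(g), await m.then((x) => f(x).then(g)));
    }

    check();

Each pair of logged values should match, which is the narrow sense in which resolve/then line up with unit/bind, independent of anything about control flow or asynchrony.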

                                                                                                  1. 2

                                                                                                    I can see what the author’s getting at here - RemoteData doesn’t capture more complex use cases, but I don’t think it’s intended to. RemoteData models a single loading event. Dropping it completely for more complex use cases strikes me a bit as throwing the baby out with the bathwater. I suspect a cleaner option would be to compose RemoteData with additional state and data structures to build up to the use cases he describes.

                                                                                                    For existing data and a refresh request, that might look something like:

type RefreshableData e a
    = Refreshable a (RemoteData e a)
                                                                                                    

                                                                                                    For a large number of requests, you could do something like:

                                                                                                    type alias RequestGroup e a = List (RemoteData e a)
                                                                                                    

                                                                                                    And it becomes pretty easy to derive the states from the list:

                                                                                                    {- This can be optimized, but if you have enough data
                                                                                                        on the page for it to be an issue, you probably have bigger UX problems -}
                                                                                                    
                                                                                                    isPartialLoading requestGroup =
                                                                                                        (List.any RemoteData.isLoading requestGroup)
                                                                                                            && (not (List.all RemoteData.isLoading requestGroup)) 
                                                                                                    

Of course, in these examples, the states aren't represented as union types, so you lose some compiler checking that you've handled all states. That said, I've worked on some pretty complex interfaces, and I have never needed or wanted something that would validate that we had code to handle all of:

                                                                                                    • empty, general error, and request pending
                                                                                                    • empty, general error, and request pending for a subset of the data
                                                                                                    • empty, error for a subset of the data, and request pending
• empty, error for a subset of the data, and request pending for a subset of the data
                                                                                                    • data cached and general error
                                                                                                    • data cached and error for a subset of the data
                                                                                                    • data cached and request pending
                                                                                                    • data cached and request pending for a subset of the data
                                                                                                    • data cached, general error, and request pending
                                                                                                    • data cached, general error, and request pending for a subset of the data
                                                                                                    • data cached, error for a subset of the data, and request pending
                                                                                                    • data cached, error for a subset of the data, and request pending for a subset of the data

That said, if you really wanted it, you could put your composition of RemoteData with additional state into its own module, make it an opaque type, and enforce correct transitions between states by limiting the exposed API.

                                                                                                    I think all of this would be clearer with a specific use case in mind. The exercise in the article strikes me as a case of premature generalization. It seems like it’s trying to solve all possible problems rather than anything specific.

                                                                                                    I also have questions about what kind of cache is being referenced in the article, as I have some fairly strong opinions about caching data in client-side applications. (TL;DR: Don’t, the browser can already do this for you.)

                                                                                                    1. 3

                                                                                                      I also have questions about what kind of cache is being referenced in the article, as I have some fairly strong opinions about caching data in client-side applications. (TL;DR: Don’t, the browser can already do this for you.)

                                                                                                      I would love to hear more about this, because two problems have me stuck with client-side JS data caches in my apps. And I love deleting code.

                                                                                                      1. Embedded documents. Every app I’ve worked on denormalizes the data we fetch to cut down on HTTP requests. Is it cheap enough with HTTP/2 to send lots and lots of requests? Even then, it would mean a bunch of sequential round-trips for each child document that depends on its parent.

                                                                                                      2. Consistency. If two parts of the app load the same data at different times, we can get different responses, and have weirdness in the UI. With a JS cache, when we update a document, we can have every dependent piece of UI re-render and be consistent.

                                                                                                      1. 3

                                                                                                        You should take this all with a big grain of salt, because I have not had the bandwidth to implement most of these ideas in practice, due to the usual constraints around priorities and limited time. I’ve just been increasingly bothered by the complexity of implementing caches in the client, or alternatively the ugly behavior that results when they’re implemented naively, and on reflection, I think it’s mostly unnecessary. I have probably missed some corner cases and I suspect there are many apps where some amount of specific, targeted caching might still be useful for a subset of APIs or pages.

                                                                                                        With all of that in mind, I will say that #1 is probably the best reason I’ve seen for having a client side cache. I think in that case it’s worth looking at usage patterns to be sure it’s really providing benefit. If the individual requests your app is making in between big de-normalized requests don’t overlap much with the de-normalized data, the client side cache isn’t going to buy you much, although neither is the browser cache. Or if you’re always making large de-normalized requests, you’re still probably not getting a caching win, unless you have a way of structuring those requests to specify what you already have data on.

I think there's a lot of promise with HTTP/2. The single TCP connection is nice on its own, but there's also the potential to do interesting things like make a request for a de-normalized structure that only contains the relationships between resources, and then have the server push the actual data from the resources individually. That way they're cached individually, and the browser will actually cancel the push for resources it has already cached. Running some experiments with that is somewhere medium-high on my TODO list.

#2 is tricky. If your data doesn't change often, it's not as big of a deal, or if you rarely or never show the same data twice on the same page. One thing you can do if it's still a problem after taking both of those into consideration is to track ongoing requests at the API layer. If you're using something like promises, that means when a request comes in while another is still outstanding, you should be able to return the promise from the first request to the second caller, and just share the API call. If the first request has completed already, the browser should have the data in its cache (assuming the data has a time-based cache rather than something like etags).
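A minimal sketch of that in-flight sharing, in TypeScript; the fetchShared name and the Map-based registry are mine, just to illustrate the idea rather than any particular library:

    // Share in-flight requests: callers asking for the same URL while a fetch
    // is outstanding get the same promise instead of a second network request.
    const inFlight = new Map<string, Promise<unknown>>();

    function fetchShared<T>(url: string): Promise<T> {
      const pending = inFlight.get(url);
      if (pending) {
        return pending as Promise<T>;
      }

      const request = fetch(url)
        .then((res) => res.json() as Promise<T>)
        .finally(() => {
          // Once settled, later callers fall through to the browser's HTTP cache.
          inFlight.delete(url);
        });

      inFlight.set(url, request);
      return request;
    }

    // Both calls share one network request if the first hasn't settled yet.
    fetchShared<{ id: number }>("/api/items/1");
    fetchShared<{ id: number }>("/api/items/1");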

                                                                                                        1. 2

                                                                                                          This is awesome, thank you so much for taking the time to share your thoughts @mcheely!

                                                                                                          Solving the round-trip part of #1 with HTTP/2 server push seems like it could be so damn magical and cool. In my most common case of hard-to-cache embedding – “load a list of items” and then “load a detailed view of one item” – it seems like a drop-in solution.

                                                                                                          For #2, I actually hadn’t thought about races! I was thinking more about the case where data rendered on one part of the screen becomes stale, but there’s no way for the browser cache to tell that part of the UI to re-render, so it stays stale. I guess, since we hope it to be cached, maybe I just need to adjust my thinking to re-render more things more often. Cheap most of the time, since it’s cached, and expensive when it should be expensive anyway. Huh.

                                                                                                          (solving the races by having a client API layer managing promises across the whole app starts to feel like a dangerously tempting place to add new features like… caching :P )

                                                                                                          I think in that case it’s worth looking at usage patterns

                                                                                                          I think it all comes back to this, for me. Building an app usually feels like a process of discovery to me; top-down plans don’t survive long. The usage patterns can be pretty unstable, and it can get painful surprisingly quickly to be completely naïve about loading data.

                                                                                                          …It’s appealing to imagine that a client-side cache, made hopefully robust through explicit modeling of all the possible states of each piece of data, can provide a 90% solution in a general way. Coupled with things like graphQL or PostgREST, you just build stuff and it works reasonably well, for free-ish.

                                                                                                      2. 2

                                                                                                        Thanks for reading and thanks for the feedback.

                                                                                                        I think all of this would be clearer with a specific use case in mind. The exercise in the article strikes me as a case of premature generalization. It seems like it’s trying to solve all possible problems rather than anything specific.

                                                                                                        As I say in the post, “States, events, and transitions should reflect the needs of the application”. I listed those states as an example because the last four applications I’ve built have needed all of these states. I tried to use RemoteData for two of those applications and ran into the problems I describe in the post. These apps do not strike me as complex and three years of dealing with these states led me to assume they were common. One example is an app that lists financial transactions. The app periodically refreshes the list. A loading icon is shown next to the list header during each refresh. Error messages are shown above the list if a refresh fails. That’s half of the states in that list already. On top of that, the user can make inline edits to the transaction details. When the updates are committed, a loading icon displays next to the transaction title. Errors related to the update (e.g. network failure) are displayed under the transaction title. That’s all of the states in that list. But the point of the article is not to dictate states to the reader. Again, “States, events, and transitions should reflect the needs of the application”. The point is that you cannot oversimplify the problem just because the result of that oversimplification looks nice in a blog post or a tweet.

                                                                                                        RemoteData models a single loading event. Dropping it completely for more complex use cases strikes me a bit as throwing the baby out with the bathwater.

                                                                                                        I agree that these states map closely to the HTTP request/response lifecycle. As I said in the post, “RemoteData models a stateful cache of data in terms of a stateless transfer of data, REST.” The original RemoteData post clearly states that the pattern is intended to model the cache, not the request/response lifecycle. That is why that post starts by evaluating existing patterns for modeling cached data and then offers RemoteData as an alternative. Notice that these posts place RemoteData in the model and that the view functions consume RemoteData - cache state, not request state.

                                                                                                      1. 2

                                                                                                        As someone who is just starting to dive deep into operating systems, especially Unix, I’m grateful for all the writing you’ve done about the Oil project.

                                                                                                        Oil is taking shell seriously as a programming language, rather than treating it as a text-based UI that can be abused to write programs.

One question in response to this statement is at what point the shell language becomes just another programming language with an operating system interface. This question seems especially important when the Oil shell language targets users who are writing hundreds of lines of shell script. If someone is writing an entire program in shell script, what is the advantage of using shell script over a programming language? You seem to anticipate this question by comparing the Oil shell language to Ruby and Python:

                                                                                                        …Python and Ruby aren’t good shell replacements in general. Shell is a domain-specific language for dealing with concurrent processes and the file system. But Python and Ruby have too much abstraction over these concepts, sometimes in the name of portability (e.g. to Windows). They hide what’s really going on.

So maybe these are good reasons (not sure if they are or aren't) why Ruby and Python scripts aren't clearly better than shell scripts. You also provide a mix of reasons why shell is better than Perl. For example: "Perl has been around for more than 30 years, and hasn't replaced shell. It hasn't replaced sed and awk either."

                                                                                                        But again, it doesn’t seem to clearly answer why the domain language for manually interacting with the operating system should be the same language used to write complex scripts that interact with the operating system. Making a language that is capable of both should provide a clear advantage to the user. But it’s not clear that there is an advantage. Why wouldn’t it be better to provide two languages: one that is optimized for simple use cases and another that is optimized for complex use cases? And why wouldn’t the language for complex use cases be C or Rust?

                                                                                                        1. 3

                                                                                                          My view is that the most important division between a shell language and a programming language is what each is optimized for in terms of syntax (and semantics). A shell language is optimized for running external programs, while a programming language is generally optimized for evaluating expressions. This leads directly to a number of things, like what unquoted words mean in the most straightforward context; in a fluid programming language, you want them to stand for variables, while in a shell language they’re string arguments to programs.

                                                                                                          With sufficient work you could probably come up with a language that made these decisions on a contextual basis (so that ‘a = …’ triggered expression context, while ‘a b c d’ triggered program context or something like that), but existing programming languages aren’t structured that way and there are still somewhat thorny issues (for example, how you handle if).
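To make that concrete, here is a toy sketch of the contextual dispatch idea in TypeScript; it is purely illustrative (the classify name and the regex are mine) and not how any existing shell actually decides this:

    // Toy contextual dispatch: an assignment-looking line becomes an expression,
    // anything else is a command whose words are string arguments.
    type Parsed =
      | { kind: "expression"; name: string; expr: string }
      | { kind: "command"; argv: string[] };

    function classify(line: string): Parsed {
      const assignment = line.match(/^\s*(\w+)\s*=\s*(.+)$/);
      if (assignment) {
        return { kind: "expression", name: assignment[1], expr: assignment[2] };
      }
      return { kind: "command", argv: line.trim().split(/\s+/) };
    }

    console.log(classify("a = 1 + 2")); // expression context
    console.log(classify("a b c d"));   // program context: run `a` with args b, c, d

Even this toy version hints at the thorny cases mentioned above, such as deciding which context a bare `if` line should live in.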

                                                                                                          Shell languages tend to wind up closely related to shells (if not the same) because shells are also obviously focused on running external programs over evaluating expressions. And IMHO shells grow language features partly because people wind up wanting to do more complex things both interactively and in their dotfiles.

                                                                                                          (In this model Perl is mostly a programming language, not a shell language.)

                                                                                                          1. 1

                                                                                                            Thanks, glad you like the blog.

                                                                                                            So maybe these are good reasons (not sure if they are or aren’t) why Ruby and Python scripts aren’t clearly better than shell scripts.

                                                                                                            Well, if you know Python, I would suggest reading the linked article about replacing shell with Python and see if you come to the same conclusion. I think among people who know both bash and Python (not just Python), the idea that bash is better for scripting the OS is universal. Consider that every Linux distro uses a ton of shell/bash, and not much Python (below a certain level of the package dependency graph).

                                                                                                            The main issue is that people don’t want to learn bash, which I don’t blame them for. I don’t want to learn (any more) Perl, because Python does everything that Perl does, and Perl looks ugly. However, Python doesn’t do everything that bash does.

                                                                                                            But again, it doesn’t seem to clearly answer why the domain language for manually interacting with the operating system should be the same language used to write complex scripts that interact with the operating system.

                                                                                                            There’s an easy answer to that: because bash is already both languages, and OSH / Oil aim to replace bash.

                                                                                                            Also, the idea of a REPL is old and not limited to shell. It’s nice to build your programs from snippets that you’ve already tested. Moving them to another language wouldn’t really make sense.

                                                                                                          1. 4

                                                                                                            It’s strange and disappointing that this post does not reflect the nuance and mix of experience documented on a related Lobsters thread created by the post’s author.

                                                                                                            1. 6

Seems like he's been exploring that space quite a bit more these days:

                                                                                                              I reached a dark night of the soul with regard to software and technology. There were moments when I looked around and realized that my total contribution to humanity, by working for an increasingly maleficent industry, might be negative. The 21st century’s American theatre has featured the dismantling of the middle class, and I can’t say I had nothing to do with it.

I've worked on many features and enhancements that have caused back office people to lose their jobs. One project was "online return" functionality for a medium-sized online retailer ($300M in its heyday). Five people worked returns, then zero. It's not a good feeling.

                                                                                                              1. 5

                                                                                                                I’ve written software that made friends of mine redundant, and it’s definitely a terrible feeling.

                                                                                                                I feel like there’s an important distinction between

                                                                                                                • Creating productivity improvements (which can result in redundancies but could also increase the total amount of work available) - eg designing electric drills, vs
                                                                                                                • Enabling impersonal mistreatment (of a kind that wouldn’t happen personally) - eg uber automatically routing work away from underperforming contractors instead of employing/training staff
                                                                                                                1. 2

There's a neat game where you play the villain, a paperclip maximizer, and it captures how I feel about working in the tech industry.

My experience has been invaluable. I'm building an AI to test some "final" design changes to a card game, Ambition; I couldn't do it if I hadn't been a programmer for 10 years. Programming is a great skill set to have, and it really disciplines the mind in a way that's opposite to what happens to most people in their 20s and 30s (the imprecision of thought that is typical of Corporate America infects them, and they lose their sharpness). I also couldn't write Farisa (a mage/witch heroine in a world with a complex magic system) without experience of other-than-neurotypicality, nor could I write my first-book villains (in a steampunk dystopia, the Pinkertons win and gradually evolve into Nazis) if I didn't have painful experience with what people are when the stakes are high enough.

                                                                                                                  All of that said, I look at what I’ve accomplished to date, and I think I probably come up just barely on the right size of zero. It’s really unsettling. Like anyone else, I could die tomorrow. Corporate America only persists because people forget their own mortality– it would become a ghost town within hours if people fully realized that they, some day, will leave this world for the utterly unknowable– but also because they lose all sense of moral agency. Fuck everything about that.