1. 5

    I downvoted this because Quora won’t let me read the other answers without authenticating.

    1. 4

      I hate Quora as much as the next guy. I would be happy to post the answer as a standalone blog post and link to that—although, if the problem is “you are only able to read my answer and no one else’s,” I’m not sure if that solves the problem. :P

      1. 1

        It’s strange; sometimes they do it, sometimes they don’t.

        1. 1

          What reason did you select? There’s no option for paywall or registration required, which might be good additions.

          1. 2

            How about “inaccessible”? It could be used for paywalls as well as a site going down.

            I visited the link just now on my phone and Quora showed the entire page without obfuscation, so I removed my downvote. Though if other users have this problem while reading Quora URLs, maybe their submission here should be discouraged.

            1. 1

              “Inaccessible” is pretty good, though sites go down for assorted reasons not intrinsic to any site policy. I’d hate to see a submission downvoted because the target site wasn’t running memcache or something.

        1. 11

          I flagged this as spam, in the sense that this is blog spam that adds nothing to the original article. Please let me know if I was incorrect in doing this.

          I find these links more helpful:

          1. 2

            Completely agree.

            1. 1

              I am not sure why folks are saying that the claims are bullshit.

              The dude who wrote the second post you linked to left LinkedIn in 2009. He was on the team that built the Rails system that was replaced with Node long after he left. Choice quote from his blog post: “I don’t think there is any question that v8 is freakin’ fast. If node.js blocked on I/O, however, this fact would have been almost completely irrelevant.”

              That said, I agree with you that the post was blogspam, and thanks for posting those links!

            1. 1

              This inspired me to write one in Go that did what I wanted (specify number of pomodoros, break time, and pomodoro time).

              1. 2

                This link is 404'ing for me.

                1. 1

                  Whoops – I forgot that I ended up renaming the repository.

                  1. 3

                    You accidentally linked to your old repository again, I think. Here is the current one.

              1. 1

                I work with some pretty JavaScript-heavy apps, and I’ve never really had a problem managing dependencies.

                Maybe if jQuery were broken into smaller modular parts à la Ender, then it would make sense.

                1. 1

                  I think it’s a little bit of a chicken-and-egg problem. As a framework author, I’d love to be able to provide Ember.js in a more modular format to our users—but it’s such a pain in the neck right now, we basically have to distribute it as one file. If it was trivial for people to assemble different packages, we’d be happy to support it. In fact, we specifically architected Ember.js in this way (just look inside our packages directory); we’re just waiting for a sufficiently sophisticated package manager to appear.

                  1. 1

                    Do you consider Bower to be a sufficiently sophisticated package manager?

                    1. 2

                      No, I don’t think it is, and I’ve had several meetings with Alex and Jacob trying to convince them of it. In particular, I don’t think using git as a mechanism for package management is anywhere close to a good idea.

                      You really want the ability to query a central repository for a changeset, instead of what Bower does, which is query GitHub once for every package and its dependencies. After the rubygems.org server made this change, installing gems went from an annoyance to a process you barely notice anymore.
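
                      A toy sketch of why that round-trip pattern matters (the registry contents and package names below are made up, and this is not Bower’s or RubyGems’ actual code): resolving a dependency tree package-by-package costs one network query per package, while a central index can answer the whole graph in a single query.

```java
import java.util.*;

// Hypothetical sketch: counting round-trips when resolving a dependency
// tree package-by-package versus fetching one central index.
public class ResolverSketch {
    static final Map<String, List<String>> REGISTRY = Map.of(
        "app",       List.of("framework", "http"),
        "framework", List.of("dom", "events"),
        "http",      List.of("events"),
        "dom",       List.of(),
        "events",    List.of());

    static int fetches = 0;

    // Bower-style: one remote query per package encountered.
    static Set<String> resolvePerPackage(String pkg, Set<String> seen) {
        if (!seen.add(pkg)) return seen;
        fetches++;                          // simulated network round-trip
        for (String dep : REGISTRY.get(pkg)) resolvePerPackage(dep, seen);
        return seen;
    }

    public static void main(String[] args) {
        Set<String> resolved = resolvePerPackage("app", new HashSet<>());
        System.out.println("per-package fetches: " + fetches);
        // Central-index style: the whole dependency graph arrives at once.
        System.out.println("central-index fetches: 1");
        System.out.println("packages resolved: " + resolved.size());
    }
}
```

                      Five packages cost five round-trips in the per-package model; the latency gap only widens as dependency trees deepen.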

                      Bower was built to serve very specific needs inside of Twitter, and while I think it’s commendable that they’re releasing it as open source, I don’t think it’s the comprehensive solution that the browser JavaScript community needs, and I’m afraid it will steal oxygen from other projects that might have more ambitious goals. In the meantime, I’m hoping that I can convince someone that JS application developers need something closer to the RubyGems or NPM end of the spectrum.

                1. 2

                  I always get lost in frameworks… I find it takes me longer to learn the ins and outs of a framework than it does to just write my own code to handle things.

                  1. 10

                    I often use a GC analogy when people make comments like this.

                    Before garbage collection, excellent developers got extremely good at manual memory management. Good libraries, frameworks and applications had few if any memory leaks, which distinguished them from libraries, frameworks and applications written by lazier or less competent developers.

                    The first garbage collectors weren’t very good, so it was reasonable for those developers who had honed their memory management skills to pooh-pooh those tools (and the languages that came with them) in favor of direct control over the environment (and smooth performance) by continuing to leverage those skills. Lazier and less competent developers could use the new garbage collectors to eliminate the most egregious leaks they were introducing, but they still couldn’t touch the performance of the hand-coders.

                    Over time, garbage collectors have improved significantly, giving them acceptable performance (CPU, memory, pauses) for the vast majority of cases. Over time, most of the people who rejected early garbage collectors have come around, freeing mental cycles for other domain-specific problems without giving up very much. Also, as garbage collectors improve, the number of developers who can out-perform a reasonably-implemented GC by hand shrinks.

                    This sort of pattern is relatively common with the introduction of any higher-level abstraction. When jQuery first came out, the DOM Scripting heroes of 2004 mostly stuck with their tried-and-true hand-coded methods. As jQuery itself got better, most of the hold-outs found that they would rather surrender to jQuery and focus their energies on building applications.

                    See http://simonwillison.net/2007/aug/15/jquery/ for a well-reasoned article by Simon Willison, an early JavaScript adopter, when he came around to using jQuery. (“For the past few years, I’ve been advising people to only pick a JavaScript library if they were willing to read the source code and figure out exactly how it works… I think I’m going to have to reconsider my advice”).

                    What this means is that if you’re an extremely skilled developer and a higher-level abstraction emerges, your initial reaction will rationally be to reject the abstraction. After all, you’re good at what you do and the abstraction is clunky and suffers from being more generic than the hand-crafted solution you are using. But good developers will also revisit those abstractions periodically to determine whether those problems have receded, and identify the moment when the abstraction leapfrogs their ability to do things by hand.

                    1. 3

                      One thing to consider is that abstractions do not necessarily follow a single path along the axis of “better”: sometimes, and I might even argue for “most of the time”, when you accept one abstraction you open the door to some new opportunities while at the same time closing the door on others. I thereby think it is reasonable for some people to dislike specific abstractions, or to generally be wary of new abstractions, until they are proven.

                      In the case of garbage collection, you certainly gain a lot: I would never argue with someone claiming that garbage collection offers key and even unique advantages to systems that use it. However, when you accept automatic collection of memory into your life, you fundamentally lose out on a few alternative opportunities, including deterministic finalization; these are not optimizations mind you, these affect expressiveness and coupling.

                      As a specific example, if you are working with a “file”, due to the semantics of sharing files with other processes (or even other computers), it becomes important that when you are done using a file you close it: if someone else, or even you, needs to open the file again later, you might be unable to do so until the previous usage of the file is done. This is because the concept of “using the file” is similar to a lock or a semaphore: only one person can safely utilize its semantics at once.

                      In a language where object lifetime has to be manually spelled out, this is a trivial problem: when the object is deallocated the file can be closed, a general class of technique and organization that C++ developers call RAII (Resource Acquisition Is Initialization). As every object must be deallocated at some point, you have an opportunity to hook in at exactly the right moment in the lifecycle of that object to clear away that file handle.

                      If you then provide a language-mediated solution for this deterministic finalization, such as stack allocation (which in C++ means that developers almost never have to manually deallocate anything: the program feels quite automated, despite having “manual” memory management), those benefits not only affect memory resources, they affect everything. Therefore, whether you are allocating memory, file handles, or bicycle messengers, you have a single management interface.

                      However, in a garbage collected environment, you no longer can guarantee when objects will be cleaned up: someone else might have a reference, and it becomes the job of the garbage collector to determine when, or even if (as there is no guarantee that memory is even a precious resource or that garbage will ever be collected at all), the finalizer on that object is called: the result is that you now are driven “back to the stone age” for handling resources like file handles.

                      The common idiom we then see is our friendly call to .close() to get rid of the file descriptor: when we are done with the object, regardless of its garbage-collected lifespan, we must manually call this method so that the object has an opportunity to release all of its non-memory resources. Of course, we have to guarantee this happens, so in a language with exceptions this gets even more complex.

                      C++:  std::ifstream a("hello.txt"); ...  // closed automatically when a leaves scope
                      Java: FileInputStream file = new FileInputStream("hello.txt");
                            try { ... } finally { file.close(); }

                      This idiom is sufficiently common that when .NET came out, a number of us (including myself, with many long posts ;P arguing for the need) demonstrated this problem, giving us a special interface IDisposable with a single method Dispose() that could then be called from a new keyword, using, allowing us to mark specific garbage-collected objects as being owned by the stack. If you “using” a variable in a scope, when that scope exits, you get the finally { Dispose() } for free.
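
                      For comparison, Java eventually grew an analogous construct: try-with-resources over the AutoCloseable interface. A minimal sketch of the same scope-bound cleanup (the Handle class below is a stand-in for a file or socket, not a real file API):

```java
import java.util.ArrayList;
import java.util.List;

// Java's analogue of .NET's using/IDisposable: try-with-resources over
// AutoCloseable. close() runs deterministically when the block exits,
// even on exceptions -- no waiting on the garbage collector.
public class UsingSketch {
    static List<String> log = new ArrayList<>();

    // Stand-in for a file handle or other unmanaged resource.
    static class Handle implements AutoCloseable {
        final String name;
        Handle(String name) { this.name = name; log.add("open " + name); }
        @Override public void close() { log.add("close " + name); }
    }

    public static void main(String[] args) {
        try (Handle h = new Handle("hello.txt")) {
            log.add("use " + h.name);
        } // h.close() is invoked here, exactly once, in scope order
        System.out.println(log);
    }
}
```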

                      However, this means that the contract for using an object now includes whether or not it contains unmanaged resources: if it does, you must manage the Dispose() (or have “using”), and if it does not, you, well, don’t have to do that, and often even can’t, as in .NET (where the using keyword checks at compile time that the object in question implements IDisposable); even if you are allowed to apply it to any object, though, you probably didn’t.

                      The result is that, if you previously had an object that consisted only of managed resources (i.e., memory, the only resource that even modern garbage collectors are able to keep track of) and it now requires an unmanaged one, even as an ancillary implementation detail (such as a file on disk that is used as a cache of a particularly large object, or a connection to an SQL database that replaces what used to be an in-memory datastore), you have changed that contract.

                      At this point, everyone who previously had used this object now must go through their code and change the way they were using it to explicitly declare the scope of the object (either by doing manual resource management with close() or by using something like using to bind the lifetime of the object to something else). There is honestly no end to this, though, as if you store a file handle in a simple dictionary or array you have entangled unmanaged state into it.
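
                      A small illustration of that entanglement, sketched in Java with a hypothetical Handle type: once a plain map holds resources, its owner must treat the whole container as a resource and close through it, so the disposal contract leaks outward from the handles to everything that stores them.

```java
import java.util.*;

// Sketch of the "resources in a container" problem: once a Map holds
// handles, the Map itself must be treated as a resource, and its owner
// inherits the obligation to close everything inside.
public class RegistrySketch {
    static List<String> closed = new ArrayList<>();

    // Stand-in for a file handle or other unmanaged resource.
    static class Handle implements AutoCloseable {
        final String name;
        Handle(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    // The container becomes AutoCloseable too -- the contract leaks outward.
    static class HandleRegistry implements AutoCloseable {
        private final Map<String, Handle> handles = new LinkedHashMap<>();
        void put(String key) { handles.put(key, new Handle(key)); }
        @Override public void close() {
            handles.values().forEach(Handle::close);
            handles.clear();
        }
    }

    public static void main(String[] args) {
        try (HandleRegistry reg = new HandleRegistry()) {
            reg.put("a.txt");
            reg.put("b.txt");
        } // closing the registry closes every handle it entangled
        System.out.println(closed);
    }
}
```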

                      The simple solution to these problems, of course, is “well, let’s have developers mark the lifespan of all of their objects at least somehow”, but that is obviously no better than the world of “manual memory management”: in fact, it is worse, as languages that can build abstractions yet avoid garbage collection (like C++) normally have abstractions that handle “general resource management”, such as RAII and stack allocation; in C++ you aren’t “manually” doing anything.

                      What this means, of course, is that you have made a choice when you choose to use a garbage collector: you have decided that certain problems are more complex for you to solve than others, or maybe simply more important (it might be easy for you to manage something, but if anyone ever even a single time gets it wrong that is unacceptable); however, it is not really accurate to describe garbage collection as “higher” than alternatives that avoid it.

                      Developers, then, may come across an abstraction that to them seems “higher”: it solves fundamental problems that their use case requires to be solved perfectly, and it does so fairly simply; however, in making that choice, something they wanted to be simple now becomes more difficult, or maybe even impossible to achieve. I thereby really can’t deride the developer who “rejects” an abstraction: it might be that the abstraction is not solving the right problem.

                    2. 4

                      Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.

                      Frameworks don’t exist just to make your life complicated so the developers can stroke their egos. They exist because there are a ton of hard, nuanced, important and boring problems that most developers don’t want to think about.

                      For a small and inexhaustive list of what, for example, Rails provides for you, see wycats' comment on the node.js thread: https://lobste.rs/s/vkytis/debunking_the_node_js_gish_gallop/comments/0f43zj

                      1. 5

                        FWIW, I agree with your commentary about why frameworks exist; however, it seems rather harsh (very harsh…) to assume that this person’s code is “filled with bugs, edge cases, security vulnerabilities, and memory leaks” without knowing how he programs and his process. Developers who avoid frameworks are not sitting around typing the same thing over and over and over again: they know how to write functions, and they know how to write abstractions… they aren’t incompetent (and here I thought wycats wanted there to be no “negativity” here).

                        Meanwhile, they might even be quite happy using an occasional “library” instead of a “framework”: something that handles a specific hard problem and handles it well, without changing the way the entire flow of the code is operating. As a common and highly frustrating example, just because someone avoids using ORMs doesn’t mean they are sitting around manually concatenating SQL strings together every time they make a query, introducing SQL injection attacks and any manner of other subtle bugs into their project: this isn’t an all-or-nothing endeavor.

                        1. 2

                          Imagine all you want about my code, but keep in mind I never said that frameworks don’t have a place. I simply said I have a hard time finding their place (be it from a lack of experience, a lack of understanding, or both).

                          If I jumped on the framework boat and drank the Kool-Aid, I would be avoiding a lot of pitfalls, no doubt… but I would also come out lacking a lot of insight into why things are the way they are.

                          It’s hard to see the problems frameworks solve if you haven’t hit said problems yourself.

                          1. 3

                            See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                            Most people accept on faith that higher-level abstractions (languages like C, Java, Ruby or frameworks like Ruby on Rails) are solving problems that they are unaware of. Becoming viscerally aware of what those problems are requires a level of effort that can conflict with a desire to ship software in a timely fashion.

                            Especially early on in the life of a higher-level abstraction, your mentality is perfectly valid and likely to yield better results. But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                            1. 2

                              See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                              I agree. Myself included ( I don’t know assembler, or what physical properties of transistors make them… transist.. ).

                              But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                              I make no assumptions about it being a truism. Once I understand the uses of a framework or library (jQuery or expressjs, for example) I will likely use it.

                              The best analogy I can come up with is this:

                              I walk into a carpet store looking to purchase two throw rugs.

                              A salesman approaches me with one of those knee carpet stretcher things, and tells me I need it if I want to lay the carpet correctly.

                              All the while… I have no idea what the knee-hitter thingie is for, or why I would need it.

                              Then the salesman insults my ability to put carpet on a floor because I don’t want to use (or understand the use for) his tools.

                              1. 3

                                For what it’s worth, I hope nothing I said was insulting.

                                1. 4

                                  Oh no, sorry if I implied you had!

                                  I found the “Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.” line offensive, mostly because of how presumptuous and all-encompassing it is.

                                  1. 3

                                    Please accept my apologies for offending you. I certainly was not meaning to imply that you are incompetent, lazy, or any other undesirable adjectives. ;) Rather, I was trying to imply that frameworks handle not just the problems that you know about, but (to paraphrase Donald Rumsfeld), the “unknown unknowns.”

                                    The fact that frameworks do a lot of stuff without exposing it means that people “rolling their own” don’t even fully understand what tradeoffs they’re making. I sit next to two of the guys that wrote large parts of Rails 3 all day and even I wouldn’t fully understand what tradeoffs I would be making if I decided to hand-roll.

                                    Of course, frameworks aren’t perfect either. But they improve over time, while your application code doesn’t; at least, not without significant effort on your part. Without demeaning your abilities as a programmer, I would be shocked if your hand-rolled solution provided the same edge-case coverage as Rails when it comes to things like security and encodings, to name two examples. This is not a function of your competence, but rather the sheer number of man-hours poured into Rails over its lifetime.

                                    1. 4

                                      Apology accepted; however, you will remain my arch-nemesis. Not because of this exchange… but because you have invited more users to Lobste.rs than I!

                                      The game is afoot, sir!

                      1. 3

                        At any given time in the history of software development, there has always been a tension: more abstraction or less abstraction?

                        Like many things in life, there is a pendulum that tends to swing back and forth. It’s hard for the human brain to reason about exponential improvement, so even though people understand Moore’s Law, their thinking still tends to revert to linear improvement when making projections.

                        So if you go back to technical books from the 70s, they tend to make statements like, “It seems unlikely that interpreted languages will ever be able to do anything serious.” And yet here we are, powering billions of dollars of business on top of interpreted languages.

                        Eventually people realize that the higher abstractions allow them to be more productive, and that the performance trade-offs tend to be minimized over time. This is why, in my opinion, Ruby on Rails won: it front-loaded better abstractions and took the performance hit, knowing that eventually, it wouldn’t really matter. And developers were much more productive than when they had to wrestle the ceremony of Java, for example.

                        But it is easy to see temporary performance gains when you strip away abstraction, and then decree that you have eliminated bloat and waste and inefficiency. Sure, but you’ve also given up lots of productivity. Over time, the abstraction becomes cheaper for free, but your productivity doesn’t get better for free.

                        To me, people advocating Node as a replacement for Rails are like the HR manager that looks at the budget and realizes how much money is being spent on IT, so she lays them all off and outsources the job. At first, everyone is happy, because, wow! look at how much we’re saving! But a few years down the line, everything has gone to shit and users have to do more and more for themselves.

                        Java developers laughed at Rails as a slow toy. Node proponents are warning Rails developers, now, that they are making the same mistake. But I don’t think the scenarios are analogous. Rails' value proposition was: “We’ll offer you better abstractions, and we think they’ll get faster over time.” Node’s value proposition is: “We’ll offer you low-level abstractions, and they’ll be faster. You’ll just have to do more work yourself.”

                        Computers and frameworks get faster over time without any effort on my part. Code I’m responsible for, though, just sits there unless I do something to it.

                        In the end, I always bet on higher abstractions.

                        1. 3

                          Another thing to note is that, based on my observation, most people have very little idea of how to measure what really matters when they discuss things like speed. Essentially, the only way to reasonably argue that Node is faster than Ruby/Rails is to build the exact same thing using the different architectures. While I do see the catch-22 here, and I do realize that there are things you can isolate and test with both Node and Rails, most of the benchmarks that I have seen are so artificial that they are barely useful.

                          In any reasonably big web application, things that tend to matter in terms of speed rarely happen to be what language you use on the front-end level (where I am using front-end to mean the language used to serve content, not client-side JS). At the end of the day, you’ll have to take anything that takes more than a couple tens of milliseconds (or maybe hundreds) offline anyway. And it turns out that both PHP (what we used at Yahoo) and Python (what we used at Digg and what we use at Snapguide) seem to be fast enough to consume any queues. In case you need something way, way faster than those, which is very rarely the case, sure, go ahead and use whatever language you fancy.

                          My point is, sure, Node is definitely faster in some respects, but I’ve yet to see it be enough faster to matter, especially for the kinds of apps most people seem to be using it for. At the end of the day, it really boils down to personal preference informed by how fun it is to develop with, how easy it is to find help, how good the libraries are that do the stuff you’d rather not do yourself, or, as you said, the productivity boost. In my experience, Rails hits that sweet spot better for web development, while Python is somewhat nicer at times for whipping out random stuff. But, to each his own.

                          1. 7

                            One thing that has always frustrated me about these kinds of benchmarks is that “hello world” for a web framework can mean wildly different things.

                            For example, a “hello world” response in Rails will:

                            • Automatically detect and thwart a number of common security issues, such as IP spoofing attacks
                            • Verify credentials using a secure comparison, which is slower but not vulnerable to timing attacks
                            • Generate an ETag for the provided response and automatically support HTTP caching requests (this doesn’t improve rendering time, but it does eliminate the need to download content that is already cached on the client)
                            • Support a very expansive router, including arbitrary constraints based on request headers
                            • Do a whole bunch of other stuff that is unrelated to HTML rendering
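
                            As a rough sketch of the ETag item above (this is the general HTTP conditional-request mechanism, not Rails’ actual implementation): hash the rendered body into a validator, and answer a matching If-None-Match with a 304 instead of re-sending the body.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Sketch of HTTP ETag caching: derive a tag from the rendered body, and
// if the client already sent it back in If-None-Match, skip the body
// with a 304 Not Modified.
public class EtagSketch {
    static String etagFor(String body) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(body.getBytes(StandardCharsets.UTF_8));
        return "\"" + HexFormat.of().formatHex(digest) + "\"";
    }

    // Returns the status line a server would send for this request.
    static String respond(String body, String ifNoneMatch) throws Exception {
        String etag = etagFor(body);
        return etag.equals(ifNoneMatch)
                ? "304 Not Modified"           // client's cache is still good
                : "200 OK, ETag: " + etag;     // send body plus the new tag
    }

    public static void main(String[] args) throws Exception {
        String first = respond("hello world", null);
        String tag = first.substring(first.indexOf("ETag: ") + 6);
        System.out.println(first.startsWith("200"));
        System.out.println(respond("hello world", tag));  // cache hit
    }
}
```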

                            In short, comparing how fast Node can put a String onto a socket with how fast Rails can render a response is an apples-and-oranges comparison. And because we’re usually measuring responses on the order of a few milliseconds (2,000 requests per second is a 0.5ms response, while 500 requests per second is a 2ms response), it’s easy to get caught up in seemingly large numerical differences that really reflect a millisecond or two of overhead.