Threads for wycats

  1. 1

    Should this have the “news” tag?

    1. 1

      Yeah – probably should – maybe jcs can add it

    1. 3

      Alternatively – and a better solution – we never call “exit” inside an “at_exit” block, and we make sure that all SystemExit rescue blocks and at_exit blocks are used with great caution and echo the original exit status when necessary.

      Exactly! The purpose of an at_exit block is to allow you to do something before exiting. You don’t have to exit again.
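
      A minimal sketch of that idea in Ruby, assuming a hypothetical helper name: rescue SystemExit only to observe the status a block was exiting with, rather than clobbering it with a second `exit`.

      ```ruby
      # Sketch: observe the status a block was exiting with instead of
      # calling exit again. `observed_exit_status` is a hypothetical name.
      def observed_exit_status
        yield
        nil
      rescue SystemExit => e
        e.status  # the status the code was already exiting with
      end
      ```

      In a real at_exit handler or SystemExit rescue you would do your cleanup and then fall through (or re-raise), letting the original status stand.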

      1. 1

        I’m not entirely sure if I tagged this right, but EDN looks pretty interesting.

        1. 1

          It seems like it shouldn’t be “news”

        1. 2

          I always thought it was odd that a Ruby object can call another object’s protected methods if they are of the same class, mostly because I couldn’t think of a circumstance where this would be a desirable behavior. The example in the article of using this behavior while implementing equality operators is something I hadn’t thought of.

          1. 3

            That’s pretty much the only use case I can think of. Even then, I implement equality operators so infrequently that I’ll forget!

            1. 3

              It’s useful for any kind of comparison (<, >, <=>, ==, ===). I only use protected for methods that are used in comparison (most commonly, accessors).
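
              A sketch of that pattern with a hypothetical Money class: the protected reader is callable from another instance of the same class inside <=>, but not from outside.

              ```ruby
              class Money
                include Comparable  # derives ==, <, > etc. from <=>

                def initialize(cents)
                  @cents = cents
                end

                # Legal: an instance may call a same-class instance's
                # protected methods, so the comparison can read other.cents.
                def <=>(other)
                  cents <=> other.cents
                end

                protected

                attr_reader :cents  # visible to other Money instances, hidden from callers
              end
              ```

              Money.new(100) == Money.new(100) is true, while Money.new(100).cents raises NoMethodError.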

            1. 4

              The command-line is a pretty cool feature, but the debugging tools in general are still lagging pretty hard.

              For example, as far as I can tell, even Aurora has “Pause on exceptions” but no “Pause on uncaught exceptions”. Chrome has a much nicer script navigator, and pretty-printing for minified scripts, but Aurora has the classic “one big list of scripts” with no pretty-printing. Etc.

              1. 1

                I couldn’t agree more. I use Chrome almost all of the time for web development. However, as you mentioned, that command-line feature looks pretty awesome.

              1. 7

                There’s a Github issue open about this –

                Let’s keep bug reports off of this site, I don’t want the front page being filled with meta discussion.

                1. 2

                  Maybe the meta tag could be hidden by default?

                  1. 1

                    I don’t mind the occasional discussion about the site, I just don’t want the whole front page being about the site rather than news. For bug reports and feature requests, using Github would be better because the whole process can be tracked with pull requests and commits.

                  2. 1

                    D'oh! Good call, thanks. Starred.

                  1. 11

                    I don’t have any inside information, but it feels like it can survive as a community project, especially given that the codebase is open source.

                    1. 2

                      I’m unsure what the rules about tagging are here, but it feels like this deserves the “news” tag and that’s it. If I’m filtering for stories about “design” or “javascript”, this story doesn’t seem like it belongs.

                      1. 2

                        Seems legit. I think that a story about Bootstrap qualifies as both design and bootstrap, but that’s just me.

                        1. 1

                          Is this a story about JavaScript or news about a JavaScript personality?

                          1. 2

                            To me, it’s a story about one of the people who wrote a library whose work was being sponsored by a company leaving that job. I would have posted the same story as “Yehuda Katz (engineer behind Rails 3) leaves Engine Yard” (what does this mean for the future of Rails?)

                            Just like you haven’t been helping out with Rails as much, I’m guessing this means that Fat won’t be with Bootstrap. Not that there’s anything the matter with that whatsoever, but it may indicate significant change in the project for the future.

                      1. 2

                        Now you can specify the version of Ruby in the Gemfile. That’s big for deployment to Heroku.

                        1. 4

                          It’s also useful for a fail-fast for people on your team using the wrong version of Ruby, and for rvm, which will use the information in the Gemfile to switch to the right version for you.
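
                          The directive in question looks like this – a sketch of a minimal Gemfile, with illustrative version numbers:

                          ```ruby
                          # Gemfile
                          source "https://rubygems.org"

                          # Bundler 1.2+: fail fast if the running Ruby doesn't match,
                          # and let tools like Heroku and rvm pick the right version.
                          ruby "1.9.3"

                          gem "rails", "3.2.8"
                          ```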

                          1. 2

                            I’ve updated my articles to point out the need to update Bundler:

                            @wycats I’m looking forward to Tokaido so I can cut the Installing Rails article down to one paragraph.

                          1. 2

                            Why isn’t pry-stack_explorer a dependency of pry-rescue?

                            1. 3

                              How is upvoting or downvoting not the same as marking it positive or negative?

                              1. 1

                                I think he’s looking for a ratio of upvotes:downvotes so as to determine whether a comment just hasn’t been voted on much v. it being “controversial”.

                                1. 5

                                  I mean whether the post is positive (saying nice things) or negative (saying mean/nasty things)

                                  1. 1

                                    That said, total votes is useful for controversial posts that have a low number because of many up and down votes.

                                  2. 1

                                    Because at present downvotes are only for off-topic, troll, spam, incorrect, and me-too. I think the ‘happy face’ ‘sad face’ thing is just for tone, and not karma. So, you could have a comment that was witty and snarky, that was upvoted but also ‘sad faced’, or a comment that was really positive but vacuous or wrong, that was down-voted with me-too or incorrect, but also ‘happy faced’. Basically, this seems like just a way to keep an eye on the ‘emotional temperature’ of a thread or commenter.

                                  1. 2

                                    I always get lost in frameworks.. I find it takes me longer to learn the ins and outs of a framework than it does to just write my own code to handle things..

                                    1. 10

                                      I often use a GC analogy when people make comments like this.

                                      Before garbage collection, excellent developers got extremely good at manual memory management. Good libraries, frameworks and applications had few if any memory leaks, which distinguished them from libraries, frameworks and applications written by lazier or less competent developers.

                                      The first garbage collectors weren’t very good, so it was reasonable for those developers who had honed their memory management skills to poo-poo those tools (and languages that came with them) in favor of direct control over the environment (and smooth performance) by continuing to leverage those skills. Lazier and less competent developers could make use of the new garbage collectors to eliminate the most egregious leaks that they were introducing, but they still couldn’t touch the performance of the hand-coders.

                                      Over time, garbage collectors have improved significantly, giving them acceptable performance (CPU, memory, pauses) for the vast majority of cases. Over time, most of the people who rejected early garbage collectors have come around, freeing mental cycles for other domain-specific problems without giving up very much. Also, as garbage collectors improve, the number of developers who can out-perform a reasonably-implemented GC by hand shrinks.

                                      This sort of pattern is relatively common with the introduction of any higher-level abstraction. When jQuery first came out, the DOM Scripting heroes of 2004 mostly stuck with their tried-and-true hand-coded methods. As jQuery itself got better, most of the hold-outs found that they would rather surrender to jQuery and focus their energies on building applications.

                                      See for a well-reasoned article by Simon Willison, an early JavaScript adopter, when he came around to using jQuery. (“For the past few years, I’ve been advising people to only pick a JavaScript library if they were willing to read the source code and figure out exactly how it works… I think I’m going to have to reconsider my advice”).

                                      What this means is that if you’re an extremely skilled developer and a higher-level abstraction emerges, your initial reaction will rationally be to reject the abstraction. After all, you’re good at what you do and the abstraction is clunky and suffers from being more generic than the hand-crafted solution you are using. But good developers will also revisit those abstractions periodically to determine whether those problems have receded, and identify the moment when the abstraction leapfrogs their ability to do things by hand.

                                      1. 3

                                        One thing to consider is that abstractions do not necessarily follow a single path along the axis of “better”: sometimes, and I might even argue for “most of the time”, when you accept one abstraction you open the door to some new opportunities while at the same time closing the door on others. I thereby think it is reasonable for some people to dislike specific abstractions, or to generally be wary of new abstractions, until they are proven.

                                        In the case of garbage collection, you certainly gain a lot: I would never argue with someone claiming that garbage collection offers key and even unique advantages to systems that use it. However, when you accept automatic collection of memory into your life, you fundamentally lose out on a few alternative opportunities, including deterministic finalization; these are not optimizations mind you, these affect expressiveness and coupling.

                                        As a specific example, if you are working with a “file”, due to the semantics of sharing files with other processes (or even other computers), it becomes important that when you are done using a file you close it: if someone else, or even you, needs to open the file again later, you might be unable to do so until the previous usage of the file is done. This is because the concept of “using the file” is similar to a lock or a semaphore: only one person can safely utilize its semantics at once.

                                        In a language where object lifetime has to be manually spelled out, this is a trivial problem: when the object is deallocated the file can be closed, a general class of technique and organization that C++ developers call RAII (resource acquisition is initialization). As every object must be deallocated at some point, you have an opportunity to attach at exactly the right moment in the lifecycle of that object to clear away that file handle.

                                        If you then provide a language-mediated solution for this deterministic finalization, such as stack allocation (which in C++ means that developers almost never have to manually deallocate anything: the program feels quite automated, despite having “manual” memory management), those benefits not only affect memory resources, they affect everything. Therefore, whether you are allocating memory, file handles, or bicycle messengers, you have a single management interface.

                                        However, in a garbage collected environment, you no longer can guarantee when objects will be cleaned up: someone else might have a reference, and it becomes the job of the garbage collector to determine when, or even if (as there is no guarantee that memory is even a precious resource or that garbage will ever be collected at all), the finalizer on that object is called: the result is that you now are driven “back to the stone age” for handling resources like file handles.

                                        The common idiom we then see is our friendly call to .close() to get rid of the file descriptor: when we are done with the object, regardless of its garbage collected lifespan, we now must manually call this method to make certain that there is an opportunity for it to know to collect all of its non-memory resources. Of course, we have to guarantee this happens, so in a language with exceptions this gets even more complex.

                                        C++:  { std::filebuf a("hello.txt"); /* ... */ }   // closed automatically when a leaves scope
                                        Java: FileInputStream file = new FileInputStream("hello.txt");
                                              try { /* ... */ } finally { file.close(); }

                                        This idiom is sufficiently common that when .NET came out, a number of us (including myself, with many long posts ;P arguing for the need) demonstrated this problem, giving us a special interface IDisposable with a single method Dispose() that could then be called from a new keyword using, allowing us to mark specific garbage-collected objects as being owned by the stack. If you “using” a variable in a scope, when that scope exits, you get the finally { Dispose() } for free.
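
                                        Ruby’s block form of File.open expresses the same scope-bound ownership in this thread’s home language: the handle is closed when the block exits, raised exception or not. (The path here is illustrative.)

                                        ```ruby
                                        require "tmpdir"

                                        path = File.join(Dir.tmpdir, "hello.txt")  # illustrative path

                                        # Like try/finally or C#'s `using`: the handle's lifetime is
                                        # bound to the block, so the close is deterministic.
                                        File.open(path, "w") do |f|
                                          f.write("hi")
                                        end
                                        ```

                                        Without the block form you would be back to remembering f.close yourself, which is exactly the manual bookkeeping being discussed.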

                                        However, this means that the contract for using an object now includes whether or not it contains unmanaged resources: if it does, you must manage the Dispose() call (or use “using”), and if it does not you, well, don’t have to do that, and often even can’t do that, such as in .NET (where the using keyword checks at compile time to verify that the object in question implements IDisposable); even if you are allowed to do it to any object, though, you probably didn’t.

                                        The result is that, if you previously had an object that consisted only of managed resources (i.e., memory, the only resource that even modern garbage collectors are able to keep track of) and that object now requires an unmanaged resource, even as an ancillary implementation detail (such as a file on disk that is used as a cache of a particularly large object, or a connection to an SQL database that replaces what used to be an in-memory datastore), you have changed that contract.

                                        At this point, everyone who previously had used this object now must go through their code and change the way they were using it to explicitly declare the scope of the object (either by doing manual resource management with close() or by using something like using to bind the lifetime of the object to something else). There is honestly no end to this, though, as if you store a file handle in a simple dictionary or array you have entangled unmanaged state into it.

                                        The simple solution to these problems, of course, is “well, let’s have developers mark the lifespan of all of their objects at least somehow”, but that is obviously no better than the world of “manual memory management”: in fact, it is worse, as languages that can build abstractions yet avoid garbage collection (like C++) normally have abstractions that handle “general resource management”, such as RAII and stack allocation; in C++ you aren’t “manually” doing anything.

                                        What this means, of course, is that you have made a choice when you choose to use a garbage collector: you have decided that certain problems are more complex for you to solve than others, or maybe simply more important (it might be easy for you to manage something, but if anyone ever even a single time gets it wrong that is unacceptable); however, it is not really accurate to describe garbage collection as “higher” than alternatives that avoid it.

                                        Developers, then, may come across an abstraction that to them seems “higher”: it solves fundamental problems that their use case requires to be solved perfectly, and it does so fairly simply; however, in making that choice, something they wanted to be simple now becomes more difficult, or maybe even impossible, to achieve. I thereby really can’t deride the developer who “rejects” an abstraction: it might be that the abstraction is not solving the right problem.

                                      2. 4

                                        Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.

                                        Frameworks don’t exist just to make your life complicated so the developers can stroke their egos. They exist because there are a ton of hard, nuanced, important and boring problems that most developers don’t want to think about.

                                        For a small and inexhaustive list of what, for example, Rails provides for you, see wycats' comment on the node.js thread:

                                        1. 5

                                          FWIW, I agree with your commentary about why frameworks exist; however, it seems rather harsh (very harsh…) to assume that this person’s code is “filled with bugs, edge cases, security vulnerabilities, and memory leaks” without knowing how he programs and his process. Developers who avoid frameworks are not sitting around typing the same thing over and over and over again: they know how to write functions, and they know how to write abstractions… they aren’t incompetent (and here I thought wycats wanted there to be no “negativity” here).

                                          Meanwhile, they might even be quite happy using an occasional “library” instead of a “framework”: something that handles a specific hard problem and handles it well, without changing the way the entire flow of the code is operating. As a common and highly frustrating example, just because someone avoids using ORMs doesn’t mean they are sitting around manually concatenating SQL strings together every time they make a query, introducing SQL injection attacks and any manner of other subtle bugs into their project: this isn’t an all-or-nothing endeavor.

                                          1. 2

                                            Imagine all you want about my code, but keep in mind I never said that frameworks don’t have a place. I simply said I have a hard time finding their place ( be it from lack of experience or from lack of understanding or both ).

                                            If I jumped on a framework boat and drank that koolaid, I would be avoiding a lot of pitfalls – no doubt.. but I would also come out lacking a lot of insight into why things are the way they are.

                                            It’s hard to see the problems frameworks solve if you haven’t hit said problems yourself.

                                            1. 3

                                              See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                                              Most people accept on faith that higher-level abstractions (languages like C, Java, Ruby or frameworks like Ruby on Rails) are solving problems that they are unaware of. Becoming viscerally aware of what those problems are requires a level of effort that can conflict with a desire to ship software in a timely fashion.

                                              Especially early on in the life of a higher-level abstraction, your mentality is perfectly valid and likely to yield better results. But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                                              1. 2

                                                See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                                                I agree. Myself included ( I don’t know assembler, or what physical properties of transistors make them… transist.. ).

                                                But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                                                I make no assumptions about it being a truism. Once I understand the uses of a framework or library ( jQuery or expressjs for example ) I will likely use it.

                                                The best analogy I can come up with is this:

                                                I walk into a carpet store looking to purchase two throw rugs.

                                                A salesman approaches me with one of those knee carpet stretcher things, and tells me I need it if I want to lay the carpet correctly.

                                                All the while.. I have no idea what the knee hitter thingie is for… or why I would need it.

                                                Then the salesman insults my ability to put carpet on a floor because I don’t want to use ( or understand the use for ) his tools.

                                                1. 3

                                                  For what it’s worth, I hope nothing I said was insulting.

                                                  1. 4

                                                    Oh no, sorry if I implied you had!

                                                    I found the “Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.” line offensive, mostly because of how assumptive and all-encompassing it is.

                                                    1. 3

                                                      Please accept my apologies for offending you. I certainly was not meaning to imply that you are incompetent, lazy, or any other undesirable adjectives. ;) Rather, I was trying to imply that frameworks handle not just the problems that you know about, but (to paraphrase Donald Rumsfeld), the “unknown unknowns.”

                                                      The fact that frameworks do a lot of stuff without exposing it means that people “rolling their own” don’t even fully understand what tradeoffs they’re making. I sit next to two of the guys that wrote large parts of Rails 3 all day and even I wouldn’t fully understand what tradeoffs I would be making if I decided to hand-roll.

                                                      Of course, frameworks aren’t perfect either. But they improve over time, while your application code doesn’t; at least, not without significant effort on your part. Without demeaning your abilities as a programmer, I would be shocked if your hand-rolled solution deployed the same edge-case coverage as Rails when it comes to things like security and encodings, to name two examples. This is not a function of your competence, but rather the sheer number of man-hours poured into Rails over its lifetime.

                                                      1. 4

                                                        Apology accepted; however, you will remain my arch-nemesis. Not because of this exchange.. but because you have invited more users to than I!

                                                        The game is afoot, sir!

                                        1. 1

                                          Awesome! I still love when new tech-related Rails apps get opened up :)

                                          1. 3

                                            At any given time in the history of software development, there has always been a tension: more abstraction or less abstraction?

                                            Like many things in life, there is a pendulum that tends to swing back and forth. It’s hard for the human brain to reason about exponential improvement, so even though people understand Moore’s Law, their thinking still tends to revert to linear improvement when making projections.

                                            So if you go back to technical books from the 70s, they tend to make statements like, “It seems unlikely that interpreted languages will ever be able to do anything serious.” And yet here we are, powering billions of dollars of business on top of interpreted languages.

                                            Eventually people realize that the higher abstractions allow them to be more productive, and that the performance trade-offs tend to be minimized over time. This is why, in my opinion, Ruby on Rails won: it front-loaded better abstractions and took the performance hit, knowing that eventually, it wouldn’t really matter. And developers were much more productive than when they had to wrestle the ceremony of Java, for example.

                                            But it is easy to see temporary performance gains when you strip away abstraction, and then decree that you have eliminated bloat and waste and inefficiency. Sure, but you’ve also given up lots of productivity. Over time, the abstraction becomes cheaper for free, but your productivity doesn’t get better for free.

                                            To me, people advocating Node as a replacement for Rails are like the HR manager that looks at the budget and realizes how much money is being spent on IT, so she lays them all off and outsources the job. At first, everyone is happy, because, wow! look at how much we’re saving! But a few years down the line, everything has gone to shit and users have to do more and more for themselves.

                                            Java developers laughed at Rails as a slow toy. Node proponents are warning Rails developers, now, that they are making the same mistake. But I don’t think the scenarios are analogous. Rails' value proposition was: “We’ll offer you better abstractions, and we think they’ll get faster over time.” Node’s value proposition is: “We’ll offer you low-level abstractions, and they’ll be faster. You’ll just have to do more work yourself.”

                                            Computers and frameworks get faster over time without any effort on my part. Code I’m responsible for, though, just sits there unless I do something to it.

                                            In the end, I always bet on higher abstractions.

                                            1. 3

                                              Another thing to note is that, based on my observation, most people have very little idea of how to measure what really matters when they discuss things like speed. Essentially, the only way to reasonably argue that Node is faster than Ruby/Rails is to build the same exact thing using different architectures. While I do see the catch-22 here, and I do realize that there are things you can isolate and test with both Node and Rails, most of the benchmarks that I have seen are so artificial that they are barely useful.

                                              In any reasonably big web application, the things that tend to matter in terms of speed rarely happen to be what language you use on the front-end level (where I am using front-end to mean the language used to serve content, not client-side JS). At the end of the day, you’ll have to take anything that takes more than a couple tens of milliseconds (or maybe hundreds) offline anyway. And it turns out both PHP (what we used at Yahoo) and Python (what we used at Digg and what we use at Snapguide) are fast enough to consume any queues. In case you need something way, way faster than those, which is very rarely the case, sure, go ahead and use whatever language you fancy.

                                              My point is, sure, Node is definitely faster in some aspects, but I’ve yet to see it be faster by enough to matter, especially for the kinds of apps most people seem to be using it for. At the end of the day, it really boils down to personal preference informed by how fun it is to develop with, how easy it is to find help, how good the libraries are that do the stuff you’d rather not do yourself, or, as you said, the productivity boost. In my experience, Rails hits that sweet spot better for web development, while Python is somewhat nicer at times for whipping out random stuff. But, to each his own.

                                              1. 7

                                                One thing that has always frustrated me about these kinds of benchmarks is that “hello world” for a web framework can mean wildly different things.

                                                For example, a “hello world” response in Rails will:

                                                • Automatically detect and thwart a number of common security issues, such as IP spoofing attacks
                                                • Verify credentials using a secure comparison, which is slower but not vulnerable to timing attacks
                                                • Generate an ETag for the provided response and automatically support HTTP caching requests (doesn’t improve rendering time, but does eliminate the need to download content that is already cached on the client)
                                                • Support a very expansive router, including arbitrary constraints based on request headers
                                                • Do a whole bunch of other stuff that is unrelated to HTML rendering

                                                In short, comparing how fast Node can put a String onto a socket with how fast Rails can render a response is an apples-and-oranges comparison. And because we’re usually measuring responses on the order of a few milliseconds (2,000 requests per second is a 0.5ms response, while 500 requests per second is a 2ms response), it’s easy to get caught up in seemingly large numerical differences that really reflect a millisecond or two of overhead.