1. 1

    It might be a good idea to justify “down-votes” with a select box option.

    For a user to down-vote a comment, a simple reason would be required.

    Such as:

    • Inflammatory
    • Irrelevant
    • Factually incorrect
    • Other

    You could also optionally justify “up-votes”.

    • Insightful
    • Funny
    • Additional resource
    • Other

    Just a thought! :-)

    1. 2

      You can already give a justification for downvotes of both stories and comments. Click any downvote button and you should see the menu drop down – you can click Cancel afterwards.

      On the other hand, the possibility of up-vote justifications sounds interesting. I know one complaint some people on Hacker News have about Reddit is that, too often, funny comments outrank insightful ones, and they’d prefer to read insightful comments. This way, an “insightful” vote could automatically add more to a comment’s ranking than a “funny” one. Or perhaps it would be better to keep all upvotes equal by default, but let those people change their settings to tell the site how much they value each type of upvote – they might rate “insightful” as worth 2 and “funny” as worth 1 for their comment ranking.
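
      (Purely to sketch the weighting idea; the data structures and the 2/1 numbers here are illustrative assumptions, not anything the site actually does.)

      // Hypothetical sketch of per-reader upvote weights; purely illustrative.
      #include <map>
      #include <string>

      // How much one reader values each upvote justification.
      using VoteWeights = std::map<std::string, double>;

      // Score a comment for that reader, given counts of each vote type;
      // unknown vote types default to a weight of 1.
      double commentScore(const VoteWeights& weights,
                          const std::map<std::string, int>& votes) {
          double score = 0.0;
          for (const auto& [kind, count] : votes)
              score += (weights.count(kind) ? weights.at(kind) : 1.0) * count;
          return score;
      }

      // A reader who rates "insightful" as 2 and "funny" as 1:
      //   VoteWeights w{{"insightful", 2.0}, {"funny", 1.0}};
      //   commentScore(w, {{"insightful", 3}, {"funny", 5}});  // 2*3 + 1*5 = 11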

      1. 4

        For the record, since you didn’t mention it: up-vote justifications (and down-vote justifications) are in play on Slashdot, so you can differentiate between insightful (good argument), informative (new information), and funny (obvious). (There were probably more options, which might be worth checking out if you care to think more about such a feature.)

        1. 1

          One more interesting note about /.: Funny comments do not give you karma. “You have to be smart, not be a smart-ass.”

    1. 2

      I wanted to tag this as ‘clojure’ but lisp was the closest. I was surprised that clojure is not a tag, given that it’s a pretty popular language.

      1. 1

        Here, Clojure is lumped in with the other Lisps, as it is a Lisp. Common Lisp, Scheme, Clojure – all the same tag.

        1. 2

          This article, however, involves using Clojure type hints to generate an efficient algorithm; it isn’t actually applicable if you are using another Lisp. (Meanwhile, people who like Lisp often actually dislike Clojure, and vice versa, based on the posts I’ve read from a bunch of mailing lists while I was doing due diligence on the language.)

          1. 2

            I see it like I see how C, C++, and Objective-C can be lumped together (which makes less sense than the Lisp grouping); most of the time the basic concepts will apply. This is an example of an edge case, and I would personally prefer not to have too many categories. My $0.02.

      1. 2

        I always get lost in frameworks.. I find it takes me longer to learn the ins and outs of a framework than it does to just write my own code to handle things..

        1. 10

          I often use a GC analogy when people make comments like this.

          Before garbage collection, excellent developers got extremely good at manual memory management. Good libraries, frameworks and applications had few if any memory leaks, which distinguished them from libraries, frameworks and applications written by lazier or less competent developers.

          The first garbage collectors weren’t very good, so it was reasonable for those developers who had honed their memory management skills to pooh-pooh those tools (and the languages that came with them) in favor of direct control over the environment (and smooth performance) by continuing to leverage those skills. Lazier and less competent developers could make use of the new garbage collectors to eliminate the most egregious leaks they were introducing, but they still couldn’t touch the performance of the hand-coders.

          Over time, garbage collectors have improved significantly, giving them acceptable performance (CPU, memory, pauses) for the vast majority of cases. Over time, most of the people who rejected early garbage collectors have come around, freeing mental cycles for other domain-specific problems without giving up very much. Also, as garbage collectors improve, the number of developers who can out-perform a reasonably-implemented GC by hand shrinks.

          This sort of pattern is relatively common with the introduction of any higher-level abstraction. When jQuery first came out, the DOM Scripting heroes of 2004 mostly stuck with their tried-and-true hand-coded methods. As jQuery itself got better, most of the hold-outs found that they would rather surrender to jQuery and focus their energies on building applications.

          See http://simonwillison.net/2007/aug/15/jquery/ for a well-reasoned article by Simon Willison, an early JavaScript adopter, when he came around to using jQuery. (“For the past few years, I’ve been advising people to only pick a JavaScript library if they were willing to read the source code and figure out exactly how it works… I think I’m going to have to reconsider my advice”).

          What this means is that if you’re an extremely skilled developer and a higher-level abstraction emerges, your initial reaction will rationally be to reject the abstraction. After all, you’re good at what you do and the abstraction is clunky and suffers from being more generic than the hand-crafted solution you are using. But good developers will also revisit those abstractions periodically to determine whether those problems have receded, and identify the moment when the abstraction leapfrogs their ability to do things by hand.

          1. 3

            One thing to consider is that abstractions do not necessarily follow a single path along the axis of “better”: sometimes, and I might even argue for “most of the time”, when you accept one abstraction you open the door to some new opportunities while at the same time closing the door on others. I therefore think it is reasonable for some people to dislike specific abstractions, or to generally be wary of new abstractions, until they are proven.

            In the case of garbage collection, you certainly gain a lot: I would never argue with someone claiming that garbage collection offers key and even unique advantages to systems that use it. However, when you accept automatic collection of memory into your life, you fundamentally lose out on a few alternative opportunities, including deterministic finalization; these are not optimizations, mind you: they affect expressiveness and coupling.

            As a specific example, if you are working with a “file”, due to the semantics of sharing files with other processes (or even other computers), it becomes important that when you are done using a file you close it: if someone else, or even you, needs to open the file again later, you might be unable to do so until the previous usage of the file is done. This is because the concept of “using the file” is similar to a lock or a semaphore: only one person can safely utilize its semantics at once.

            In a language where object lifetime has to be manually spelled out, this is a trivial problem: when the object is deallocated, the file can be closed, a general class of technique and organization that C++ developers call RAII (resource acquisition is initialization). As every object must be deallocated at some point, you have an opportunity to hook in at exactly the right moment in that object’s lifecycle to clear away the file handle.

            If you then provide a language-mediated solution for this deterministic finalization, such as stack allocation (which in C++ means that developers almost never have to manually deallocate anything: the program feels quite automated, despite having “manual” memory management), those benefits not only affect memory resources, they affect everything. Therefore, whether you are allocating memory, file handles, or bicycle messengers, you have a single management interface.
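
            (Here is a minimal sketch of what I mean, assuming a made-up FileHandle wrapper rather than any standard type: the cleanup is attached to the object’s lifetime, so leaving the scope closes the file no matter how the scope is left.)

            #include <cstdio>
            #include <stdexcept>

            // Illustrative RAII wrapper: acquiring the resource is construction,
            // releasing it is destruction.
            class FileHandle {
            public:
                explicit FileHandle(const char* path) : f_(std::fopen(path, "r")) {
                    if (!f_) throw std::runtime_error("open failed");
                }
                ~FileHandle() { std::fclose(f_); }       // deterministic close
                FileHandle(const FileHandle&) = delete;  // no accidental sharing
                FileHandle& operator=(const FileHandle&) = delete;
                std::FILE* get() const { return f_; }
            private:
                std::FILE* f_;
            };

            void readConfig() {
                FileHandle cfg("hello.txt");  // file opened here
                // ... use cfg.get() ...
            }   // cfg goes out of scope: the file is closed here, even if an
                // exception was thrown, with no close() call to remember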

            However, in a garbage-collected environment, you can no longer guarantee when objects will be cleaned up: someone else might have a reference, and it becomes the job of the garbage collector to determine when, or even if (as there is no guarantee that memory is even a precious resource or that garbage will ever be collected at all), the finalizer on that object is called. The result is that you are now driven “back to the stone age” for handling resources like file handles.

            The common idiom we then see is our friendly call to .close() to get rid of the file descriptor: when we are done with the object, regardless of its garbage-collected lifespan, we must manually call this method so the object has a chance to release all of its non-memory resources. Of course, we have to guarantee this happens, so in a language with exceptions this gets even more complex.

            C++:  std::filebuf a; a.open("hello.txt", std::ios::in); ...   // closed when a leaves scope
            Java: FileInputStream file = new FileInputStream("hello.txt");
                  try { ... } finally { file.close(); }
            

            This idiom is sufficiently common that when .NET came out, a number of us (including myself, with many long posts ;P arguing for the need) demonstrated this problem, giving us a special interface, IDisposable, with a single method, Dispose(), that could then be called by a new keyword, using, allowing us to mark specific garbage-collected objects as being owned by the stack. If you “using” a variable in a scope, then when that scope exits you get the finally { Dispose() } for free.

            However, this means that the contract for using an object now includes whether or not it contains unmanaged resources: if it does, you must manage the close() (or have “using”), and if it does not, you, well, don’t have to do that, and often even can’t, such as in .NET (where the using keyword checks at compile time that the object in question implements IDisposable); even where you are allowed to do it to any object, though, you probably didn’t.

            The result is that, if an object that previously consisted only of managed resources (i.e., memory, the only resource that even modern garbage collectors are able to keep track of) now requires an unmanaged one, even as an ancillary implementation detail (such as a file on disk that is used as a cache of a particularly large object, or a connection to an SQL database that replaces what used to be an in-memory datastore), you have changed that contract.

            At this point, everyone who previously used this object must now go through their code and change the way they were using it to explicitly declare the scope of the object (either by doing manual resource management with close() or by using something like using to bind the lifetime of the object to something else). There is honestly no end to this, though: if you store a file handle in a simple dictionary or array, you have entangled unmanaged state into that container as well.

            The simple solution to these problems, of course, is “well, let’s have developers mark the lifespan of all of their objects at least somehow”, but that is obviously no better than the world of “manual memory management”: in fact, it is worse, as languages that can build abstractions yet avoid garbage collection (like C++) normally have abstractions that handle “general resource management”, such as RAII and stack allocation; in C++ you aren’t “manually” doing anything.
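
            (A quick sketch of that “single management interface”, using only standard C++; the example is mine, not from the article: once cleanup lives in destructors, even containers full of handles need no manual bookkeeping.)

            #include <cstdio>
            #include <fstream>
            #include <map>
            #include <memory>
            #include <string>
            #include <vector>

            // A FILE* owned by unique_ptr with a custom deleter: closing the
            // file is part of the object's lifetime, just like freeing memory.
            using UniqueFile = std::unique_ptr<std::FILE, decltype(&std::fclose)>;

            UniqueFile openFile(const char* path) {
                return UniqueFile(std::fopen(path, "r"), &std::fclose);
            }

            int main() {
                // Storing handles in containers adds no bookkeeping for the
                // programmer: the containers destroy their elements, and the
                // elements' destructors close the files.
                std::map<std::string, std::ifstream> caches;
                caches["a"].open("a.txt");
                caches["b"].open("b.txt");

                std::vector<UniqueFile> raw;
                raw.push_back(openFile("c.txt"));
            }   // map, vector, and every handle inside them are cleaned up here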

            What this means, of course, is that you have made a choice when you choose to use a garbage collector: you have decided that certain problems are more complex for you to solve than others, or maybe simply more important (it might be easy for you to manage something, but if anyone gets it wrong even a single time, that is unacceptable); however, it is not really accurate to describe garbage collection as “higher” than alternatives that avoid it.

            Other developers, then, may come across an abstraction that to you seems “higher”: it solves fundamental problems that your use case requires to be solved perfectly, and it does so fairly simply; for them, however, making that choice means something they wanted to be simple now becomes more difficult, or maybe even impossible to achieve. I therefore really can’t deride the developer who “rejects” an abstraction: it might be that the abstraction is not solving the right problem.

          2. 4

            Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.

            Frameworks don’t exist just to make your life complicated so the developers can stroke their egos. They exist because there are a ton of hard, nuanced, important and boring problems that most developers don’t want to think about.

            For a small and non-exhaustive list of what, for example, Rails provides for you, see wycats' comment on the node.js thread: https://lobste.rs/s/vkytis/debunking_the_node_js_gish_gallop/comments/0f43zj

            1. 5

              FWIW, I agree with your commentary about why frameworks exist; however, it seems rather harsh (very harsh…) to assume that this person’s code is “filled with bugs, edge cases, security vulnerabilities, and memory leaks” without knowing how he programs and his process. Developers who avoid frameworks are not sitting around typing the same thing over and over and over again: they know how to write functions, and they know how to write abstractions… they aren’t incompetent (and here I thought wycats wanted there to be no “negativity” here).

              Meanwhile, they might even be quite happy using an occasional “library” instead of a “framework”: something that handles a specific hard problem and handles it well, without changing the way the entire flow of the code is operating. As a common and highly frustrating example, just because someone avoids using ORMs doesn’t mean they are sitting around manually concatenating SQL strings together every time they make a query, introducing SQL injection attacks and any manner of other subtle bugs into their project: this isn’t an all-or-nothing endeavor.
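
              (As a concrete sketch of that middle ground: this is roughly what a bound parameter looks like with a small library such as the SQLite C API; the “users” table and the helper function are made up for illustration, not a claim about anyone’s actual code.)

              #include <sqlite3.h>
              #include <string>

              // Look a user up with a bound parameter: the library handles the
              // escaping, so no SQL strings are concatenated by hand.
              bool userExists(sqlite3* db, const std::string& name) {
                  sqlite3_stmt* stmt = nullptr;
                  if (sqlite3_prepare_v2(db, "SELECT 1 FROM users WHERE name = ?",
                                         -1, &stmt, nullptr) != SQLITE_OK)
                      return false;
                  sqlite3_bind_text(stmt, 1, name.c_str(), -1, SQLITE_TRANSIENT);
                  bool found = (sqlite3_step(stmt) == SQLITE_ROW);
                  sqlite3_finalize(stmt);
                  return found;
              }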

              1. 2

                Imagine all you want about my code, but keep in mind I never said that frameworks don’t have a place. I simply said I have a hard time finding their place ( be it from lack of experience or from lack of understanding or both ).

                If I jumped on a framework boat and drank that koolaid, I would be avoiding a lot of pitfalls – no doubt.. but I would also come out lacking a lot of insight into why things are the way they are.

                It’s hard to see the problems frameworks solve if you haven’t hit said problems yourself.

                1. 3

                  See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                  Most people accept on faith that higher-level abstractions (languages like C, Java, Ruby or frameworks like Ruby on Rails) are solving problems that they are unaware of. Becoming viscerally aware of what those problems are requires a level of effort that can conflict with a desire to ship software in a timely fashion.

                  Especially early on in the life of a higher-level abstraction, your mentality is perfectly valid and likely to yield better results. But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                  1. 2

                    See my sibling comment, but I don’t think most people want to learn about all the problems higher-level abstractions solve by building their toolkit up from the machine code level.

                    I agree. Myself included ( I don’t know assembler, or what physical properties of transistors make them… transist.. ).

                    But don’t make the mistake of assuming that your current analysis is a truism about abstractions in general or is likely to hold forever in the case of browser-based applications.

                    I make no assumptions about it being a truism. Once I understand the uses of a framework or library ( jQuery or expressjs for example ) I will likely use it.

                    The best analogy I can come up with is this:

                    I walk into a carpet store looking to purchase two throw rugs.

                    A salesman approaches me with one of those knee carpet stretcher things, and tells me I need it if I want to lay the carpet correctly.

                    All the while.. I have no idea what the knee hitter thingie is for… or why I would need it.

                    Then the salesman insults my ability to put carpet on a floor because I don’t want to use ( or understand the use for ) his tools.

                    1. 3

                      For what it’s worth, I hope nothing I said was insulting.

                      1. 4

                        Oh no, sorry if I implied you had!

                        I found the “Then I imagine that your code is filled with bugs, edge cases, security vulnerabilities and memory leaks.” line offensive, mostly because of how assumptive and all-encompassing it is.

                        1. 3

                          Please accept my apologies for offending you. I certainly was not meaning to imply that you are incompetent, lazy, or any other undesirable adjectives. ;) Rather, I was trying to imply that frameworks handle not just the problems that you know about, but (to paraphrase Donald Rumsfeld), the “unknown unknowns.”

                          The fact that frameworks do a lot of stuff without exposing it means that people “rolling their own” don’t even fully understand what tradeoffs they’re making. I sit next to two of the guys that wrote large parts of Rails 3 all day and even I wouldn’t fully understand what tradeoffs I would be making if I decided to hand-roll.

                          Of course, frameworks aren’t perfect either. But they improve over time, while your application code doesn’t; at least, not without significant effort on your part. Without demeaning your abilities as a programmer, I would be shocked if your hand-rolled solution provided the same edge-case coverage as Rails when it comes to things like security and encodings, to name two examples. This is not a function of your competence, but rather of the sheer number of man-hours poured into Rails over its lifetime.

                          1. 4

                            Apology accepted; however, you will remain my arch-nemesis. Not because of this exchange.. but because you have invited more users to Lobste.rs than I!

                            The game is afoot, sir!