1. 26

    These are all valid criticisms of certain patterns in software engineering, but I wouldn’t really say they’re about OOP.

    This paper goes into some of the distinctions between OOP and ADTs, but the summary is basically this:

    • ADTs allow complex functions that operate on many data abstractions – so the Player.hits(Monster) example might be rewritten in ADT-style as hit(Player, Monster[, Weapon]).
    • Objects, on the other hand, allow interface-based polymorphism – so you might have some kind of interface Character { position: Coordinates, hp: int, name: String }, which Player and Monster both implement.
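    To make that concrete, here is a rough Python sketch of the two styles (the `Player`/`Monster` types and their fields are hypothetical, chosen to match the example, not taken from the paper):

    ```python
    from dataclasses import dataclass
    from typing import Protocol

    # ADT style: plain data, with free functions that can operate
    # across several data abstractions at once.
    @dataclass
    class Player:
        name: str
        hp: int

    @dataclass
    class Monster:
        name: str
        hp: int

    def hit(attacker: Player, target: Monster, damage: int) -> None:
        """A complex function over two data abstractions at once."""
        target.hp -= damage

    # Object style: an interface both types satisfy, enabling code
    # that is polymorphic over anything Character-shaped.
    class Character(Protocol):
        name: str
        hp: int

    def describe(c: Character) -> str:
        return f"{c.name} ({c.hp} hp)"

    p, m = Player("Alice", 30), Monster("Goblin", 10)
    hit(p, m, 3)
    assert m.hp == 7
    assert describe(p) == "Alice (30 hp)"
    ```

    The point of the sketch is that `hit` needs to see inside both types at once, while `describe` only needs the interface.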

    Now, interface-based polymorphism is an interesting thing to think about and criticise in its own right. It requires some kind of dynamic dispatch (or monomorphization), and hinders optimization across interface boundaries. But the critique of OOP presented in the OP is nothing to do with interfaces or polymorphism.

    The author just dislikes using classes to hold data, but a class that doesn’t implement an interface is basically the same as an ADT. And yet one of the first recommendations in the article is to design your data structures well up-front!

    1. 15

      The main problem I have with these “X is dead” type articles is that they are almost always straw-man arguments set up to prove a point. The other issue I have is that the definition or interpretation of OOP is so varied that I don’t think you can in good faith just say OOP as a whole is bad and be at all clear to the reader. As an industry, I actually think we need to get past these self-constructed camps of OOP vs. Functional, because to me they are disingenuous, and the truth, as it always does, lies in the middle.

      Personally, coming mainly from a Ruby/Rails environment, I use ActiveRecord classes almost exclusively to encapsulate data and abstract the interaction with the database, and then move logic into a place where it really only cares about data in and data out. Is that OOP or Functional? I would argue a combination of both, and I think the power lies in the middle, not in one versus the other as most articles stipulate. But a middle-ground approach doesn’t get the clicks, I guess, so here we are.

      1. 4

        the definition or interpretation of OOP is so varied that I don’t think you can in good faith just say OOP as a whole is bad and be at all clear to the reader

        Wholly agreed.

        The main problem I have with these “X is dead” type articles is that they are almost always straw-man arguments set up to prove a point.

        For a term that evokes such strong emotions, it really is poorly defined (as you observed). Are these straw man arguments, or is the author responding to a set of pro-OOP arguments which don’t represent the pro-OOP arguments with which you’re familiar?

        Just like these criticisms of OOP feel like straw men to you, I imagine all of the “but that’s not real OOP!” responses that follow any criticism of OOP must feel a lot like disingenuous No-True-Scotsman arguments to critics of OOP.

        Personally, I’m a critic, and the only way I know how to navigate the “not true OOP” dodges is to ask what features distinguish OOP from other paradigms in the opinion of the OOP proponent, and then debate whether that feature really is unique to OOP or whether it’s pervasive in other paradigms as well. Once in a while a feature will actually pass through that filter such that we can debate its merits (e.g., inheritance).

        1. 4

          I imagine all of the “but that’s not real OOP!” responses that follow any criticism of OOP must feel a lot like disingenuous No-True-Scotsman arguments to critics of OOP.

          One thing I have observed about OOP is how protean it is: whenever there’s a good idea around, it absorbs it and then pretends it was an inherent part of it all along. Then it deflects criticism by crying “straw man”, or, if we point out the shapes and animals that are taught for real in school, proponents will reply that “proper” OOP is hard, and provide little to no help in how to design an actual program.

          Here’s what I think: in its current form, OOP won’t last, the same as previous forms of OOP didn’t last. Just don’t be surprised if whatever follows ends up being called “OOP” as well.

      2. 8

        The model presented for monsters and players can itself be considered an OO design that misses the overarching problem in such domains. Here’s a well-reasoned, in-depth article on why it is folly; part five has the riveting conclusion.

        Of course, your point isn’t about OOP-based RPGs, but how the article fails to critique OOP.

        After Alan Kay coined OOP, he realized in retrospect that the term would have been better as message-oriented programming. Too many people fixate on the objects, rather than the messages passed betwixt them. Recall that the inspiration for OOP was how messages pass between biological cells. Put another way, when you move your finger: messages from the brain pass to the motor neurons, the neurons release a chemical (a type of message), muscles receive those chemical impulses, then muscle fibers react, and so forth. At no point does any information about the brain’s state leak into other systems; your fingers know nothing about your brain, although they can pass messages back (e.g., pain signals).

        (This is the main reason why get and set accessors are often frowned upon: they break encapsulation, they break modularity, they leak data between components.)
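        As an illustrative sketch of that distinction (a hypothetical `Account` class, not from any source discussed here), in Python:

        ```python
        # Accessor style: callers reach in, read state, and make the
        # decision FOR the object -- internal state leaks to every caller.
        class LeakyAccount:
            def __init__(self, balance):
                self.balance = balance  # effectively a public get/set pair

        def withdraw_leaky(account, amount):
            if account.balance >= amount:   # decision made outside the object
                account.balance -= amount
                return True
            return False

        # Message style: callers send a "withdraw" request; the object
        # decides how to react, and its state never escapes.
        class Account:
            def __init__(self, balance):
                self._balance = balance

            def withdraw(self, amount):
                """React to a withdraw message; the balance stays private."""
                if self._balance >= amount:
                    self._balance -= amount
                    return True
                return False

        acct = Account(100)
        assert acct.withdraw(40) is True
        assert acct.withdraw(100) is False
        ```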

        Many critique OOP, but few seem to study its origins and how—through nature-inspired modularity—it allowed systems to increase in complexity by an order of magnitude over its procedural predecessors. So many critiques of OOP never pick apart actual message-oriented code, the kind that beats at the heart of OOP’s origins.

        1. 1

          Many critique OOP, but few seem to study its origins and how—through nature-inspired modularity—it allowed systems to increase in complexity by an order of magnitude over its procedural programming predecessor.

          Of note, modularity requires neither objects nor message passing!

          For example, the Modula programming language was procedural. Modula came out around the same time as Smalltalk, and introduced the concept of first-class modules (with the data hiding that Smalltalk objects had, except at the module level instead of the object level), a concept that practically every modern programming language, OO and non-OO alike, has adopted today.

        2. 5

          I have to say, after reading the first few paragraphs, I skipped to ‘What to do Instead’. I am aware of many limitations of OOP and have no issue with the idea of learning something new, so: hit me with it. Then the article is like “hmm, well, datastores are nice. The end.”

          The irony is that I feel like I learned more from your comment than from the whole article so thanks for that. While reading the Player.hits(Monster) example I was hoping for the same example reformulated in a non-OOP way. No luck.

          If anyone has actual suggestions for how I could move away from OOP in a practical and achievable way within the areas of software I am active in (game prototypes, e.g. Godot or Unity, Windows desktop applications to pay the bills), I am certainly listening.

          1. 2

            If you haven’t already, I highly recommend watching Mike Acton’s 2014 talk on Data Oriented Design: https://youtu.be/rX0ItVEVjHc

            Rather than focusing on debunking OOP, it focuses on developing the ideal model for software development from first principles.

            1. 1

              Glad I was helpful! I’d really recommend reading the article I linked and summarised – it took me a few goes to get through it (and I had to skip a few sections), but it changed my thinking a lot.

            2. 3

              [interface-based polymorphism] requires some kind of dynamic dispatch (or monomorphization), and hinders optimization across interface boundaries

              You needed to do dispatch anyway, though; if you wanted to treat players and monsters homogenously in some context and then discriminate, then you need to branch on the discriminant.

              Objects, on the other hand, allow interface-based polymorphism – so you might have some kind of interface […] which Player and Monster both implement

              Typeclasses are Haskell’s answer to this; notably, while they do enable interface-based polymorphism, they do not natively admit inheritance or other (arguably—I will not touch those aspects of the present discussion) malaise aspects of OOP.

              1. 1

                You needed to do dispatch anyway, though; if you wanted to treat players and monsters homogenously in some context and then discriminate, then you need to branch on the discriminant.

                Yes, this is a good point. So it’s not like you’re saving any performance by doing the dispatch in ADT handling code rather than in a method polymorphism kind of way. I guess that still leaves the stylistic argument against polymorphism though.

              2. 2

                Just to emphasize your point on Cook’s paper, here is a juicy bit from the paper.

                Any time an object is passed as a value, or returned as a value, the object-oriented program is passing functions as values and returning functions as values. The fact that the functions are collected into records and called methods is irrelevant. As a result, the typical object-oriented program makes far more use of higher-order values than many functional programs.
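                To make Cook’s observation concrete, here is a hypothetical Python sketch: an “object” built with no class at all, just a record of closures over hidden state, so passing it around is literally passing functions as values.

                ```python
                # An "object" as a record (dict) of closures over shared
                # hidden state. The method names are illustrative only.
                def make_counter(start=0):
                    state = {"n": start}

                    def increment():
                        state["n"] += 1

                    def value():
                        return state["n"]

                    # The record of functions IS the object.
                    return {"increment": increment, "value": value}

                counter = make_counter()
                counter["increment"]()
                counter["increment"]()
                assert counter["value"]() == 2
                ```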

                1. 2

                  Now, interface-based polymorphism is an interesting thing to think about and criticise in its own right. It requires some kind of dynamic dispatch (or monomorphization), and hinders optimization across interface boundaries.

                  After coming from Java/Python, where dynamic dispatch and methods essentially go hand in hand, I found that Go’s approach, which clearly differentiates between regular methods and interface methods, really opened my eyes to the overuse of dynamic dispatch in designing OO APIs. Extreme late binding is super cool and all… but so is static analysis and jump-to-definition.

                1. 13

                  I don’t have an overall positive opinion of RuboCop or zealous linting in general, but, @tomdalling, I don’t see why you, or anyone who feels similarly, wouldn’t just customize the cops/rules for whatever project, and move on. Is this article a complaint against the chosen defaults? RuboCop’s authors wouldn’t be able to satisfy everyone; no matter what default they chose for any given style preference, there’ll be some set of people that don’t agree. I think it’s okay for them to have chosen whatever defaults they wanted, since, for the most part, they provide a lot of flexibility for adjusting or omitting cops/rules.

                  1. 6

                    I don’t see why [the author would write this article.] Is this article a complaint against the chosen defaults?

                    No. The article is a complaint about a perceived culture of developers obeying RuboCop rules too fanatically. The article promotes the idea of disabling or reconfiguring RuboCop rules that make code worse in practice.

                    I learned that from the Disclaimer/Conclusion section at the end of the article. I do think the article would have been clearer if it had started with that message.

                    1. 2

                      Making the disclaimer section the main focus of the article, rather than relegating it to a footnote, would have made it much, much better in my opinion. People need to realize that as developers we have tools to get our job done, but they are just that: tools. If a tool doesn’t serve our purpose, it is 100% within our right as professionals not to use that tool, in favor of doing the right thing, especially in cases where the tool actually makes the end product worse. The disclaimer is a MUCH more interesting topic than the complaint about a single RuboCop rule, but I think it’s lost in the minutiae of the rest of the article.

                      1. 2

                        I think the value of RuboCop isn’t the zealotry but the consistency it applies, which I think refutes @tomdalling’s point entirely.

                        If you don’t like the rules, you can change them; that is a major feature of RuboCop.

                        As for the specific example he used, I actually disagree with him: I think the cognitive load and the clarity of what RuboCop recommended are far better than what he initially wanted to write. So in the parallel universe where @tomdalling and I are on the same team, I’m writing code one way and he’s writing code another, and that is exactly why RuboCop is a good thing: these differences are not only brought to the forefront, they are discussed and decided on, and whether you agree or disagree, you both end up agreeing that consistency is worth more than either individual preference, and then RuboCop polices that decision.

                        To me this is a tabs vs spaces kind of discussion, and my answer is: I mostly don’t care, but definitely not both!

                        1. 4

                          Is it better to be consistently less readable, or inconsistent but more readable? What if it only drops you from being 90% consistent to being 85% consistent? Why are we not 100% consistent? Shouldn’t we always use hash rockets instead of colons? Why do we use both normal and postfix conditionals, both if and unless, when it would be more consistent to only use one form?

                          Rather than refuting my point, I think you’ve actually given an example of it. I don’t think we should try to be consistent for the sake of being consistent — consistency for the consistency god, cops for the cop throne. We should be consistent to the extent that it provides benefits. Anything else is cargo culting. That was my point.

                          1. 1

                            Your whole point was around readability and understanding, and how RuboCop spoils them. But consistency of style is a major factor in readability (more major than explicit/implicit nil, or guard expressions).

                            If I’m misunderstanding your point, I apologise, but are you stating that inconsistency is sometimes a benefit? I don’t think I’ve ever experienced that, nor heard that position expressed.

                            All of those questions you raise are exactly the kind of questions you should raise, and then decide as a team what to do. Use linting to raise awareness of these decisions; don’t follow it slavishly. Think “do I like implicit nil… no, so let’s lint against that”, and then the team can move forward. When a new member joins and the linting fails on their first commit, you can have the discussion again, and even change your mind. Raising team awareness is rarely a bad thing.

                            1. 3

                              That’s sort of but not exactly what I meant.

                              Everyone knows that RuboCop can make code better. It wouldn’t exist otherwise. But not everyone knows that RuboCop can make code worse, too. Lots of people see it as purely positive, with no tradeoffs. If you believe that, then you should enable all the cops and never turn them off. I wanted to show that it doesn’t work like that.

                              It’s the same with consistency. Some people view consistency as purely positive, with no tradeoffs. Everyone knows that it’s good to be consistent, but not everyone knows that it’s bad to be over-consistent. We already understand this intuitively, which is why I asked about hash rockets and all the different ways that Ruby provides to write conditionals, but we’re not always conscious of it.

                              Think about it this way. Let’s imagine that Hammer Man has a hammer, and he uses it consistently. Need to drive in a nail? Hit it with the hammer. Need to break a rock? Hit it with the hammer. Need to cut some wood? Smash it apart with the hammer. Need to beat some eggs? Stir them with the hammer. Need to turn off the TV? Throw the hammer at it. That’s consistency. He might look at his neighbour who uses a saw, a whisk, and a TV remote, and think that they are being inconsistent. They have to learn all these different tools, and the tools cost money, and they take up space in your house, when a simple hammer could have been used instead. That’s all true, but the mistake is in thinking that consistency is always good, and inconsistency is always bad. Hammer Man is being over-consistent, which is bad, and the neighbour is being adequately inconsistent, which is good.

                              You can draw parallels between Hammer Man and the way that developers use linters. Now that I think about it, and if I were to be cheeky, I might draw parallels between Hammer Man and people who really like Golang.

                              1. 2

                                Hashrockets? Here’s my anecdote: rubocop says you should be consistent and choose one of foo: bar or :foo => :bar for your codebase (I think it defaults to the former). This is fine as general advice for application code, until I start getting PRs for my rake task dependencies that rewrite them from

                                task :default => [:html]
                                

                                to

                                task default: [:html]
                                

                                The entire point of this notation is that there’s an arrow from the task name to the dependencies. The fact that this evaluates as a single-key hash from a keyword to a list is … incidental: if that syntax instead resulted in the creation of a Proc or an anonymous class or a giant mecha arthropod, I suspect that Weirich would have done his level best to work with whatever that thing was, to get the list of dependencies out of it. But (the unthinking application of) Rubocop has resulted in completely missing the point.

                                Yes, you can turn it off or change the defaults or ignore the warning (unless rubocop is gatekeeping your CI builds) but the point is that it creates a presumption in favour of the “wrong” notation that senior developers (or, I suspect, just old developers in general) now have to spend their time working against.

                                [/rant]
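                                (For completeness: RuboCop can scope a cop per path, so something like the following in .rubocop.yml keeps hashrockets legal in rake files. The glob patterns here are just an example; the cop name is from RuboCop’s docs.)

                                ```yaml
                                Style/HashSyntax:
                                  Exclude:
                                    - 'Rakefile'
                                    - '**/*.rake'
                                ```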

                                1. 1

                                  Yep, that’s the spirit of what I was getting at. The developer writes it a certain way then RuboCop changes it for the worse because it can’t know the original intent, which has both technical costs (worse code) and social costs (time wasted, pissed-off devs).

                                2. 1

                                  I think I agree with the point you are now making.

                                  Though your analogy is wildly inaccurate.

                                  If we are talking Rubocop: everyone on the team is already using Ruby, so they’re all Hammer Man already. So really you are deciding on techniques for using a hammer (by your analogy), not anything to do with other tools.

                                  • Banging things in with the handle is bad, don’t do that.
                                  • Use the back for pulling out nails.
                                  • Use the front for hitting things.

                                  This kind of consistency is always good, and there is no such thing as over-consistency in this context.

                                  1. 1

                                    It wasn’t an analogy, it was a parable that shows there is such a thing as being too consistent.

                                    If we’re talking about Ruby being analogous to a hammer, then there would be a RuboCop cop that bans the use of the claw because it’s not good at hitting things, and if you want to use the claw to pull out nails people will complain about how using two different sides of the hammer is not consistent, and it’s best practice to only use the front side.

                                    There is such a thing as over-consistency in this context — like only using one side of the hammer. I just don’t understand this desperate need devs have to treat consistency like some divine, flawless god that can never be wrong and can’t be questioned.

                      2. 1

                        It is explained at the bottom. I think this is a case of complaining about the article without having actually read the article. We all do it sometimes :)

                      1. 2

                        Note, this presumes your git clone actually set refs/remotes/$REMOTE/HEAD to point to the default branch, which will be the case if you set the config option pull.default to, say, simple. Additionally, old versions of git servers didn’t always report HEAD, so your clone may not have this ref. (cough Atlassian Stash cough)

                        If not, none of this is going to work. Additionally, the remote could technically update the refs out from under you.

                        git ls-remote origin

                        Might be a more “correct” way to get at the default branch name on a remote, but it means a network call, plus looking at the HEAD commit and working out which ref/branch matches HEAD.
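                        Concretely, the difference between the two lookups can be seen with a throwaway repo (the paths and branch name below are just for illustration):

                        ```shell
                        # Demo setup: a tiny "remote" plus a clone of it.
                        cd "$(mktemp -d)"
                        git -c init.defaultBranch=main init -q upstream
                        git -C upstream -c user.name=demo -c user.email=demo@example.com \
                            commit -q --allow-empty -m init
                        git clone -q upstream clone && cd clone

                        # Local, no network: the symbolic ref that `git clone`
                        # recorded for the remote's default branch (fails if it
                        # was never set).
                        git symbolic-ref refs/remotes/origin/HEAD

                        # Network round-trip, but authoritative: ask the remote
                        # which branch its HEAD points at right now.
                        git ls-remote --symref origin HEAD

                        # Re-sync the local guess when the remote's default
                        # branch has changed out from under you.
                        git remote set-head origin --auto
                        ```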

                        1. 1

                          You’re right; my thought was that this would work in the majority of cases (GitHub, etc.) and would avoid the network call, which would slow things down in the general case. Alternatively you could also run git remote set-head origin --auto every so often, but realistically, how often is a repository going to change its default branch? We now live in a world where there is no longer a standard convention for default branch names (it was always possible to change, but GitHub changed the default), and I needed a way to not have to worry about that for every repository.

                          1. 3

                            Yep, it’s fine as-is; I just wanted to basically say “it’s complicated” and that this whole branch-name change hasn’t helped matters and has made git even more sharp-edged. I think it’s good to at least note that these aliases need to be treated in the vein of a manual transmission: miss the clutch and you’ll end up stalling the engine.

                            One other way this can all fall down is if the remote is also not named origin (easy to do as well!). Or you did a git init without any remote.

                            To be honest, I’ve not come up with any set of aliases/wrappers that can adapt to all these permutations, and I wish this had never happened, or that git had a better way to track this than a bunch of weird plumbing commands that solve 80% of the problem. /rant mode off

                        1. 1

                          I have a Synology NAS and am quite happy with it. Nightly backups off-site.

                          1. 1

                            ^ I second this. I’ve backed up multiple computers using Arq + Google Drive, plus random storage to Backblaze. Synology CloudSync makes it really easy to pull from, and back up to, a large number of cloud providers.

                          1. 3

                            This issue is more generic than Python… the true problem comes from piping STDIN and STDOUT from a subprocess.

                            One other alternative is to make sure that you are always streaming the STDOUT and STDERR of a subprocess to a dedicated file rather than to subprocess.PIPE.
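                             A minimal Python sketch of both the trap and the two escapes (the child commands here are just illustrations):

                             ```python
                             import subprocess
                             import tempfile

                             # The trap: with stdout=PIPE, a child that writes more than the
                             # OS pipe buffer (often ~64 KiB) blocks on write while the parent
                             # blocks in wait() -- a deadlock. Sketched, deliberately not run:
                             #   proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
                             #   proc.wait()              # parent waits; child is stuck writing
                             #   out = proc.stdout.read()

                             # Escape 1: communicate() drains stdout/stderr while waiting.
                             proc = subprocess.Popen(
                                 ["python3", "-c", "import sys; sys.stdout.write('x' * 200_000)"],
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                             )
                             out, err = proc.communicate()
                             assert len(out) == 200_000

                             # Escape 2 (as suggested above): stream to a real file, not PIPE.
                             with tempfile.TemporaryFile() as f:
                                 subprocess.run(
                                     ["python3", "-c", "print('done')"],
                                     stdout=f,
                                     check=True,
                                 )
                                 f.seek(0)
                                 assert f.read().strip() == b"done"
                             ```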

                            1. 1

                               Yes, you are absolutely correct. The place where I ran into this was with Python, but it can certainly happen wherever you are piping STDIN and STDOUT and not properly handling the data. In Python, even though the documentation does warn about this, it still seems like an easy bug to introduce, and it can be weird to debug.

                              1. 1

                                The issue isn’t too hard to solve by poll()ing the FDs. The deadlock is an implementation issue.

                              1. 9

                                Scott Manley has a good video on the EM Drive with his thoughts on the implications and the likelihood that these findings will be substantiated.

                                1. 4

                                  I much preferred this as a ‘rebuttal’ - lots of good information/analysis, and why to be skeptical and what it would mean if it is experimentally proven. Without all the name calling.

                                  1. 1

                                    Scott Manley is a great blend of science educator and Kerbal Serial Killer/Space Vehicle Designer.

                                1. 2

                                  Wish git had this built in.

                                  1. 6

                                     Probably never will, because git is, and should be, agnostic about what you’re actually using to host the repositories (git != github). There are a number of facades on top of git (hub, for GitHub, for example) that act as a pass-through for git commands and add extra functionality on top. In my opinion that’s the best way to do that sort of thing.

                                    1. 1

                                       You are correct; I wasn’t thinking of that. They will not accept it.

                                      1. 1

                                         It can make a best guess, though, and try to translate a git:// URL into an http(s) one. Many git servers also serve a website. Git could even have a config variable for the URL to use.

                                        1. 1

                                           Sure it could, but that can be said about pretty much anything. The Unix philosophy is to have software that is minimalist and does its own job extremely well, which I believe Linus and git are looking to follow. Adding a facade on top of git gives the most flexibility and keeps the core application as focused as possible, so it can concentrate on doing that one thing very well.

                                      2. 1

                                        Yes! A friend of mine told me to make a pull-request to github and see if they accept it =)

                                      1. 2

                                         Awesome that this looks like it supports both GitHub and Bitbucket. If you are only using GitHub, take a look at hub, which provides a facade over git and adds a bunch of GitHub-related features, like hub browse, which opens the current repo in the browser.

                                        1. 1

                                           Yes, it works on both GitHub and Bitbucket, and I will test it with GitLab. I will take a look at hub, thanks for sharing.

                                           Although my idea is to keep git-open as simple as possible, maybe I will take some ideas and implement them =)

                                        1. 6

                                          Please remember the software tag for release announcements! :)

                                          Also, it’s great to see that the Elixir folks are introducing reasonable time support. The Erlang time tuple is a little awkward to work with, and having a bunch of folks introducing their own not-quite-compatible types (Ecto.Datetime, Timex, etc.) is not a long-term solution.

                                          1. 4

                                            Sorry about that! First time posting, I’ll remember for next time.

                                          1. 6

                                            It’s not clear if the experiment broke ethical or even legal boundaries, since it relied on confusion if not outright deceit to trick people into installing something other than what they intended to install. Still, the lesson the experiment imparts is worth heeding.

                                             I’m not sure I understand what “ethical or even legal boundaries” they are implying were broken here. The article doesn’t go into detail about what the script he wrote does; if it were something malicious, that would make more sense. But if I read it correctly, he basically wrote a script that shows a warning message telling the developer their mistake, and pings home to register the download in order to see how large the attack vector was. Am I missing something, or is the article trying to make things sound way more interesting than they really were?

                                            1. 6

                                              I think the article is just being bombastic. I don’t see anything unethical about it, it’s basically how all computer security research works.

                                              That being said, judging from the recent CFAA cases, it probably would be considered illegal by US law. It’s a good thing the student lives in Germany and not the US, or they might be looking at jail time (especially since the package infected .mil domains).

                                              1. 5

                                                 I would argue that the specifics of what he did were both unethical and illegal. Illegal by the letter of the Computer Fraud and Abuse Act, as you mentioned: he certainly exceeded authorized access on the machines that downloaded his fraudulent packages, since the users no doubt had no expectation that downloading the packages would result in the searching of their machines or the transmission of data to some outside location. Unethical because his packages scanned the users’ machines, including command history, resulting in potential accidental disclosure of private information. I understand that he had some personal justification for this in the context of his research, but without permission (which would likely have had to be given by the users when they first accessed the package manager, probably via some sort of credential system to track their having opted in to experiments that may expose personal information), this definitely seems like a breach of reasonable ethical practices in the security field.

                                                1. 2

                                                  Where does it say his program scanned the user’s machine? All I got from the article is that it logged its own invocations.

                                                  1. 4

                                                    It’s on page 23 of the thesis for which this work was done. Here’s the quote listing what the fraudulent packages collected and transmitted back to the university machine. Note that all data was transmitted unencrypted over HTTP as the query string of a GET request.

                                                    • The typosquatted package name and the (assumed) correct name of the package. This information was hard-coded in the notification program before the package was distributed. Example: coffe-script and coffee-script (correct name).
                                                    • The package manager name and version that triggered the operation. The package manager name was also hard-coded, before the package was uploaded. The package manager version was retrieved dynamically. Example: pip and the outputs of the command pip --version
                                                    • The operating system and architecture of the host. Example: Linux-3.14.48
                                                    • Boolean flag, that indicates whether the code was run with administrative rights. Getting this information on Windows systems is not trivial and possibly error prone.
                                                    • The past command history of the current user that contains the package manager name as a substring. This information could only be retrieved from unixoid systems, because Windows systems do not store shell command history data. Example: Output of the shell command grep "pip[23]? install" ~/.bash_history
                                                    • A list of installed packages that were installed with the package manager.
                                                    • Hardware information of the host. Example: Outputs of lspci for linux. On OS X, the outputs of system_profiler -detailLevel mini were taken.
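                                                    Per the thesis, all of the above traveled in the clear as the query string of an HTTP GET request. A minimal sketch of what assembling such a URL could look like (the function name, field names, and URL are hypothetical, not the thesis author's actual code):

                                                    ```python
                                                    # Hypothetical sketch: pack the collected fields into an
                                                    # unencrypted GET query string, as the thesis describes.
                                                    import platform
                                                    from urllib.parse import urlencode

                                                    def build_report_url(base_url, typo_name, correct_name, pm_name):
                                                        """Assemble a report URL carrying the collected data."""
                                                        fields = {
                                                            "package": typo_name,       # hard-coded typosquatted name
                                                            "correct": correct_name,    # hard-coded correct name
                                                            "pm": pm_name,              # e.g. "pip"
                                                            "os": platform.platform(),  # e.g. "Linux-3.14.48-..."
                                                        }
                                                        # Everything is visible on the wire: ?package=...&correct=...
                                                        return base_url + "?" + urlencode(fields)

                                                    url = build_report_url("http://example.edu/report",
                                                                           "coffe-script", "coffee-script", "pip")
                                                    ```

                                                    Sending plain GET query strings like this is exactly why the data was exposed to any network observer, not just the university machine.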
                                                    1. 4

                                                      OK, yeah, that’s definitely crossing a line.

                                                  2. 1

                                                    users no doubt had no expectation that downloading the packages would result in searching of their machine or transmission of data to some outside location

                                                    Why do you say that? That’s what pretty much every ruby gem I’ve ever installed did. Digs around on my hard drive for a while, downloads some more pieces, compiles some code, runs some code, blah blah, finally announces it’s done.

                                                    1. 3

                                                      I suppose I should have been more precise. The type of data collection the program did, in particular grepping the bash history of any Linux machine for commands containing the name of the package manager and then transmitting the results back to a remote machine, is probably behavior the average user would not expect.

                                                      1. 1

                                                        A Ruby gem that computes and installs dependencies is not even remotely the same thing as what happened here.

                                                        I absolutely do not expect installing a package or gem will scrape arbitrary information from my system and send it to an unknown third party, and I don’t think many people do expect that or think it’s okay.

                                                1. 3

                                                  This looks really cool and is one feature that I think is missing from Dash. There are tons of editor integrations for Dash, and it’s a great resource for quickly looking up documentation when you need it, but it’s definitely missing a curated set of examples for the documentation you’re looking at. I’ve found most of the documentation I use is pretty devoid of usage examples, which means that if I want to find how something is used I’ll need to go over to Google to search for actual usages.

                                                  I like the way Dash does the integration better than having an always on window that updates while I type, personally I would get really distracted seeing the right side of my screen always flickering with new information.

                                                  1. 3

                                                    Sourcegrapher here. Thanks for the kind words. We built this editor integration because we 100% agree with you—actual usage examples are super valuable.

                                                    You can turn off the live-updating, always-on behavior in the editor plugin. See https://github.com/sourcegraph/sourcegraph-vim#vimrc or https://github.com/sourcegraph/sourcegraph-sublime#auto. Then you just use a hotkey to jump to usage examples. Or you can just keep that browser window in the background (that’s how I use it, since I use a full-screen WM on Linux).

                                                  1. 2

                                                    I recently found regex101 and have been using it for all my regex needs. They have some really nice debug information in the right hand side complete with explanations.

                                                    1. 4

                                                      Looks like an awesome project, best of luck getting funded! For anyone using Chrome as their main browser, I’ve used the Vimium[1] extension with some luck. I’m curious whether you’ve seen/used this extension before, and what the benefits of qutebrowser are over it (besides native support for the vim bindings, which I would think lends itself to a more fluid experience).

                                                      [1]https://chrome.google.com/webstore/detail/vimium/dbepggeogbaibhgnhhndojpepiihcmeb?hl=en

                                                      1. 4

                                                        Vimium (which I used for a longer time before starting qutebrowser) is mostly about keybindings, while mostly keeping the Chrome UI (it has no other choice, with Chromium’s plugin API). It doesn’t have things like a real commandline, easy extensibility, or a minimal UI.

                                                        I think the user interface is really important - I have a relatively low-resolution screen (1366x768), and I don’t want a big address/tab bar I almost never look at.

                                                        Also, with qutebrowser you can do things like :spawn mpv {url} to simply launch mpv to play the current URL. Or :hint links spawn mpv {hint-url} to do the same via hints. Or :download-open to simply open the file you just downloaded. Or edit form fields with e.g. vim by using Ctrl-e.

                                                        From my point of view, qutebrowser compared to Vimium is basically like vim compared to some IDE with really bad vim emulation.

                                                        1. 2

                                                          Thanks so much for the run-down; qutebrowser sounds awesome. I was sad to see that Homebrew dropped QtWebKit, as I was excited to give it a try.

                                                          Using qutebrowser with Homebrew on OS X is currently broken, as Homebrew dropped QtWebKit support with Qt 5.6. I’m working on building a standalone .app for OS X instead, but it’ll still take a few days until it’s ready.

                                                          1. 3

                                                            I built a standalone .dmg/.app for qutebrowser just a few hours ago, I’ll release a v0.6.0 dmg once some people confirmed it works - if you want to test it, that’d be most appreciated! https://t.cmpl.cc/qutebrowser.dmg

                                                            1. 1

                                                              App worked perfectly. I was able to download it and fire it up, no problem. I’ll play around with it a bit more and let you know if anything comes up.

                                                              Sent from my qutebrowser

                                                              1. 1

                                                                Awesome, thanks for testing! I assume you’re on OS X 10.11 (~~Yosemite~~El Capitan)? I’d be really curious if it works on 10.10/10.9 as well.

                                                                1. 1

                                                                  El Capitan actually (10.11.4 (15E65)). I think I might have an older machine I can try it out on; I’ll have to get back to you on that. I did receive a crash signing into github and reported it through the reporting dialog box that came up. Not sure how that reporting system works and whether you’ll eventually get the crash report, but if there’s a better place for me to send it, let me know.

                                                                  1. 1

                                                                    Hmm, I think you’re running into this Qt bug. I fixed it in Qt, but maybe for some reason the Mac I’m building the dmg on didn’t have the fix backported…

                                                                    I think you get an OS X crash report window? Can you look at the details there and confirm the stacktrace mentions WebCore::SocketStreamHandle::platformClose() too?

                                                                    1. 1

                                                                      Sorry for the delay; yes, I see that line in the stack trace:

                                                                      0   libsystem_kernel.dylib          0x00007fff9d1948ea __kill + 10
                                                                      1   libsystem_platform.dylib        0x00007fff8c61852a _sigtramp + 26
                                                                      2   ???                             000000000000000000 0 + 0
                                                                      3   QtWebKit                        0x000000010774bf04 WebCore::SocketStreamHandle::platformClose() + 84
                                                                      4   QtWebKit                        0x000000010774a79a WebCore::SocketStreamHandleBase::disconnect() + 26
                                                                      5   QtWebKit                        0x000000010773bf96 WebCore::WebSocketChannel::fail(WTF::String const&) + 710
                                                                      6   QtWebKit                        0x0000000107739375 WebCore::WebSocket::close(int, WTF::String const&, int&) + 325
                                                                      7   QtWebKit                        0x00000001077fb42d WebCore::jsWebSocketPrototypeFunctionClose(JSC::ExecState*) + 205
                                                                      
                                                                      1. 1

                                                                        That’s indeed the crash I suspected it was - I installed the patched Qt on my build machine and repacked, can you please try https://t.cmpl.cc/qutebrowser-dmgv2.dmg ?

                                                                        1. 1

                                                                          Yup looks like that fixed the crash. I was able to sign into github no problem.

                                                                  2. 1

                                                                    Hi, it seems to run fine on 10.10.4 (Yosemite) for me. Good luck!

                                                          2. 1

                                                            I’ve been a heavy user of Vimium for a few years and I just tried this on my windows machine. It works really well! Will see if I can contribute to development, PyQt5 looks awesome to use.

                                                            1. 1

                                                              I’d be glad! Let me know if you need help :)

                                                          1. 6

                                                            It warms the cockles of my heart to learn that mutt’s still being actively developed :)

                                                            I know I should migrate off of Gmail, but I have the keystrokes in muscle memory at this point.

                                                            1. 2

                                                              I have given up on trying to migrate off Gmail at this point. I’ve tried mutt, pine, Thunderbird, and a host of other GUI-based email clients (most of which are now defunct, I’m looking at you Sparrow) and none of them have stuck. The only caveat is that offline email access is not all that important to me, and I can use something like offlineimap to create a backup of my emails.

                                                              1. 2

                                                                Ah, yeah, Sparrow getting eaten by The Goog was super painful. I loved that client!

                                                                1. 2

                                                                  How about FastMail?

                                                                  1. 1

                                                                    I recently moved off of FastMail back to Gmail after using it for a year. I liked FastMail, but at the end of the day I was missing the integration between the Google products, and frankly the mobile app left something to be desired compared to the Gmail app; the web UI was fine, however.

                                                                    1. 4

                                                                      Not to be too sardonic: but do you mean the integration where people you e-mail with are suggested in Google+, etc.? Or where images shared in an application covered by Google Apps for Work (Hangouts) show up in ad-mined services such as Google+ or Google Photos?

                                                                      One thing I have appreciated when moving away from Google Apps is that my data is much better compartmentalized and I decide what can be linked when.

                                                                2. 2

                                                                  but I have the keystrokes in muscle memory at this point.

                                                                  But you can add (nearly) the same keystrokes to mutt as well.

                                                                  # Gmail-style archive/delete/star shortcuts
                                                                  macro index y "<save-message>=Archive<enter><enter>"
                                                                  macro index d "<save-message>=Trash<enter><enter>"
                                                                  macro index * "<copy-message>=starred<enter><enter>"
                                                                  # jump to the inbox (notmuch virtual folder)
                                                                  macro index,pager gi "<change-vfolder>inbox<enter>" "go to the inbox"
                                                                  # reply-all, as in Gmail
                                                                  bind index,pager a group-reply
                                                                  

                                                                  etc.

                                                                1. 10

                                                                  It actually likely cost much more than $336k - possibly more than a million. There are a few parts at play here.

                                                                    There is a minimum obligation of $336,413.59 (box 26), likely to cover the base period of the contract. But that’s just the first six months of a two-year contract. The contract actually includes three more six-month option periods and has a total cost ceiling of a whopping $1,176,280.72 (see the supply schedule.)

                                                                    There isn’t enough material to decide whether the 18 months of option periods were actually funded. However, contractors almost always get these. Also, this is a time and materials contract, so the TSA may have spent much less than $336k (you would need to look at invoices to see how much IBM actually billed.)

                                                                    Is $1.2M outrageous? The GSA contract vehicle (GS-35F-4984H) given is for general IT hardware, software, and services. The randomizer contract is written for “mobile application development,” which means it was a services contract and mostly went to developers, engineers, project managers, etc.

                                                                  IBM has public rates available for this contract for 2016. Who knows what labor categories they used for billing the government, but going with a rate of $200/hr we get a maximum of about 5881 hours or about 3 person-years of effort.
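                                                                    The back-of-envelope arithmetic above checks out (the $200/hr blended rate and ~2,000 billable hours per person-year are the assumptions):

                                                                    ```python
                                                                    # Verify the figures: ceiling cost / assumed rate,
                                                                    # then / ~2,000 billable hours per person-year.
                                                                    ceiling = 1_176_280.72
                                                                    rate = 200.0

                                                                    hours = ceiling / rate        # about 5,881 hours
                                                                    person_years = hours / 2000   # about 2.9 person-years
                                                                    ```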

                                                                  1. 2

                                                                      Yeah, and if you’re looking for the hourly rates from the government, https://news.slashdot.org/story/15/10/22/2336220/government-team-experiments-with-paying-for-small-open-source-tasks indicates that an average winning rate for a “Senior Consultant” with a BS degree and 5 to 10 years of experience is 171 USD/hour, which has to cover business expenses, overhead, supervision, contract searching, bench time, etc. Compare that with the average salaried employee only making 50 USD/hour.

                                                                    1. 1

                                                                      According to this article[1] the app itself cost $47K and was only part of the entire contract.

                                                                      The total development cost for the randomizer app was $47,400, a TSA spokesperson told Mashable, which was part of the $336,413.59 contract. The spokesperson declined to elaborate on what else the contract entailed.

                                                                      [1] http://mashable.com/2016/04/04/tsa-ibm-randomizer-app/#1x4kszSOHPqo

                                                                      1. 1

                                                                        Thanks for clarifying; I’ve updated the post to include a lot of the info here, and linked to this comment.

                                                                      1. 16

                                                                          I am in the process of trying out emacs + evil after being a long-time vim user. I started with spacemacs but quickly found it pretty confusing to set up, and switched to just straight emacs + evil and haven’t had any issues. I feel the same about all the vim starter kits: they do too much and people don’t understand what’s going on. I find immense value in setting up my environment from scratch and learning about the different pieces and how they work together. Yes, spacemacs adds layers and some other configuration on top, but I found it way more heavy-handed and confusing than I needed.

                                                                        1. 7

                                                                            I created the original Starter Kit for Emacs, and I fully agree. Back in the day (before the package manager) it sorta made sense, but these days the effort is much better spent on creating and documenting individual packages that do one thing well. The Emacs Starter Kit is now fully deprecated, and the readme is just a document explaining why it was a bad idea.

                                                                          1. 5

                                                                            I’m also fine with emacs+evil so far. One hangup is that evil-mode interacts badly with some other packages and modes, though. For example, I use mu4e to read mail, and evil-mode breaks its main menu. There are usually workarounds, but if you use a lot of those modes together, an all-in-one setup where someone has already done the configuration to get everything working together might save time and hassle.

                                                                            1. 2

                                                                                Yeah, that’s a good point; to be fair I haven’t gotten far enough into spacemacs, or emacs for that matter, to experience many weird interactions. I have seen some weird behavior between evil and helm (I think) and some other modes (opening a git interactive rebase seems to completely disable evil mode). I was going to give emacs+evil a few weeks and re-evaluate. If I end up switching back I will miss https://github.com/johanvts/emacs-fireplace though :)

                                                                            2. 4

                                                                              The most useful thing is that the layers provide consistent evil bindings. They also deal with a lot of the quirks when integrating evil modes into holy (non-evil) things. Recreating that would be a lot of work.

                                                                              1. 1

                                                                                Yeah, I’ve definitely noticed some of those quirks and don’t have a great way to figure out what they are and how to fix them. That’s definitely where spacemacs would come in, but honestly it feels like a game of whack-a-mole.

                                                                              2. 1

                                                                                I had the same experience.

                                                                              1. 2

                                                                                My team, despite being mostly local, uses a tool called iDoneThis. We have an in-person/virtual standup Monday morning to sync on work for the week and then use iDoneThis to stay in sync for the rest of the week. This is really helpful for when people are out and want to stay caught up on what everyone is working on. It also really helps look back and reflect on what you’ve worked on.

                                                                                1. 3

                                                                                  We played with iDoneThis at FreeAgent many years ago, and someone also pointed me at StandUpMail, which from the look of things is fairly similar. I can definitely see those being useful for us, so we might have another look at those tools.

                                                                                  1. 1

                                                                                    I looked at iDoneThis a few years back for a very small team I led. I loved the idea and was shocked there was no open source clone. Ended up not using it because of size, but if I ran a team that was either bigger or distributed, a similar tool would be very helpful.

                                                                                    The question then becomes: when do you have the face-to-face time between team members (managers will get their one-to-ones, of course)? The options I see are:

                                                                                    • ad hoc, just through the course of working together
                                                                                    • scheduled (I have heard of peer one-to-ones, which is an interesting idea)
                                                                                    • weekly group video calls, as other comments have mentioned