This was quite helpful as an amateur vim user.
I remain skeptical about the supposed necessity of hjkl vs the arrow keys, but the rest of it is good stuff.
Moving to hjkl over the arrows was a hard move for me, but it helped me think more in terms of motions instead of movement. That way, it doesn’t matter if you are moving up two lines or yanking two lines - it is the same motion. You aren’t doing one thing to move and another to modify your action. FWIW, this article really helped me in terms of “thinking in vim” https://yanpritzker.com/learn-to-speak-vim-verbs-nouns-and-modifiers-d7bfed1f6b2d#.awwmgb6r9
For those of you having trouble getting the muscle memory for hjkl down, there’s [this game]. I found it really helpful when I was first learning.
Being able to move the cursor without taking my fingers off the home row, and getting used to moving the cursor only in normal mode (making use of the other movement keys as well), increased my editing speed a lot.
I also recommend remapping Caps Lock to Esc.
Since it looks like something filtered the game, googling ‘vim adventures’ gets you a fun one for teaching vim.
I learned to hjkl using nethack, any roguelike that supports hjkl should get you there pretty fast though.
Moving to hjkl is a Good Thing. Must admit, I didn’t find the transition too difficult, since when I was learning vi the arrow keys never seemed to work when I logged in to remote systems using telnet (yes, it was that long ago…).
Something that can help: configuring the other applications you use to use the same keys. For example, I use vi-style keybindings in mutt, tin, etc. If you use Gmail, the same navigation keys work there too (not that I’d condone using Gmail, but some people seem to like it…).
This is not all that different from developing in a local VM per project.
I worked in an environment where I could only use a very bad Windows box, so I instituted a similar process. Instead of doing a full remote session on the box, I would just PuTTY into the machine and do all of my development in tmux and vim. It worked great.
As to security, you can run your X session through ssh and it’ll be as secure as your private key (don’t use passwords!). The only other concern is the general security of your cloud provider, but there isn’t much you can do there. You probably aren’t a big enough target for someone to be hitting you with cross-VM attacks anyway, so no need to get too paranoid :)
I think I’d still do ‘heavy’ things (e.g. IntelliJ) locally, just use the remote desktop for lighter, but ‘always on’ things.
Yeah, I guess I’m more worried about my ability to lock down a Linux installation than I am about the provider.
Security of the connection should be ok - connecting by ssh seems to be doable.
Work: Finishing up a python tool and doing my out processing, last day is Wednesday!
Not Work: Packing and preparing for moving to Chicago next week!
I don’t think the site sees enough content. I think we should institute weekly submission requirements, perhaps 3 per week.
Question to lobste.rs here: Is it necessary for a Ruby developer to know how to implement a linked list at all?
To me that sounds like a weird thing to test for in an application developer since practically all application languages have their own list object.
Actually, I have an interesting data point there. I just had a friend join me at Google, after they’d spent more than a decade at a large defense contractor well-known for both software and hardware. Google’s interviews are famous for involving algorithm questions - slightly more complicated than linked-list implementation, but it would be hard to pass them without knowing it as background knowledge. The other company’s are not.
According to my friend, many of the highly productive programmers they know from the other company will cheerfully talk about how glad they are to leave their undergrad algorithms courses in the past and forget everything from them. Googlers… have a contrasting attitude, which was a large factor in coming here for this friend, and for myself.
My conclusion is that whether data structures and algorithms knowledge is important to programmers depends on the nature of the work, and is also a culture question. It would be a surprise here to meet a coworker who wasn’t at least interested in discussing algorithms topics, even though they are only occasionally of direct importance.
Depends on the work you’re doing. Plenty of developers can do their jobs perfectly well without understanding fundamental data structures.
It would seem impossible to do effective performance analysis & many other tasks without understanding how basic data structures & algorithms work, though.
That is to say: is it necessary in order to develop? No.
Is it necessary in order to be successful long term? Almost certainly.
Depends wildly on what you mean by successful, and what you’re working on in the day to day. Does seem like a waste of money to take a bunch of algorithms/programming classes and still be unable to implement a linked list.
There’s an implicit assumption here that the primary goal of a college education is to be marketable. For many I would imagine that becoming marketable is not in fact their primary goal, and is superseded by things ranging from fulfilling the desires of some real or imagined societal or familial pressure, to broadening their cultural and intellectual horizons through interaction with people of varying backgrounds and fields. That’s not to say that being marketable isn’t important (I think it is quite important for long term happiness, as much as we’d like to imagine we can all be happy making low wages working with a non-marketable degree), but that we in professional STEM fields may overestimate the degree to which others value the marketability of their degree.
It’s important to know how it’s implemented, so that in the event you find yourself using one (even if it’s fundamental, as in Erlang, or in a stdlib somewhere), you understand its characteristics. Actually performing the implementation is, however, as you say, utterly pointless now, just like implementing any sort or any tree.
I think it’s also important from a communication perspective. When I’m working with other developers, I expect some basic fluency with the fundamental data structures and algorithms. I wouldn’t expect someone to implement a linked list, but I would expect them to understand when the business problem can be well modeled as a linked list or tree, and to be able to communicate the idea using those terms.
Writing a simple version of either is an easy way to demonstrate that understanding.
I don’t know about ‘utterly pointless’ – understanding how different trees or sorts are implemented is valuable in recognizing other data structures you have to build that are near-copies. It’s perhaps not super relevant to CRUD-building, but it matters if you’re doing anything nontrivial behind that CRUD (for instance, many of the applications I’ve worked on have had very nontrivial business layers, involving things like decision support trees). Understanding how to structure those trees effectively relied heavily on my understanding of abstract data structures.
I’m certainly not saying it’s the most important thing, but ‘utterly pointless’ is maybe a bit overzealous. This goes for things like knowing how to implement depth-first vs. breadth-first search, too – or understanding the complexity of a custom merge-and-balance operation vs. implementing a self-balancing tree (the aforementioned decision support tree program involved a fair amount of theory about how to implement it effectively, whether via an M&B approach or an online/self-balancing one).
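To make the DFS vs. BFS distinction concrete, here’s a minimal sketch in Ruby (the Node struct and traversal names are my own invention, not from any particular library):

```ruby
# Minimal tree node for illustration only.
Node = Struct.new(:value, :children)

# Depth-first (preorder) traversal via recursion.
def depth_first(node, &block)
  block.call(node.value)
  node.children.each { |child| depth_first(child, &block) }
end

# Breadth-first traversal via an explicit queue.
def breadth_first(root)
  queue = [root]
  until queue.empty?
    node = queue.shift
    yield node.value
    queue.concat(node.children)
  end
end

tree = Node.new(1, [Node.new(2, [Node.new(4, [])]), Node.new(3, [])])

dfs_order = []
depth_first(tree) { |v| dfs_order << v }   # visits 1, 2, 4, 3

bfs_order = []
breadth_first(tree) { |v| bfs_order << v } # visits 1, 2, 3, 4
```

The only structural difference is the work-list discipline: a stack (here, the call stack) gives depth-first order, a queue gives breadth-first.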
I agree you need to know how to use trees, and communicate about their use; and that some stdlibs don’t have exactly the right tree types for every possible use case. But the context here is what questions you’d ask in a developer interview. Asking for a de novo implementation of an online red-black tree merely tests whether the interviewee has recently completed an algorithms course in college.
Ah – that I totally agree with. I’m not sure I could give you a de novo implementation of a red-black tree without the aid of a few pots of coffee and a couple of algorithms books. Much less on the fly in an interview.
As you say, the setting and the time constraint make even simple things much harder. “Implement this data structure” is NOT a good interview challenge, because it takes a few hours to do properly, even with reference materials at hand.
As just one data point, I am a Ruby programmer of 4 years now and I do not know how to implement a linked list.
That’s awesome. I bet there are a very large number of Python and PHP programmers who have the same experience, and I bet most of you folks can go to your graves with fulfilling careers and lives without that ever being an issue. All of those languages are mutable and strongly prefer arrays and hash tables/dictionaries over linked lists in their stdlibs anyway.
If the answer is “no”, should they know how to implement anything? On the other hand, it seems really strange they asked for it to be implemented in Java.
I wouldn’t expect them to know all the details off the top of their head, but it’s not a very difficult problem at all. Even if they’ve never heard of linked lists before, they should be able to code it up once they know it’s a series of linked nodes. It’s not like they’re asking for some exotic balanced tree with tons of pointer juggling.
That said, 30 minutes to implement the whole List API is a little tight for a junior developer. Hearing that it was 25 public methods made it sound unreasonable, but looking at the docs, most of them are just wrappers around some variation of a while(…) loop, so it ends up not being too bad. If I had to use it as an interview question, I’d probably bump it up to 60 or 90 minutes, though.
I’d be interested in seeing the code the author and his co-workers came up with. 6 hours seems like a really long time to not get the whole thing working.
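For what it’s worth, a bare-bones singly linked list in Ruby might look something like this — a sketch of the idea, not the actual interview solution, and the names are my own:

```ruby
# A bare-bones singly linked list; names and API are illustrative only.
class LinkedList
  include Enumerable

  Node = Struct.new(:value, :next_node)

  attr_reader :size

  def initialize
    @head = nil
    @tail = nil
    @size = 0
  end

  # Append in O(1) by keeping a tail pointer.
  def push(value)
    node = Node.new(value, nil)
    if @tail
      @tail.next_node = node
    else
      @head = node
    end
    @tail = node
    @size += 1
    self
  end

  # Walk the chain of nodes; Enumerable builds map/select/to_a on top of this.
  def each
    node = @head
    while node
      yield node.value
      node = node.next_node
    end
  end
end

list = LinkedList.new
list.push(1).push(2).push(3)
list.to_a   # => [1, 2, 3]
```

Most of a larger List API really would be more methods written against the same `while node` walk, which matches the “wrappers around a while loop” observation above.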
Entropy being a sellable commodity is an interesting idea but as far as I can tell this ‘business’ is just an exercise in entrepreneurship for the girl running it (which is neat but not really a lobsters thing in my opinion).
http://world.std.com/~reinhold/diceware.wordlist.asc and a die or maybe some git commits should get you the same result, with less chance of a govt getting your password from the mail.
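If you want to skip the physical dice entirely, a quick Ruby sketch (the word list here is a placeholder; in practice you’d draw from the full 7,776-word Diceware list so each word carries about 12.9 bits of entropy):

```ruby
require "securerandom"

# Placeholder word list; substitute the full Diceware list in practice.
WORDS = %w[correct horse battery staple cloud anchor]

# Pick `words` entries uniformly at random using a CSPRNG.
def passphrase(wordlist, words: 6)
  Array.new(words) { wordlist[SecureRandom.random_number(wordlist.length)] }.join(" ")
end

puts passphrase(WORDS)
```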
Yes, but on the other hand it’s a cool way to raise awareness about password security, and useful to non-technical people.
Work: Porting my old team’s existing code base to coexist with the code base of the project we got merged with last week, C++ and Boost fun times!
Not work: Spinning up the job search for Chicago area in preparation for a coming move and doing general interview prep!
I like the idea of this a lot, especially in the context of languages that already put you close to the AST or provide macro functionality, but it sounds like it retreads a lot of the territory of Smalltalk development environments. Using a tool like this as a teaching tool would be interesting (move them on to vim or emacs if they show promise!).
I wish they would have included a video (even if they had no plan to release the source).
It’s good to see security-focused products both sharing source code and encouraging security researchers to find the bugs.
Does anyone have a thought on how much moving from human-readable protocols, such as HTTP, to binary ones, like HTTP/2, will change things? In many respects, I think binary protocols become easier to parse safely. Especially things like integers, which are fixed-size in the protocol rather than variable and unbounded as in human-readable formats. In some sense I lament HTTP/2, because HTTP’s simplicity made it so easy for someone to get in and get excited, but if it leads to more safety, that is a worthwhile tradeoff.
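To illustrate the fixed-size point with a toy Ruby example (HTTP/2’s actual frame format is more involved than this):

```ruby
# In a text protocol, a numeric field is just digits: the sender controls
# how many arrive, and the parser has to cope with arbitrary magnitude.
text_length = "18446744073709551617"
huge = Integer(text_length)          # happily exceeds 64 bits in Ruby

# In a binary protocol, the field is exactly four bytes, big-endian,
# and can never encode more than 2**32 - 1.
frame = [1234].pack("N")             # 32-bit unsigned, network byte order
frame.bytesize                       # => 4
frame.unpack1("N")                   # => 1234
```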
Human-readable to binary wouldn’t change the security profile much. A lot of the problems we’re seeing today stem from manual memory management, where a flaw in parsing leads to an exploitable condition instead of a crash (not that logic bugs don’t lead to exploitable conditions; they can).
Now, there is also something to be said for creating protocols that are easier to implement in software, but HTTP isn’t all that hard to parse compared to something with an insane format like Adobe Flash.
Part of the thrust of the paper is that people are implementing things over and over again for performance reasons. This was a big fetish in Node.js and various Ruby libraries for a while: “look at how fast our C HTTP parser is”. Perhaps binary protocols lessen this gap, leaving less reason to implement something in such an unsafe language as C.
I’ve been thinking a lot about Bram’s Law. Maybe “obvious and easy to get wrong” is nice for starting and growing. On the other hand, Postel’s Maxim means you’ll probably be stuck interoperating with bad implementations.
Postel’s Maxim seems to be getting some flak these days; perhaps we’re seeing the end of an era in terms of complexity in input processing.
Bram’s Law makes sense. If you assume Bram’s Law you need to hold to Postel’s Maxim.
In practice, though, you need to assume that you’re going to see utterly terrible implementations and even hostile ones. So even if you are liberal in what you accept, you need to assume that someone is going to send you hostile input, and handing them a potential Turing machine is usually less than ideal.
The impact of imperfect implementations is now a real problem. An exploitable implementation impacts many more people than the person who deployed the code, and no one is held accountable for it today.
I’m looking at some third party code at the moment…. It seems to firmly disprove the converse of Bram’s Law…
If Bram’s Law is…
The easier a piece of software is to write, the worse it’s implemented in practice.
ie. The fact that a piece of software was hard to write, in no way means it is implemented well.
You can always test a piece of software into the shape of a working application…..
…but inspection of the actual code may induce nausea and vomiting.
Oh indeed, I have seen too many horrors to ever entertain the converse of Bram’s Law.
The way I view it, a binary protocol is an ordinary ASCII protocol with the lexical/tokenization step already done (in a perhaps painful and uncomfortable manner). I.e., yes, your numbers have been tokenized and converted to binary… but you still have to handle byte order / alignment / struct packing / bit fields….
I.e., a binary protocol has all the same problems… just the tokenization step has a slightly different set of them.
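For example, the same 32-bit value serializes differently depending on byte order, so reading with the wrong convention silently produces a different number (Ruby sketch):

```ruby
value = 0x0A0B0C0D

big    = [value].pack("N")   # network / big-endian bytes: 0A 0B 0C 0D
little = [value].pack("V")   # little-endian bytes:        0D 0C 0B 0A

# Reading with the wrong byte order yields a different number, no error raised.
misread = big.unpack1("V")   # => 0x0D0C0B0A, not 0x0A0B0C0D
```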