@work: finishing off projects; getting ready to leave software engineering professionally.
@home: finishing work. :‘)
31 December will be the day…. that I’ll no longer produce software “professionally”.
Sidenote: Am I still allowed to lurk here? ;)
out of curiosity, why are you leaving the profession?
Not out of hate; just preventing the risk of complacency. ;-)
I’ve been earning money with software ever since I was 13 (which makes it 18 years), it’s time for a change. I’ll still keep building stuff, but it’ll be for kicks/myself instead of for deadlines/for storypoints.
The author complains about users not auditing installed plugins, but then recommends a new solution by saying:
“I haven’t audited it conclusively but its relatively small codebase includes lots of https:// and no http:// or git:// that I could see.”
That seems like somewhat of an audit to me ;-)
@work: Starting on extracting some components from our main product’s big-ball-o-mud Ruby app into separate services (among others in Elixir).
@!work: Nothing… waiting for a contract proposal from a potential new employer… Leaving software engineering… insert mixed feelings smiley here
Although there’s no wide agreement on what security and privacy properties a future cryptography-enhanced chat service should have, I personally feel that metadata privacy - who’s talking to whom - should be a top priority, in fact even above content privacy. That said, it’s hard to have the one without the other, so that’s not a real conflict.
Existing architectures do not do this. Nothing where your list of contacts is stored server-side does this, and nothing where there’s any sort of recipient identifier by which messages can be retrieved does this. The important property is not actually that any identifier be opaque - that’s useful but not sufficient. It’s that third parties not be able to correlate the identifier with itself over time.
To meaningfully address the problem, one wants to get into onion routing protocols, graph-rendezvous algorithms, content-addressable storage, … but there’s not an obvious solution; those just seem likely to be involved. Also, the group-chat case is substantially harder than the 1:1.
Another commenter mentioned a different point in the problem-space, where it’s acceptable that there be some centralized party as long as it’s one who doesn’t have regulatory or strategic implications. That’s also legitimate, and serves some people’s needs; full metadata privacy is somewhat of a holy grail, and not necessarily a good use of time right now given the difficulty. But any discussion has to start by identifying a local optimum that serves somebody’s needs, and (once these efforts actually start to get anywhere…), there are going to be multiple attempts going on in parallel because any feasible choice is excluding somebody.
Storing metadata on a server is completely plausible (edit: HERE BE OPINION!!!) when performed encrypted and when using one-time “tokens” for your identity, one per other identity (i.e. contact). As long as these tokens are derivable from something you-and-only-you have (i.e. a PBKDF-like construct) you should be able to “figure it out securely”. Kinda like “anonymous pgp-subkeys”.
n.b. I’ve been toying around with this and would love some validation on these “thoughts”; I decided not to implement ‘anonymity’ in app mentioned in my other reply since it’s… “not a use case” (gotta love commercial interests ;-)).
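To make the “tokens derivable from something you-and-only-you have” idea concrete, here’s a minimal sketch assuming a PBKDF-style derivation. Everything here (the method name, the salt scheme, the iteration count) is made up for illustration, not a description of any existing system:

```ruby
require "openssl"

# Derive a distinct, unlinkable token for each (contact, counter) pair
# from a single master secret. Only the holder of master_secret can
# reproduce the tokens, but to an outside observer they are unrelated.
def derive_contact_token(master_secret, contact_id, counter: 0)
  salt = "#{contact_id}:#{counter}"
  OpenSSL::PKCS5.pbkdf2_hmac(
    master_secret, salt, 100_000, 32, OpenSSL::Digest.new("SHA256")
  ).unpack1("H*")
end

alice_to_bob   = derive_contact_token("alice-master-secret", "bob")
alice_to_carol = derive_contact_token("alice-master-secret", "carol")

# Tokens for different contacts share no visible relationship,
# yet Alice can re-derive either one deterministically.
raise "tokens must differ" if alice_to_bob == alice_to_carol
```

Bumping the counter gives the “one-time” property per message or session; the server only ever sees opaque 32-byte values.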
I’m working on this - commercially - as a “behind the firewall” service for companies that want to use an “IM” service like e.g. Slack but are not allowed to due to e.g. SOX.
PGP/GPG is a viable option for this, although it imposes some usability limits; these are entirely acceptable when you clearly communicate to your users that they stem from security and privacy requirements.
E.g.: When “joining” a room/group-chat you’re not able to see earlier messages since these are not encrypted for you to read. This is actually a feature since it ensures the privacy of all earlier messages. Otherwise you would have to ‘unanimously agree’ on inviting a user into a channel.
Key verification is still a thing of course; actually checking fingerprints etc. This can be assisted - in our use case - by also using a company’s “directory service” to store said pubkeys.
How would it scale with many users in a channel, though?
Multiple recipients; users can only read messages sent after they joined a channel. If performance becomes a problem we might also consider multiparty Diffie-Hellman with “renegotiation” after a new user joins (same issue: messages are only readable from the moment a user joins).
The use case allows us to define a max number of users in a channel, i.e. more than 128 users in a channel is not considered a use case.
edit: is this what you meant by “scale”?
Yes, it is.
Please let me know if you have any further considerations/concerns! Would love to discuss this further since it’s an active development/r&d project for me.
I’ve subscribed to this method for a long while but I tend to substitute ed for vim and add tmux for splits. If only there was a newer line editor that had line editing capabilities with vim keybindings…
I still love ed as an editor; I’m still not comfortable doing actual coding work in it, but I reach for ed more and more for small config edits after reading “Actually using ed” (http://blog.sanctum.geek.nz/actually-using-ed/).
Anyone here actually using ed … for real development tasks?
I actually use it for about 90% of my daily work. I realized after years of Vim usage that I was mostly just using ex commands anyway. I’ve become fairly productive with ed aside from one issue: changing a line. That’s where the ‘line editor with line editing’ thing came from. Instead of just replacing the line, I’d love to just populate the line with the content and use a line editor to actually edit it.
I have directories filled with stalled attempts at this; there’s a half-completed Go implementation with a fork of golang.org/x/crypto/ssh/terminal, a rewrite of Plan 9 ed in C99 for modern *nix with linenoise, and an attempt to add linenoise with modifications to the heirloom version of ed.
More modern regex support would probably be convenient as well. On the whole I find that ed integrates well with my workflow and presents only minor annoyances. I would love to see a resurgence and remixing of the traditional line editor, however.
Have you checked out sam from plan9/p9p? Probably, as you mention working with plan9’s ed. Heck, it might be the same editor.
I’d be interested in your thoughts on it.
You know, I used it for a bit but mostly kept getting sam commands confused with ed commands. Something like 157,189n just falls out of my fingers and I wound up just reverting back to Plan 9’s ed (which is still the version I use). The structural regular expressions are great though and I’d love to see them implemented in other editors. I should try using sam again; the few differences in commands are worth it for the structural regexes alone.
I’ve also been meaning to track down the sources for qed for archaeological purposes.
Wow, it’s neat to see that mentioned. I’ve never used it, but I read about it extensively many years ago when I was more interested in text editors. It’s always a surprise when someone else remembers!
Care sharing one of said directories somewhere?
vi + open mode?
I didn’t even know open mode was a thing but I remember reading about open mode in ed from an interview with Bill Joy:
But while we were doing that, we were sort of hacking around on ed just to add things. Chuck came in late one night and put in open mode - where you can move the cursor on the bottom line of the CRT.
The real question is how do you enable open mode in vi/vim without having a hardcopy terminal?
vim/nvi doesn’t have it. Use ex from ex-vi, then, e.g.:
% TERM=dumb /usr/bin/vi /etc/passwd
[Using open mode]
"/etc/passwd" [Read only] 24 lines, 1520 characters
Installed. It’s very interesting. It’ll take a little getting used to the ex commands.
"/tmp/ed" [New file]
No lines in the buffer
No write since last change (:quit! overrides)
No write since last change (:quit! overrides)
ed (plan 9)
One big thing I like about ed is the ability to look through the scrollback and see changes as they were made, even after closing a file or while editing another file. The use of a full screen interface removes this ability. I’ll definitely play with this a bit more though. Thank you for the recommendation.
is a t^A^E^[^[
Really nice read; this is exactly the reason why “we” are now actively using Go for our internal tooling and - for the past two weeks - have been building in Go some backend server stuff we used to build in Ruby. I gave a short workshop three weeks ago introducing the language, the basic constructs and the ecosystem, and at least 3 developers here are now actively using it on a daily basis.
It’s a stupid language that actually makes it ‘semi-hard’ to do clever stuff.
In other words, Go represents a kind of Machiavellian power play, orchestrated by slow-and-careful programmers who are tired of suffering for the sins of fast-and-loose programmers. The Go documentation refers quite often to intolerable 45-minute build times suffered by the original designers, and I can’t help but imagine them sitting around and seething about all those unused imports from those “other” programmers, that is, the “bad” programmers. Their solution was not to engage and educate those programmers to change their habits, but rather design a new language that the bad programmers would be compelled to use — and tie down the language sufficiently so that “bad” practices, such as a program containing unused variables, were impossible.
I don’t think there’s anything Machiavellian about designing your tools to fit your notions about correctness. It happens that Go has a lot of traction because it’s from Google, but it wears the history and prejudices of its creators happily on its sleeve. It happens that I think Pike et al are wrong about a lot of things, but it’s an honest wrongness.
Fortunately you don’t have to agree with someone on everything to be able to use a language they built.
Just agree to disagree, move along people, keep calm and build awesome stuff.
Well, sure. But technologies have their creators' ideas baked into them, and to use them to their fullest you can’t be fighting all the time. If your view of the world agrees with that of Pike or Odersky or Wall, then you are off to the races and good on you; but if you want something different, or disagree with the ideology of a tool, then it’s useless to fight; switch.
OMG I handed in my dissertation last Friday after like 7 full days (and one full night) of editing and writing and polishing. This was the work on compiling Idris -> Erlang and concurrency stuff. I have a poster to prepare for next week, but should be alright.
I still have courses, including 2 courseworks, and finals to finish, but the pressure is off for the moment.
Congrats! A lot of people don’t finish their dissertation. You have made it that far at least :)
Oh, this is just for my Bachelors. Masters/PhD dissertation will be in a few years.
BSc. is the most important one ;-)
N.b.: this is not a plug; i don’t WANT anyone to use this, just for feedback.
I released one ruby gem in my entire career (where “released” is a big word); would you guys consider this “not shitty enough to release upon the world”, or would you rather have me erase this pile from existence?
(I’m trying to get something like a benchmark out of this ;))
Since nobody else has given you feedback, here’s mine, although I don’t use Ruby.
It’s small, but small is not the same as low-quality. From Haskell I’m used to the idea that package boundaries can be chosen around clear abstractions at a fine grain, and I see many small packages as strongly preferable to a few large ones. I think the functionality you’ve included is consistent with that.
From a non-Rubyist’s perspective, I see no obvious red flags in the code itself, but that just means that you paid attention to formatting. :)
I’d definitely put this in a public github repo, if I had written it. I might or might not submit it as a gem for … wherever gems are centrally kept? That would depend on whether I felt it was an entry in an already-crowded space. If it’s something that isn’t already there, of course I’d submit it. If it’s already there, I’d think about whether I felt it was part of that conversation rather than just reiterating a code pattern that had been “said” already.
Thanks for your feedback, much appreciated.
I agree with you on whether to submit it to gem-heaven-central: Rubygems. I decided to do so only because … well, to have created a gem to be honest. Sue me ;-)
That’s fair too. :)
@home | @work: Preparing a company-wide workshop on Go (at least for the ‘techies’): why it’s suitable for our line of business and how we’re going to start using it in 2015 (n.b. there are already a few projects running “in production” that were completed in Go as a “feasibility study”).
Exciting stuff! So any tips/experiences are welcome!
The most exciting thing about this whole discussion is - imo - the number of comments on the entry. Period.
I’m just not that into community politics ;)
The culture that sprung up around Rails is the problem.
I can’t stand to use Rails because I always feel like I’m playing in DHH’s sandbox. When using Rails, it’s much less Ruby and much more Rails. There’s a vast, beautiful ocean of computation out there, but Rails presents it as a little puddle in the sandbox. A commoditized architecture, by definition, can only be mediocre at best.
I no longer believe more and more metaprogramming can solve the Software Problem. Application devs respond by learning less and caring less, letting the magic fill the gap for them. Anything that requires more rigor on their part is rejected by the mainstream, ensuring that software quality will not increase over time. I still believe in tools, but I believe developers have to care and practice in order to write good code.
Even if you’re only using Ruby, it’s more Rails than Ruby, at least when searching for documentation. “How do I do X?” (e.g. title case a string) always results in an answer “just do Y” (“foo”.titlecase) that only works if you’ve included at least some of the rails gems. Try to find out which gem? It changes every week, but nobody seems to care since everybody just uses the whole thing.
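For what it’s worth, neither `titlecase` nor `titleize` is core Ruby; as far as I know the latter comes from ActiveSupport (the activesupport gem). A dependency-free approximation for simple ASCII strings looks something like this (the method name is just made up here):

```ruby
# Plain-Ruby title casing: split on whitespace, capitalize each word.
# Note String#capitalize also downcases the rest of the word, so this
# is only a rough stand-in for ActiveSupport's inflections.
def title_case(str)
  str.split(/\s+/).map(&:capitalize).join(" ")
end

puts title_case("the rails community")  # => "The Rails Community"
```

Which is exactly the point of the complaint: the one-liner answers you find online silently assume the ActiveSupport version.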
I have no comment on the specific example of title casing, but I have found the Ruby language documentation in general to be excellent, and I have not had issues finding non-gem solutions to my google queries.
The Rails community does rely on many external libraries (although I hardly think more than the Java ecosystem for example), but then the primary use case of Rails is rapid prototyping. I will concede that I have seen wastefully added gems, but I will equally say that if your goal is to get a startup MVP into production, maybe writing an authentication framework is not the best use of your time (use devise for example).
There are a lot of problems with Rails, and I am by no means a fan of the DHH style the project has taken. But having used Rails for the better part of 5 years now, I think that these criticisms are excessive, miss the mark, and are not reflective of a mature/experienced Rails dev. Any responsible engineer evaluates the code quality, test coverage and current maintenance (at a minimum) of any library she brings into a project, and the fact that some engineers fail to do their due diligence is in no way unique or even over-represented in Rails.
Your comment seems an inadvertent confirmation of the problem I’m talking about. I wasn’t using rails (nor did I want to, since I wasn’t building a web app), but most of your comment is explaining the merits of rails.
Apologies :) I realize I was responding at least in part to the comment below yours, which is unfair to you and inaccurate.
The Rails (not ruby!) community always feels a bit “iphone hipster” to me: there’s a gem for that!
Your skills as a Rails developer are measured in how many Gems you know/can use in a single app.
I’m not a huge Rails fan (I was involved in another Framework), but - isn’t that a good thing? It means code reuse is there, possible and happens. There’s an active “commons”.
Sure, until all you do is combine gems, and then hit a brick wall when you have to “innovate”. I’ve seen it happen unfortunately…
Yes, but “some people build stuff on their own and some just assemble” doesn’t strike me as particularly new or rooted to the Rails community. All these gems come from somewhere…
The groupthink around open source is enormous: “open source everything except your core business!!!” OSS is not a guarantee of quality. Actually, I’m petrified of future programming ventures being overrun by this simplistic, collectivist mindset, whereby devs are mostly integrating all the ‘best-practice’ OSS libs decided by the ‘community’ (read: self-promoting hegemony).
Additionally, I believe OSS is ultimately a bad vehicle for R&D, mostly because R&D is expensive. The result of R&D is often open source (React), but OSS tends to imitate rather than innovate. Additionally, innovative OSS often faces adoption hurdles: get too far ahead of the crowd, and network effects ensure that few will try to understand your library, mostly because no one else has.
I think there is a perception mismatch between those who want to advance things and those who “just build”.
My first job was at a company whose core business was selling websites to pharmacies. Some static pages, a module for adding news, a module for calendaring if they planned events. Other than that, they had really good, css-editable templates to make every site look personal in 1-2 hours. They didn’t have much technical experience there, but they were providing a lot of value. They were just sourcing the commons and thus enabled small businesses to have affordable web services. It took me a year to understand that and stop being frustrated there, because I wanted to build new stuff.
Open source moves the bar of what can be implemented just by sourcing the commons every year. People that “just” assemble gems are experts in exactly that: providing off-the-shelf products from the commons. That’s the collective group that we see: a lot of busy people that provide services in short timeframes. They do pagination using will_paginate because it costs 15 minutes instead of an hour. That’s great. It’s just not a model for the next soundcloud.
That’s why I think all this is great and the success is well-deserved. The only thing I still see lacking is a good navigator for the commons.
[Comment removed by author]
I’d like to have proof of that. Just throwing around the assumption (that the parent didn’t express) doesn’t advance the discussion in any meaningful way.
Maybe that is what sets a good Rails developer apart from a bad one? E.g. Discourse has a huge list of dependencies and I wouldn’t call it a low-quality product.
When we take, for example, the PHP community, the argument is often turned around: too many people implement the same stuff over and over again, poorly. It doesn’t hold back the community from having quite a list of high-quality products.
It’s one of those nice statements you can have in both ways and use them as you personally find appropriate. Please substantiate your claims.
I’d go a step further and say that there is no community that has solved the “many small libraries and lack of visibility” problem.
Although Ruby (on Rails) is my bread and butter… I can’t help but agree with this article. It’s also one of the reasons I’m “sneaking” Go in @work: it requires less discipline.
There’s nothing wrong with discipline; it’s just too easy to slip up.
Really loved it; thanks for letting me relive my childhood for a day ;-)
The Trie: Simple, undervalued, like a swiss army knife ;-)
I considered responding with trie. It’s a very clever data type, and it’s fun to debate how to pronounce it.
But it’s also, due to the way we pre-fetch memory nowadays, very often a much slower choice than using something simpler. I like it as a reminder that the abstractions we use aren’t always the best model for our actual work environment.
The performance remark is completely true and justified; but considering the range of possibilities a trie gives you, it’s a sort of “programmer’s swiss army knife of datatypes”. And considering I do most of my work in Ruby: performance is an issue anyway ;-)
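To illustrate the “swiss army knife” claim: a deliberately tiny hash-of-hashes trie where exact lookup and prefix queries fall out of the same structure. This is an unoptimized sketch (and, per the point above, flat structures will often beat it on real hardware); all names are illustrative:

```ruby
# Minimal trie: nested hashes, with a :end marker for complete words.
class Trie
  def initialize
    @root = {}
  end

  def insert(word)
    node = word.each_char.reduce(@root) { |n, c| n[c] ||= {} }
    node[:end] = true
  end

  # Is this exact word in the trie?
  def include?(word)
    node = walk(word)
    !!(node && node[:end])
  end

  # Does any stored word start with this prefix?
  def prefix?(prefix)
    !walk(prefix).nil?
  end

  private

  def walk(str)
    str.each_char.reduce(@root) { |n, c| n && n[c] }
  end
end

t = Trie.new
%w[car card care].each { |w| t.insert(w) }
p t.include?("car")  # => true
p t.include?("ca")   # => false
p t.prefix?("ca")    # => true
```

Autocomplete, longest-prefix matching, sorted traversal and so on are all small additions to the same skeleton, which is what makes the structure feel so versatile.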
I don’t agree with the author, but I think it’s a good excuse to discuss why it’s wrong and segue into correct scalability choices.
The biggest problem is that the character the author describes makes a series of non-scalable decisions but calls them scalable, and the author seems incapable of differentiating between valid scalability decisions and just making bad choices.
But his sentiment is something I see often: don’t worry about scalability, it will be fine when we have that problem anyway because it means we have succeeded. I think this idea is wrong.
Scalability can be broken up into two decisions: infrastructure and API.
Building scalable infrastructure is expensive and time consuming. By this I mean everything from networking to the actual application code being performant and scaled out.
However, building scalable APIs is upfront work that needs to happen; otherwise you are severely restricted in your ability to scale when you need to. That means your API should not promise anything you cannot guarantee once you do scale; that roughly rules out strong consistency and exactly-once semantics.
Many people will claim that a scalable API is too complicated a thing to start out with. But it comes down to one rule: when in doubt, use a two-phase commit in the API. This means sending some data and getting an opaque token back, then acking that token with another call. This is almost always a scalable solution (I say almost because I don’t know every problem). Default to that and your API will probably be scalable.
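The submit-token-ack shape described above can be sketched in a few lines. This is an in-memory toy with made-up names, purely to show the semantics; a real service would persist pending and committed state durably:

```ruby
require "securerandom"

# Phase 1: stage data, return an opaque token.
# Phase 2: the client acks the token; only then is the data committed.
class TwoPhaseInbox
  def initialize
    @pending   = {}
    @committed = {}
  end

  def submit(data)
    token = SecureRandom.hex(16)
    @pending[token] = data
    token
  end

  # Acking is idempotent: a retry after a lost response is safe,
  # which is what keeps the contract scalable.
  def ack(token)
    if (data = @pending.delete(token))
      @committed[token] = data
    end
    @committed.key?(token)
  end

  def committed?(token)
    @committed.key?(token)
  end
end

inbox = TwoPhaseInbox.new
token = inbox.submit("order #42")
inbox.ack(token)  # => true
inbox.ack(token)  # => true (idempotent retry)
```

Because the token is opaque and the ack is idempotent, the backing implementation can later be sharded or replicated without the API contract changing.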
Waiting until you have performance problems does mean you at least know what the problems are.
I’m not sure I understand the context of your response. I am saying hold off on actually building things at scale, but rather just make sure your API can eventually be scaled up. That is, provide scalable semantics even if your actual components are not scalable, so that you can scale them later.
Are you agreeing or disagreeing? Or did I not make my point clearly enough?
Ah, so in this case, it looks like they roughly took your advice. Running queries directly against the database is not a scalable API, so they built a scalable midlayer API.
I interpret “provide scalable semantics” - in, for example, the case of database calls - like this: make sure the actual database call gets performed from some separate container (e.g. a “service class”) so you can swap out the actual database interaction for an “rpc” later on.
Or do I interpret stuff like a madman now?
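A minimal sketch of that service-class interpretation, with made-up names: the caller depends only on a small interface, so the backing store can later be swapped for an RPC client without touching calling code.

```ruby
# The service class owns data access; callers never see the backend.
class UserLookup
  def initialize(backend)
    # Today an in-process store, tomorrow an RPC stub with the same
    # #fetch interface.
    @backend = backend
  end

  def find_name(id)
    @backend.fetch(id, nil)
  end
end

# Today: a plain hash standing in for direct database access.
local_store = { 1 => "ada", 2 => "grace" }
lookup = UserLookup.new(local_store)
puts lookup.find_name(1)  # => "ada"

# Later: pass in any object responding to #fetch that performs an RPC,
# and no calling code changes.
```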
Yes; IMO the problem the ‘hero’ in the post hit was that they just made poor technology decisions.
I’m all for using established solutions and not re-inventing the wheel, but I think it’s solving a different problem than OAuth set out to. What if I want to write a REST API to be used by untrusted clients? I’m not shoving a secret into my client side JS app.
Also last I checked Basic Auth has no good standard way to un-authenticate.
Completely true, but - imo - oauth shouldn’t be considered the default for api authentication.
You’re fully right about unauthenticating; the only way known to me is the hackish trick of re-authenticating with wrong credentials.
What do you mean by “unauthenticate”? I am under the impression that you have to pass the credentials with every request. Or are you talking about how browsers cache the credentials for you?
The latter; since it’s basically a stateless affair; should’ve been more clear on this.
Would be nice if browsers would offer a built in logout button.
Afaik it is purely a UX issue: JS-initiated requests require the Authorization header on every request.
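That statelessness is easy to see when you build the header by hand; every request carries it, so “logging out” is just the client forgetting the credentials (or the browser clearing its cache of them). A quick sketch:

```ruby
require "base64"

# The Basic Auth Authorization header is just base64 of "user:password",
# rebuilt and resent on every single request.
def basic_auth_header(user, password)
  "Basic " + Base64.strict_encode64("#{user}:#{password}")
end

puts basic_auth_header("aladdin", "opensesame")
# => "Basic YWxhZGRpbjpvcGVuc2VzYW1l"
```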
You don’t have to shove your secret in the client side, but you could shove your public key.
Mixed thoughts about hosting this on github. The “future of the web” deserves an independent site, no?
Kinda sobering that even technologists tend to cling to walled gardens as well.
I tweeted something similar about IRC and Slack the other day and got hundreds of angry replies :(
Unfortunately, ‘hacker’ culture (cough) seems to be much more about conformity than iconoclasm nowadays: use a ‘real’ editor, be on the right service, release lots of open source, like the right languages, always be positive.
You should use a real editor though; that’s non-negotiable…
(of course i’m refering to ed)
Real hackers use echo "printf(\"Hello world!\")" >> myprogram.c
editing is achieved with a combination of cat and sed
This is not “hacker” news… ;)
Sad state of the global community; if it ain’t on github, using slack, supported via twitter… I got the same shit from colleagues when proposing IRC over slack, since we’re hitting their 10k limit and can’t justify their pricing.
I just had to look up slack. I’ve seen it mentioned, but didn’t know what it was. I guess hipchat isn’t hip anymore?
HipChat is too expensive. There would never have been an opening for Slack if their pricing were more reasonable, but it isn’t so there is.
Isn’t slack 4 times more expensive than hipchat?!
For both, the lowest cost paid tier is:
hipchat = $2/mo per user
slack = $8/mo per user ($6.67/mo per user if paid yearly)
Both have a free tier. Slack limits the number of “integrations” with their free tier, and hipchat drops the “audio/video chat” feature on free tier.
If you’re using twitter to complain about people using private closed chat services then you deserve everything you get.
I (and some friends) actually developed a complete, standards compliant microblogging service. Which no one then used.
I complain about having to use Twitter on Twitter a lot too.
And I set up an IRC server at my job. Which no one then used.
Please try again, this time give out limited invites…
Is this really a walled garden? How is the content being restricted?
Walled garden is perhaps the wrong term, but it’s still “controlled by private entity”. Imagine the uproar if it were http2.microsoft.com. Half the comments would be “microsoft is evil incarnate”, and half would be “no corporation should control this”. Github apparently passes the evil incarnate test, but the second half still applies, no?
Point six proven: http://imgs.xkcd.com/comics/new_products.png
For one, a GitHub owned domain is now the official domain of the most important protocol on the planet. If GitHub goes away or changes by doing something like removing this feature, alllll those links forever need to be changed. And why privilege GitHub over all the other organizations and people involved with HTTP2?
They could have used a custom domain with GitHub, and transferred it somewhere else if they needed to. I kind of feel like this would have made the most sense. Either way, that was just a decision and doesn’t really reflect upon GitHub as being a walled garden (imo).
And why privilege GitHub over all the other organizations and people involved with HTTP2?
I don’t know. I didn’t look at it from a privilege point of view, but from a usefulness point of view. At least on Github it becomes very easy for people to propose changes and host non-spec related things.
Either way, that doesn’t seem like a “walled garden” issue. The domain name issue also doesn’t, but it does still seem to me like a reasonable issue to be concerned with.
Yes, a custom domain would have been much, much better, even if it was hosted on GitHub behind such.
I’m being grumpy and overdramatic here. (I also take issue with the prominent Twitter iframe on the page.)
Just seems a bit odd to have this hosted on a third-party service. Maybe they will take PRs for the protocol! (I kid.)
I thought cURL’s 17th birthday article gave an interesting perspective:
We’ve hosted parts of our project on servers run by the various companies I’ve worked for and we’ve been on and off various free services. Things come and go. Virtually nothing stays the same so we better just move with the rest of the world. These days we’re on github a lot. Who knows how long that will last…
The URL churn that comes with that makes me sad though.
Just keep your own domain. Curl has been curl.haxx.se for longer than I can remember.
Which is cool for the static content, but as soon as you start caring about repo/wiki URLs (they link to one from the front page), you’re much more tied to the service.