I was expecting this to be an insecure hack just accepting data on a raw TCP socket but it’s actually using a secure pairing and session API to communicate. Well done :)
Thanks! To be honest, it was easier to use this API provided by Apple than to drop down to raw sockets and mess about. Certainly not the best code I’ve written, but I got a chuckle out of it.
it was easier to use this API provided by Apple
That’s a good example of why it is important to ship critical use cases as libraries.
My favourite way to assess/compute priority, which is far simpler than what is proposed here, is to let anybody pledge any amount of money they want to whoever fixes a bug, with an expiry date attached to the pledge. When somebody claims the bug, the money is collected from all pledgers and held in escrow. Then either the expiry date arrives, at which point the money is returned to the pledger (who may repledge it with an extended expiry date), or the bug is fixed and verified, at which point all pledges are released to the person who fixed it.
This provides a very simple way to sort open bugs by total bounty, or to render a graph showing when the pledged bounty for a given bug will expire. It also means that anybody has the power to increase the likelihood of a bug being fixed by increasing their pledge amount. Kinda like a reverse Kickstarter!
This sounds cool, I just wonder if it works in the closed-source environments many companies work in. Also, are there mechanisms to prevent those with the most money from calling all the shots? I guess it kind of mimics real life :P, but some kind of pledge threshold seems like it would make this a bit fairer if an unscrupulous actor were introduced.
No reason it couldn’t. Pledges would just have to be made by product managers. Not sure how one could be unscrupulous if real money is involved…
Many projects use https://www.bountysource.com, which seems to essentially be what was explained above.
New York is a fantastic place to live with an amazing tech community. You should definitely join Hack and Tell!
I was waiting to see who would write this article. We as a team have been discussing the rants we’ve seen come up about Go, and this article perfectly echoes what we’ve thought internally. Basically, if you know it doesn’t do what you need, either patch it to do that (as you’re probably not the only one that wants it), or go to a language that does support your needs.
Indeed. Every article I’ve skimmed over has matched what this guy said. I’m not sure why everyone is so hung up on generics. Somehow I’ve managed to write millions of lines of code in my career and the generics problem is so foreign to me that I don’t even know what it is ;) On a related note, people need to quit trying to solve every problem with a design pattern when a few simple functions will work. I built a relatively full-featured processing application that some OO architect got his hands on. It went from a few hundred lines to process a file to several hundred classes and thousands of lines of code… oh, but it’s OO so it’s better. Not to mention harder to read, update, and maintain. Why people don’t look at code from a “how-is-this-going-to-be-maintained” standpoint when developing, I’ll never know. It’s the longest phase of the software lifecycle. Anyhow, grumpy guy is right. :)
I think lots of people get hung up on generics because they didn’t really think about what they were going to build, just that they wanted to try out Go. Because they weren’t entirely sure what they wanted to do, they probably tried making some data structures, which are difficult to do “correctly” without generics.
Basically they didn’t look into the language, they just heard that people liked it, and wanted to try it out.
Somehow I’ve managed to write millions of lines of code in my career
I know I shouldn’t take you literally, but it got me to wondering… How long would it take a great programmer to write one million lines of code by themselves?
The LoseThos/TempleOS author says
I wrote all 121,378 lines of TempleOS over the last 10.4 years, full-time, including the 64-bit compiler.
So maybe 85 years. I’m not sure if the schizophrenia helps or hurts.
Here’s an article that talks about the cost of re-writing a million lines of code. Enough to sink a company, apparently.
http://blogs.perl.org/users/ovid/2014/01/ditching-a-language.html
“write” — but let’s not forget that I did Java, so you get about halfway there with your XML configs. By the time you finish those and apply a design pattern you’re close to 100k ;)
I think this could go either way. It would be nice if they had some kind of cursory knowledge of the jobs they were recruiting for; however, I feel a better improvement would be to abandon the “spray and pray” technique that I’ve seen so many recruiters currently employ.
Agree. I also think it’s important how they talk to developers. I was once asked how many years of AJAX experience I had; that really showed the recruiter didn’t have a clue what they were talking about.
Finishing up an iBeacon related piece of demo-ware for some people, as well as finally reading “The Art of Computer Programming, Vol. 1”.
Incredibly helpful visualization. It really helped me get a firmer grasp on the whole idea of consensus in a distributed system.
Containers! Trying to get our services building into containers, and possibly even running through Kubernetes in one of our environments. I’ve got a decent handle on building the containers and getting them hooked up to a private registry, but the whole deployment and monitoring of containers seems to be a whole different beast. Throw Kubernetes into the mix and some of these problems are answered, but more are opened. If people have good recommended reading for getting Kubernetes running in AWS or other cloud infra I’d love to take a read.