It’s a false dichotomy. It’s not either you live like RMS or you must make proprietary software for a living. By far, most of the software written in the world is for in-house use. This is what I do right now for a living. I write software that helps analyse neurological images. It’s not proprietary, because it simply isn’t public. And whatever we make public, we also release as free software.
Our business is in selling a service. People send us brain scans; we send back what our human experts, aided by free software and some in-house glue, found about them. There’s no proprietary software anywhere in here, and nobody’s being coerced into accepting a EULA.
Btw, not that it really matters, but I specifically described this business to RMS, and he said he found nothing wrong with it as I described it. He is against people trying to control others with software. He is not against business.
I’d argue that any source you can’t get easily is proprietary; it doesn’t matter whether that’s the source code to a website or the source code to Windows. If I can’t get the source without contacting someone at the company, then it is not open source, and until it is open source I would call it proprietary.
That isn’t the OSI’s interpretation of “open source” (emphasis mine):
The program must include source code, and must allow distribution in source code as well as compiled form. Where some form of a product is not distributed with source code, there must be a well-publicized means of obtaining the source code for no more than a reasonable reproduction cost, preferably downloading via the Internet without charge. The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
That’s a bit of black-and-white thinking. The right to keep software private is one of the freedoms guaranteed by FOSS. That’s why the (A)GPL’s copyleft only acts when you convey parts of the software – not before. The problem is when you give someone parts of the software but deny them the right to use those parts in some way. If you simply never give others any part of the software, not even a WUI, then from their perspective it’s the same as if you never did anything.
In our case, our customers simply send us images uploaded with free software and get back numerical results in free formats a couple of weeks later. That we used private glue software to get the job done is completely irrelevant to them.
Speaking for myself, I voted this off-topic because it’s in some sense a canary for whether the culture tag is meant to be useful at all. The article here is both one-sided and unimportant; it expresses what are, no doubt, somebody’s valid feelings, but they’re meaningful only to her.
To wit - I tend to agree with the author that engineers can be boring to date, except as compared to football players, but the only possible relevance of this to anyone but her would be if it were a part of the class warfare, monoculture, privacy, or other issues where the industry should be striving for self-awareness. If the article were transposed, and it were saying that, say, salespeople (to pick the first non-technical traditionally-middle-class job I can think of) are boring romantically… Well, I somehow doubt that trade publications for marketing would be interested in it.
Just explaining my position, since I feel I should after down-voting.
IMO, a reasonable rule of thumb is that any technology that is in the customer flow should either be open source (preferably with internal competence) or built internally completely. Most other options are a liability.
Things like payroll are probably fine to buy: downtime there might make your employees unhappy, but it (probably) won’t bankrupt your company.
I make some exceptions for tools that are easily replaceable. I’m currently betting on a new datastore, but it’s fed via a standard protocol (graphite wire format). If I decide to ditch that portion it won’t be very hard because none of the inputs or outputs will have to be changed (although I will have a few hundred gb of data to move).
http://en.wikipedia.org/wiki/Rule_of_thumb
A rule of thumb is a principle with broad application that is not intended to be strictly accurate or reliable for every situation.
I don’t understand why this uses goroutines, timeouts and channels to run something which hasn’t got any concurrency.
What is the advantage over a serial implementation?
Hey, OP here. First off, keep in mind whenever a technique is blogged it has to be simplified or the blog post would be unreadably long. So, yes, this technique is used in real code for managing a multi-goroutine system. But in the blog example it’s just one, which makes it look like overkill. It’s a simple technique that really gives you a lot of growth and power as the code complexity increases.
Second, I am not using timeouts and I am only using a single goroutine to run assertions in sync with the executing code. IMO this is the actual power of the technique: the fact that the assertion code is forced to sync with the executing code so that every call to the fake is accounted for and correct. It’s quite strict.
I would like to see an example of a serial implementation. My guess is you would store the call and then have a method to retrieve it? This becomes difficult when one method call to the unit under test executes three calls to the same method on the fake. You need to start storing arrays of calls and be able to inspect an array of responses. IMO channels are actually simpler!
Feel free to continue the discussion here or on the comments section of the site. I’d love more feedback on the technique.
I’m not getting a ton of sleep at the moment; if I’m not totally coherent please let me know.
The serial implementation would indeed have to store the call. I’d envisage something like this.
Of course, I’m bending the interface to fit the test, and moving the complexity there - which isn’t going to be everyones preferred way to test.
I have no idea what problems the OP is trying to solve, but you will on occasion see goroutines/channels used as an implementation detail in order to expose an API that is safe to call from multiple goroutines simultaneously. i.e., “thread safe.” I have no idea whether this is what the OP was trying to do or not.
Just some honest thoughts:
People spend a lot of time these days talking about ‘engineering environments’ failing to support diversity, and how these failures are rampant in the valley. I agree with these sentiments. But I found reading her section on ‘How did our engineering environments end up biased?’ to feel like a broken record.
I think people should stop writing these posts. I think people should focus on getting together with fellow engineers who feel the same way as them, and build companies with the cultures they believe in. If you want change, fight for the change you want; that’s how this world works. Support founders who are fighting for engineering cultures that support diversity and work-life balance by helping them build the best products and businesses in the industry.
2 cents.
Reading these posts is how people know what to support. Writing them is essential to the process. Writing lots of them and getting people saturated in them is, too. What looks inescapable to those in an echo chamber is often the thing that is only just getting noticed in the mainstream. Also, solely private (and therefore quiet) support does not spread well and gets squashed easily.
Pushing back against those who document our troubles is counter-productive. It inhibits the growth of corrective movements right as they need it the most.
Another good reason to support this kind of work is that people who do not support the changes that you and I want say things just like what you’ve said. Everybody supporting the status quo has lots to say about how people moving for change are doing it wrong! Somehow, it’s never quite good enough.
Personally I really like her “Why” answer.
The environments we work in can be pretty toxic, and are harming even the most alpha of males. (Just count the levels of obesity, stress, type two diabetes, heart disease, depression, anxiety… around you.) (Yes, all that _is_ related; read Robert Sapolsky’s “Why Zebras Don’t Get Ulcers” to get the science behind that statement.)
So it is not just a “Gender Issue”. It is a quality of life issue for all of us.
Please always consider that this blog post is written by a person that can actually spend a week drinking after being fired and then some to search for a new job.
Which doesn’t make it a bad post, I think it’s good that people of all experiences write them down.
Please always consider that this blog post is written by a person that can actually spend a week drinking after being fired and then some to search for a new job.
I feel that this comment is fairly morally judgmental. I took the piece to be more of an expression of a fear-turned-reality that many in our profession can relate to. If the post were written by someone who spoke of spending a week surrounded by family, I think it would be pretty strange to say “Please always consider that this blog post is written by a person that can actually spend a week with family after being fired and then some to search for a new job.”
You’re interpreting, but yes, it can be read like this. My point is, though, that many don’t have the luxury to take a week off for whatever they want after being fired – be it the bar or the family.
I still don’t get the point. Many of us don’t have the luxury of having a job in the first place. And it can get worse. Many of us don’t have the luxury of counting on food to be available for us to eat tomorrow. Are we supposed to fill up discussions with constant reminders that there is always someone else in the world who has it worse?
My point is that the whole blog post crumbles if this security is not there.
And yes, I have no problems with constant reminders.
Everything crumbles if humans can’t rise above the basic security of reasonably assured nourishment for the immediate future. This includes the existence of the Internet itself.
Any statement can be turned into the absurd by widening it and I have no intention to continue in this direction.
My statement was about people being fired from a fancy job, writing a blog post about it and how their conclusions come out of their special position. No more, no less. Please stick to it.
OK. Then my statement is that the exact same line of reasoning that you employed can be used about anything—the only differences are purely relative. In other words, it’s absurd to point it out because the very act of pointing it out on an Internet forum requires the same philosophical assumption: you are fortunate to have access to a fancy Internet connection and the free time to worry about such things. If only everyone could have those problems!
Normally you’re offered severance pay, so most people actually can afford to sit around for a week (or a month). Being fired is an emotionally exhausting experience. You really need to take some time off before looking for the next thing. But it can also be a great turning point if you give yourself time to recoup.
No I’m not getting confused. Severance is pretty normal in the US. I don’t think it’s required, but it would be considered pretty awful if the company never offered any sort of severance. It’s often accompanied by some papers that you have to sign that make you promise to not slander the company, etc. Seeing as he’s not slandering Github, I bet he got severance ;)
While technically there is a distinct difference between the two, in practice I think there’s some fuzziness. I think the fuzziness arises from the fact that sometimes people who are fired for poor performance, rather than as a side-effect of organizational restructuring, are also given severance.
It is interesting that they consider the dependencies of a library to almost be a non-issue (so far at least…the conversation is obviously ongoing).
I was also somewhat surprised to read that they have all their Go code in a single giant internal repo. That seems awful to me for some reason.
From what I gather “one giant Perforce repository with all code used in the company” is a surprisingly common version-control model at large companies.
We do that at my company (on a much smaller scale than Google) and there are a lot of benefits. All our services and shared library code is in the same repo. It means every commit to the repo leaves every service fully in sync with one another for interface and data versions – no dependency management needed. It also makes git bisect awesome for bug tracking, since all our code builds reliably at every commit you bisect to.
Also it implies shared code ownership which I like. Anyone can fix anything and run all the tests for all the services at once and know nothing is broken.
That seems awful to me for some reason.
Agree. Not sure why the title doesn’t use at least scare quotes for “solution”:
Basically they are saying “We put every single piece of code we need into a global repository, and pretty much don’t care about anyone not doing this. Let’s just standardize some of the dirty hacks and workarounds seen in the wild so that we can keep not caring about it.”
It scares the hell out of me when I imagine inexperienced people learning Go and thinking that this is the best way to manage dependencies in 2015.
We put every single piece of code we need into a global repository, and pretty much don’t care about anyone not doing this.
They are not alone in doing this, contrary to what you wrote. Facebook does it too, with very much success.
Let’s just standardize some of the dirty hacks and workarounds seen in the wild so that we can keep not caring about it.
I don’t think this is a hack. I’d even go as far as saying that the hack is dependency management tools like Python’s pip, Ruby’s Bundler and Node’s npm. I’ve used these tools for a long time, and unlike you, I was not convinced. Vendoring in a monolithic company-wide repository seems to me a very reasonable and efficient approach.
They are not alone in doing this, …
So what? They can do whatever they want. That’s not the issue.
… contrary to what you wrote.
Where did I write that?
Python’s pip, Ruby’s Bundler and Node’s npm. I’ve used these tools for a long time, and unlike you, I was not convinced.
You are basically saying that some of the worst dependency management tools ever invented are even worse than not doing any kind of dependency management?
I don’t think many people would disagree with that.
They are not alone in doing this, …
So what? They can do whatever they want. That’s not the issue.
I was replying to the part of your comment where you write “and [they] pretty much don’t care about anyone not doing this”. But you’re right in saying their choice is probably independent of what other companies do.
Where did I write that?
My mistake. I misread your comment. Sorry for that.
You are basically saying that some of the worst dependency management tools ever invented are even worse than not doing any kind of dependency management? I don’t think many people would disagree with that.
What dependency management tools would you recommend as good examples?
The sad truth is that, no matter how terrible it is, Linus won’t change. He has made it abundantly clear every time it comes up that he is fine with being a jerk, and that it is the problem of everyone else to adapt to him. So a blog post talking about why he should change his mind is good in that it gives people a clear way of formulating their opposition to Torvalds, but it won’t ever make Torvalds change.
the conversations are still valuable to have, because in their absence the default assumption is that the community is fine with linus being a jerk, and people who do object are reluctant to speak up and be the lone dissenting voice.
Moreover, if the community becomes sufficiently motivated, it will demand changes from Linus, oust him, or fork.
This sort of post is important as a consensus-building piece of communication.
Yup, useless forking is useless. But forking to create a nicer community is not useless. A good example is the EGLIBC fork of GLIBC: one of the reasons to fork was Ulrich Drepper’s personality.
I am not saying that Linux should be forked for the same reason. I am saying that forking to exclude a problematic leader can be reasonable, has happened, and can succeed with a good result.
GCC is also an example: today’s GCC is actually based on a fork called EGCS. Reason: problems with maintainer behaviour/opinions.
http://en.wikipedia.org/wiki/GNU_Compiler_Collection#History
Many (but not all) of the times I’ve seen people complain about Linus, he was simply being honest or blunt. Perhaps it’s a cultural thing, we Americans frequently get offended by hearing things we don’t like, and sugar-coating is a way of life for us. In contrast, in many European countries, being honest and factual (or blunt) is the norm and not in any way considered insulting.
There’s a distinction between being blunt/honest about someone’s contribution (“the code is not good”) and their ability/person (“FUCK YOU”, “you probably will never be good enough to contribute”, etc). The latter doesn’t have a place in open source.
I think the question is how many Linus posts fall into which category. As journeysquid said, many (but not all) are commentary about the code, not the person. I think Linus can be personally dickish, but also impersonally dickish, and many times evidence of the latter is conflated with the former.
The problem with expletives is that they tend to miss the target and hit the person. I personally have no (moral) issue with them, but avoid them due to that fact.
We’re not in a court, there is no “evidence”, we’re talking about people interacting and a certain expectation of harm-free interaction.
“I’m not a nice person, and I don’t care about you. I care about the technology and the kernel—that’s what’s important to me.”
Yeah, I think that’s a mistake on his part. It is probably true that recruiting female or non-caucasian developers to meet a quota of “at least n%” is a stupid thing to do in the short term, but in the long term it will grow the pool of high-quality developers overall. That is, by showcasing female developers we might eventually reach equality and then reap the benefits. And according to some, just having different genders and races on a team tends to improve overall team function, so why wait for someone else to get girls back into IT?
I’d warrant that “shut the fuck up” is insulting in anyone’s culture, especially over a public mailing list.
Look at it from the point of view of user space developers for a while.
One of them reported an issue with the mighty kernel, and the developer responsible for it first tried to shift the blame back to user space, then attempted to justify an obvious regression. That’s not a very productive way to have a discussion about broken systems. I would guess he started to feel a little powerless and disappointed, what with kernel developers breaking some of their promises and all.
Then the senior developer comes and shouts the junior down in front of the user space developer and the rest of his community, re-stating the broken promises and thus renewing the trust and confidence between the two teams.
The junior then attempts to save face, failing to comprehend the core issue, and the senior does some explaining to finally get him to understand. After that, the junior makes some effort to salvage his reputation by accepting the mistakes he made and explaining their complicated context.
In my eyes “shifting blame” => “getting shouted at” makes sense. So… another example?
Sorry, but I can’t help thinking of “The floggings will continue until morale improves”.
There is a perfectly appropriate response here: just state that you do not want to engage in the discussion further and are not interested in how the patch came to be. Just that you are unwilling to merge it. That takes one or two sentences.
That’s blunt and direct and certainly not nice. People are perfectly good at understanding blunt, short statements with boundaries and they give just as much respect. Keeping the statement short also gives almost no point to argue about it. That’s exactly what you want: it keeps the discussion short.
As far as embedded non-Windows experience goes, $149 is at least $50 too expensive for this kind of device.
Apart from the power consumption, Intel’s problem with these new markets is always the price tag. If you can buy a complete netbook, with an integrated keyboard and touchpad, LCD, UPS etc., for only $199, saving some $50 by buying this instead just isn’t as appealing.
OS X is increasingly not resonating with me either. The only things keeping me on it are mostly inertia and that the hardware is so good. Shit Just Works. But I’m considering moving to Windows in my next purchase and just running everything in VMs. Currently the only applications I require on the host OS are a web browser, terminals, and emacs. I think Windows can probably do that well enough.
I mainly want to go to a first class citizen on a laptop (Windows) for driver support, unfortunately.
Does anyone have any thoughts on this? Arguments for going Linux entirely?
I bought a top of the line macbook pro 15" a few months ago and I can’t say that “Shit Just Works” has been my experience at all. It’s been ok but there’s a lot that’s just buggy or badly designed about it.
The keyboard scratches the screen when carrying it in my bag. According to the forums that’s normal - macbooks have “always done this”. When I pay for the fanciest screen on the market I don’t expect it to be self-scratching.
The bluetooth randomly fails and needs rebooting.
The Wifi reports high signal strength even when it’s marginal. This flaw is compounded by its wifi range not being particularly good either, so it reports full signal even when it’s so marginal that the network is often unavailable.
The OS occasionally crashes. Not often enough to be a huge problem but linux is much more reliable.
The “magic mouse” is an ergonomics disaster with terrible battery life. You end up clutching on to the hard edges of a piece of flat plastic. It looks great, sure, but it definitely wasn’t designed for humans to use.
Overall I’d rate it as pretty average in terms of problems. I’d just expected better than average when paying top dollar.
I came to the conclusion recently that I’ll replace my existing work 11" Macbook Air with a Thinkpad when the time comes. I recently bought an old X220 to run Arch to see if I could use Linux day to day and I’ve been incredibly happy with the results.
Currently the only piece of software that we use at work that’s OS X specific is [Sketch](http://bohemiancoding.com/sketch/), which I can see us ditching as time goes by. I’m also perfectly happy to be handed a PSD that I can open in GIMP in that particular case.
Reading this post makes me realise just how frequently I run up against OS X oddities, in particular the number of times that system daemons start hogging resources big-time. When the majority of my time is spent in Vim and iTerm 2, there’s no need to use OS X and have to work around its intricacies.
Why not stick with the apple hardware, keep osx for the battery life and device support, and dev in a linux vm?
I recently started dual booting gentoo and osx on my macbook pro, and I’ve really been enjoying it. I play with a lot of languages, libraries, and tools, and I benefit from gentoo’s system of being able to specify what kind of support you want compiled into your tools in one place, and have your new things support each other right away. It was interesting to get my company’s vpn client to work, and its instability is why I am usually on OSX while at work, but at home I’m usually on gentoo. My home desktop dual boots arch and windows, because I don’t tinker with it as much and I just want it to work in a generic way with low effort. I’ve heard good things about nixos, and that may be the next one I play with.
I would second that. It seems silly to throw away good hardware with a tightly integrated OS for a laptop designed for Windows that will inevitably have a crappy trackpad, poor battery life, and probably a cheap screen. You can use VMWare Fusion full-screened and not really have to deal with OS X constantly changing underneath it. I dual-boot OpenBSD on my MacBook Air and have VMWare Fusion set up to boot OpenBSD directly from its raw partition, so that I can boot it virtualized or on real hardware with the same setup.
I’ve used dozens of different laptops with OS X and OpenBSD and I keep coming back to MacBooks. IBM, Lenovo, Toshiba, Sony, ASUS, Samsung, and probably some others I’m forgetting. The old ThinkPads (X40 era) were great but unfortunately they’re too slow to use these days and the screens are very low resolution. The X220 and newer from Lenovo were pretty thick and ran hot and loud, although they did have an IPS screen available. The original X1 carbon was nice, but its screen was very low resolution. The new one has this abomination for a keyboard. The ASUS UX21A was a good PC-counterpart to the 11" MacBook Air but its keyboard annoyed me enough that I got rid of it.
When switching to a non-Apple laptop, you might not think about all the little things that make Macs such a well designed product (in terms of hardware). The small footprint, the light weight, the silent fans, the low amount of heat generated, the lack of stupid LED lights that are just there to make it painfully obvious that the machine is doing stuff, the lack of gaudy branding, the lack of Intel stickers!, the high quality displays (with little gloss, at least on the MBA), good keyboards and trackpads, and decent speakers and microphones. And don’t forget the power adapters. Apple power bricks are everywhere in case you forget yours, and they wrap up nicely and have MagSafe. PC manufacturers still haven’t figured out how to get rid of those big long black bricks with dual cords that need velcro to wrap up.
I also find some things about recent OS X versions to be a bit irritating/galling, but on the whole it isn’t too bad. I moved away from linux because while it worked great as a desktop and a server², on a laptop it was a rather awful experience.
That was maybe 6 years ago, and you would think in the interim things would have gotten much better. Reading various articles/comments/posts in the interim, it seems this has not improved as much as I would have hoped – the blame likely lies at the feet of the hardware manufacturers more than the kernel and distro devs, but who knows.
External pressures aside, a new operating system generally has to be more than just a little better for users to switch. Not just a different set of pain, but either significantly less pain, or some facility that is so much better that it “eats” the entire cost of switching.
OSX had that when I switched – really great hardware, better font rendering, laptop suspend worked reliably, wireless worked, bluetooth worked, screen brightness controls worked, audio even worked, apps installed easily without having to search for random libs, a more uniform look (no gtk/qt disparity) and conventions, had unixy bits available (terminal, bash, vim, etc).
Running Windows with something like FreeBSD in a VM does sound interesting, but I spend so much time either in a terminal or a web browser that I don’t know that Windows would add much value for me. I also haven’t found many of the changes in the last few versions of OS X to be that awful, truth be told.
If you have mac hardware though, try running windows on it and see how it goes? A couple guys at work do that, I hear that works rather decently.
²: Have since switched to FreeBSD for personal stuff. Still use linux at work though (not my choice)
@apy @robdaemon Apart from your web browser, terminals and text editor, are there any other applications you run in the native host operating system (Windows in your case)? For example, what do you use for your email? Do you use a git repository viewer (like gitk)? Do you use native apps like Skype/HipChat/etc.? Do you use screen sharing apps (like join.me)? Do you use some kind of office suite (like LibreOffice)?
I occasionally use things like Skype or Google Hangouts, but I expect them to be well supported on Windows. The main thing that I like about OS X is that it’s Unix at the core, so it’s nice to be able to jump down to a command line. But I think cygwin + putty would work well enough for my needs. I would mostly use Windows as just a virtual machine host.
Cygwin is not as nice as a real UNIX environment by a long shot. And putty is such a pain compared to regular ssh. Maybe I use ssh more than the average person but I can’t stand putty.
I would use Linux or FreeBSD, except OS X has amazing battery life, and wifi drivers that work beautifully. It also doesn’t hurt that it’s pretty.
I would only use cygwin enough to hop into a VM somewhere or the occasional running around my Windows system with tools I know. I’ve used that setup before and it was acceptable. The main issue I have right now is I just do not enjoy using OS X at all. I feel it is getting in the way significantly more than helping me.
I made the switch two years ago, for similar reasons.
I initially started by having a good fat desktop computer running VMs, then switched completely by replacing my old macbook pro (bought the biggest T430 I could build).
The rule on my setup is: nothing but chrome, firefox, steam and VM software is installed (plus some minor software, like spotify). I do everything inside my VMs. Initially, I was running my VMs headless and using putty to ssh into them. It’s great and consumes fewer resources. But Putty is really clunky and I like having a tabbed WM, so nowadays I mostly work fullscreen in a linux VM running Xmonad.
Honestly, it works great, battery life is really good, and it feels great to be able to pause your whole work environment on friday evening then reopen it on monday morning like nothing happened :)
Overall, it’s the best of both worlds: Win 8.1 is really stable and you get your unix environment. The only issue I can speak of is that you sometimes encounter problems you’re not used to (as an osx/unix user):
None of these issues is that hard to solve; the issue is simply that you’ll have to browse shitty support websites with ads everywhere to decipher some obscure tutorials. The solutions are usually straightforward; it’s just a matter of knowing where and how to find them. It’s the exact opposite of Freebsd, where you just go straight to the handbook.
What I miss the most is the slickness OS X has, but well, we can’t have everything.
The sentiment in this article resonates with me as well (just ask my co-workers). In the last several months I’ve been reduced to ranting and raving every day I’ve used my Macbook Pro (Retina 13). I was a big Apple fan, but that has changed after the last couple of releases of OS X. When I work from home I work on my Windows gaming PC (doing all my work in a VM) and I am rarely unhappy. I’m not sure if it’s because the laptop is underpowered or if things have just gotten worse and I’m at my breaking point with the OS. I have fantasies of getting a nice Lenovo laptop and just converting completely over, but I fear I would be just as frustrated with the hardware. What’s really holding me back is that we have Apple Thunderbolt displays at the office and they only work with Macbooks and OS X.
I’d argue for Linux, or rather for FreeBSD with the way Linux is going these days. It works and gets out of your way. Sometimes there will be a problem, but it’s a problem that you can fix once and for all, and understand why you’ve fixed it. It’s wonderful to be able to upgrade my OS without worrying what’s going to break this time.
Anything *nixey where you can choose window manager etc. is going to be much more configurable than OSX/Windows; watch the way an experienced Xmonad user works. (I don’t use it myself, but if you’re an emacs person that kind of customizability presumably appeals?)
ZFS is very nice, more mature than the alternatives with similar functionality, and is a first-class citizen on FreeBSD.
Interesting that all the Google Fiber buildouts have been in red states thus far. Certainly gives some credence to the author’s points on needless regulation.
It’s not just regulations, it’s the bureaucratic attitudes of the politicos. Note that Overland Park lost their shot (they’re in Kansas, a red state) because they were being PITAs, not because they have specific regulations obligating them to put Google through the wringer.
Everything has a cost. We should keep this in mind.
I think the point is that Zookeeper’s guarantee is overkill. If you temporarily lose data about a server being in a serverset, you’re probably not going to lose any money, unlike if you lost someone’s order form, or processed an order twice by accident. Where Jepsen is generally trying to demonstrate the consistency guarantees that a service actually has, Eureka never claims to have linearizable or serializable consistency.
I think that historically, most people used Zookeeper not for its consistency guarantees, but because Zookeeper provided a recipe for service discovery, and ephemeral nodes + watches provided the API that people wanted, despite its weird CAP tradeoff for a service discovery system.
I believe @robdaemon’s point is not whether Zookeeper is any good, but rather why one should trust that Eureka is correct at all. What is the algorithm behind guaranteeing a Eureka node converges on a correct state?
There’s a difference between service discovery and distributed consensus. While Eureka has always been good about converging on state, it’s made vastly easier since all data in Eureka is ephemeral and all sources heartbeat. Let’s say a Eureka node drops out, comes back, and a node disappears from discovery due to a bug during resolution. 15 seconds later, the node heartbeats and it’s back in discovery.
Discovery is a much different animal and entirely trades consistency for availability. It’s OK if things aren’t quite right, they will be eventually.
During partition events (too many expiring nodes at once), the registry is frozen for deletes to ensure the entire registry doesn’t empty out. At that point, the intelligence in the HTTP clients takes over. Even though you might have 8 nodes to choose from, 3 might be bad. The clients check liveness of those nodes and stop using them if they’re down.
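A toy sketch of that freeze behaviour (illustrative Python; the class, names, and thresholds are invented for this example, not taken from Eureka’s implementation):

```python
import time

class Registry:
    """Toy self-preserving registry: if a suspicious fraction of leases
    expires at once, assume a partition and freeze deletes so the
    registry can't empty out. Thresholds here are made up."""

    EMERGENCY_FRACTION = 0.15  # hypothetical "too many expirations" cutoff

    def __init__(self, ttl=30):
        self.ttl = ttl
        self.leases = {}  # node name -> last heartbeat timestamp

    def heartbeat(self, node):
        self.leases[node] = time.time()

    def evict_expired(self):
        now = time.time()
        expired = [n for n, t in self.leases.items() if now - t > self.ttl]
        # Emergency mode: keep everything rather than mass-delete.
        if self.leases and len(expired) / len(self.leases) > self.EMERGENCY_FRACTION:
            return []
        for n in expired:
            del self.leases[n]
        return expired
```

With a registry frozen like this, some of the entries a client gets back may be dead, which is why the liveness checking moves into the clients.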
In the end, it’s about resilience. Ephemeral state makes correctness a lot easier as well.
I work for FullContact, we use Eureka for discovery across our infrastructure and it’s been near bulletproof for us.
You’re missing the point again. You’re still framing it as a problem which would require a formal proof. It’s not. And you’re not “replacing Zookeeper”. You’re replacing Zookeeper for service discovery. There’s a huge difference.
Zookeeper is a fine system, but it’s fragile operationally compared to systems that don’t need to be perfectly consistent. If you’ve ever run ZK in AWS across multiple AZs, you’ll know that it can be problematic. And when ZK goes down, that means discovery is out and your services probably can’t even start up. Discovery has to be reliable.
Service discovery can play a little more fast and loose with correctness for much greater resilience. Eureka does not do any of the other functions of Zookeeper: it doesn’t do leader election or distributed consensus. It can’t; its algorithms would be terrible at distributed consensus. See https://github.com/Netflix/eureka/wiki/Understanding-Eureka-Peer-to-Peer-Communication for a somewhat high-level overview. But think about the data in a server registry: hostnames, datacenter info, and a timestamp. That’s really easy stuff. You can even blow away an entire Eureka node’s state and re-replicate from another node. If you’re missing entries, you’ll get them once the Eureka clients heartbeat again. It’s not a one-time registration; it’s continuous check-ins.
The only issue we’ve had with Eureka was when we were in the early phases of rolling it out. We do red/black (also known as blue/green) deployments. Our servers weren’t shutting down gracefully and would cause dozens of registered servers to expire all at once, causing Eureka to think it was in a network partition and freeze the registry. Once all of our services were in, a few dozen nodes either way no longer pushed it under the heartbeat threshold for “emergency mode”, and we fixed our graceful shutdowns to cleanly deregister nodes.
Jepsen’s not applicable to Eureka.
Jepsen, and most databases, are all about making sure that the sum of applied operations is valid once all the partitions have divided and merged again. Eureka isn’t. Among other things, in a healthy cluster a registration that comes in at time t is automatically invalidated and purged at time t + delta (user-configurable); clients have to phone home on a regular basis (at least more often than every delta/2; every delta/3 is best practice) if they want to be kept in the registry. Even a single Eureka server with no failures would therefore fail Jepsen if you just paused for a bit after writing before doing the read verification. Clearly, this is not doing the same thing.
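That lease behaviour is easy to see in miniature (illustrative Python; `delta`, the service name, and the `lookup` helper are placeholders of my own, not Eureka’s API):

```python
delta = 30.0  # lease TTL; clients must heartbeat well within this window

registrations = {"svc-a": 0.0}  # service -> last heartbeat time

def lookup(name, now, ttl=delta):
    """An entry is visible only while its lease is fresh."""
    t = registrations.get(name)
    return t is not None and now - t <= ttl

assert lookup("svc-a", now=10.0)      # within the lease: visible
assert not lookup("svc-a", now=40.0)  # paused past delta: silently purged
```

A Jepsen-style “write, pause, read back” check fails here by design: absence of heartbeats is itself information.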
In fact, Eureka’s barely a database at all. It’s really, at its core, just a way of passing around expiring caches of service name to IP address. That’s it.
Eureka’s total algorithm is really just a simple union of the peers’ expiring registries.
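As an illustrative sketch (my own Python, not Eureka’s actual code or wire protocol), unioning peers’ rolling caches might look like:

```python
def merge(*peer_views):
    """Union peers' registries, keeping the freshest heartbeat per entry.

    Stale entries simply age out of each cache on their own, so the union
    never needs conflict resolution beyond "take the newest lease"."""
    merged = {}
    for view in peer_views:
        for node, ts in view.items():
            if node not in merged or ts > merged[node]:
                merged[node] = ts
    return merged
```

Because every value is just “node, last seen at time t”, merging is commutative and idempotent, which is what makes blowing away a node’s state and re-replicating so painless.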
Note how insanely different this is from Jepsen. That’s because it’s not trying to do anything remotely similar. And that’s actually the point of this whole post: ZooKeeper is really complicated because it’s trying to be a CP database. Eureka is really simple because it’s just providing a clean way to union a bunch of rolling caches. If you’re doing lock coordination, Eureka would be insane, but this is exactly the kind of thing you want for service discovery.
What is your threat model here? We can only talk about consistency if there’s something to be consistent about. Jepsen is worried about scenarios like: a value is 2, you add 1 to it, you add 1 to it again, you read it, and suddenly it’s 5. If a service entry for a particular server in Eureka got duplicated, that would be maybe marginally annoying, but it would barely even qualify as a bug.
I don’t believe the data structures would allow duplicates, but missing entries, sure, probably possible. During normal operation, however, check-ins are replicated to all servers, which fixes up missing entries pretty quickly, except during partition events. In that case, you probably have bigger problems than load distribution.
Maybe it’s more constructive to frame this as: what guarantees do you want from your service discovery mechanism in the presence of network partitions?
With Zookeeper the guarantee is: you will either get the network consensus answer, or no answer at all. With Eureka it sounds like the answer is: you will always be told about any services that the node you’re talking to has received recent updates from, and you will sometimes be told about services that are in fact down. We could formally model/prove these, I guess, but IMO the main use of these formal models is assuring you that particular invariants are maintained (e.g. that the results of a sequence of compare-and-swap operations on the same field will admit a serializable ordering), and I don’t think there are really any relevant invariants for the service discovery use case.
Mac only and not free software :-(
What did this add to the conversation?
I think of my comment as a seed, or a droplet of water: planting the idea of cross-platform free software as a nagging nugget in the back of the reader’s mind. This software seems pretty great, and it’s too bad it isn’t cross-platform free software.
I don’t see anything on the website to suggest it couldn’t be otherwise.
If you look at the other comments, they have a similar sentiment. In the small it adds nothing, but as a group of comments it is revealing. And as users of software we should demand these things, even if it is a pain in the ass for developers.
I try my best not to be a hypocrite. My own software that I publish should be held to the same standards, and I try. It is a major pain in the ass, but I try.
EDIT: This can add to the conversation: http://forums.gitup.co/t/cross-platform-support/134
Why are developers doing a poor job of engineering these things to be so locked down and hard to change?
People have been asking for Free software since long, long before app stores existed. That’s how we got GNU. It’s not “the app store mentality”, it’s “the four freedoms”.
The author is free to write non-Free software, and users are free to express their disappointment at the author’s choice.
Sorry for the confusion. I think you were the only one to think that I meant free as in beer. I’ll make sure to capitalize ‘free’ in the future.
hdevalence summed it up nicely below. I want to add that I think you may actually like the ideas behind Free Software (Free as in Freedom, not Free as in beer). It is as far from the App Store mentality as you can get.
The four freedoms are: the freedom to run the program as you wish; the freedom to study how it works and change it; the freedom to redistribute copies; and the freedom to distribute copies of your modified versions.
You can read more here
> That’s one hell of a presumptuous statement. Clearly we can’t all be as good as you.
I would not pick on an inexperienced developer like that. But can you argue that the developer of GitUp is inexperienced? He seems pretty good and should know better. Also, he wrote some open source libraries, and my guess is that he is planning on selling GitUp (nothing wrong with selling). Maybe he made it Mac-only because he knows Mac users have deeper pockets? Though it seems most GUI apps he wrote are Mac-only.
I didn’t say it out of context. I said it because there was a ticket asking whether GitUp will be cross-platform. The developer basically said not for a while, because the design uses Objective-C too deeply, which makes it harder to port. He didn’t reject the idea of cross-platform support.
In other words, he didn’t think too far ahead.