Oh, wow. I assumed this meant the iOS terminal emulator. But, no, Prompt is an organization with the goal of promoting open discussion of mental health in the tech industry. That’s a subject of pretty extreme personal importance to me. Concretely, it seems like they want to sponsor people who want to give talks at conferences, which sounds like an excellent idea.
The article has this quote from the founder of Prompt:
I also noticed a lot of companies building products for techies were marketing them as being a way “to make life easier”. I realised they were talking about making work easier, […]
Thanks for posting this. It’s a case of something relevant and important to me that I wouldn’t have heard about otherwise. :)
That quote is an excellent observation.
Yeah, it’s been said a bunch at this point, but it needs to be said more. :)
I’m glad it resonated!
am i just being a miserable old grump, or does this blog post not actually tell us anything about “BaaS” other than that EngineYard wants to seem visibly knowledgeable about it in search rankings? ;-)
IMO, very few of the Engine Yard blog posts actually offer much.
…and yet their authors always think them worth submitting.
Eep, sorry you feel that way. This was an introductory post and part of a series.
Here’s the first one:
The next one will take a look at some specific technologies. And then the few that follow that one will step you through building an app that can sync data from a backend and manipulate it locally.
It’s always fun to read an article about a predominantly science/engineering community from an arts perspective… My thoughts:
I think open source projects moving from “hacker’s spare time” projects to “collaborations between varied businesses (and maybe individuals)” will be a win-win for everyone. It seems most small businesses that do use open source don’t contribute back significantly - they see the major benefit as the lack of license fees, and so investing in ongoing development doesn’t make sense to them. If instead they viewed it as a way of not getting screwed by platform lock-in, reducing duplication of effort, etc., then maybe they’d see the advantage is still there even if they end up paying similar fees (to invest in development, rather than pay a license fee). The increased investment might see some of the software reach a level of polish good enough to really take off.
I would usually say the meritocracy system sounded okay, but the article is right: if we were honest about that we’d see a lot more diversity. The same thing is happening with the Australian government, which has only one woman in the 18-person cabinet of ministers, apparently decided on “merit”. When you see cultural homogeneity like that, you can usually assume things aren’t purely “merit”. The selection bias of start-ups hiring those who have enough free time to volunteer on open source probably accounts for much of the mono-culture I’ve seen.
The quote from Eric S Raymond is particularly off-putting. I’ve gone through these times and the result has always been horrendous, unmaintainable code. Not to mention the negative effects on my health. In fact, the entire “hacker alone with his code” tends to lead to ugly code - it’s when I’m forced to share my work with others and be social that all of a sudden I start thinking much more carefully about design, readability, maintainability etc. I would not wish this rite of passage on any new member of the community.
Is anyone on this board from one of the minority demographics? I’d love to hear some people share their experiences.
Wow, thanks for the thoughtful reply! Your point about business investment is a great one!
Thanks for writing the article. I found it very thought provoking, even personally challenging. It’s probably one I’ll come back to and re-read from time to time…
404; it’s actually here: http://osdelivers.blackducksoftware.com/2014/07/01/grow-an-open-source-community/
Thanks for catching that!
As a maintainer of a few open source projects, I agree with the author that building and managing a community is difficult and time consuming.
It’s very easy to open source a project, but too many companies see the first public commit as the end product. If you don’t have a dedicated engineer to manage issues, field pull requests, fix bugs, answer questions, and all the other tasks required in running a project, your community will stay small.
The two tactics I’ve used in building my open source communities are being generous with commit access and investing in automation. Many open source projects are treated as pet projects, where only the original creator has commit access. If you’re lucky enough to have a person contribute more than one patch, give that person commit access. They’ll feel great because they can now help out without asking for permission, and you’ll feel great because you now have more help.
My goal with automation is simple: if a user has commit access, they should be able to manage the entire project. For my open source game, this means that the entire release process is automated, powered by pull requests. Anyone can create a pull request from master into the release branch. If the pull request is merged, then the CI system packages up and publishes the release. While I had to invest significant time into the process, it allowed me to step back from the game without slowing down development.
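The merge-gated release flow described above can be sketched roughly like this. This is a hypothetical illustration, not the actual game’s CI code: the event payload shape, field names, and `publish` callback are all assumptions standing in for whatever the real CI system provides.

```python
# Sketch of a CI gate for pull-request-driven releases: packaging and
# publishing only run when a pull request into the release branch has
# actually been merged. The payload shape here is illustrative.

def should_publish(event: dict) -> bool:
    """Return True only for a merged PR whose target is the release branch."""
    pr = event.get("pull_request", {})
    return (
        event.get("action") == "closed"
        and pr.get("merged") is True
        and pr.get("base", {}).get("ref") == "release"
    )

def handle_event(event: dict, publish) -> bool:
    """Invoke the packaging/publish step when the gate passes."""
    if should_publish(event):
        publish()  # e.g. build artifacts and push them to a release host
        return True
    return False
```

The point of the gate is that any committer can trigger a release just by opening and merging a pull request; no one needs shell access to a release machine.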
This is an excellent approach! I’ve heard other people do similar things!
Essentially, Wordpress expects certain resources to be local, and so you’ve got to do some modifications to get it to handle things that aren’t. This is why “the cloud” is such a silly term. Issues like this become far more obvious when you call it what it actually is: someone else’s computer.
Local resources aren’t really incompatible with someone else’s computer.
The term cloud, for all that it’s overplayed, also implies that the computer is ephemeral. Or sharded. Or something. But clearly something different than the shared hosting of a decade ago, which would also qualify as someone else’s computer but could run wordpress just fine.
Sure. Perhaps my statement was laced a little too heavily with snark. I just think that the current choice of language (i.e. “The Cloud”) obscures issues with technology decisions that would otherwise be obvious. “The Cloud” is more than anything a marketing term, and by obscuring meaning and hiding potential problems, it becomes easier to sell “The Cloud” as a panacea which can then be picked up by managers who don’t know better.
I agree that “the cloud” is a bad term. But it’s also commonly used, if frequently misunderstood. Before I started the WordPress series, I wrote a series on adapting legacy apps for the cloud. The very first post in that series addresses these crucial differences in a way that hopefully cuts through the buzzword mess we’ve been saddled with.
Check it out:
I found this article to be of poor quality.
Why does the Stateless Approach section not bring up the CAP theorem? It has the same issues as the distributed file system.
Why doesn’t the author suggest putting your data in a database? Using something like an RDBMS gets you off your single server at the cost of availability.
Why doesn’t the author talk about how to get around the CAP theorem? For example, if you craft your data intelligently, you can be eventually consistent.
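One textbook example of “crafting your data” for eventual consistency is a grow-only counter (G-counter): each replica increments only its own slot, and replicas converge by taking element-wise maximums when they merge. This is a minimal illustrative sketch of that idea, not something from the article:

```python
# Grow-only counter (G-counter): a classic eventually consistent
# structure. Each replica increments only its own entry; merging takes
# the per-replica maximum, so replicas converge regardless of the order
# in which they exchange state.

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts: dict[str, int] = {}

    def increment(self, n: int = 1) -> None:
        # Only ever touch our own slot; never another replica's.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other: "GCounter") -> None:
        # Element-wise max is commutative, associative, and idempotent,
        # which is what makes convergence order-independent.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self) -> int:
        return sum(self.counts.values())
```

Two replicas can each accept increments while partitioned, then merge in either order and agree on the total.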
This is just weak sauce advice.
Ah, I see the source of your confusion.
This article doesn’t mean to talk about traditional database state at all. That’s a completely separate topic. All we’re talking about here is file system state. Which is why in the Stateless Approach section, there’s no need to discuss CAP, because nothing is being distributed. The file system is either read only, or completely temporary.
Similarly, the “An Alternative Approach” section hopes to address the idea that you can get around the CAP theorem for file systems. The summary is: you can’t, if your app is expecting POSIX semantics.
Perhaps I’m misunderstanding this sentence:
It might be as simple as switching to uploading files to S3 instead of the file system, though this will also mean converting your code to use an S3 library for manipulating files
This implies to me that when writing to a file you push to S3 rather than the local filesystem.
Regardless, I think this post leaves a lot to be desired and is more-or-less a fluff piece.
Your understanding of that sentence is correct. What I’m suggesting is that for any file you need to persist, you write it directly to a service like S3. For anything temporary, write it to the filesystem as normal.
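That persist-vs-temporary split might look roughly like this. A hedged sketch only: the dict stands in for an object store like S3 (in practice you’d use an SDK such as boto3), and the bucket name is hypothetical.

```python
# Illustrative split between durable and scratch data: anything that
# must survive the instance goes to an object store (modeled here as a
# dict standing in for something like S3), while genuinely temporary
# files stay on the local filesystem, which may vanish at any time.
import os
import tempfile

object_store: dict[str, bytes] = {}  # stand-in for an S3 bucket

def save(key: str, data: bytes, persistent: bool) -> str:
    if persistent:
        object_store[key] = data           # survives instance loss
        return f"s3://my-bucket/{key}"     # hypothetical bucket name
    # Temporary scratch data: local disk is fine, it is disposable.
    path = os.path.join(tempfile.gettempdir(), key)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

The payoff is that any instance in the fleet can serve a request, because no instance’s local disk holds anything irreplaceable.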
This post is the fourth in a series:
The series as a whole is attempting to address the question:
“I have a legacy off-the-shelf app like WordPress that expects to be able to write files to the local filesystem. How do I deploy this app in the cloud while taking advantage of everything the cloud has to offer?”
A lot of people approach this problem by deploying WordPress (or whatever) onto a single server, and just scaling that up. But this is no different from the traditional VPS model of hosting! You may as well not use a PaaS. You’re certainly not getting your money’s worth if you’re doing this. (And you’re going to get a nasty shock when AWS “degrades” your single instance…)
But if you want to distribute something like WordPress across multiple instances, then some changes need to be made. And this series attempts to explain what those changes are, and why they are necessary.
Your understanding of that sentence is correct. What I’m suggesting is that for any file you need to persist, you write it directly to a service like S3.
Which puts you right back into CAP land.
Ah, no. Well, it’s different. Off-the-shelf apps like WordPress assume a POSIX interface for writing out files. That’s where the problem lies. You can’t distribute that without changing your assumptions. So if you change your code to write any files that must be written to something like S3, you are changing those assumptions.
There’s no need to mitigate it any more, because you’re embracing it.
Is S3 as flexible as local files? Can you open an S3 file in append mode?
An issue I’ve encountered when building systems on top of networked file stores is that operations like truncate, append, and inotify are not available. This stymies developers who don’t know the design patterns for working on data systems without those features.
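One common pattern for working without append or truncate is to model the “file” as a sequence of immutable chunk objects: an append writes a new chunk under the next sequence number, and a read lists and concatenates the chunks in order. A minimal sketch of the idea, with a dict standing in for the object store:

```python
# Append without append: chunks are write-once objects named by a
# zero-padded sequence number under the file's key prefix. Reading
# concatenates the chunks in key order. Real systems apply the same
# idea by listing keys under a prefix in an object store.

store: dict[str, bytes] = {}  # stand-in for a networked object store

def append(name: str, data: bytes) -> None:
    seq = sum(1 for k in store if k.startswith(name + "/"))
    store[f"{name}/{seq:08d}"] = data  # chunks are never rewritten

def read(name: str) -> bytes:
    keys = sorted(k for k in store if k.startswith(name + "/"))
    return b"".join(store[k] for k in keys)
```

Truncate falls out of the same design (drop trailing chunks), and change notification is typically replaced by polling the key listing or by an explicit event feed, since there is no inotify equivalent.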
The article states:
Consistency, availability, and partition tolerance. Pick two.
It’s been pointed out that this formulation of the CAP theorem doesn’t make sense. You can’t pick CA.
Oh. I saw this a while back, but didn’t get a chance to read it. Thanks for reminding me!
Not sure if I’m meant to respond, as we already know each other… ;)
Noah Slater here. One of the original guys behind CouchDB. Spend most of my free time doing stuff for Apache. Currently mentoring Stratos and MetaModel. Successfully graduated CloudStack last year.
Long time OSS contributor. Been involved with Debian, GNU, FSF, W3C, etc. One of my claims to fame is that I contributed to the first cross-browser JS library in 1999. And the same year, I was the last person to ever receive the Macromedia DHTML Spotlight Award. Haha.
I forgot about the Macromedia DHTML Spotlight Award. Given that DHTML became an important application platform in 2004 (?) with Gmail, and now the dominant one since maybe 2008, were they far ahead of their time or just stupid?
Macromedia were fully behind DHTML in the late 90s. They had demo sites, showcases, etc. One of the only companies I remember being involved. It was a very sparse hacker scene. Not many people doing it, so you kinda felt like you knew all the other folks doing DHTML. I sorta miss that!