Anyone able to compare this against Deis Workflow, Dokku, or some of the other open-source container-based PaaS systems?
Hey, I’m one of the creators of Flynn.
Obviously I’m biased, but here’s my take:
Dokku is architecturally very simple, just a bunch of bash scripts, because it only focuses on single-host use cases.
Flynn is natively highly available (though you can run it in a single-host configuration) and has a larger technical scope, covering everything from the details of how containers are run all the way up to the user interface used to deploy applications. We also run highly available databases within the platform, in addition to stateless web apps. As a result of working to build an easy-to-use, unified solution, we’ve ended up building many components from scratch specifically for Flynn and have very few dependencies.
Deis is designed to use commonly used, off-the-shelf components and focuses on stateless web apps. Currently Deis is built around the Kubernetes and Docker ecosystems.
Flynn’s approach has given us more flexibility, but it has taken longer, so Deis has often reached various milestones faster.
It’s a frequently asked question, so hopefully in the future a well-informed user who has experience with various platforms can write a detailed teardown/comparison.
Glad to answer questions here. (I’m a Mapzen staffer.) Or feel free to reach out to hello@mapzen.com, @mapzen, or @transitland.
If you like code first, have a look at:
I’ve been using Homebrew for a few years, long enough that I may not even notice its limitations. How is pkgsrc different/better/worse?
One of the main differences is that it’s binary packages rather than all source. Whenever I finally updated Homebrew, my Mac was nearly useless for a few hours compiling some rather large packages and their dependencies. Updating pkgsrc takes only a few minutes in nearly every case.
Pkgsrc also has over 14k packages whereas homebrew only has about 3500. And pkgsrc is cross-platform. I use it on OS X, SmartOS, OmniOS, and Linux to get a consistent set of packages/versions/configuration style across all platforms. Pkgsrc directly supports 18 different platforms and I’ve seen unofficial builds on several more.
Pkgsrc packages are fairly easy to update if you want a newer version of something, and the maintainers, in my experience, are always willing to work with you to get patches/updates into trunk.
Homebrew is cute when you’re in college and beer is the central focus of your life. But really, I just need to get stuff done. For the most part pkgsrc does a better job of that than Homebrew.
Homebrew does have binary packages for quite a lot of (all?) brews now, to be fair, as long as you stick it in /usr/local. (Not advocating Homebrew as better than pkgsrc, though. Love pkgsrc on my servers, still using Homebrew locally. Love both.)
But really, I just need to get stuff done.
Well, I guess Google doesn’t get stuff done then ;).
https://twitter.com/mxcl/status/608682016205344768
One of the main differences is that it’s binary packages rather than all source.
That was years ago. Nowadays, many Homebrew formulae are precompiled, so Homebrew rarely compiles stuff anymore:
https://bintray.com/homebrew/bottles
Pkgsrc also has over 14k packages whereas homebrew only has about 3500.
This is a bit disingenuous, because pkgsrc keeps a lot of old versions. E.g.:
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/multimedia/
There are multiple versions of VLC, ffmpeg, etc. Then, surprisingly, quite a few packages that I regularly use are absent from the Darwin packages (they are in Homebrew). For instance: Qt 5, bazel (for building TensorFlow), Armadillo, ghc, cabal-install, pandoc, libsvm, and rust (though I’ve switched to rustup.rs); and SWI-Prolog is at an old version and only with all packages disabled (lite).
https://pkgsrc.joyent.com/packages/Darwin/trunk/x86_64/All/
Although I like pkgsrc, there are also some nice advantages to Homebrew. E.g., if you want to compile and install your own software, you just use /usr/local/Cellar/<mypackage>/<myversion> as the prefix and then the usual Homebrew functionality works (brew link <mypackage> to link under /usr/local, brew uninstall <mypackage> to remove, etc.).
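For anyone who hasn’t tried that workflow, a rough sketch (the package name and version here are made up):

    # build your own software into Homebrew's Cellar
    ./configure --prefix=/usr/local/Cellar/mypackage/1.0
    make && make install

    # let Homebrew manage the symlinks under /usr/local
    brew link mypackage

    # later, remove it like any other formula
    brew uninstall mypackage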
Thanks, this is useful, I’ll take a look at adding or fixing these. If there are any other packages that people would find useful that are missing, please feel free to raise an issue.
Thanks, that’s useful background (well, minus the equation of one’s package manager and one’s age or taste for alcohol ;)
After seeing too many “Rails is dead” rants from intermediate developers, who may have enough experience to make their own judgements but not enough to generalize to others' situations, it’s refreshing to read this.
I agree. The future of Rails and of Ruby has less to do with raw speed or hipness than it does with having a smooth and long path from “quicky” web apps up to major APIs with many contributors and appropriate amounts of abstractions.
In my own experience, on a number of small and verging-on-major Rails apps, the transition from the former to the latter scale of Rails project has always been marked by the creation of an app/services directory.
Sorry, this is a bit off-topic, but would you mind pointing me to what is implied by having an app/services directory? I’m currently learning Ruby/Rails at a new job that involves managing and extending legacy Rails apps and am trying to get a handle on what good refactoring paths are. I have experience doing similar work in Java, but I imagine the Rails community has had plenty of time to come up with their own best practices for cleaning up technical debt.
Not OP, but if you have a flow where you are orchestrating more than a few models, doing some mailings, etc., you could extract it into a service object (or interaction, or interactor, or whatever you prefer to call it). A classic example of this popping up in a Rails project is handling signups for a SaaS: you may have to create a few internal models (User, Account, Subscription), communicate with Stripe, and send welcome emails. It’s good to compose those into smaller actions, but you probably need something top-level to coordinate.
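A rough sketch of what that top-level coordinator could look like (the class name, model fields, mailer, and exact Stripe call are illustrative, not from any particular app):

    # app/services/signup_service.rb
    class SignupService
      def self.call(params)
        new(params).call
      end

      def initialize(params)
        @params = params
      end

      # Coordinates the smaller steps: create records, bill, notify.
      def call
        @user = User.create!(email: @params[:email], password: @params[:password])
        @account = Account.create!(owner: @user)
        @subscription = create_subscription
        WelcomeMailer.welcome(@user).deliver_later
        @account
      end

      private

      # Talks to Stripe, then records the result locally.
      def create_subscription
        customer = Stripe::Customer.create(email: @user.email, source: @params[:stripe_token])
        Subscription.create!(account: @account, plan: @params[:plan], stripe_customer_id: customer.id)
      end
    end

The controller then stays a one-liner (account = SignupService.call(signup_params)), and each step can be tested or swapped out on its own.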
Some readings:
Yeah, I also highly recommend that Code Climate blog post. A set of options any solo developer or team can start using right away and grow into over time as their knowledge of architectural patterns increases (and I say that as a technical lead still only confident in a few of ‘em).
Since Citus is available as a Postgres extension, could I enable it on an AWS RDS instance to get multi-core support?
Unfortunately I don’t think RDS allows you to use arbitrary extensions, given that they have a list of supported ones: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport
Letting users hook random binaries into the database doesn’t sound like it makes managing instances easy :)
I think the closest you can come is something like this: https://www.citusdata.com/blog/14-marco/178-scaling-out-postgresql-on-amazon-rds-using-masterless-pg-shard
There, you run the master node in AWS and keep your workers in RDS. That would give you query parallelism, but the blog post uses pg_shard and it would probably take some fiddling to replicate with Citus (we likely rely on some UDFs being present on the workers).
Doh. I bought my very first Raspberry Pi… last week.
Is late Feb. a consistent release date for new versions?
Short version: don’t worry about it, you’re not missing out.
Releases are inconsistent (basing this statement on their past history of unpredictable dates) and even if they weren’t, they sell out so fast and definitively that you shouldn’t expect to be able to buy one right away. I got very lucky when the 2 was released and ordered it that day, but most of my friends who got one didn’t see it for about a month. Also, the RPi 2 is fantastic as it is, unless you really need wireless. On top of that, the price isn’t prohibitive so you can order this new version too :)
TBH, having to dedicate a USB port to a wifi adapter was a much bigger problem before they added the 2 additional USB ports; on the Pi 2 it’s not a big deal.
civic? That would be a bit broader, and also include topics related to community groups, civic tech, activism, etc.
I also was surprised to see no tags for data analysis, data mining, machine learning, or related quantitative techniques.
“BIG DATA!” is a buzzword that will, hopefully, pass away in the next hype cycle. And “data science” is a bit overblown, too.
But that said, I would post to and read a “data” tag.
Is “AI” necessarily “data”? I’m not sure a “data” tag is appropriate for “AI” things that don’t operate on lots of data. I would certainly appreciate a tag for quantitative techniques (data mining, ML, etc.), though.