Serverless means your unit of deployment / compute / cost is not analogous to a machine. So when it’s not running it’s almost free, and you don’t have to worry about load spikes in advance.
I didn’t pick the term. I wish it meant “app that runs on phones and browsers but needs no coordinating component” but we don’t always get what we want. :)
If you aren’t familiar, there is a whole industry gathered under this banner; see for example http://serverlessconf.io
You’re using it though.
The recent white supremacist/racist movement in the USA calls itself “the alt-right” but that doesn’t mean the rest of us don’t know what they really are.
Similarly, stop using a ridiculous marketing buzzword. If you need something catchy, call it Functions as a Service (FaaS).
This is total spam. It says nothing of technical value and just pushes the reader to use a “serverless database”. Whatever that is? The post, literally, says “there is no downside”.
Yes there are risks inherent in new technology and new startups, but from a technical standpoint, I’d argue that Fauna sidesteps most/all of the downsides of the existing solutions people move to when their Postgres / MySQL thing gets exhausting to manage. Strong consistency, elastic scalability, no provisioning trap, hierarchical multi-tenancy, event feeds, etc.
Oh c'mon, if it’s technology there are trade-offs and downsides. You’re just being dishonest and I don’t think lobste.rs will fall for it.
Doesn’t look like there are foreign keys. That alone kinda makes it DOA as a SQL replacement.
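For anyone who hasn’t leaned on them: a foreign key constraint means the database itself rejects dangling references, instead of your app code having to police them. A minimal sketch using SQLite (which, notably, ships with enforcement off by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id)
    )
""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO orders (id, user_id) VALUES (10, 1)")  # valid reference, accepted

try:
    # No user 999 exists; the engine refuses the row outright.
    conn.execute("INSERT INTO orders (id, user_id) VALUES (11, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Without that guarantee at the database layer, every writer of the `orders` table has to re-implement the check, which is the maintenance burden being objected to here.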
An unexpected traffic surge is the worst time to mess with your database infrastructure. This is the cloud database trap.
I think integrating a proprietary DB from a startup is a much bigger risk than this. I can almost eliminate that risk with scalability testing and overprovisioning (beefy DB servers are still cheap compared to devs). I’m not even sure where to start calculating the risk of a DB startup: what’s the cost of having maybe 1-3 months of notice to rewrite every line of code that touches the database (assuming its architectural decisions didn’t seep into my app design, and of course they did)?
It’s a balance of trade-offs. Our customers are comfortable with closed source, because they like to see us with a reliable revenue stream. Our ability to run in multi-cloud and on-premises, with reasonably priced per-core licensing, means your cost model isn’t tied to your cloud provider.
The team has been together for years, and the pace and product we are building feels sustainable, not flashy.
Not to be a wet blanket, but I’m not sure why I should be excited about 120k writes/s on a 15-node SSD cluster.
Running in a single datacenter or worse, on a single node, is not really viable in 2017. If you want scalable, globally consistent transactions there is nothing else to choose from. That’s why there’s a lot of excitement about distributed transaction protocols these days.
Running in a single datacenter or worse, on a single node, is not really viable in 2017
That’s a bit bold. Sometimes keeping things simple is worth more than keeping things globally available.
There are lots of organizations that keep their main database in a cluster that’s strongly consistent thanks to physical proximity and often dedicated lines. VMS and NonStop clusters were often done this way. Such a setup can be replicated on a per-country or per-continent basis.
We are super excited to be one of the first commercial users of the Calvin protocol http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf
Are you using the (automatic) reconnaissance txn approach or are you requiring that all txns declare their read/write set ahead of time?
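For readers who haven’t seen the Calvin paper: deterministic schedulers need each transaction’s read/write set up front, and the “reconnaissance” approach handles transactions whose set depends on data by doing a non-binding dry run first, then re-executing with the predicted set and restarting if the prediction turns out wrong. A heavily simplified, single-node sketch (all names here are hypothetical, not Fauna’s API, and the real protocol is distributed):

```python
def reconnaissance_execute(store, txn_logic):
    """Run txn_logic, discovering its read set via a dry run first."""
    while True:
        # Phase 1: non-binding reconnaissance run against a snapshot
        # to predict which keys the transaction will read.
        predicted_reads, scratch = set(), {}
        txn_logic(dict(store), predicted_reads, scratch)

        # Phase 2: the "real" execution, with the declared read set.
        actual_reads, writes = set(), {}
        txn_logic(store, actual_reads, writes)
        if actual_reads <= predicted_reads:
            store.update(writes)  # prediction held: commit
            return writes
        # Prediction was stale (data shifted between phases): restart.

def transfer(db, reads, writes):
    # The read/write set depends on the data: move 10 units
    # from whichever account is richer to the other one.
    reads.update({"a", "b"})
    src = "a" if db.get("a", 0) >= db.get("b", 0) else "b"
    dst = "b" if src == "a" else "a"
    writes[src] = db[src] - 10
    writes[dst] = db[dst] + 10

store = {"a": 100, "b": 50}
reconnaissance_execute(store, transfer)
print(store)  # {'a': 90, 'b': 60}
```

The alternative the question alludes to is simply requiring clients to declare the sets themselves, which avoids the dry run but pushes the burden onto application code.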
Unusual username you have there.
All the talk of “not paying” for idle capacity smacks of hype. Someone is paying for idle capacity. Either Fauna is consciously losing money on each customer right now, or the idle costs are being rolled into per-use costs. Which is it?
There are fixed costs to running even a tiny cluster. You want to be in multiple data centers for redundancy. You want machines that won’t get swamped the second your site gets linked on a popular blog. All that for what might only be a few hundred requests a day. The minimum hardware you are running can support orders of magnitude more traffic.
If you use the cloud managed by FaunaDB, you’re only paying for the resources you use. So it’s much cheaper for small apps, without Fauna losing money. And for big apps it’s still cheaper once you factor in the cost of a database operations team, maintaining hardware nodes, etc.
This is the same story as virtualization and containers. The more heterogeneous workloads you run on a FaunaDB cluster, the better utilization smooths out.
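The smoothing claim is just statistical multiplexing: independent bursty tenants rarely spike at the same moment, so the aggregate peak is far below the sum of the individual peaks. A toy simulation (workload shapes and numbers are illustrative, not Fauna’s):

```python
import random

random.seed(42)

# 50 bursty tenants, 1000 hours of load each, drawn from an
# exponential distribution (occasional spikes well above the mean).
tenants, hours = 50, 1000
loads = [[random.expovariate(1.0) for _ in range(hours)] for _ in range(tenants)]

# Isolated provisioning: every tenant buys capacity for its own peak.
isolated_capacity = sum(max(series) for series in loads)

# Shared provisioning: the cluster only needs to cover the peak
# of the *aggregate* load across all tenants.
shared_capacity = max(sum(series[h] for series in loads) for h in range(hours))

print(f"isolated: {isolated_capacity:.0f}, shared: {shared_capacity:.0f}")
```

Run it and the shared figure comes out a fraction of the isolated one, which is exactly the utilization argument made for virtualization and containers.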
It’s a real issue for a service like Dynamo. You provision it based on an expected RPS, which essentially translates to a static resource allocation on their side that is not shared with other tenants.
This is fine for situations where you have a predictable level of throughput, but where RPS is unpredictable and spiky, you must provision for your expected peak rather than your average, or accept being intermittently rate-limited.
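The cost consequence is easy to put numbers on. A back-of-the-envelope sketch of provisioned-throughput pricing (the unit price and traffic figures are made up for illustration; they are not AWS’s rates):

```python
# Spiky workload: averages 100 RPS but peaks 20x higher.
avg_rps, peak_rps = 100, 2_000
price_per_provisioned_rps_month = 0.50  # hypothetical unit price

cost_if_provisioned_for_avg = avg_rps * price_per_provisioned_rps_month
cost_if_provisioned_for_peak = peak_rps * price_per_provisioned_rps_month

print(cost_if_provisioned_for_avg)   # 50.0  -> cheap, but spikes get throttled
print(cost_if_provisioned_for_peak)  # 1000.0 -> no throttling, mostly idle

utilization = avg_rps / peak_rps
print(f"average utilization when provisioned for peak: {utilization:.0%}")  # 5%
```

That 95% idle reservation is the "paying for idle capacity" being argued about upthread: with static allocation, somebody pays for the headroom whether or not the spike arrives.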