We are super excited to be one of the first commercial users of the Calvin protocol: http://cs.yale.edu/homes/thomson/publications/calvin-sigmod12.pdf
Are you using the (automatic) reconnaissance txn approach or are you requiring that all txns declare their read/write set ahead of time?
Unusual username you have there.
A nice ploy to make you pay per row?
Yes, but that may be fine for some applications. IBM and Oracle have been doing that for years, albeit sometimes indirectly by having pricing that depends on how fast your hardware is.
Better than paying by non-row for idle capacity, we think.
So… this is actually a third party hosted database, served on AWS hosts so that you’ll get acceptable latency to it from anything else hosted on AWS? With an API with per request+storage pricing? And where the secret key to access it also identifies which application it should be serving?
It sounds like a reasonable thing to sell to me. You’d have to accept some level of vulnerability to lock-in of course.
I don’t understand how this can purport to be the “first” DB for serverless applications when things like Amazon RDS and DynamoDB already exist and, presumably, work just fine for applications built on AWS lambda?
RDS and DynamoDB still require you to provision a specific and limited level of capacity ahead of time (even if only shortly ahead of time). They aren’t metered and you still have to pay for idle capacity.
That sounds like a razor-thin USP that Amazon themselves could very, very easily wipe out by implementing metered pricing for their DBaaS.
Alternately, one could go out and buy a copy of Feedback Control for Computer Systems and make one’s apps periodically emit API calls to dynamically change the provisioned capacity of RDS/DynamoDB.
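For concreteness, here's roughly what that retrofit looks like as a proportional controller in Python. The gain, target utilization, and the DynamoDB `update_table` call shown in the trailing comment are illustrative assumptions, not a tested deployment:

```python
# A proportional controller for provisioned throughput: measure recent
# consumed capacity, compare it against a target utilization, and nudge
# the provisioned value part-way toward the implied setpoint.

def next_capacity(provisioned, consumed, target_util=0.7, gain=0.5,
                  floor=5, ceiling=10_000):
    """Return the next provisioned capacity (units/sec).

    provisioned -- currently provisioned units
    consumed    -- observed consumed units over the last window
    target_util -- fraction of provisioned capacity we want in use
    gain        -- how aggressively to close the gap (0 < gain <= 1)
    """
    setpoint = consumed / target_util       # capacity that would hit target_util
    error = setpoint - provisioned
    adjusted = provisioned + gain * error   # move part-way toward the setpoint
    return max(floor, min(ceiling, round(adjusted)))

print(next_capacity(100, 90))   # demand rising -> provision more (114)
print(next_capacity(100, 20))   # mostly idle   -> scale down (64)

# In a real deployment the loop would apply the result via boto3, e.g.:
#   boto3.client("dynamodb").update_table(
#       TableName="my-table",   # hypothetical table name
#       ProvisionedThroughput={"ReadCapacityUnits": rcu,
#                              "WriteCapacityUnits": wcu})
```

Which, as the replies point out, converges over minutes, not milliseconds.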
Retrofitting a control system is very different from having an instantaneous, QoS-managed query process scheduler.
RDS and DynamoDB provisioned capacity is based on hardware virtualization; you can’t adjust it millisecond by millisecond, and idle capacity can’t be used to serve other loads.
People have implemented systems that do the second thing but it doesn’t help when you need to burst by 10x in a second, and then dial back down.
For instance, you can connect to FaunaDB in moments, and scale seamlessly from prototype to runaway hit.
Does anyone else read sentences like this and wonder where the case study is of the “runaway hit” using the tech at scale?
I read that sentence and thought it said “runway hit” as in “lithobraking” as in “oopsie, your aeroplane does not work any more”.
We’re working on those case studies. :-)
I dunno, I think DBM, or maybe Berkeley DB, came first.
Those are embedded databases, to me. This is serverless in the way AWS Lambda is serverless.
Aren’t embedded databases sans server, and therefore serverless?
In any case, I know, and was making a joke. The “serverless” name has always been really silly to me, and to others, of course.