Heh. Did you get this from my phlog reference or is this a coincidence?
Been eyeing building one of these for some time. I’m new to electronics, though, and don’t have a pile of components or know good sources for them. Building one of these from scratch seems overwhelming. Though it’s small and would be a good one to start with, I guess. I’d need a chip programmer, too, though.
Someone should distribute it in kit form. :)
Here is the kit. Currently sold out, but more PCBs are on the way. You can join the waitlist to get notified.
Sweet! Thanks. Somehow I didn’t even think to look on Tindie.
I’ve used JMeter extensively to do performance and scalability tests for services I’ve helped build. Driving it through CI is almost essential in order to keep a reliable shared history of test runs. The jmeter-ec2 project has been helpful for scaling tests out economically, although it has significant bugs and limitations. I’ve usually measured the applications under test with New Relic.
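For reference, the CI step is basically just a headless JMeter run. A sketch (file and directory names here are placeholders, not from any particular project):

```
# Run JMeter in non-GUI mode (-n) against a test plan (-t),
# log results to a JTL file (-l), and generate an HTML report (-e -o).
jmeter -n -t load-test.jmx -l results.jtl -e -o report/
```

The JTL files and reports are what give you that shared history of runs when the CI system archives them as build artifacts.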
Is there a limit on how many users JMeter can simulate?
There’s a practical limit per node of somewhere between 200 and 4,000 threads (which are a good proxy for individual users), depending on how JMeter is tuned. You can scale out horizontally across multiple nodes, though. I’ve done practical tests with the equivalent of 20,000 users using jmeter-ec2, spread across dozens of EC2 servers.
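To make the arithmetic concrete, a quick back-of-the-envelope using the numbers above (the real per-node capacity depends entirely on tuning, the test plan, and the node’s resources):

```python
import math

target_users = 20_000
# Per-node thread capacity range from above; treat these as rough bounds.
low_capacity, high_capacity = 200, 4_000

nodes_worst = math.ceil(target_users / low_capacity)   # conservative tuning
nodes_best = math.ceil(target_users / high_capacity)   # well-tuned nodes
print(nodes_worst, nodes_best)  # 100 5
```

“Dozens of servers” for 20,000 users lands comfortably between those two bounds.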
How do containers reconcile with components that rely on “owning” an entire physical machine, like Postgres or the Erlang VM? Say, with Erlang I can routinely run with half a terabyte of RAM and a dedicated 10G network and serve hundreds of thousands of users per node. Can I do this with K8s?
I wouldn’t say PostgreSQL relies on “owning” an entire machine, but if you want that, you can create node pools with taints, and then set up your PostgreSQL pod such that it can tolerate said taints. It will be the only pod allowed to be scheduled on that node. (I suspect you might still have some Kubernetes infrastructure running on that node, so I doubt you can literally remove everything, but you can certainly manage the allocation of pods to nodes in a fine-grained way.)
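A minimal sketch of what I mean (node name, taint key, and value are made up): taint the node with `kubectl taint nodes db-node-1 dedicated=postgres:NoSchedule`, then give the PostgreSQL pod a matching toleration plus a nodeSelector:

```yaml
# Fragment of the PostgreSQL pod spec
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "postgres"
    effect: "NoSchedule"
  nodeSelector:
    dedicated: postgres   # assumes you also labeled the node with this key/value
```

Note the toleration alone only *permits* the pod on the tainted node; it’s the nodeSelector (or node affinity) that actually pins it there, so you’d also label the node (`kubectl label nodes db-node-1 dedicated=postgres`).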
Can K8s help with replication and failover? Say, Amazon RDS maintains a DNS record for clients to use, and when a master failure is detected it promotes a replica and updates that record.
think of k8s as ‘erlang for the datacenter, thrown roughly together by enterprises and people who like C’ and you’ll get pretty close.
I don’t know. I’m not a k8s expert. I just know the basics. My guess is that something like that is possible. Disclaimer: that’s probably my answer for every question you might ask. K8s is very large and very complicated. I don’t even know enough to say whether it is mostly incidental or necessary complexity.
The Cowboy web server supports a variation of the Webmachine REST flow.