NYTProf is pretty great though
The best SRE recommendation around Memcached is not to use it at all:
Don’t use memcached, use redis instead.
(I do SRE and systems architecture)
… there was literally a release yesterday, and the project is currently sponsored by a little company called… [checks notes] …Netflix.
Does it do everything Redis does? No. Sometimes having simpler services is a good thing.
SRE here. Memcached is great. Redis is great too.
HA has a price (Leader election, tested failover, etc). It’s an antipattern to use HA for your cache.
Memcached is definitely not abandonware. It’s a mature project with a narrow scope. It excels at what it does. It’s just not as feature rich as something like Redis.
The HA story is usually provided by smart proxies (twemcache and others).
It’s designed to be a cache, it doesn’t need an HA story. You run many many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.
seems like hazelcast is the successor of memcached
I would put it with a little bit more nuance: if you already have Redis in production (which is quite common), there is little reason to also add memcached and take on the complexity of new software you may not have as much experience with.
this comment is ridiculous
it’s pretty much abandonware at this point
i was under the impression that facebook uses it extensively, i guess redis it is.
Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.
Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but for the features it has, they’re better than the competition.
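The consistent-hashing approach mentioned above can be sketched in a few lines. This is a minimal illustration, not any particular library’s implementation; the node names, vnode count, and use of MD5 are all arbitrary choices for the example:

```python
import hashlib
from bisect import bisect

class HashRing:
    """Minimal consistent-hash ring: each key maps to the nearest node
    position clockwise, so removing a node only remaps the keys that
    lived on it, instead of reshuffling the whole cluster."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes          # virtual nodes smooth the distribution
        self.ring = {}                # hash position -> node name
        self.sorted_keys = []
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            self.ring[self._hash(f"{node}#{i}")] = node
        self.sorted_keys = sorted(self.ring)

    def remove_node(self, node):
        self.ring = {pos: n for pos, n in self.ring.items() if n != node}
        self.sorted_keys = sorted(self.ring)

    def get_node(self, key):
        # first ring position at or after the key's hash, wrapping around
        idx = bisect(self.sorted_keys, self._hash(key)) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]
```

With a ring like this, the client decides which memcached node owns each key; the servers never need to know about each other, which is why no built-in clustering or HA machinery is required.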
Just as an FYI, most folks run multiple Redis processes per node (CPU count minus one is pretty common), so the “single process thing” is probably moot.
N-1 processes is better than nothing but it doesn’t usually compete with multithreading within a single process, since there can be overhead costs. I don’t have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis) and Redis had higher latency and lower throughput.
Yup. Totally. I just didn’t want people to think that there’s all of these idle CPUs sitting out there. Super easy to multiplex across em.
Once you start wanting to do more complex things / structures / caching policies, then it may make sense to move to Redis.
Yeah agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, and so cache warming stops being as much of a problem. And it’s still very fast compared to most things, it’s just hard to beat Memcached at the (comparatively few) operations it supports since it’s so simple.
Syncthing + Git seem a little bit overlapping, don’t you think? Instead I would just create a git account and use it via SSH.
You don’t even need a “git” user ala gitolite if you only need one user. You can ssh as yourself and it all works.
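The flow described above is just plain git over plain SSH; no gitolite, no dedicated user. A sketch, where the server name and repo path are made up for the example:

```shell
# One-time setup on the server, as your own user (no dedicated "git" user):
#   ssh you@server 'git init --bare ~/repos/project.git'
# Then locally:
#   git remote add origin you@server:repos/project.git
# Git speaks its protocol over SSH, so push/pull just work.
#
# The same flow, demonstrated with a local path standing in for the server:
git init --bare /tmp/project.git
git init /tmp/work
cd /tmp/work
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "init"
git remote add origin /tmp/project.git
git push origin HEAD
```
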
while i also have remotes i access via ssh, that method depends on being able to reach a device via ssh. what if it’s offline?
syncthing provides local copies everywhere. there are definitely big caveats to that, but i think it’s an advantage overall.
Maybe I’m confused, but isn’t the point of Git being distributed that you have a copy of the data even if the remote is down? You shouldn’t need access to the remote to access your data.
you are not confused.
i guess if remote is down, and workstation a is not up-to-date, you could pull from workstation b if it is up-to-date.
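That fallback works because any clone can serve as a remote. A sketch, with the remote name "wsb" and paths invented for the example:

```shell
# If the usual remote is down and workstation A is stale, pull from
# workstation B directly, e.g.:
#   git remote add wsb you@workstation-b:path/to/repo
#   git pull wsb main
#
# Demonstrated with local paths standing in for the two machines:
git init /tmp/ws-b
git -C /tmp/ws-b -c user.name=d -c user.email=d@example.com \
    commit --allow-empty -m "from-b"
git init /tmp/ws-a
git -C /tmp/ws-a remote add wsb /tmp/ws-b
git -C /tmp/ws-a pull wsb HEAD
```
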
maybe i just get use out of this because i am lazy. but i find it comforting to know that there’s this directory being synced, that is not my working directory, and it lives in many places and is hard to destroy.
That makes sense. I might just enable SSH on every machine and push to all of the others when I have a change, but your solution certainly works (and is probably more automated, and allows for downtime more nicely). It’s nice that there’s a variety of ways to achieve this with different trade-offs.
If you’re working with trusted staff (i.e. you don’t need to prevent malicious/stupid acts) you don’t need a git user account for collaboration either. Filesystem ACLs solve the ‘files created with the wrong permissions’ issue.
Brad’s had an impressive ability to do work in areas that he realized were important before everybody else got there. Before MongoDB existed or sharding was a term in everyone’s vocabulary, he helped popularize it after he had to do it to scale LiveJournal and talked about it at OSCON. Even within Go, many of the particular things he worked on often seemed disproportionately key to making Go practical (getting it in prod at Google, the stdlib and built-in HTTP server, even small-by-comparison bits like goimports). Makes me even more curious what he’ll do next. :)
his work increases “whipuptitude” imho
Not what you’re looking for, but my website is http://www.malsmith.net/ . Since its main function is to provide software downloads that run on old systems, the site itself is also designed to work on old systems. I’ll probably still want it to work with IE6 essentially forever. (My next site will be gopher.)
Your site background is identical to Leslie Lamport’s and it’s blowing my damn mind
that color blows your mind?
that yellow used to be the gold standard: https://web.archive.org/web/19990428222304/http://www.useit.com/