I’m surprised that as an SRE guide, this doesn’t mention anything about cache warming, or the trade-offs of using memcached vs something else like redis or couchbase.
The best SRE recommendation around Memcached is not to use it at all:
Don’t use memcached, use redis instead.
(I do SRE and systems architecture)
… there was literally a release yesterday, and the project is currently sponsored by a little company called …[checks notes]…. Netflix.
Does it do everything Redis does? No. Sometimes having simpler services is a good thing.
SRE here. Memcached is great. Redis is great too.
HA has a price (leader election, tested failover, etc.). It’s an antipattern to use HA for your cache.
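To make that concrete, here is a minimal cache-aside sketch (Python; pymemcache assumed installed, and the host name and fetch_from_db() helper are hypothetical). The point: a dead cache node is just a miss that falls through to the source of truth, so the cache layer itself never needs leader election or failover.

    from pymemcache.client.base import Client
    from pymemcache.exceptions import MemcacheError

    # Hypothetical cache node; short timeouts so a dead node fails fast.
    cache = Client(("cache-1.internal", 11211), connect_timeout=0.05, timeout=0.05)

    def fetch_from_db(key):
        # Hypothetical stand-in for the real source of truth.
        return b"value-for-" + key.encode()

    def get(key):
        try:
            value = cache.get(key)
            if value is not None:
                return value
        except (MemcacheError, OSError):
            pass  # cache node down: treat it as a miss, don't fail the request
        value = fetch_from_db(key)
        try:
            cache.set(key, value, expire=300)
        except (MemcacheError, OSError):
            pass  # best-effort write-back
        return value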
Memcached is definitely not abandonware. It’s a mature project with a narrow scope. It excels at what it does. It’s just not as feature-rich as something like Redis. The HA story is usually provided by smart proxies (twemproxy and others).
It’s designed to be a cache; it doesn’t need an HA story. You run many, many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.
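For readers who haven’t seen it, a minimal sketch of the consistent-hashing idea (plain Python, not any particular library; node names are placeholders): keys map onto a ring of node positions, so adding or removing a memcached node only remaps a small slice of the keyspace.

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, nodes, replicas=100):
            self.ring = {}          # ring position -> node
            self.positions = []     # sorted ring positions
            for node in nodes:
                for i in range(replicas):  # virtual nodes smooth the distribution
                    pos = self._hash(f"{node}#{i}")
                    self.ring[pos] = node
                    bisect.insort(self.positions, pos)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def get_node(self, key):
            idx = bisect.bisect(self.positions, self._hash(key)) % len(self.positions)
            return self.ring[self.positions[idx]]

    ring = HashRing(["cache-1:11211", "cache-2:11211", "cache-3:11211"])
    print(ring.get_node("user:42"))  # the same key always routes to the same node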
Seems like Hazelcast is the successor of Memcached: https://hazelcast.com/use-cases/memcached-upgrade/
I would put it with a little more nuance: if you already have Redis in production (which is quite common), there is little reason to also add Memcached and take on the complexity of new software you may not have as much experience with.
this comment is ridiculous
I was under the impression that Facebook uses it extensively; I guess Redis it is.
Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.
Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
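A minimal sketch of that client-side approach, assuming pymemcache (one such library; the host names are placeholders), whose HashClient handles the consistent hashing for you:

    from pymemcache.client.hash import HashClient

    client = HashClient([
        ("cache-1.internal", 11211),
        ("cache-2.internal", 11211),
        ("cache-3.internal", 11211),
    ])

    client.set("user:42:profile", b"...", expire=300)  # routed to one node by key hash
    print(client.get("user:42:profile"))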
If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but for the features it has, they’re better than the competition.
Most folks run multiple Redis processes per node (CPU count minus one is pretty common), just as an FYI, so the “single process thing” is probably moot.
N-1 processes is better than nothing but it doesn’t usually compete with multithreading within a single process, since there can be overhead costs. I don’t have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis) and Redis had higher latency and lower throughput.
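For context, a comparison like that tends to look something like this rough micro-benchmark sketch (hypothetical, not the internal one mentioned above; pymemcache and redis-py assumed, both servers on localhost defaults). A real test would add concurrency, pipelining, and realistic key/value sizes.

    import time

    import redis
    from pymemcache.client.base import Client as MemcacheClient

    def bench(label, setter, getter, n=10_000):
        payload = b"x" * 100
        start = time.perf_counter()
        for i in range(n):
            setter(f"bench:{i}", payload)
            getter(f"bench:{i}")
        elapsed = time.perf_counter() - start
        print(f"{label}: {n / elapsed:.0f} set+get pairs/sec")

    mc = MemcacheClient(("127.0.0.1", 11211))
    r = redis.Redis(host="127.0.0.1", port=6379)

    bench("memcached", mc.set, mc.get)
    bench("redis", r.set, r.get)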
Yup. Totally. I just didn’t want people to think that there’s all of these idle CPUs sitting out there. Super easy to multiplex across em.
Once you start wanting to do more complex things (structures, caching policies), it may make sense to move to Redis.
Yeah agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, and so cache warming stops being as much of a problem. And it’s still very fast compared to most things, it’s just hard to beat Memcached at the (comparatively few) operations it supports since it’s so simple.
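As a small illustration of the kind of data-structure operation that argument is about (redis-py assumed, server on the default port): a sorted set used as a leaderboard, something you’d otherwise have to build client-side on top of memcached’s plain get/set.

    import redis

    r = redis.Redis(host="127.0.0.1", port=6379)

    r.zadd("leaderboard", {"alice": 3100, "bob": 2800, "carol": 3350})
    r.zincrby("leaderboard", 150, "bob")  # atomic server-side score bump

    # Top three players, highest score first, with scores.
    print(r.zrevrange("leaderboard", 0, 2, withscores=True))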