Don’t assume anything is slow. Profile before you do anything. Not having automated performance testing these days is like not having automated unit testing ten years ago.
I think the “need” for something like Redis is greatly overstated these days, and one really shouldn’t prematurely “optimize” by adding it until the severe costs of doing so can be afforded. After all, there aren’t many applications whose needs a single beefy, well-designed Postgres server with a good schema and performant queries won’t meet, even for devs willing to test its limits. Scaling well beyond what normal Postgres clustering (yes, that’s a thing; there are options) can give you is a genuinely rare need.
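To make that concrete, here’s a sketch of the key-value workload Redis is often brought in for, served by a plain SQL table with a primary key and an upsert. It uses Python’s sqlite3 as a stand-in so the example is self-contained; the same SQL shape (including `ON CONFLICT ... DO UPDATE`) runs on Postgres 9.5+, though the driver placeholder syntax differs (`?` vs. `%s`). Table and key names are illustrative.

```python
# Sketch: a key-value store on a plain SQL table. sqlite3 stands in for
# Postgres so the example runs anywhere; the SQL is the same shape there.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE kv (
        key   TEXT PRIMARY KEY,
        value TEXT NOT NULL
    )
""")

def kv_set(key, value):
    # Upsert: INSERT ... ON CONFLICT works on Postgres 9.5+ as well.
    conn.execute(
        "INSERT INTO kv (key, value) VALUES (?, ?) "
        "ON CONFLICT (key) DO UPDATE SET value = excluded.value",
        (key, value),
    )

def kv_get(key):
    row = conn.execute(
        "SELECT value FROM kv WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None

kv_set("session:42", "alice")
kv_set("session:42", "bob")   # overwrite exercises the upsert path
print(kv_get("session:42"))   # bob
```

With an index on the primary key, lookups like this are already a single B-tree probe; for many apps that’s plenty before reaching for a second datastore.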
If you’re throwing everything you have into a couple of tables and wondering why Postgres is slow… well, Redis or things like it might be fooling you into believing the problem is anything but you and your sheer laziness. Sorry, the Emperor is naked.
Postgres might work well for it, but my initial reaction would be “let’s make sure it doesn’t adversely impact our other, more important queries, or see whether we should split this work out to Redis and benefit from a second connection pool / CPU / memory, etc.”
Adding Redis means understanding an entirely new security model (see this story from a day or two ago for how dangerous that is), spinning up new infrastructure, bringing in new dependencies to your program, altering your monitoring to watch the Redis box, (likely) reworking a deployment process, …
Put another way: that might be your initial reaction, but try telling one of your ops people that’s what you want to do and watch them start squirming.
Also, don’t forget to look at it from a non-ops perspective: you now have to keep two data stores in sync, because every write has to update or invalidate the cached copy.