Comment

Peter Bengtsson

Yeah, it's rarely something you need if you use Redis purely as a cache.

The one case would be if Redis gets flushed (a restart that loses the data, or an explicit FLUSHALL command) and that causes a thundering herd on the backend that the cache is supposed to protect.
For example, a lot of web apps use something like Redis to store user session cookie values (e.g. https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-SESSION_ENGINE in Django), and losing the cache would sign everyone out, which would suck. But even there, there are choices, such as the `cached_db` session backend in Django, which *writes* to both the database and the cache but then mostly *reads* from the cache.
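
For illustration, a minimal sketch of that `cached_db` setup. The cache backend and connection details are assumptions (Django 2.2 has no built-in Redis cache backend, so this uses the third-party `django-redis` package), not something from the comment:

```python
# settings.py (sketch) -- sessions are written to both the database and the
# cache, but read mostly from the cache, so a flushed Redis doesn't sign
# everyone out.
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"

CACHES = {
    "default": {
        # Assumes the third-party django-redis package as the cache backend.
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}
```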

Parent comment

Kyle Harrison

You know, I simply can't think of a scenario where I'd even want Redis to be "durable". It's a great server to spin up and immediately start storing serialized values into, with the application layer taking on the responsibility of refreshing a key when it's expired or missing. Anything else that I'd care about losing to a restart, I'd store in a normal database that properly respects ACID transactions. Can someone spoon-feed me some scenarios where having _redis_ persistence is actually a desirable thing? What's the point of sacrificing the speed (what it's good at) for AOF mode, especially if it's unreliable enough for Redis's docs to make note of it anyway?
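
A minimal sketch of the "refresh the key when expired or missing" pattern described above, assuming the `redis-py` client; `load_from_database` and the key/TTL choices are hypothetical stand-ins, not anything from the comment:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)


def load_from_database(report_id):
    # Stand-in for the real, slower backend query.
    return {"id": report_id, "total": 42}


def get_report(report_id):
    """Cache-aside read: use the cached value if present, otherwise
    recompute it and store it with a TTL. A Redis restart or flush just
    means the next read repopulates the key."""
    key = f"report:{report_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    value = load_from_database(report_id)
    r.set(key, json.dumps(value), ex=300)  # expire after 5 minutes
    return value
```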