The problem with your response is that human perception isn't the only factor. There's also the cost at scale of grinding away for 10x or 20x longer than necessary. A lot of infrastructure, particularly cloud infrastructure, is billed by compute time and is therefore sensitive to this, so cutting the computation by 20x can save a considerable amount of money even when the work isn't happening in a loop.
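As a rough back-of-the-envelope illustration (every number here is hypothetical, not from the original benchmark), the savings scale linearly with run count even when no human is waiting on any single run:

```python
# Hypothetical figures: a batch job burning 200 CPU-seconds per run,
# one million runs per month, at an assumed $0.05 per CPU-hour.
runs_per_month = 1_000_000
cpu_seconds_slow = 200
price_per_cpu_hour = 0.05

def monthly_cost(cpu_seconds: float) -> float:
    # runs * hours-per-run * price-per-hour
    return runs_per_month * cpu_seconds / 3600 * price_per_cpu_hour

slow = monthly_cost(cpu_seconds_slow)
fast = monthly_cost(cpu_seconds_slow / 20)  # the "20x faster" version
print(f"slow: ${slow:,.0f}/mo, fast: ${fast:,.0f}/mo")
```

Each individual run is still "imperceptible" either way; the bill is not.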
An in-memory cache *should* be faster, and such tools absolutely have a place. Under the right conditions, a tool such as Redis can turn an unusable system into a great one. It would be cool if Postgres added in-memory tables in v13 or v14.
In this case, unless the work is compounded in a loop, even the slowest times are *imperceptible* to a human being:
https://www.nngroup.com/articles/response-times-3-important-limits/
Put another way, some of your results are "20x faster" to a computer and imperceptibly different to a person.