A cached fragment's key is often composed of the object's identifier and its updated_at timestamp.
Every fragment keyed on this object is invalidated when any field of the object changes. This means you can end up frequently invalidating large fragments even when little of the data they render has actually changed.
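Here is a minimal sketch of that default behaviour, assuming a standard Rails model with an updated_at column; cache_key_with_version is the usual Rails method, while the internal_note field is made up for illustration:

```ruby
# Rails derives the fragment key from the record's id and updated_at.
product = Product.find(42)
product.cache_key_with_version
# => "products/42-20240101093000000000"

# Changing any attribute bumps updated_at, so every fragment keyed on
# this product expires, even one that never renders the changed field.
product.update(internal_note: "not shown in any cached view")  # hypothetical column
product.cache_key_with_version
# => "products/42-20240101093500000000"
```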
For instance, in Supplybunny, the product fragment contained some supplier information, so the cache key had to include the supplier's key as well. As a result, when any supplier field was updated, the cached fragments of all that supplier's products were invalidated too.
Similarly, the product search result fragment contained only a subset of the product's information, yet its cache key was invalidated whenever any product field changed.
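Roughly, the product fragment ended up keyed on both records, along the lines of this sketch (render_product_fragment is a stand-in for rendering the actual partial, not Supplybunny's code):

```ruby
# The fragment is keyed on both records, so a change to either the
# product or its supplier invalidates the cached fragment.
Rails.cache.fetch([product, product.supplier]) do
  render_product_fragment(product)  # stand-in for the real partial
end
```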
Instead, I created different cache timestamps for different fragments.
Updating a field that appears only on the product details page would not invalidate any other pages, and updating supplier information that does not appear on the product pages would not invalidate them.
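One way to picture the result, assuming hypothetical details_updated_at and search_updated_at columns on products (the column names are illustrative, not the real schema):

```ruby
# Hypothetical per-fragment timestamps:
#   details_updated_at - bumped only when a field shown on the details page changes
#   search_updated_at  - bumped only when a field shown in search results changes

# The details fragment is keyed on its own timestamp rather than updated_at,
# so edits to fields that only appear elsewhere leave it cached.
Rails.cache.fetch(["product-details", product.id, product.details_updated_at]) do
  render_product_details(product)  # stand-in for the real partial
end
```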
A straightforward way to implement this is with a hash (a dictionary, in Python terms) mapping fields to the timestamps to touch. A before_save callback checks the changes that are about to be saved and updates the relevant timestamps.
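A minimal sketch of that hash and callback, reusing the hypothetical timestamp columns from above (the field names are made up, not Supplybunny's schema):

```ruby
class Product < ApplicationRecord
  # Map each cache timestamp column to the fields that should bump it.
  CACHE_TIMESTAMPS = {
    details_updated_at: %w[description specifications packaging],
    search_updated_at:  %w[name price unit]
  }.freeze

  before_save :bump_cache_timestamps

  private

  # Check the changes about to be saved and bump only the timestamps
  # whose fields are actually being modified.
  def bump_cache_timestamps
    CACHE_TIMESTAMPS.each do |timestamp_column, fields|
      if fields.any? { |field| will_save_change_to_attribute?(field) }
        self[timestamp_column] = Time.current
      end
    end
  end
end
```

Because the timestamps are set in before_save, they go out in the same write as the change that triggered them, with no extra update query.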
When you add a new field, you should add it to the hash entry for the page it appears on.
This structure is quite robust and will last a long time.