Cache and Rails

Work In Progress

Each cache entry has a TTL (time to live), which controls when cache keys expire. After the set time has passed, the key is deleted from the cache. For high-traffic scenarios, the TTL should be relatively short. To pick a proper TTL, understand how often your data changes, and weigh the risk of serving outdated/stale data.

It is also a good idea to add some jitter (a small random amount of time) to your TTL. If all your cache entries expire at the same time, your application will suddenly come under heavy load and drain your database (sometimes called a cache stampede).
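A minimal sketch of adding jitter. The `BASE_TTL` value and helper name are made up for illustration; with `Rails.cache` you would pass the result as `expires_in`:

```ruby
# Base TTL plus up to 10% random jitter, so keys written at the same
# moment do not all expire in the same instant.
BASE_TTL = 15 * 60 # 15 minutes, in seconds

def ttl_with_jitter(base = BASE_TTL, jitter_fraction: 0.1)
  base + rand(0..(base * jitter_fraction).to_i)
end

# e.g. Rails.cache.fetch("user/42", expires_in: ttl_with_jitter) { ... }
```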

Or: Cache aside.

Ask the cache first; if the value is there, return it. Otherwise, query the database and store the result in the cache.

Write-through: when you update the database, also update the cache right away. This usually pairs with lazy loading (cache aside).
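A cache-aside sketch using a plain Hash as a stand-in for a real cache store (the user data and `DB_CALLS` tracker are hypothetical; `Rails.cache.fetch` with a block implements this same read-through pattern):

```ruby
CACHE = {}
DB_CALLS = [] # track how often we hit the "database"

def fetch_user_name(user_id)
  cached = CACHE[user_id]
  return cached if cached

  DB_CALLS << user_id    # simulate the expensive database query
  name = "user-#{user_id}"
  CACHE[user_id] = name  # populate the cache on a miss
  name
end
```

The first call misses and queries the "database"; subsequent calls for the same key are served from the cache.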

@instance ||= something_expensive: memoization, reusing a value already computed in the memory of your thread/process.
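A small memoization sketch (the `Report` class and its counter are made up to show the effect; the memoized value lives only as long as the object):

```ruby
class Report
  attr_reader :computations

  def initialize
    @computations = 0
  end

  def total
    # Computed once, then reused from instance memory on later calls.
    @total ||= begin
      @computations += 1
      1 + 2 + 3
    end
  end
end
```

One caveat: `||=` re-runs the expensive code every time if the memoized value is legitimately `nil` or `false`; use `defined?(@total)` checks in that case.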

Cache in a Hash:

user_bookmarks_count = {}
user_id = 42
# Note: Hash#fetch with a block returns the block's value but does NOT
# store it in the hash, so use ||= to actually cache the result:
user_bookmarks_count[user_id] ||= expensive_sql_query(user_id)

Active Record cache.
SQL caching within the same request: if you issue the same SQL query twice in the same request, it won't hit the database twice.

Going over the network to an external cache has a latency cost, usually around 1 ms per round trip. Such caches can typically serve requests at roughly the 10k-1M per second scale.

E.g. Amazon ElastiCache (Redis & Memcached)

Rails.cache can be configured to cache in memory, in files, or in external services such as Redis or Memcached.
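For example, in an environment config file (a sketch; the store names are the ones documented in the Rails caching guide, the paths and env var are placeholders):

```ruby
# config/environments/production.rb

# In-memory (per-process) cache:
config.cache_store = :memory_store, { size: 64.megabytes }

# File-based cache:
# config.cache_store = :file_store, "/path/to/cache/directory"

# External service (Redis):
# config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }
```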

Redis is single-threaded, so you'll likely hit a capacity ceiling (either CPU or memory) at some point. Then you can look into Redis Cluster, which splits the key space across multiple Redis instances.
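A simplified sketch of splitting a key space across instances. Real Redis Cluster hashes each key with CRC16 into one of 16384 slots and assigns slot ranges to nodes; here Ruby's Zlib.crc32 and a made-up node list stand in for illustration:

```ruby
require "zlib"

NODES = ["redis-1", "redis-2", "redis-3"] # hypothetical instances
SLOTS = 16_384 # Redis Cluster's fixed slot count

def slot_for(key)
  # Real clusters use CRC16; CRC32 is a stand-in for the sketch.
  Zlib.crc32(key) % SLOTS
end

def node_for(key)
  # Simplification: spread slots evenly across nodes.
  # Real clusters assign contiguous slot ranges to each node.
  NODES[slot_for(key) % NODES.size]
end
```

The important property is that the mapping is deterministic: every client computes the same node for the same key, with no coordination needed per lookup.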

AWS added support for data tiering to Amazon ElastiCache in 2021. Data tiering is only slightly more expensive, but when you overfill your cache (exceed memory), items are pushed to an attached SSD instead of being evicted.

So think of r6gd as r6g + SSD, and AWS claims reaching that SSD only costs microseconds of latency.

Read more: https://guides.rubyonrails.org/caching_with_rails.html