[Complete Practical Guide] Laravel Caching Strategy — Cache/Redis/Tags, Locks, HTTP Caching, Deployment Optimization, and Accessible High-Speed UI
What you will learn in this article (key points)
- How to use Laravel’s caching features as a design strategy rather than in an ad hoc way
- Practical patterns for Cache::remember(), tags, locks, expiration, and invalidation strategies
- How to choose among Redis / database / file and what to watch for in multi-server environments
- How to incorporate config:cache, route:cache, view:cache, and event:cache into production deployments
- HTTP caching, ETag, CDN, image and list optimization, and how to handle content freshness
- Key design, monitoring, testing, and accessible loading states to prevent cache-related incidents
Target readers
- Beginner to intermediate Laravel engineers: those whose pages work, but whose lists and dashboards are getting slower
- Tech leads: those who want to standardize Redis-based caching design across a team
- QA / maintenance teams: those who want to reduce issues like “old screens appear” or “only part of the page updates” caused by caching
- Designers / accessibility specialists: those who want to build UIs that are not only fast, but also clearly communicate waiting and update states
Accessibility level: ★★★★★
Caching may seem like a performance topic, but in practice it directly affects “how waiting time is shown” and “how updates are communicated.” In this article, we will also cover role="status", aria-busy, progress indicators that do not rely only on color, auto-refresh that is not too aggressive, and list design that does not leave keyboard users lost.
1. Introduction: Caching is not “magic that makes things fast,” but a design for reducing recomputation
As you continue developing with Laravel, pages that were initially fast often become heavier over time. Search conditions increase, dashboards gain more aggregates, navigation starts showing counts, and external APIs and file processing get added into the mix. A very common response at this point is, “Let’s just add remember() for now.” Of course, that can make things faster temporarily. But if you keep adding caches without designing expiration and invalidation, a different set of problems appears: “old information is displayed,” “we don’t know what to clear,” and “only some parts fail to update.”
Caching is convenient, but the more casually it is added, the harder it becomes to manage. What matters is deciding first what to cache, for how long, at what unit, and when to clear it. Laravel provides a unified Cache API, so even if you switch drivers later, usage does not change much. That means if you organize the thinking up front, moving to Redis or scaling into a multi-server setup becomes much easier later.
2. First, organize the problem: it becomes easier if you divide cache targets into four types
In practice, it helps to divide cache targets into the following four categories.
2.1 Data for screen display
- Homepage rankings
- Category lists
- Dashboard aggregate values
- Sidebar count displays
This is information where “being slightly stale is not disastrous, but speed matters.” It is the easiest place to start.
2.2 Expensive calculations
- Daily sales totals
- Popular article rankings
- Per-customer cumulative values
- Preformatted responses from external APIs
This type becomes expensive if recomputed on every request, so caching has a large effect. At the same time, it is also an area where “stale numbers” can remain if update design is not considered carefully.
2.3 Locks to prevent contention
- Preventing duplicate execution of the same job
- Preventing duplicate launches of a batch process
- Preventing duplicate issuance of the same invoice
This is not mainly about speed, but about consistency and safety. This is where Laravel’s atomic locks come into play.
2.4 Cache for deployment optimization
- config cache
- route cache
- view cache
- event cache
This is not application data, but cache that speeds up Laravel’s own startup and routing. It is very important in production, but can actually get in the way during local development.
Once you have these four categories in mind, it becomes easier to think, “this cache is for display speed,” “this is for mutual exclusion,” or “this is for deployment optimization.”
3. Laravel’s Cache API: start by understanding remember as the core
Laravel’s cache API is very straightforward. Even just understanding put, get, and remember will take you quite far.
use Illuminate\Support\Facades\Cache;
$popularPosts = Cache::remember('home:popular-posts', 300, function () {
    return Post::query()
        ->published()
        ->orderByDesc('views_count')
        ->take(10)
        ->get(['id', 'title', 'slug', 'views_count']);
});
In this example, popular posts are stored under the key home:popular-posts for five minutes.
What matters here is that the key name and expiration time are themselves design information.
- If the key name is vague, it becomes hard to clear later
- If the expiration is vague, you get data that is too old, or recomputation that happens too often
That is why it helps to think of naming like this from the beginning:
- Screen name
- Target
- Conditions
- Locale or tenant ID if needed
Examples:
- home:popular-posts
- dashboard:tenant:12:daily-sales
- products:list:ja:category-books:page-1
If you do this, the purpose remains obvious even during operations.
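As one way to keep these names consistent across call sites, a small helper can assemble the segments in a fixed order. This is a sketch only: the cacheKey() helper below is illustrative, not a Laravel API.

```php
// Illustrative helper: joins non-empty segments into a predictable
// cache key such as "products:list:ja:category-books:page-1".
function cacheKey(string ...$segments): string
{
    return implode(':', array_filter($segments, fn (string $s) => $s !== ''));
}

$key = cacheKey('products', 'list', app()->getLocale(), "category-{$category}", "page-{$page}");
```

Centralizing key construction makes it harder for any single call site to forget a locale or tenant segment.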
4. Choosing a driver: start by asking whether it is shared
Laravel can handle multiple cache drivers through a unified API. In practice, the following way of thinking is useful.
4.1 file
- Easy for local development
- Can be acceptable on a single server
- Not shared in multi-server setups, so be careful
4.2 database
- Easy to introduce
- Fits well with Laravel’s initial setup
- May not scale well for high-frequency access or large-scale operations
4.3 redis
- Works well in multi-server environments
- Easy to combine with locks, queues, and sessions
- In practice, often the easiest and most reliable choice
4.4 memcached
- A fast option
- Strong if your team already has operational experience with it
- But in Laravel projects, Redis is often easier to adopt
For small to mid-scale production systems, using Redis as the default assumption tends to make the design more stable. Sessions, queues, and locks can also be consolidated around Redis, making it easier to reason about multi-server scaling and asynchronous processing with a shared store in place.
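Consolidating on Redis is mostly a matter of environment configuration. As a sketch (note that the cache variable is named CACHE_DRIVER through Laravel 10 and was renamed to CACHE_STORE in Laravel 11):

```
# .env — assumes a reachable Redis instance configured in config/database.php
CACHE_DRIVER=redis       # CACHE_STORE=redis on Laravel 11+
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
```

With cache, sessions, and queues on the same shared store, the multi-server concerns discussed later largely disappear.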
5. Key design: many cache incidents begin with vague naming
The most underestimated but highly effective part of cache design is key design. The main things that should go into a key are:
- Screen or purpose
- Whether it depends on the user
- Whether it depends on the tenant
- Locale
- Search conditions
- Page number
- Version
For example, a list screen search result needs information like this:
$key = sprintf(
    'users:index:tenant:%d:q:%s:status:%s:page:%d',
    tenant()->id,
    md5($search),
    $status ?: 'all',
    $page
);
If you put the raw search term directly into the key, it may become too long, so using something like md5() to shorten it can be a good choice.
Also, if a page has language variants and you do not include the locale in the key, the Japanese and English versions may get mixed up.
A cache incident is not only “old data is shown”; it can also become “information for another user becomes visible.” That is why user-specific and tenant-specific separation must be handled carefully.
6. How to decide TTL: too short and too long both cause problems
If you decide cache expiration by feel alone, it often fails. A practical method is to determine it based on the nature of the target.
6.1 Short (30 seconds to 5 minutes)
- Search results
- Dashboard counts
- External API responses
- Summary information on list screens
6.2 Medium (10 minutes to 1 hour)
- Popular rankings
- Category lists
- Aggregates that change often, but do not require instant freshness
6.3 Long (half a day to one day)
- Master data
- Configuration values
- Fixed classifications such as prefecture lists
If TTL is too short, you end up recomputing almost every time even though you are “using cache.” If it is too long, update reflection becomes slow.
When in doubt, it is often easiest to decide by asking, “Is it acceptable if this is slightly stale?”
Also, if you combine TTL with explicit clearing on update, even longer TTLs can be operated safely.
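A minimal sketch of that combination, assuming a hypothetical Prefecture master model: the long TTL acts as a safety net, while explicit clearing keeps the rare update fresh.

```php
use Illuminate\Support\Facades\Cache;

// Long TTL (one day) as a safety net for master data that almost never changes.
$prefectures = Cache::remember('master:prefectures', 86400, function () {
    return Prefecture::orderBy('code')->get();
});

// ...combined with explicit clearing in the rarely executed update path.
$prefecture->update($data);
Cache::forget('master:prefectures');
```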
7. Invalidation strategy: decide in advance where forget() should be called
Caching is harder to clear than it is to add. In practice, it helps to think in these three patterns.
7.1 TTL-only
This approach lets data expire naturally after a few minutes and does not explicitly clear on updates.
It works well for search lists or lightweight dashboards.
7.2 forget() on update
If the “moment of change” is clear—such as publishing an article, updating a user, or changing settings—clear immediately after the update process.
$post->update($data);
Cache::forget('home:popular-posts');
Cache::forget("post:{$post->id}:detail");
7.3 Clear in groups with tags
This is useful when you want to clear related caches all at once.
However, because it depends on driver support and operational conditions, it is safer to define within the team where tags may be used.
Cache::tags(['posts'])->put("post:{$post->id}", $post, 3600);
Cache::tags(['posts'])->flush();
For frequently updated screens, “TTL-only” is often sufficient. If you add strict invalidation everywhere, complexity rises quickly, so it is practical to balance by importance and update frequency.
8. Locks: Laravel cache can also be used for concurrency control
Cache can be used not only for display speed, but also for preventing duplicate execution. Laravel has a lock API that lets you ensure the same process does not run simultaneously.
use Illuminate\Support\Facades\Cache;
$lock = Cache::lock('billing:issue:2026-04', 120);

if ($lock->get()) {
    try {
        // Invoice issuance process
    } finally {
        $lock->release();
    }
}
This is useful in situations like:
- Preventing duplicate issuance of monthly invoices
- Preventing duplicate launches of the same CSV export
- Preventing concurrent execution of the same batch process
- Supplementing protection against double submission from repeated button clicks
This becomes especially effective when combined with the Scheduler or Queue. Locks are not flashy, but they are extremely valuable in real-world systems.
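For the Scheduler and Queue, Laravel already exposes this lock pattern through withoutOverlapping. The command name and job property below are illustrative; the two snippets belong in the scheduler definition and a job class, respectively.

```php
// Scheduler: withoutOverlapping() takes a cache-backed lock so the same
// scheduled command never runs twice concurrently.
$schedule->command('billing:issue')->dailyAt('02:00')->withoutOverlapping();

// Queued jobs: the WithoutOverlapping middleware does the same per key.
use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    return [new WithoutOverlapping($this->invoiceId)];
}
```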
9. Multi-server operation: if you stay on file cache, inconsistencies are likely
If you only have one production server, problems may stay hidden. But once you have two or more, designs based on file cache or local storage break quickly.
For example, server A may clear cache after an update, while server B still keeps the old cache.
So in multi-server setups, it is safer to keep the following in mind:
- Move cache to a shared store such as Redis
- Move sessions to a shared store as well
- Use the same shared store for locks
- Run cache clearing consistently across all servers during deployment
A design that feels safe with “we only have one server right now” can become a major obstacle when you scale later. You do not need a massive setup from day one, but it is wise to design in a way that is easy to migrate toward shared infrastructure.
10. Deployment optimization: incorporate config / route / view / event caches
Laravel itself also includes built-in cache optimizations for production. In practice, it is important to incorporate them into deployment procedures.
10.1 config cache
This consolidates configuration files, reducing file reads at startup.
It is especially effective in production.
However, once the configuration is cached, the .env file is no longer loaded, so any env() call outside the config files returns null. Implementations that call env() directly in application code will therefore break in subtle ways.
10.2 route cache
This tends to help in large applications with many route definitions.
Note that route:cache cannot serialize closure-based routes and will fail if any remain, so a clean controller-based route structure is a prerequisite.
10.3 view cache
Precompiling Blade views reduces the wait on first access.
10.4 event cache
This speeds up event listener resolution. It is especially useful in apps that rely heavily on event-driven design.
A typical deployment sequence looks like this:
php artisan config:cache
php artisan route:cache
php artisan view:cache
php artisan event:cache
Since local development changes frequently, there is no need to always use these there. The important thing is to separate local and production usage.
11. optimize and optimize:clear: align the meaning in operations
Laravel also provides optimize-related commands.
A common source of confusion within teams is not sharing which command caches what and which one clears what.
A good practice is to document the cache-related commands used during deployment and the clear procedure used during incidents.
For example:
- During deployment: config:cache, route:cache, view:cache, event:cache
- During emergencies: optimize:clear or individual *:clear commands
This makes it much clearer where to look when, for example, “a config change in production is not reflected.”
12. HTTP caching: do not rely only on application cache—use the browser and CDN too
Application cache is important, but HTTP-level caching is also extremely important.
Especially for list pages, images, and public-facing pages, it is often more efficient to let the browser and CDN do the work.
12.1 ETag and 304
If the content has not changed, return 304 Not Modified without sending the body again.
This works well for APIs and some list responses.
12.2 Cache-Control
For public files that rarely change, you can cache them for a long time.
On the other hand, be careful not to apply strong caching to pages that vary per user.
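In Laravel, much of this can be expressed with the built-in cache.headers middleware (SetCacheHeaders). The route and controller below are illustrative:

```php
use Illuminate\Support\Facades\Route;

// cache.headers sets Cache-Control and, with the "etag" option, an ETag
// so unchanged responses can be answered with 304 Not Modified.
Route::middleware('cache.headers:public;max_age=300;etag')->group(function () {
    Route::get('/articles', [ArticleController::class, 'index']);
});
```

Keep such middleware off routes whose responses vary per user, as warned above.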
12.3 Hashed assets
If images and built assets include hashes in their file names, they can be cached for a long time.
When content changes, the URL changes too, reducing stale cache problems.
HTTP caching has a slightly different purpose than Laravel’s internal cache.
Internal cache is about reducing recomputation, while HTTP cache is about reducing retransmission.
13. External API responses: short-lived cache and fallback are practical
External APIs should be designed around the assumption that they are slow, fail, and have rate limits.
This is where short-lived caching becomes helpful.
$data = Cache::remember('weather:tokyo', 300, function () {
return Http::timeout(10)->get('https://example.com/api/weather/tokyo')->json();
});
Even keeping the result for five minutes can significantly reduce load and failure rates.
In practice, the following policy is often useful:
- On success: save the new value
- On failure: if a recent cached value exists, return it
- If only very old data remains: clearly tell the user “currently unable to update”
In other words, cache is useful not only for speed, but also as a buffer against external API failures.
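One way to sketch that policy is a second, longer-lived "stale" copy written on every success. The key names and URL below are illustrative:

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

function tokyoWeather(): ?array
{
    return Cache::remember('weather:tokyo', 300, function () {
        try {
            $data = Http::timeout(10)
                ->get('https://example.com/api/weather/tokyo')
                ->throw()
                ->json();

            // Keep a long-lived fallback copy alongside the fresh value.
            Cache::put('weather:tokyo:stale', $data, 86400);

            return $data;
        } catch (\Throwable $e) {
            // On failure, serve the last known value (null if none exists,
            // which the UI should surface as "currently unable to update").
            return Cache::get('weather:tokyo:stale');
        }
    });
}
```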
14. Dashboards and lists: balance caching with accessibility
Caching reduces waiting time. But even then, if update timing and status communication are not handled carefully, users can become confused. The following points are especially important.
14.1 Communicate loading
<section aria-busy="true" id="summary">
<p class="sr-only">Loading summary data.</p>
</section>
14.2 Communicate updated results
<div role="status" aria-live="polite" class="text-sm">
24 results found.
</div>
14.3 Do not rely only on color
Increases, decreases, and statuses should be communicated with words, not only red and green.
Examples:
- Sales increased
- Low stock
- Update failed
14.4 Do not make auto-refresh too aggressive
Just because caching makes updates easier does not mean the screen should rewrite itself every few seconds. That makes it harder for users to interact calmly.
In many cases, it is safer to keep auto-refresh to a minimum and also provide a “Refresh latest data” button.
15. Testing: make sure cache-related behavior does not break
Caching is useful, but without tests, it is easy to miss problems like “the page became faster, but now stale values appear.”
At minimum, the following viewpoints are effective.
15.1 Confirm that data is cached
Cache::shouldReceive('remember')
    ->once()
    ->andReturn(collect());
15.2 Confirm that updates clear cache
Cache::shouldReceive('forget')
    ->once()
    ->with('home:popular-posts');
15.3 Confirm that different conditions generate different keys
When search conditions or tenant IDs change, verify that the same key is not being reused.
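A minimal test sketch of that separation, using the cache store configured for tests (the key format follows the earlier example; the tenant IDs are illustrative):

```php
use Illuminate\Support\Facades\Cache;

public function test_tenants_do_not_share_cached_results(): void
{
    Cache::put('users:index:tenant:1:page:1', ['alice']);

    // Tenant 2 must miss, not see tenant 1's cached list.
    $this->assertNull(Cache::get('users:index:tenant:2:page:1'));
    $this->assertSame(['alice'], Cache::get('users:index:tenant:1:page:1'));
}
```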
15.4 Confirm fallback behavior on failure
If an external API fails, test that the cached value is used instead.
The purpose of cache testing is not to prove speed, but to prevent stale data and cross-contamination.
16. Monitoring: watch not only cache hit rates, but also signs of incidents
In cache operations, the following metrics are helpful:
- Hit rate
- Miss rate
- Redis memory usage
- Growth of keys
- Number of failed lock acquisitions
- Sudden increase in external API calls
Also, from the user impact perspective, the following symptoms should be monitored:
- An updated page does not reflect changes
- Only some tenants see old values
- Old config remains after rollback
- Batch jobs run twice
In other words, in addition to technical metrics, it is useful to understand the types of incidents that cache commonly causes from an operational point of view. That makes investigation much faster.
17. Common pitfalls and how to avoid them
17.1 Caching everything
If you cache things that change often and are hard to invalidate, the cache quickly becomes unmanageable.
It is safer to start with heavy lists, aggregates, and external APIs.
17.2 Missing conditions in the key
If locale, tenant, user, or search conditions are missing, data collisions can occur.
Even if a key becomes longer, it is safer if its meaning is clear.
17.3 Trying to solve everything with TTL alone
For important updates, forget(), tags, or versioned keys are often easier to manage.
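A versioned-key sketch works even on drivers without tag support: bump a version number instead of deleting entries. The key names below are illustrative.

```php
use Illuminate\Support\Facades\Cache;

// All list keys embed the current version number.
$version = Cache::rememberForever('posts:version', fn () => 1);
$key = "posts:list:v{$version}:page-{$page}";

// On update, bump the version; old entries stop being read
// and simply age out via their TTL.
Cache::increment('posts:version');
```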
17.4 Scaling to multiple servers while still using file cache
This makes update inconsistencies and lock issues more likely.
If multi-server operation is expected, a shared store design is necessary.
17.5 Forgetting config cache in production, or using it all the time in local development
It improves speed in production, but in local development it often causes confusion because changes are not reflected.
Different environments should be operated differently.
18. Checklist (for distribution)
Design
- [ ] Cache targets are categorized into “display,” “aggregates,” “locks,” and “deployment optimization”
- [ ] Key names include purpose, conditions, and dependency information
- [ ] TTL is determined based on update frequency
Safety
- [ ] There is a forget() policy for updates
- [ ] Places where locks should be used for duplicate execution prevention are identified
- [ ] User-specific and tenant-specific caches are separated
Operations
- [ ] A shared cache store is used in multi-server environments
- [ ] config:cache and similar commands are included in production deployments
- [ ] There is an emergency optimize:clear procedure
HTTP / delivery
- [ ] HTTP caching is used for public pages and assets
- [ ] ETag / Cache-Control is considered based on update frequency
- [ ] There is a strategy for reflecting updates when using a CDN
Accessibility
- [ ] Loading is communicated with aria-busy or text
- [ ] Update results are communicated with role="status"
- [ ] Auto-refresh is not too aggressive, and manual refresh is also available
Testing
- [ ] There are tests for invalidation of important caches
- [ ] There are tests for condition-based keys
- [ ] Fallback behavior on external API failure is tested
19. Conclusion
Laravel caching strategy is not simply about adding more remember() calls.
It only becomes operationally reliable when you decide what to store, at what unit, for how long, and when to clear it.
Start with things that are heavy and can tolerate being slightly stale, such as lists, aggregates, and external APIs. Add invalidation strategies where freshness matters, use locks where duplicate execution is dangerous, and in production make optimization steps like config:cache and route:cache part of your deployment process. On the UI side, communicate loading and update states accessibly.
Once this flow is in place, you improve not only speed, but also clarity of the interface and stability during incidents. A good first step is to take one heavy list screen and improve it with the four-part set of key design, TTL, invalidation, and state communication.
