Caching in Laravel: what actually works
Adding Redis to a Laravel application and calling it "cached" is one of the most common performance mistakes we see. The app is still slow. The team is confused. Redis is running. What went wrong?
Usually, the wrong things are being cached, at the wrong layer, with no invalidation strategy. The result is stale data, unpredictable bugs, and response times that barely improve.
Laravel actually gives you four separate caching mechanisms. Each one targets a different part of the request lifecycle. Using one doesn't mean the others are covered.
The four caching layers
A request stops at the first layer that has a valid cached response. Each miss falls through to the next.
Understanding where each layer sits in the request flow changes how you think about caching entirely.
1. Config and route cache
This is the easiest, most overlooked win in any Laravel deployment.
Every time a request hits your application, Laravel reads and merges every file in config/, parses your route definitions, and discovers event listeners. On a typical application, this bootstrap phase costs 20-80ms before a single line of your code runs.
```bash
php artisan config:cache   # Serializes all config into one file
php artisan route:cache    # Compiles all routes
php artisan view:cache     # Pre-compiles Blade templates
php artisan event:cache    # Caches event/listener discovery
```

These should run in every production deployment, without exception. They cost nothing and consistently reduce bootstrap time by 30-50ms. On recent Laravel versions, php artisan optimize runs all of these in one step.
One important constraint: once config:cache runs, env() calls outside of config files return null. If you're calling env('STRIPE_KEY') directly in a service class, it breaks in production. The fix is to always access environment values through config():
```php
// Wrong: breaks after config:cache
$key = env('STRIPE_KEY');

// Correct: always works
$key = config('services.stripe.key');
```

2. Data cache with Redis
This is where most teams focus, and where most of the mistakes happen.
Cache::remember() is the right tool for expensive database queries, external API responses, and computed values that don't change on every request. The mistake is caching everything uniformly, regardless of how often data actually changes or how expensive it is to regenerate.
```php
// Basic pattern: cache for 10 minutes
$products = Cache::remember('products.featured', 600, function () {
    return Product::with('category')
        ->where('featured', true)
        ->orderBy('sort_order')
        ->get();
});
```

For related data, use tagged cache. Tags let you invalidate a group of cache entries at once, which solves most "stale data" problems:
```php
// Store with a tag
$user = Cache::tags(['users', "user:{$userId}"])->remember(
    "user:{$userId}:profile",
    3600,
    fn () => User::with('roles', 'preferences')->find($userId)
);

// Invalidate everything for a user when their data changes
Cache::tags(["user:{$userId}"])->flush();
```

Tagged cache requires a driver that supports it. Redis does. The database driver does not.
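Switching the default store to Redis is a one-line change in .env. A sketch, assuming Laravel 11 key names (older versions use CACHE_DRIVER instead of CACHE_STORE):

```ini
CACHE_STORE=redis
REDIS_CLIENT=phpredis
```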
What not to cache: Per-user session data belongs in the session driver, not the general cache. Things that change on every write (like order counts) will cause more cache invalidation work than they save. Anything that varies per-user without a user-specific cache key will serve one user's data to another.
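When per-user data genuinely is worth caching, bake the user into the key. A minimal sketch; the Order model and the 5-minute TTL are illustrative:

```php
// The user ID in the key prevents serving one user's data to another
$orders = Cache::remember("user:{$userId}:recent-orders", 300, function () use ($userId) {
    return Order::where('user_id', $userId)->latest()->limit(10)->get();
});
```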
3. Query-level caching
Laravel doesn't have native query-level caching, but Laravel Query Cache or a simple repository pattern with Cache::remember() gives you caching closer to the data access layer:
```php
// In a repository method
public function getFeaturedProducts(): Collection
{
    return Cache::tags(['products'])
        ->remember('products.featured', now()->addMinutes(15), function () {
            return Product::featured()->with('category')->get();
        });
}
```

Keeping caching logic in repositories rather than controllers makes invalidation straightforward: when products change, flush the products tag in the Product observer.
```php
// In ProductObserver
public function saved(Product $product): void
{
    Cache::tags(['products'])->flush();
}
```

Register the observer in a service provider's boot method (or with the #[ObservedBy] attribute on recent Laravel versions) so the hook actually fires.

4. HTTP cache
This is the layer most Laravel applications never use, and it has the highest potential impact.
HTTP caching happens before your application code runs at all. A reverse proxy (Nginx, Varnish) or a CDN (Cloudflare, Fastly) stores the full HTTP response and serves it directly to subsequent requests. Your PHP process is never invoked.
```php
// In a controller, for public, non-personalised responses
return response($content)
    ->header('Cache-Control', 'public, max-age=300, stale-while-revalidate=60')
    ->header('Vary', 'Accept-Encoding');
```

HTTP caching only works for responses that are:
- Public: not personalised to a logged-in user
- Deterministic: the same URL always returns the same content (for a given TTL)
- Correctly invalidated: either by TTL expiry or a purge request when content changes
For marketing pages, blog posts, public product listings, and API endpoints consumed by multiple clients, HTTP caching is almost always the right answer. For authenticated dashboards and user-specific data, it is not.
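Instead of setting headers in each controller, Laravel's built-in cache.headers middleware can apply them per route. A sketch; the route and controller names are illustrative:

```php
// routes/web.php
Route::middleware('cache.headers:public;max_age=300;etag')
    ->get('/blog/{slug}', [BlogController::class, 'show']);
```

The etag directive also lets clients revalidate with If-None-Match and receive a 304 instead of the full body.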
Where caching goes wrong
The hardest part of caching is not adding it. It's invalidation.
Overly broad TTLs mean users see stale data for longer than necessary. A 24-hour TTL on a product price cache is a business problem, not just a technical one.
Missing invalidation is more common. A cache entry is created, a related record changes, and nothing clears the cache. The fix is to tie invalidation to model events, not to hope that the TTL will save you.
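Tying invalidation to model events can be as small as a closure registered at boot time. A sketch with a hypothetical Post model, using Eloquent's static event hooks:

```php
// In a service provider's boot() method
Post::saved(function (Post $post) {
    Cache::forget("post:{$post->id}");   // clear the single entry
    Cache::tags(['posts'])->flush();     // or the whole related group
});
```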
Caching failures and empty results by accident. If the closure passed to Cache::remember() returns a failure payload (an error response's body, an empty collection from a timed-out API), that bad value is stored and served until the TTL expires. Throw instead of returning, so nothing is cached:
```php
$result = Cache::remember('external.data', 300, function () {
    $data = Http::get('https://api.example.com/data');

    // Don't cache failures: throwing prevents the value from being stored
    if ($data->failed()) {
        throw new RuntimeException('External API unavailable');
    }

    return $data->json();
});
```

Thundering herd happens when a popular cache key expires and many concurrent requests all miss the cache simultaneously, each generating the same expensive query. The solution is cache locking:
```php
// Check the cache first; only a miss competes for the lock
$value = Cache::get('expensive.key');

if ($value === null) {
    $value = Cache::lock('expensive.key.lock', 10)->block(5, function () {
        // Re-check: the previous lock holder may have repopulated the cache
        return Cache::remember('expensive.key', 600, fn () => DB::table('expensive_table')->get());
    });
}
```

The re-check inside the lock matters: without it, every process that missed would still run the expensive query, just one at a time. Recent Laravel releases (11.23+) also ship Cache::flexible() for built-in stale-while-revalidate refreshing of hot keys.

What a properly cached application looks like
- No caching: full DB query on every request
- Config and route cache: framework bootstrap eliminated
- Redis data cache: queries served from memory
- HTTP cache: response served at the edge

These outcomes are representative, not benchmarks. Your app will vary depending on query complexity and data volume.
Applying all four layers in the right places consistently brings most Laravel applications from 600-900ms average response times to under 100ms for cacheable routes. The gains are not additive — they're multiplicative, because each layer eliminates an entire class of work.
The pattern we use in audits:
- Config, route, view, and event cache in every deployment (free wins, zero risk)
- Redis data cache with tagged invalidation on expensive queries
- HTTP cache headers on any public, non-personalised route
- Thundering herd protection on high-traffic cache keys
If your Laravel application is slow and you've already added Redis without seeing the improvement you expected, caching strategy is usually the explanation. We cover this in every performance audit we run. Request one here, and we'll show you exactly which layer is missing.