[ Performance ][ Laravel ]

Top 7 mistakes killing your Laravel performance

12 min read

Your Laravel app is slow. Not slow in an obvious, traceable way. Slow in the way where you've added Redis, upgraded the server, and response times are still bad at peak traffic. The kind of slow that makes your team argue about infrastructure.

The infrastructure is almost never the problem. The problem is a handful of mistakes in the code, compounding quietly on every request. We find most of them in every codebase we audit.

Here are the seven we see most often.

[ Typical overhead per mistake ] Max observed impact · production workloads

- Missing composite indexes: 500ms – 8s (full table scan on large datasets)
- Offset pagination (high pages): 100ms – 5s+ (scales with page number and row count)
- Sync ops in the HTTP request: 200ms – 2s (SMTP, external APIs, PDF generation)
- Loading datasets into memory: 50ms – 2s (memory pressure at collection scale)
- SELECT * on wide tables: 20 – 200ms (data transfer and cache payload bloat)
- whereHas on selective queries: 20 – 150ms (correlated subquery evaluated per row)
- OPcache off in CLI workers: 30 – 80ms (PHP compilation on every queue job)

Ranges represent real-world cases, not benchmarks. Actual impact depends on data volume and query patterns.

1. Missing composite indexes

Single-column indexes are better than no indexes. They are rarely enough.

When your queries filter on multiple columns, MySQL generally uses only one index per table for a given query (index merge exists, but the optimizer rarely chooses it). If you've indexed user_id and status separately, a query filtering on both still scans more rows than necessary. The database picks whichever single index eliminates the most rows, then filters the rest in memory.

php
// This query needs a composite index, not two separate ones
Post::where('user_id', $userId)
    ->where('status', 'published')
    ->orderBy('created_at', 'desc')
    ->get();

php
// In your migration — one index covers the full query pattern
$table->index(['user_id', 'status', 'created_at']);

Column order matters. Put the most selective column first. For user-scoped queries, user_id typically goes first. The index ['a', 'b', 'c'] covers queries on a, on a + b, and on a + b + c. It does not cover queries on b + c alone.
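The prefix rule in concrete terms, assuming the composite index from the migration above (table and values illustrative):

```sql
-- Given: INDEX (user_id, status, created_at) on posts

SELECT id FROM posts WHERE user_id = 42;                          -- uses the index (prefix: user_id)
SELECT id FROM posts WHERE user_id = 42 AND status = 'published'; -- uses the index (prefix: user_id, status)
SELECT id FROM posts WHERE status = 'published';                  -- cannot use it: no leading user_id
```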

One case we looked at: a property search query at 8.39 seconds. After adding the right composite index, it dropped to 0.77 seconds. Same query. Same data. One migration.

Run EXPLAIN on your slow queries. If you see type: ALL, that's a full table scan. It should say type: ref or type: range. If it doesn't, you have a missing index.
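A quick way to check, assuming the posts query from above (values and index name illustrative; the name follows Laravel's default convention):

```sql
EXPLAIN SELECT * FROM posts
WHERE user_id = 42 AND status = 'published'
ORDER BY created_at DESC;

-- Without the composite index, expect roughly:
--   type: ALL, rows: (entire table), Extra: Using filesort
-- With it, expect roughly:
--   type: ref, key: posts_user_id_status_created_at_index
```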

2. Eloquent selecting every column

Eloquent's default is SELECT *. That works fine until the table has a content column storing long-form text, a metadata JSON column, or binary data you don't need for a given operation. Every query for a listing page pulls all of it into PHP memory.

The waste compounds when you cache the result. You're storing more than you need in Redis, and deserializing more than you need on every cache hit.

php
// Fetches every column, including large text and JSON fields
$posts = Post::with('author')->get();

// Fetches only what the listing page actually uses
$posts = Post::select(['id', 'title', 'slug', 'published_at', 'user_id'])
    ->with(['author' => fn ($q) => $q->select(['id', 'name'])])
    ->get();

When scoping select() on a relationship, include the foreign key. Without user_id in the posts select, Eloquent cannot bind the relationship correctly. Without id in the author select, you'll get nulls.

Five minutes of work. Immediate reduction in memory usage and cache payload size.

3. Blocking the HTTP request with slow operations

A user submits a form. Your controller creates the record, sends a welcome email via SMTP, calls a CRM API to sync the contact, and generates a PDF summary. Then it returns a response.

The user waited for all of that. Every external call added latency they experienced directly. SMTP averages 150 to 400ms on a good connection. A CRM API can add 500ms to 2 seconds. PDF generation adds more. None of this belongs in the HTTP request lifecycle.

php
// Bad: the user waits for every operation
public function store(Request $request)
{
    $user = User::create($request->validated());

    Mail::send(new WelcomeEmail($user));   // ~300ms
    $this->crm->syncContact($user);        // ~800ms
    $this->pdf->generateOnboarding($user); // ~600ms

    return response()->json($user);        // 1.7 seconds total
}

// Good: the user gets an instant response
public function store(Request $request)
{
    $user = User::create($request->validated());

    OnboardNewUser::dispatch($user); // ~2ms — queued

    return response()->json($user);
}

The gotcha: Laravel 10 and earlier ship with QUEUE_CONNECTION=sync, which runs jobs immediately in the same process. The code looks like you're using queues, but you're not. (Laravel 11 defaults to database, which still does nothing until a worker is running.) Set QUEUE_CONNECTION=redis in production and run queue workers. That's the entire fix.
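A minimal production setting, assuming Redis is already configured for the application (values illustrative):

```ini
; .env — send jobs to a real queue backend
QUEUE_CONNECTION=redis
```

Then run `php artisan queue:work` under a process supervisor such as Supervisor or systemd so workers restart if they crash. Without a running worker, queued jobs sit in Redis and never execute.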

4. OPcache off in the CLI

OPcache caches compiled PHP bytecode in memory. Without it, every request triggers the compilation of your application and framework files from scratch. That's 30 to 80ms of overhead on every request, before your code runs a single line.

Most production setups have OPcache enabled for web requests but not for the CLI. That hits scheduled tasks and anything run via queue:listen, both of which boot the framework fresh on every run. (A daemonized queue:work process boots once and keeps compiled code in memory, so it is far less affected.) One caveat: CLI OPcache lives inside a single process, so short-lived commands only benefit across runs if opcache.file_cache points at a writable directory.

ini
[opcache]
opcache.enable=1
opcache.enable_cli=1 ; This is the line most servers miss
opcache.memory_consumption=256
opcache.max_accelerated_files=10000
opcache.revalidate_freq=60

The default opcache.max_accelerated_files is 2,000. A typical Laravel application with vendor dependencies has more files than that. When the cache fills up, OPcache starts evicting files silently. Check your current value:

bash
php -r "var_dump(opcache_get_configuration()['directives']['opcache.max_accelerated_files']);"

If it's 2,000, your OPcache has almost certainly been evicting files and the benefit is smaller than you think.

5. Offset pagination at scale

Laravel's paginate() uses SQL LIMIT and OFFSET. At low page numbers, it's fine. At high page numbers, it becomes expensive.

To fetch page 1,000 with 20 items per page, the database must scan and discard 19,980 rows before returning the 20 it needs. Even with a perfect index. The higher the page, the more work the database does. There is no upper bound on that cost.

[ Offset vs cursor pagination — query time at scale ]

- Page 1: OFFSET 5ms · CURSOR ~5ms
- Page 100: OFFSET 42ms · CURSOR ~5ms
- Page 1,000: OFFSET 390ms · CURSOR ~5ms
- Page 10,000: OFFSET 4.2s · CURSOR ~6ms

OFFSET must scan all prior rows on every query. Cursor uses a bookmark — page 10,000 costs the same as page 1.

php
// Gets slower with every page
$posts = Post::orderBy('id')->paginate(20);
// Page 10,000: OFFSET 199,980 — scans 200k rows before returning 20

// Constant time regardless of page
$posts = Post::orderBy('id')->cursorPaginate(20);
// Uses the last seen ID as a cursor — no offset

Cursor pagination uses the last seen record as a bookmark. There's no offset, so page 10,000 is exactly as fast as page 1. The trade-off: you can't jump to an arbitrary page number. For "load more" and sequential navigation patterns, that's not a real constraint.
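The difference is visible in the SQL each call emits (cursor value illustrative):

```sql
-- paginate(20), page 1,000: the database walks 19,980 rows just to discard them
SELECT * FROM posts ORDER BY id ASC LIMIT 20 OFFSET 19980;

-- cursorPaginate(20): the last seen id becomes the bookmark — an indexed
-- range seek, so cost does not grow with the page number
-- (the extra 21st row is how Laravel detects whether a next page exists)
SELECT * FROM posts WHERE id > 123456 ORDER BY id ASC LIMIT 21;
```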

If you need numbered pages with large offsets, Aaron Francis's fast-paginate package rewrites the query to use a subquery that only touches the index, then fetches full rows for the small result set. Worth the dependency for high-traffic paginated views.

6. whereHas() on selective queries

whereHas() is expressive and the right default for most use cases. It generates an SQL EXISTS subquery. For queries where the condition is highly selective (few matching rows), this can be significantly slower than a join.

sql
-- What whereHas() generates
SELECT * FROM users
WHERE EXISTS (
    SELECT 1 FROM posts
    WHERE posts.user_id = users.id
      AND posts.status = 'published'
);

-- What a join generates
SELECT DISTINCT users.*
FROM users
JOIN posts ON users.id = posts.user_id
WHERE posts.status = 'published';

php
// whereHas: correlated subquery, re-evaluated for every user row
User::whereHas('posts', fn ($q) => $q->where('status', 'published'))->get();

// Join: typically faster when the condition is selective
User::join('posts', 'users.id', '=', 'posts.user_id')
    ->where('posts.status', 'published')
    ->select('users.*')
    ->distinct()
    ->get();

The performance difference depends on your data distribution. Don't rewrite every whereHas() call. Run EXPLAIN on both versions and compare the rows examined count. If a specific query is slow and shows high row examination with whereHas(), converting to a join will usually fix it.

7. Loading entire datasets into memory

->get() loads every matching row into PHP as Eloquent model instances. On a table with 100,000 rows, that's 100,000 objects allocated before your code does any work. The collection syntax that follows looks clean. What's happening in memory isn't.

The mistake appears in several forms: filtering in PHP what the database could filter in SQL, running aggregate operations on collections when query-level count() and sum() exist, and iterating large datasets in jobs without chunking.

php
// Loads every user into memory to count the active ones
$active = User::all()->filter(fn ($u) => $u->is_active)->count();

// The database does the work in one query, no models allocated
$active = User::where('is_active', true)->count();

// Loads 100,000 records into memory at once
foreach (User::all() as $user) {
    $user->sendWeeklyReport();
}

// Memory stays flat regardless of dataset size
User::lazy()->each(fn ($user) => $user->sendWeeklyReport());

lazy() runs chunked queries behind the scenes (1,000 rows per query by default) and yields models one at a time. chunk() is the older alternative, processing records in explicit batch callbacks. Both keep memory usage flat. Use whichever reads more clearly for the operation you're doing.
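One caveat worth a sketch: if the loop updates the very column it filters on, plain chunk() can skip rows as the result set shifts underneath it. chunkById() pages by primary key and avoids that (query and batch size here are illustrative):

```php
// Batches of 500, keyed by id — safe even if the
// callback changes is_active on the rows it touches
User::where('is_active', true)
    ->chunkById(500, function ($users) {
        foreach ($users as $user) {
            $user->sendWeeklyReport();
        }
    });
```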

The rule is simple: if the database can compute it, let the database compute it. Filters, aggregates, and counts belong in the query, not the collection.

The compounding problem

These mistakes rarely appear alone. A page that's missing indexes, selecting every column, and calling an external API synchronously isn't dealing with three separate 100ms problems. It's dealing with a 1,500ms problem, minimum.

We find combinations of these in almost every codebase we review. The good news is that each fix is independent. Start with indexes and synchronous operations, since those have the largest impact, and work from there.

Install Laravel Debugbar locally, seed your database to production scale, and load your five slowest pages. The query count and timing breakdown will point you directly at which of these are active in your application.

We cover all of this in our free performance audits. Request one here.