Optimizing Laravel 5 & MySQL for 9M+ rows. Step 2.
Optimize your queries!
This article is part 2 of my “Optimizing Laravel & MySQL for 9M+ rows” series.
Eloquent, it’s pretty awesome!
It gives us the ability to do simple to complex database queries with ease. But what actually happens behind the scenes?
If you don’t take the time to investigate your queries you’ll never know what’s actually being run. Checking your queries will give you a better understanding of how your application works and an opportunity to investigate and test the queries Eloquent generates.
Reduce the total number of queries
Look over all of your queries. Can some be Eager loaded? Can you use joins or unions?
Eager loading is an easy way to optimize your relations — in my case it took 250+ queries down to 12 and load times from 30+ seconds to ~2.
Eager loading isn’t always the best way to optimize, though, and in some cases it can be costly. Writing complex joins, unions, and nested queries by hand may perform better.
Play around in your MySQL editor and find a query that performs the best.
Using your MySQL editor, take a look at a query using the EXPLAIN option.
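As a minimal sketch — the `users` table and `username` column here are just example names:

```sql
-- Ask MySQL how it plans to execute the query.
EXPLAIN SELECT * FROM users WHERE username = 'jane';
```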
Focus on what index is being used (key) and what indexes could be used (possible_keys).
When there is no key and no possible_keys, MySQL isn’t using any index at all. Adding an index to our username column can fix that right up.
Let’s add an index to the username column using a simple migration.
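A sketch of such a migration, assuming a `users` table (the class and column names here are examples):

```php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class AddUsernameIndexToUsersTable extends Migration
{
    public function up()
    {
        Schema::table('users', function (Blueprint $table) {
            // Add a plain index so WHERE username = ? can seek instead of scan.
            $table->index('username');
        });
    }

    public function down()
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropIndex(['username']);
        });
    }
}
```

Run it with `php artisan migrate` as usual.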
And now let’s check EXPLAIN again to see that the key is being used.
When using multiple conditions in your where clauses you can use index groups (composite indexes spanning several columns).
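A sketch of a composite index covering two where conditions — the `country` and `city` columns are hypothetical:

```php
Schema::table('users', function (Blueprint $table) {
    // One composite index serving WHERE country = ? AND city = ?
    $table->index(['country', 'city']);
});
```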
I’ve had little luck getting MySQL to choose the correct indexes on its own. FORCE INDEX works great in raw SQL, but not so well with Eloquent, so for now I don’t force my index groups and rely on MySQL selecting the best index.
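If you do want to experiment with forcing an index from the query builder, you can drop a raw expression into the table name — a sketch; the index name here is hypothetical:

```php
$users = DB::table(DB::raw('users FORCE INDEX (users_country_city_index)'))
    ->where('country', 'US')
    ->where('city', 'Austin')
    ->get();
```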
Indexes are an easy way to optimize your MySQL database; check your queries and add any indexes that are missing.
Don’t count all the rows
Counting millions of rows in MySQL is slow. SQL_CALC_FOUND_ROWS is even slower and Laravel loves to count with its Paginator. That’s why my last article tells you to use simplePaginate instead.
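simplePaginate only runs the page query itself — no extra `SELECT COUNT(*)` over all rows — and renders just previous/next links. A sketch, assuming a hypothetical `Post` model:

```php
// paginate() issues an extra COUNT(*) to build numbered page links;
// simplePaginate() skips it entirely.
$posts = App\Post::orderBy('created_at', 'desc')->simplePaginate(25);
```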
Do you see any of your queries counting all rows? Get rid of them!
Offload the task of counting to a scheduled task, background job or with the Cache system. Write the results to a column, row or cache.
I currently use columns and rows. I have a `settings` table that stores several kinds of counts, and on individual tables I use columns to track their relation counts (like votes or comments).
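Keeping a counter column in sync is cheap when the relation changes — a sketch, assuming a hypothetical `votes_count` column on a `posts` table:

```php
// When a vote is created, bump the cached count instead of re-counting rows.
$post->increment('votes_count');

// When a vote is removed:
$post->decrement('votes_count');
```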
The cache method will also work well: just read the count from the cache and update the cache with a job — simpler and cleaner than adding rows or columns to your database.
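A sketch of that split, with a hypothetical cache key:

```php
// Read side: never count live; fall back to 0 until the job has run.
$total = Cache::get('posts.count', 0);

// Write side (inside a queued job or scheduled command):
Cache::forever('posts.count', DB::table('posts')->count());
```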
The last thing I like to do is cache all the results I can.
I do this last because I want my queries as fast as possible before they go to a cache. In my case users will be updating the cache at random intervals, so when a user triggers a cache update, I want it to be fast.
You can also send everything to cache by using the task scheduler or a job. This would ensure the user is never waiting for any query to execute.
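A sketch of warming the cache from the task scheduler (in `App\Console\Kernel`), with a hypothetical query and cache key:

```php
protected function schedule(Schedule $schedule)
{
    // Refresh the cached results periodically so users never run the live query.
    $schedule->call(function () {
        $top = DB::table('posts')
            ->orderBy('votes_count', 'desc')
            ->limit(100)
            ->get();
        Cache::forever('posts.top', $top);
    })->everyTenMinutes();
}
```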
By now your application should be much faster. All your queries are using indexes, you’ve eager loaded and joined everything you can and you’ve sent it all to cache so the users aren’t running live queries.