Don’t use ‘next’ instead of ‘pairs’

Debunking the Lua optimization myth and how to actually optimize your for loops

My name is Stephen. Most of you reading this know me as my Roblox username, Crazyman32. In the last few years, I’ve seen many Roblox users fall into the trap of attempting to “optimize” their code in strange ways. One of those ways involves using the ‘next’ function instead of the ‘pairs’ function.

Here’s how you use the “pairs” function to iterate through a Lua table:

local tbl = {...}
for i,v in pairs(tbl) do
    -- Blah
end

The “pairs” function returns everything the generic ‘for’ loop needs: an iterator function, the table, and an initial control value. Internally, pairs is effectively defined as:

function pairs(t)
    return next, t, nil
end
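To make that concrete, here is a sketch of the traversal the generic for loop performs with those three values, assuming stock Lua with no __pairs metamethod: it keeps calling next with the table and the previous key until next returns nil.

```lua
-- Manually stepping through a table with next, which is
-- exactly what the generic for loop does under the hood.
local t = { a = 1, b = 2, c = 3 }

local k, v = next(t, nil) -- passing nil asks for the first key/value pair
while k ~= nil do
    print(k, v)
    k, v = next(t, k)     -- advance from the current key
end
```
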

Therefore, a lot of people like to write their generic ‘for’ loops like so:

local tbl = {...}
for i,v in next,tbl do
    -- Blah
end

Firstly, a disclaimer: doing this is fine! There is nothing wrong with it. If that is your preference, then keep on doing what you’re doing.

However, many people do this as an optimization method. In fact, I’ve had people insult me for using “pairs” instead of “next” because they say it is so much faster. Whether you’re an advocate for using “next” or you’re just interested in what the heck I’m ranting about, let’s do some speed tests to test these theories.

Note: I’m using the Lua 5.3 binary distribution.

First, let’s construct a table with 1 million values. The values themselves don’t matter, so each is simply set to 1:

local data = {}
for i = 1,1000000 do -- 1 million
    data[#data + 1] = 1
end

Alright, now that we have our data, let’s iterate through the table multiple times with both the ‘pairs’ method and the ‘next’ method. First, the “pairs” method:

local start = os.clock() -- Start time
for iteration = 1,100 do -- Iterate 100 times
    for i,v in pairs(data) do end -- pairs
end
print(os.clock() - start) -- Print duration
RESULT:
> 5 seconds (a few tests came in between 5.0 and 5.2 seconds)

Great, 5 seconds. Now, let’s try the “next” method. Same code except for the inner loop:

local start = os.clock() -- Start time
for iteration = 1,100 do -- Iterate 100 times
    for i,v in next,data do end -- next
end
print(os.clock() - start) -- Print duration
RESULT:
> 5 seconds (a few tests came in between 4.9 and 5.1 seconds)

Again, about 5 seconds, and perhaps a hair faster, but well within run-to-run noise. After 100 million total iterations, you would expect a real optimization to make a noticeable impact, right? It doesn’t, because “pairs” is really just a function that hands “next” to the loop. You’re getting the same code execution either way.


Numeric Magic

So to those who say “next” is so much faster: Let me introduce you to the numeric for loop. Oh yes, the good ol’ numeric for loop. Many have abandoned this method as it doesn’t look as pretty as generic for loops. While it might not look as clean, it does come with some performance boosts. Let’s run the same tests as above, but with the numeric for loop:

local start = os.clock() -- Start time
for iteration = 1,100 do -- Iterate 100 times
    for i = 1,#data do local v = data[i] end
end
print(os.clock() - start) -- Print duration
RESULT:
> 2 seconds (a few tests came in between 1.9 and 2.2 seconds)

Wow, 2 seconds! That’s right, 2 seconds! What an improvement! That is roughly two and a half times the speed of the first two methods shown.
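One caveat worth noting (my addition, not part of the original tests): the numeric loop only walks indices 1 through #t, so it skips non-integer keys entirely, and the length operator isn’t reliable on tables with nil “holes.” It is a great fit for dense arrays like the benchmark table above, but not a drop-in replacement for “pairs” everywhere:

```lua
-- The numeric loop only visits the array part, indices 1..#t.
local t = { 10, 20, 30 }
t.name = "dictionary entry" -- never visited by a numeric loop

local visited = 0
for i = 1, #t do
    visited = visited + 1
end
print(visited) -- 3; t.name is skipped
```
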


Some Quick Conclusions

If you are using “next” as an optimization method, stop! Use a numeric for loop instead! If you don’t actually need the optimization, plain “pairs” works just as well. And if you simply prefer how “next” reads, keep rocking it.

I’m not here to criticize the “next” users, but rather to shed some light on the reality of the performance of “pairs” and “next” versus the numeric for loop.

The “pairs” function just returns the “next” function, the table, and nil, which is exactly what the generic for loop needs. Both forms run the same code, and that is why using “next” is not actually an optimization.
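You can verify this equivalence yourself in a stock Lua 5.3 interpreter (assuming no __pairs metamethod is involved):

```lua
-- pairs(t) hands back the next function itself, the table, and nil.
local t = { 1, 2, 3 }
local iter, state, control = pairs(t)

print(iter == next)  -- true in stock Lua 5.3
print(state == t)    -- true
print(control)       -- nil
```
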

Let’s be smart about our code and not assume that people are always right when they talk about optimization techniques. Test them yourselves.
