I'm well aware of that document; I read the whole book back in the day.
Originally Posted by Blooblahguy
This and some other points are just about potential optimizations, not things I saw being done blatantly wrong at any point. oUF does seem to mostly create variables inside loops, though.
Take the following code as an example
Code:
for i = 1, 1000000 do
local a = {}
a[1] = 1; a[2] = 2; a[3] = 3
end
Takes 52.240 seconds to run while
Code:
for i = 1, 1000000 do
local a = {true, true, true}
a[1] = 1; a[2] = 2; a[3] = 3
end
Only takes 20.98 seconds to run, roughly 60% less time. Obviously the total time is exaggerated by the high loop count, but I don't think implementing this practice would take much effort, and the benefits start to add up.
We create a lot of temp vars and upvalues inside loops, yes, but we never create throwaway tables like this. If we do, that's probably a mistake/typo/whatever; we reuse tables as much as we possibly can.
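To make the reuse pattern concrete, here's a minimal sketch of the idea: one scratch table allocated once and cleared between uses, instead of a fresh table per call. The `wipe` helper and `collectValues` function are illustrative names, not oUF code; WoW provides `table.wipe`, so the plain-Lua fallback below is only for running this outside the client.

```lua
-- Plain-Lua stand-in for WoW's table.wipe: clear a table in place.
local function wipe(t)
    for k in pairs(t) do t[k] = nil end
    return t
end

local scratch = {} -- allocated once, reused on every call

-- Hypothetical example function: fills the shared scratch table.
local function collectValues(n)
    wipe(scratch)
    for i = 1, n do
        scratch[i] = i * 2
    end
    return scratch -- callers must not hold on to this past the next call
end
```

The trade-off is the usual one: no per-call allocation or GC pressure, at the cost of the returned table being invalidated by the next call.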
Moreover, debugprofilestop returns time in milliseconds.
Lua Code:
local lastTime = debugprofilestop()
for i = 1, 1000000 do
local a = {}
a[1] = 1; a[2] = 2; a[3] = 3
end
print(debugprofilestop() - lastTime)
This takes 581.90662911534ms or ~0.6s on my machine with an i5-7500.
Lua Code:
local lastTime = debugprofilestop()
for i = 1, 1000000 do
local a = {true, true, true}
a[1] = 1; a[2] = 2; a[3] = 3
end
print(debugprofilestop() - lastTime)
This takes 332.96345540881ms or ~0.3s.
However, in oUF we mainly have this scenario:
Lua Code:
local lastTime = debugprofilestop()
local a_ = {}
for i = 1, 1000000 do
local a = a_
a[1] = 1; a[2] = 2; a[3] = 3
end
print(debugprofilestop() - lastTime)
This takes ONLY 60.121539920568ms or 0.06s; the lowest I've seen while benching was 0.05s.
While this
Lua Code:
local lastTime = debugprofilestop()
local a_ = {true, true, true}
for i = 1, 1000000 do
local a = a_
a[1] = 1; a[2] = 2; a[3] = 3
end
print(debugprofilestop() - lastTime)
Takes 59.989032864571ms or 0.06s. Given that the results fluctuate by ~0.01s, I think you understand what I'm implying here...
I'm still curious about this bit:
There are easily 100 functions that could be localized, and localized function references are a minimum of 30% faster (I've profiled some cases at up to 300% faster)
the 300% figure in particular.
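For reference, this is the kind of benchmark that claim would need to back it up: each `math.floor(...)` call costs a global lookup plus a field lookup, while the cached local is a single register access. This is a sketch, not a reproduction of Blooblahguy's profiling; `os.clock()` stands in for `debugprofilestop()` so it runs outside the client, and the actual speedup depends heavily on what the loop body does around the call.

```lua
-- Cache the global function reference in a local once, up front.
local floor = math.floor

local t0 = os.clock()
for i = 1, 1000000 do
    local _ = math.floor(i / 3) -- global lookup + field lookup per iteration
end
local globalTime = os.clock() - t0

t0 = os.clock()
for i = 1, 1000000 do
    local _ = floor(i / 3) -- single local access per iteration
end
local localTime = os.clock() - t0

print(("global: %.3fs, local: %.3fs"):format(globalTime, localTime))
```

The relative gap shrinks as the loop body grows, which is why a blanket "30% to 300% faster" needs the surrounding code to be meaningful.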