Praise the object pool
What is the difference between a cache and an object pool?
Caches store computation results. The next time we need a result, we don't have to recompute it; we just return the one we already computed.
Here is a simple example of a cache making a tremendous difference:
The Fibonacci function is recursive. By caching results, we get huge benefits. In the gist we can see that the cached version performs 14x fewer computations. The difference becomes more dramatic the longer the sequence we need to compute. fib(25) takes multiple seconds on my maxed-out MacBook Pro; fibC(90) takes less than a second.
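The gist itself isn't reproduced here, but the idea can be sketched in a few lines of Python. The names fib and fibC follow the text; everything else is an assumption about how the gist was written.

```python
from functools import lru_cache

# Naive recursive Fibonacci: the same subproblems are
# recomputed over and over, so the call count grows exponentially.
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Cached version: each fibC(n) is computed exactly once,
# then served from the cache on every later call.
@lru_cache(maxsize=None)
def fibC(n):
    if n < 2:
        return n
    return fibC(n - 1) + fibC(n - 2)
```

With the cache in place, fibC(90) returns practically instantly, while the naive fib would never finish in reasonable time for an input that large.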
What is an Object Pool then?
Object pools are meant to reserve memory for reuse. We don't reuse results; we reuse space in the "storage room", the heap. Creating a new object is expensive, and it doesn't matter which language you use.
- You pay on creation, when the runtime has to find a "place" on the heap
- You pay on destruction, when the object is no longer needed
In some languages the first cost is more expensive, in others it is the latter, but one thing is always true: it is expensive!
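To make this concrete, here is a minimal sketch of an object pool in Python. The class name, the factory parameter, and the buffer example are all illustrative assumptions, not code from the article:

```python
class ObjectPool:
    """Keeps released objects around so their memory can be reused."""

    def __init__(self, factory):
        self._factory = factory  # creates a fresh object when the pool is empty
        self._free = []          # objects waiting to be reused

    def acquire(self):
        # Hand out a pooled object if we have one; otherwise allocate a new one.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        # Put the object back instead of letting the runtime reclaim its space.
        self._free.append(obj)


# Example: reusing one large buffer instead of reallocating it each time.
pool = ObjectPool(lambda: bytearray(1024 * 1024))

buf = pool.acquire()   # first call pays the allocation cost
pool.release(buf)      # the space stays reserved in the pool
buf2 = pool.acquire()  # same object comes back, no new allocation
```

Note that a real pool would also reset an object's state on release; this sketch only shows the reuse of the allocation itself.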
Isn’t this something that should be managed by the Garbage Collector or something?
I am sorry to break it to you, but the garbage collector is not a magical creature. It knows that you are not using this object. But it cannot know that you will create a new object of the exact same size again in the not-so-distant future. It will free the space, and the space will be used for other things. When you need the same amount of space again, the runtime will have to find new space. It is similar to the Knapsack problem, where you take stuff out and put in new stuff with different weights all the time. So it is a very complicated problem, which has to be solved over and over and over again.
By working with object pools, we solve the knapsack problem once and keep reusing the solution. It is basically a cache for memory management.
Is it only important for garbage-collected languages then?
No, reserving and freeing space on the heap is a universal problem. The only way to avoid it is to keep all computations on the stack, which is practically impossible.