Developers’ Diaries. Issue #4
In this issue, we will talk about one way to optimize a mobile application: data caching.
Caching means storing the data most likely to be requested again in a fast buffer (the cache). A typical example is caching images downloaded from the cloud.
A cache can live in the device’s memory or on disk. The in-memory cache is fast to access but small: mobile operating systems limit how much memory each application may use, and the limit is especially tight on older devices. On disk, on the other hand, we can store much more data and keep it for the next time the application is launched. iOS manages this storage automatically: when space is needed, it may delete an app’s temporary files while that app is not running.
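Because the in-memory cache is limited in size, something has to be evicted when it fills up. A common policy is least-recently-used (LRU) eviction, which on iOS is roughly the role played by `NSCache`. Here is a minimal, language-agnostic sketch in Python; the class and method names are illustrative, not our production code:

```python
from collections import OrderedDict

class LRUCache:
    """In-memory cache with a fixed capacity: when full, the
    least-recently-used entry is evicted first, mirroring the
    memory limits mobile apps have to respect."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)  # mark as recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

The point of the policy is that a small buffer keeps exactly the entries that were touched most recently, which is usually a good predictor of what will be touched next.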
In our application, we use both approaches: a data cache in memory and temporary files on disk.
The functionality of our application can be divided into two parts:
- viewing existing models: fetching the list of the user’s models, loading previews, and viewing the selected model;
- creating a new model: saving a video file of the model, processing it, and uploading a 3D view to the cloud.
In the first part, we deal with requests to the cloud and their responses. Here we cache the results of frequently repeated requests, namely requests for model previews.
If we load an image from a cache file instead of sending a repeat query to the cloud server, we save time and spare the server extra requests. If we can also predict which images will be needed next and preload them into the memory cache, we save even more.
While scrolling down through the list of models, the next models in line will be needed; while scrolling up, the previous ones. But if the user jumps several pages at once, it is much harder to predict which models will be previewed next, and images preloaded into memory may never be requested. It is therefore important to choose the buffer size so that the expected time saved outweighs the time wasted on unused preloads.
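The direction-aware prefetch can be sketched as a pure function that maps the visible rows and the scroll direction to the indices worth preloading. The `window` size here is the tunable buffer discussed above; all names are hypothetical:

```python
def prefetch_indices(visible_range, direction, total, window=5):
    """Given the visible rows (first, last) and the scroll
    direction, pick which model previews to load ahead of time,
    clamped to the valid range of the list."""
    first, last = visible_range
    if direction == "down":
        candidates = range(last + 1, last + 1 + window)
    else:  # scrolling up
        candidates = range(first - window, first)
    return [i for i in candidates if 0 <= i < total]
```

A larger `window` hides more latency while scrolling steadily, but wastes more work when the user jumps around, which is exactly the trade-off in choosing the buffer size.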
When working with video, we cache frames. Reading frames directly from the video takes about as long as reading an image from a file, but only while the frames are read sequentially. Once access becomes non-sequential, retrieving a frame from the cache is much faster than seeking in the video.
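A frame cache for this case can be sketched as a map from frame index to decoded frame, with the expensive decode behind it. The `decode_frame` callback and the simple drop-oldest eviction are assumptions for the sketch:

```python
class FrameCache:
    """Cache decoded video frames by index. Sequential decoding is
    cheap, but seeking back to an earlier frame is expensive, so
    out-of-order requests are served from the cache when possible."""

    def __init__(self, decode_frame, capacity=64):
        self.decode_frame = decode_frame  # the expensive decode call
        self.capacity = capacity
        self._frames = {}
        self.decodes = 0                  # instrumentation for the sketch

    def frame(self, index):
        if index in self._frames:
            return self._frames[index]    # cache hit: no decode
        self.decodes += 1
        frame = self.decode_frame(index)
        if len(self._frames) >= self.capacity:
            self._frames.pop(next(iter(self._frames)))  # drop oldest entry
        self._frames[index] = frame
        return frame
```

Revisiting a frame that was already decoded then costs a dictionary lookup instead of a seek-and-decode.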
Besides the obvious cases, there are less obvious places where caching can significantly speed up the application.
During video processing we also compute a mask for each frame. After profiling the code and timing the individual modules, it turned out to be much more efficient to cache the masks and later transform a cached mask together with its corresponding frame than to recalculate the masks each time.
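The idea of paying for the expensive mask once and reusing it across transformations is ordinary memoization. In this sketch, `compute_mask` and `transform` are stand-ins for the real processing steps, which the post does not detail:

```python
def make_mask_pipeline(compute_mask, transform):
    """Compute the expensive per-frame mask once, cache it, and
    apply the cheap transform to the cached mask and its frame,
    instead of recomputing the mask for every transformation."""
    mask_cache = {}
    stats = {"mask_calls": 0}

    def process(frame_id, frame):
        if frame_id not in mask_cache:
            stats["mask_calls"] += 1          # expensive path, taken once
            mask_cache[frame_id] = compute_mask(frame)
        return transform(frame, mask_cache[frame_id])

    return process, stats
```

If a frame is transformed N times, the mask is computed once instead of N times, which is where the measured speedup comes from.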
We could go on about this seemingly simple topic for a long time. Connect with us on social networks and tell us what else you would like to know. Our developers will gladly share their knowledge.
Our TG channel — https://t.me/artoken