Android: The Evolution of the Photo Editor
How the Android photo editor evolved from the first version of Snapster to the filters in the official VK app.

This story begins in 2015, when we started developing Snapster, a standalone app for editing photos. At the time, the photo editing tools in our official apps, as well as in Instagram, were fairly primitive. Take a picture, make simple color adjustments using OpenGL shaders and, if you want a glamorous image, apply a texture on top. Challenge accomplished.
Several months into development, when the first version of Snapster was ready, it still used filters taken from the main VK app. By then we had begun to realize that the existing approach was no longer good enough and that we needed to come up with something new.
We decided it would be cool if users could create their own filters right on the phone. The process of creating a filter had to be simple enough that a decent result could be achieved just by pressing buttons at random. The editor also needed to be fast, since performance was extremely important, which meant squeezing everything we could out of the embedded GPU.
Since unique filters can't be created with slider adjustments of the basic parameters (brightness, contrast, etc.) alone, we decided to add full color correction to the editor.
Color correction
The basic algorithm for color correction consists of two steps:
- Find all pixels in the image that satisfy the specified conditions. For example, if you want to change all pure red pixels to something different, then all pixels matching the RGB value (255, 0, 0) need to be found.
- Change the selected pixels to the desired color.
This sounds pretty simple, but several questions arise immediately:
- How can the search criteria be set? Obviously, a condition like "the R component equals 255 while G and B equal 0" is rarely met in practice. How do we build an algorithm that selects pixels merely similar to the target color?
- How quickly can these pixels be located in the image? If the search conditions are too complicated, decent performance can only be expected on very powerful devices.
- If pixels of different colors fall under the search criteria, they can't all simply be replaced with color N without taking their original colors into account.
To find the pixels of the desired color, the search criterion needs to be defined. Suppose we want to find all pixels that differ from a given color by no more than N, where N is a dynamically specified tolerance. How can this be done? Since a color can be decomposed into separate components and treated as a point in a coordinate space, the Euclidean distance can be used to measure the difference between two colors.
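As an illustration, here is a minimal Kotlin sketch of this idea (the names Color3, distance and matches are made up for the example): a color is treated as a point in a three-component space and compared against a tolerance N.

```kotlin
import kotlin.math.sqrt

// A colour is treated as a point in a 3-component space; two colours are
// "similar" if the Euclidean distance between them is within a tolerance.
data class Color3(val c0: Double, val c1: Double, val c2: Double)

fun distance(a: Color3, b: Color3): Double {
    val d0 = a.c0 - b.c0
    val d1 = a.c1 - b.c1
    val d2 = a.c2 - b.c2
    return sqrt(d0 * d0 + d1 * d1 + d2 * d2)
}

// "The pixel is good" if it is no further than tolerance n from the target colour.
fun matches(pixel: Color3, target: Color3, n: Double): Boolean =
    distance(pixel, target) <= n
```

For example, matches(Color3(250.0, 10.0, 5.0), Color3(255.0, 0.0, 0.0), 20.0) would accept a pixel that is almost, but not exactly, pure red.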
At first, we tried to locate the necessary pixels by measuring the distance between their RGB values. However, the results were far from ideal: pixels that look very similar to the human eye can lie a significant Euclidean distance apart in RGB. The problem is that RGB is highly nonlinear with regard to human perception, so a small change in an RGB value can be very noticeable to the eye and vice versa. Compare:


To resolve this problem, we moved on to using the CIELab color model.
CIELab is a color model designed to match human color perception as closely as possible. In CIELab, any color is uniquely determined by its lightness L and two chromatic components: a (its position on the green–magenta axis) and b (its position on the blue–yellow axis).

CIELab makes it possible to evaluate the difference between colors as the human eye sees it with a simple calculation of the Euclidean distance between them. In other words, pixels that resemble the desired color within some tolerance N can be isolated with the condition "if the Euclidean distance between the Lab value of a pixel and the Lab value of color X is less than N, the pixel is good".
Additionally, with this model, replacing the selected pixels with the desired color looks natural to the human eye: based on the same Euclidean distance, we can calculate how much to shift each pixel's color while keeping the image looking natural.
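The article doesn't list the exact constants the editor uses; the sketch below is based on the standard sRGB (D65) → XYZ → CIELab conversion together with the CIE76 difference, i.e. the plain Euclidean distance in Lab space.

```kotlin
import kotlin.math.pow
import kotlin.math.sqrt

data class Lab(val l: Double, val a: Double, val b: Double)

// Standard sRGB (D65) -> XYZ -> CIELab conversion.
fun rgbToLab(r: Int, g: Int, b: Int): Lab {
    // 1. 8-bit sRGB -> linear RGB in [0..1]
    fun linear(c: Int): Double {
        val v = c / 255.0
        return if (v <= 0.04045) v / 12.92 else ((v + 0.055) / 1.055).pow(2.4)
    }
    val rl = linear(r); val gl = linear(g); val bl = linear(b)

    // 2. linear RGB -> XYZ, normalised by the D65 white point
    val x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047
    val y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl)
    val z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883

    // 3. XYZ -> Lab
    fun f(t: Double) = if (t > 0.008856) t.pow(1.0 / 3.0) else 7.787 * t + 16.0 / 116.0
    val fx = f(x); val fy = f(y); val fz = f(z)
    return Lab(116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz))
}

// CIE76 colour difference: plain Euclidean distance in Lab space.
fun deltaE(p: Lab, q: Lab): Double {
    val dl = p.l - q.l; val da = p.a - q.a; val db = p.b - q.b
    return sqrt(dl * dl + da * da + db * db)
}
```

A pixel then "matches" the selected color when deltaE is below the tolerance, and the strength of the replacement can be blended in proportion to that distance.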

Initial implementation
We wanted users to see the result on the screen immediately, which calls for a cross-platform, GPU-based solution. Consequently, the only real option is OpenGL.
In addition to color correction, the editor also needed to be able to enhance the image automatically while performing basic operations like changing brightness, contrast and other characteristics.
Our editor's auto enhance relies on histogram equalization. The goal is to redistribute the brightness values of the original image so that its cumulative histogram becomes approximately linear, i.e. the histogram itself is flattened. Several such techniques exist; we use CLAHE.
CLAHE (contrast-limited adaptive histogram equalization) is adaptive because it processes small individual regions of the image rather than the whole histogram at once. It is contrast-limited because contrast is amplified only within specified limits, which avoids undesirable amplification of background noise in the image.
CLAHE runs on the CPU, but only once per editing session, since the resulting texture is kept in an in-memory cache. Consequently, there are no strict performance requirements for it.
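For reference, here is a minimal sketch of plain global histogram equalization of a single 8-bit channel; CLAHE adds the tiling and contrast clipping on top of this, which the sketch omits.

```kotlin
// Plain global histogram equalization of one 8-bit channel. CLAHE additionally
// splits the image into tiles and clips each tile's histogram at a limit
// before equalizing; those parts are omitted here to keep the sketch short.
fun equalize(channel: IntArray): IntArray {
    val hist = IntArray(256)
    for (v in channel) hist[v]++

    // Cumulative distribution function of the brightness values.
    val cdf = IntArray(256)
    var sum = 0
    for (i in 0..255) { sum += hist[i]; cdf[i] = sum }

    val cdfMin = cdf.first { it > 0 }
    val total = channel.size
    if (total == cdfMin) return channel.copyOf()   // flat image, nothing to do

    // Remap brightness levels so that the resulting CDF is close to linear.
    val lut = IntArray(256) { i ->
        ((cdf[i] - cdfMin) * 255.0 / (total - cdfMin)).toInt().coerceIn(0, 255)
    }
    return IntArray(channel.size) { lut[channel[it]] }
}
```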
In the initial implementation, the editor's pipeline looked like this:
- Application of auto enhance with a specified intensity.
- Application of basic operations.
- Conversion of the resulting image to Lab.
- Application of color correction.
At this point, users could choose any colors, in any number, for color correction. As a result, the OpenGL shader was assembled dynamically as the user selected and edited colors.
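The article doesn't show the generated shader, so the following is only a rough illustration of the idea: a Kotlin function gluing together a GLSL fragment shader, emitting one distance check and color shift per selected color. All uniform names and the blending formula are invented for the example.

```kotlin
// Very rough sketch: one distance check + colour shift is emitted per colour
// the user has configured (room for up to 8 colours here), then the pieces
// are glued into a single fragment shader source string.
fun buildColorCorrectionShader(selectedColorCount: Int): String {
    val header = """
        precision mediump float;
        uniform sampler2D uTexture;        // image already converted to Lab
        uniform vec3 uTarget[8];           // Lab values of the selected colours
        uniform vec3 uReplacement[8];      // Lab values to shift towards
        uniform float uTolerance[8];
        varying vec2 vTexCoord;
    """.trimIndent()

    val perColor = (0 until selectedColorCount).joinToString("\n") { i ->
        """
        d = distance(lab, uTarget[$i]);
        if (d < uTolerance[$i]) {
            lab = mix(uReplacement[$i], lab, d / uTolerance[$i]);
        }
        """.trimIndent()
    }

    // Conversion back from Lab is omitted in this sketch.
    return """
        $header
        void main() {
            vec3 lab = texture2D(uTexture, vTexCoord).rgb;
            float d;
            $perColor
            gl_FragColor = vec4(lab, 1.0);
        }
    """.trimIndent()
}
```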
Conversion from RGB to Lab was done without any optimization: the RGB value was first converted to XYZ and only then from XYZ to Lab. Both conversions are computationally intensive, which made this the bottleneck of the entire editor. To resolve the problem, we switched to 3D Lookup Tables.
The underlying concept is extremely simple. A 3D LUT is a three-dimensional table that stores corresponding input and output color values. For any input value in one color model, the corresponding output value in another model can be looked up directly (values between the stored grid points are interpolated). It works quite quickly!
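As a CPU-side illustration of the idea (the class and its layout are hypothetical, not the editor's actual code), here is a nearest-point lookup in a 3D LUT stored as a flat array:

```kotlin
import kotlin.math.roundToInt

// A 3D LUT of size N x N x N: for every (r, g, b) grid point it stores a
// pre-computed output colour, kept here as a flat array of RGB triples.
class Lut3d(val size: Int, val table: FloatArray) {

    // Nearest-grid-point lookup for an input colour with components in [0..1].
    // In a real pipeline the values between grid points are interpolated
    // (texture filtering can handle this on the GPU); interpolation is
    // omitted here to keep the sketch short.
    fun lookup(r: Float, g: Float, b: Float): FloatArray {
        val last = size - 1
        val ri = (r * last).roundToInt().coerceIn(0, last)
        val gi = (g * last).roundToInt().coerceIn(0, last)
        val bi = (b * last).roundToInt().coerceIn(0, last)
        val index = ((bi * size + gi) * size + ri) * 3
        return floatArrayOf(table[index], table[index + 1], table[index + 2])
    }
}
```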

This pipeline layout proved effective, but, of course, the vast world of Android devices contributed additional problems. These range from a banal lack of video memory and unbelievably poor performance on some devices to strict limits on the number of instructions allowed in a single shader on some GPUs, which can leave the color correction shader unable to compile at all. While scarce video memory and poor performance can be dealt with by lowering the rendering resolution on weak devices, the instruction limit is not so easily mitigated. We had to learn to split the color correction shader into parts dynamically, and only when necessary, since each additional pass in the pipeline reduces performance.
What is next?
After the release of version 1.0, we decided to add more editing capabilities, so we brought a popular photographers' tool, tone curves, into the editor. Using curves, the brightness of pixels can be adjusted within the individual RGB channels; pixel values change according to the shape of the curve the user sees on the screen.

Introducing curves also helped optimize the basic operations: curves, brightness, contrast, fade, temperature and tint (shade) can all be applied in a single pass using three 1D Lookup Tables.
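A rough sketch of what this can look like, assuming three 256-entry tables, one per RGB channel, and ARGB pixel packing as used by Android bitmaps: all per-channel operations are baked into the tables once, and every pixel then costs just three array lookups.

```kotlin
// Pre-compose all per-channel operations (curve, brightness, contrast, fade,
// temperature, tint, ...) into three 256-entry tables, then apply them to
// every pixel with plain array lookups. Pixels are packed as ARGB_8888.
fun apply1dLuts(pixels: IntArray, lutR: IntArray, lutG: IntArray, lutB: IntArray) {
    for (i in pixels.indices) {
        val p = pixels[i]
        val a = p ushr 24 and 0xFF
        val r = lutR[p ushr 16 and 0xFF]
        val g = lutG[p ushr 8 and 0xFF]
        val b = lutB[p and 0xFF]
        pixels[i] = (a shl 24) or (r shl 16) or (g shl 8) or b
    }
}

// Example of baking one operation into a table: a simple brightness offset.
fun brightnessLut(offset: Int): IntArray =
    IntArray(256) { (it + offset).coerceIn(0, 255) }
```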
The use of OpenGL imposes a strict limit on the size of the output image, since all textures must fit in video memory. Some Android devices of that time could barely push a square larger than 1500x1500 pixels through our pipeline, and we wanted more. While we could not do much about the on-screen editor itself, the solution for rendering the final image was to duplicate the pipeline's functionality in pure C. The resulting code stores all intermediate textures in ordinary RAM and uses every core of the processor to the fullest. Thanks to this, even users of particularly weak devices can save images in very high resolution.
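The C renderer itself isn't shown in the article, but the general idea of processing in ordinary RAM on all cores can be sketched like this (in Kotlin rather than C, for consistency with the other examples): the image is split into horizontal bands and each band is handed to its own worker thread.

```kotlin
// Process an image kept in ordinary RAM by splitting it into horizontal
// bands and running one worker thread per available CPU core.
fun processInParallel(pixels: IntArray, width: Int, height: Int,
                      perPixel: (Int) -> Int) {
    val cores = Runtime.getRuntime().availableProcessors()
    val rowsPerBand = (height + cores - 1) / cores

    val workers = (0 until cores).map { t ->
        Thread {
            val firstRow = t * rowsPerBand
            val lastRow = minOf(firstRow + rowsPerBand, height)
            for (i in firstRow * width until lastRow * width) {
                pixels[i] = perPixel(pixels[i])
            }
        }.apply { start() }
    }
    workers.forEach { it.join() }   // wait for all bands to finish
}
```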
Filters in the main application

When it came time to update the photo editor in the main VK app for Android, we decided to build on the results gained from Snapster. The same set of standard filters is now available, and filters can be switched with a swipe across the image, with two filters displayed on the screen at once during the gesture. Even with fast swiping, everything happens instantly.
Such results could not be achieved with the full Snapster editor pipeline, as it is far too heavy and better suited for precise photo touch-ups. The fact that almost the entire pipeline is static plays a major role here: while Snapster allowed any filter to be changed as desired, the main app only needed to apply pre-made filters. This allowed us to use the aforementioned 3D Lookup Table technology for applying filters: instead of converting from RGB to Lab and back, we map each color of the original photo straight to the filtered color, and do so very quickly.
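A sketch of how a fixed filter can be baked into such a table (grid size and names are illustrative; Lut3d refers to the earlier sketch): every color of an identity grid is run through the filter once, and applying the filter afterwards is just a table lookup.

```kotlin
// Bake a fixed filter into a 3D LUT: run every colour of an identity grid
// through the filter once and store the results. Applying the filter later
// is just a lookup in the resulting table (see the Lut3d sketch above).
// The grid size must be at least 2.
fun bakeFilter(size: Int, filter: (Float, Float, Float) -> FloatArray): Lut3d {
    val table = FloatArray(size * size * size * 3)
    var index = 0
    for (bi in 0 until size) {
        for (gi in 0 until size) {
            for (ri in 0 until size) {
                val out = filter(
                    ri / (size - 1f),
                    gi / (size - 1f),
                    bi / (size - 1f)
                )
                table[index++] = out[0]
                table[index++] = out[1]
                table[index++] = out[2]
            }
        }
    }
    return Lut3d(size, table)
}
```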
Conclusion
We managed to go from clumsy color modifications to a powerful and flexible tool for advanced image editing. And it not only works, it works fast. Snapster became a place for experimentation that we consider a success, since the result is a cool editor to be proud of. This experience has also helped the official VK app, which may eventually receive some additional features from Snapster.
If you know and love Android, work with us on developing apps that millions of people use every day.
Questions?
Questions to the author can be submitted to our technical blog’s official community.
