Improved performance of deep learning neural network models for Traffic sign classification using…
Vivek Yadav

You can also take a look at the mxnet augmentation code. Its illumination augmentation produced some magic for me:

https://github.com/dmlc/mxnet/blob/master/python/mxnet/image.py
# brightness, contrast, saturation augmentation (adapted from the mxnet code)
# assumes: perturb is an HxWx3 float RGB image with values in [0, 255],
#          and illumin_limit controls the perturbation strength (e.g. 0.5)
import random
import numpy as np

coef = np.array([[[0.299, 0.587, 0.114]]])  # RGB to gray (YCbCr): Y = 0.299R + 0.587G + 0.114B

# brightness: scale all channels by a random factor
alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
perturb *= alpha
perturb = np.clip(perturb, 0., 255.)

# contrast: blend the image with its mean gray level
alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
gray = perturb * coef
gray = (3.0 * (1.0 - alpha) / gray.size) * np.sum(gray)
perturb *= alpha
perturb += gray
perturb = np.clip(perturb, 0., 255.)

# saturation: blend each pixel with its own gray value
alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
gray = perturb * coef
gray = np.sum(gray, axis=2, keepdims=True)
gray *= (1.0 - alpha)
perturb *= alpha
perturb += gray
perturb = np.clip(perturb, 0., 255.)
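For convenience, the three perturbations can be wrapped into a single function. This is a sketch under my own assumptions: the function name `random_illumination` and the default `illumin_limit=0.5` are hypothetical choices, not part of the mxnet API.

```python
import random
import numpy as np

def random_illumination(image, illumin_limit=0.5):
    """Randomly perturb brightness, contrast, and saturation.

    image: HxWx3 RGB array with values in [0, 255].
    Returns a new float array in the same range.
    """
    # hypothetical wrapper around the mxnet-style perturbations above
    perturb = image.astype(np.float64).copy()
    coef = np.array([[[0.299, 0.587, 0.114]]])  # RGB -> gray weights

    # brightness: scale all channels
    alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
    perturb = np.clip(perturb * alpha, 0., 255.)

    # contrast: blend with the image's mean gray level
    alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
    gray = perturb * coef
    mean_gray = (3.0 * (1.0 - alpha) / gray.size) * np.sum(gray)
    perturb = np.clip(perturb * alpha + mean_gray, 0., 255.)

    # saturation: blend each pixel with its own gray value
    alpha = 1.0 + illumin_limit * random.uniform(-1, 1)
    gray = np.sum(perturb * coef, axis=2, keepdims=True)
    perturb = np.clip(perturb * alpha + gray * (1.0 - alpha), 0., 255.)
    return perturb

# example: augment a dummy 32x32 traffic-sign-sized image
img = np.random.randint(0, 256, (32, 32, 3)).astype(np.float64)
out = random_illumination(img, illumin_limit=0.3)
```

Calling it once per training sample gives a different random illumination each epoch, which is what made it effective as augmentation.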