When we train computer vision models, we often take ideal photos of our subjects. We line up our subject just right and curate datasets with best-case lighting. But our deep learning models in production aren't so lucky.

Deliberately introducing imperfections into our datasets is essential to making our machine learning models more resilient to the harsh realities they'll encounter in real-world situations. Degrading image quality is exactly the type of task that can be completed in post-processing, without the headache of collecting more data and needing to label it.

Many types of imperfection can make their way into an image: blur, poor contrast, noise, JPEG compression, and more. Of these, blurring is among the most detrimental. Researchers from Arizona State University considered the impact blurring may have on image classification, especially as compared to other degradations. In their research, blur and noise had the most adverse consequences on simple classification tasks across a variety of convolutional neural network architectures. Even a small amount of blur (σ = 2) caused misclassification. Researchers suspect blur particularly obscures convolution's ability to locate edges in early levels of feature abstraction, causing inaccurate feature extraction early in a network's training.

Blur Use Cases

Blur can be applied as a preprocessing technique or as an image augmentation technique. If all images in production may have blur involved, then all images in training, validation, and testing should be blurred in preprocessing. More commonly, however, only some images may have blur. Consider these scenarios:

- A camera is stationary, but the objects it is detecting are often moving.
- A camera is moving, but the objects it is detecting are stationary (e.g. a user has a mobile app to scan a book cover).
- A camera and the objects it is detecting are both moving (e.g. a drone is flying over a windy field of crops).

In each of these examples, blur is a useful augmentation step.
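To make the idea concrete, here is a minimal sketch of Gaussian blur as a random augmentation step. It is a hypothetical implementation in plain NumPy (not the pipeline any particular library uses): a separable Gaussian kernel is convolved along rows and then columns, and `maybe_blur` applies it to only a fraction of training images, mirroring a production setting where only some images are blurred. The truncation radius of 3σ and the application probability `p` are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D normalized Gaussian kernel; radius defaults to 3*sigma (an assumed truncation)."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(image, sigma):
    """Blur a 2-D grayscale array with a separable Gaussian (rows, then columns)."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(image, pad, mode="edge")  # replicate border pixels
    # Horizontal pass, then vertical pass; "valid" mode undoes the padding.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)
    return out

rng = np.random.default_rng(0)

def maybe_blur(image, p=0.5, sigma=2.0):
    """Augmentation step: blur each image with probability p, leave it sharp otherwise."""
    return gaussian_blur(image, sigma) if rng.random() < p else image

# Usage: soften a synthetic sharp edge, the kind of feature blur is known to degrade.
edge = np.zeros((10, 10))
edge[:, 5:] = 1.0
soft = gaussian_blur(edge, sigma=2.0)
```

Note how even σ = 2 turns the hard 0-to-1 step into a gradual ramp, which is exactly the kind of edge degradation the Arizona State researchers observed hurting early convolutional layers.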