6 December 2020 Issue 60

Using Machine Learning Techniques to Denoise NDR Images

Noise in images is the random grainy effect seen in a picture. It is produced by the random arrival of light (photons) at the camera detector and by the detector's electronics. Noise can arise from the movement of electrons during the readout of the detector's pixels (read noise), from heat in the detector (dark current), or from a fixed pattern across the detector due to pixel-to-pixel variations (fixed-pattern noise).
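These noise sources can be illustrated with a small simulation. The following is a toy sketch, not part of the project: all values (photon rates, noise levels, patch size) are invented for illustration, and each term models one of the noise sources named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" signal: mean photon count per pixel over an 8x8 patch.
clean = np.full((8, 8), 50.0)

# Shot noise: photon arrivals fluctuate with a Poisson distribution.
shot = rng.poisson(clean)

# Read noise: roughly Gaussian fluctuation added when each pixel is read out.
read_noise = rng.normal(0.0, 3.0, size=clean.shape)

# Dark current: thermally generated electrons, also Poisson-distributed.
dark = rng.poisson(2.0, size=clean.shape)

# Fixed-pattern noise: per-pixel gain variation, identical in every frame.
gain = rng.normal(1.0, 0.02, size=clean.shape)

noisy = gain * (shot + dark) + read_noise
print(noisy.round(1))
```

Shot and dark-current noise change from frame to frame, while the gain pattern repeats, which is why fixed-pattern noise can in principle be calibrated out while shot noise cannot.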

DOI: 10.22443/rms.inf.1.195

A novel Non-Destructive Readout (NDR) camera was used to image fluorescent cellular samples by taking rapidly acquired images without reading out electrons, where each sub-frame is the previous sub-frame plus any newly captured photons. This means many sub-frames are taken during a normal camera's exposure time [1].

By subtracting a lower sub-frame from a higher sub-frame, it is possible to produce a normal image at any required sub-frame rate in post-processing (Fig. 1(i)). However, the higher the sub-frame rate, the higher the noise, so it is necessary to remove as much noise as possible to improve image quality.
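The cumulative readout and the subtraction step can be sketched in a few lines of numpy. This is a toy simulation under stated assumptions (Poisson photon arrivals, invented rates and sizes), not the project's processing code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subframes, h, w = 100, 4, 4

# Photons newly captured during each sub-frame interval (shot noise).
new_photons = rng.poisson(5.0, size=(n_subframes, h, w))

# NDR readout: each sub-frame equals the previous one plus the new photons,
# because the accumulated charge is never cleared between reads.
subframes = np.cumsum(new_photons, axis=0)

# Post-processing: subtracting sub-frame i from sub-frame j recovers a
# "normal" exposure covering only the interval between the two reads.
def frame_between(i, j):
    return subframes[j] - subframes[i]

short_exposure = frame_between(90, 99)  # few intervals: fast but noisy
long_exposure = frame_between(0, 99)    # many intervals: slow but cleaner
```

Choosing `i` and `j` closer together gives a higher effective frame rate, but each recovered frame then contains fewer photons, which is exactly why the noise grows with the sub-frame rate.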

The aim of this project was to remove noise from these images, improving their contrast, using two machine learning algorithms: Noise2Void (N2V) [2] and CARE [3]. The two use different techniques for training neural networks on noisy images. Once the networks are trained, images at different sub-frame rates can be fed into them to be denoised.
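The central idea behind N2V's self-supervised training, its "blind-spot" masking, can be illustrated without a full training loop. The sketch below is a toy illustration of that masking step only, with invented names and sizes; it is not the project's code, and CARE (which trains on paired low/high-quality images) works differently:

```python
import numpy as np

rng = np.random.default_rng(2)

# A single noisy image stands in for a training patch.
noisy = rng.poisson(20.0, size=(16, 16)).astype(float)

# Noise2Void's blind-spot trick: replace a few pixels with a randomly chosen
# neighbour, then train the network to predict the ORIGINAL noisy value at
# those positions. Since the noise is pixel-wise independent, the network can
# only do this by learning the underlying signal, not the noise.
def mask_pixels(img, n_masked, rng):
    masked = img.copy()
    ys = rng.integers(1, img.shape[0] - 1, size=n_masked)
    xs = rng.integers(1, img.shape[1] - 1, size=n_masked)
    for y, x in zip(ys, xs):
        dy, dx = rng.integers(-1, 2, size=2)
        while dy == 0 and dx == 0:          # never copy a pixel onto itself
            dy, dx = rng.integers(-1, 2, size=2)
        masked[y, x] = img[y + dy, x + dx]  # substitute a neighbouring value
    return masked, ys, xs

inputs, ys, xs = mask_pixels(noisy, n_masked=10, rng=rng)
targets = noisy[ys, xs]  # the training loss is computed only at masked pixels
```

Because N2V needs only the noisy images themselves, it suits the NDR data well, where clean ground-truth frames are not available.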


Written by

George Hume
