Visible light image alignment technology

Image alignment is the process of overlaying images of the same scene; its goal is to find a mapping relationship between the reference image (fixed image) and the floating image (image to be aligned). The two images are usually represented as two-dimensional arrays. Region-based alignment algorithms take the reference image as the baseline and search for the position of the floating image that yields the highest correlation. The core of these methods is choosing a suitable similarity metric that measures the correlation between two or more images and evaluating it in either the spatial or the frequency domain, as in the sketch below. Traditional feature-based alignment algorithms, by contrast, iteratively optimize the transformation until the best similarity between the images is reached.
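The sketch below illustrates the region-based idea in its simplest form: an exhaustive search over integer translations that keeps the shift with the highest normalized cross-correlation. It assumes grayscale images of equal size stored as NumPy arrays, a translation-only motion model, and a search range smaller than the image; the function names are illustrative rather than taken from any particular library.

```python
# Minimal sketch of region-based alignment with normalized cross-correlation (NCC).
# Assumes a pure-translation model and equally sized grayscale images as 2D NumPy arrays.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def align_by_translation(reference: np.ndarray, floating: np.ndarray, max_shift: int = 20):
    """Exhaustively search integer shifts and return the one maximizing NCC."""
    best_score, best_shift = -np.inf, (0, 0)
    h, w = reference.shape  # reference and floating are assumed to have the same shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping region of the two images when floating is shifted by (dy, dx)
            ref_patch = reference[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            flo_patch = floating[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            score = ncc(ref_patch, flo_patch)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

Real systems replace the exhaustive search with coarse-to-fine or frequency-domain (phase correlation) strategies, but the similarity-metric-driven search is the same.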
To improve alignment accuracy, deep-learning methods use a deep network to extract features from the input image pair, measure their similarity, generate the aligned image, and assess the alignment accuracy. Typically, the visible and infrared images are first mapped into the same modality by a deep network model (for example, the visible image is converted into an infrared-like image) and then fed into the alignment network for similarity measurement. Through backpropagation, the loss value is passed back to the network at each iteration, driving the visible image to map more faithfully to the infrared modality and yielding a more accurate similarity measure between the two infrared images. Finally, the output that optimizes this measure gives the aligned image. A minimal training-loop sketch of this scheme follows.
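As a concrete illustration of this pipeline, the PyTorch sketch below pairs a small visible-to-infrared translation network with an affine registration head and backpropagates a similarity loss (plain MSE here) through both. Every module, layer size, and name in it is an assumption chosen for brevity, not a reproduction of any specific published alignment network.

```python
# Sketch of the deep-learning scheme described above: a translation network maps the
# visible image into the infrared modality, a registration head predicts an affine warp,
# and a similarity loss on the two "infrared" images is backpropagated to both networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisibleToInfrared(nn.Module):
    """Maps a 3-channel visible image to a 1-channel pseudo-infrared image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class AffineRegressor(nn.Module):
    """Predicts a 2x3 affine matrix from the concatenated image pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, 6)
        # Start from the identity transform so the initial warp is "no motion".
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
    def forward(self, fixed, moving):
        feats = self.features(torch.cat([fixed, moving], dim=1)).flatten(1)
        return self.fc(feats).view(-1, 2, 3)

translator, regressor = VisibleToInfrared(), AffineRegressor()
optimizer = torch.optim.Adam(
    list(translator.parameters()) + list(regressor.parameters()), lr=1e-4)

def train_step(visible, infrared):
    """One iteration: translate modality, warp, measure similarity, backpropagate."""
    pseudo_ir = translator(visible)                 # visible -> infrared modality
    theta = regressor(infrared, pseudo_ir)          # predict the alignment transform
    grid = F.affine_grid(theta, pseudo_ir.size(), align_corners=False)
    warped = F.grid_sample(pseudo_ir, grid, align_corners=False)
    loss = F.mse_loss(warped, infrared)             # similarity measure (MSE here)
    optimizer.zero_grad()
    loss.backward()                                 # loss drives both networks
    optimizer.step()
    return loss.item()
```

Calling train_step on batches of shape (N, 3, H, W) visible images and (N, 1, H, W) infrared images runs one optimization step. In practice the similarity term would usually be a more robust metric such as normalized cross-correlation or mutual information, and the affine warp could be replaced by a dense deformation field.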

 
