To see the other types of publications on this topic, follow the link: Image motion.

Journal articles on the topic 'Image motion'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Image motion.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Gaffare, Mayra Garduño, Bertrand Vachon, and Armando Segovia de los Ríos. "Range image generator including robot motion." Robotica 24, no. 1 (October 31, 2005): 113–23. http://dx.doi.org/10.1017/s0263574704001547.

Abstract:
The system described here can generate range images that include robot motion. The system has two main modules: the motion generator and the image generator. Motion is modelled using a Bézier-curve method. To compute the range value corresponding to an image pixel, the robot position in the coordinate system is obtained from trajectory generation. In this way, distortion is produced in the image, or sequence of images, as a consequence of motion. The obtained range images represent scenes perceived by the robot from a specific location, or during a specified displacement, in a very “real” view.
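Bézier-curve trajectory generation of the kind used above can be illustrated with De Casteljau's algorithm. This is a generic sketch with made-up control points, not the authors' code:

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation of the control points (De Casteljau)."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# A planar cubic Bezier standing in for a short robot displacement.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
trajectory = np.array([de_casteljau(ctrl, t) for t in np.linspace(0, 1, 5)])
```

Sampling `t` per scan line rather than per frame is what would let a range-image generator bake motion-induced distortion into a single image.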
2

Jo, Hang-Chan, Hyeonwoo Jeong, Junhyuk Lee, Kyung-Sun Na, and Dae-Yu Kim. "Quantification of Blood Flow Velocity in the Human Conjunctival Microvessels Using Deep Learning-Based Stabilization Algorithm." Sensors 21, no. 9 (May 6, 2021): 3224. http://dx.doi.org/10.3390/s21093224.

Abstract:
The quantification of blood flow velocity in the human conjunctiva is clinically essential for assessing microvascular hemodynamics. Since the conjunctival microvessel is imaged in several seconds, eye motion during image acquisition causes motion artifacts limiting the accuracy of image segmentation performance and measurement of the blood flow velocity. In this paper, we introduce a novel customized optical imaging system for human conjunctiva with deep learning-based segmentation and motion correction. The image segmentation process is performed by the Attention-UNet structure to achieve high-performance segmentation results in conjunctiva images with motion blur. Motion correction processes with two steps—registration and template matching—are used to correct for large displacements and fine movements. The image displacement values decrease to 4–7 μm during registration (first step) and less than 1 μm during template matching (second step). With the corrected images, the blood flow velocity is calculated for selected vessels considering temporal signal variances and vessel lengths. These methods for resolving motion artifacts contribute insights into studies quantifying the hemodynamics of the conjunctiva, as well as other tissues.
3

Chen, Chien-Chung, Hiroshi Ashida, Xirui Yang, and Pei-Yin Chen. "Seeing Global Motion in a Random Dot Image Sequence." i-Perception 11, no. 5 (September 2020): 204166952096110. http://dx.doi.org/10.1177/2041669520961104.

Abstract:
In a stimulus with multiple moving elements, an observer may perceive that the whole stimulus moves in unison if (a) one can associate an element in one frame with one in the next (correspondence) and (b) a sufficient proportion of correspondences signal a similar motion direction (coherence). We tested the necessity of these two conditions by asking the participants to rate the perceived intensity of linear, concentric, and radial motions for three types of stimuli: (a) random walk motion, in which the direction of each dot was randomly determined for each frame, (b) random image sequence, which was a set of uncorrelated random dot images presented in sequence, and (c) global motion, in which 35% of dots moved coherently. The participants perceived global motion not only in the global motion conditions but also in the random image sequences, though not in random walk motion. The type of perceived motion in the random image sequences depends on the spatial context of the stimuli. Thus, although there is neither a fixed correspondence across different frames nor a coherent motion direction, observers can still perceive global motion in the random image sequence. This result cannot be explained by motion energy or local aperture border effects.
4

Long, Yawu, Ichiro Sakuma, and Naoki Tomii. "Reconstruction of Motion Images from Single Two-Dimensional Motion-Blurred Computed Tomographic Image of Aortic Valves Using In Silico Deep Learning: Proof of Concept." Applied Sciences 12, no. 18 (September 8, 2022): 9044. http://dx.doi.org/10.3390/app12189044.

Abstract:
The visualization of motion is important in the diagnosis and treatment of aortic valve disease. It is difficult to perform using computed tomography (CT) because of motion blur. Existing research focuses on suppressing or removing motion blur. The purpose of this study is to prove the feasibility of inferring motion images using motion information from a motion-blurred CT image. An in silico learning method is proposed, to infer 60 motion images from a two-dimensional (2D) motion-blurred CT image, to verify the concept. A dataset of motion-blurred CT images and motion images was generated using motion and CT simulators to train a deep neural network. The trained model was evaluated using two image similarity evaluation metrics, a structural similarity index measure (0.97 ± 0.01), and a peak signal-to-noise ratio (36.0 ± 1.3 dB), as well as three motion feature evaluation metrics, maximum opening distance error between endpoints (0.7 ± 0.6 mm), maximum-swept area velocity error between adjacent images (393.3 ± 423.3 mm2/s), and opening time error (5.5 ± 5.5 ms). According to the results, the trained model can successfully infer 60 motion images from a motion-blurred CT image. This study demonstrates the feasibility of inferring motion images from a motion-blurred CT image under simulated conditions.
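Peak signal-to-noise ratio, one of the similarity metrics reported above, is straightforward to compute. A minimal numpy version for illustration (not the paper's code):

```python
import numpy as np

def psnr(reference, estimate, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 1.0          # every pixel off by 1 -> MSE = 1
value = psnr(ref, noisy)   # 20*log10(255) ~= 48.13 dB
```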
5

Jin, Zhou-xiang, and Hao Qin. "Generative Adversarial Network Based on Multi-feature Fusion Strategy for Motion Image Deblurring." Journal of Computers (電腦學刊) 33, no. 1 (February 2022): 31–41. http://dx.doi.org/10.53106/199115992022023301004.

Abstract:
Deblurring motion images is part of the field of image restoration. It is difficult not only because the motion parameters must be estimated, but also because complicating factors such as noise make the deblurring algorithm harder. Image deblurring can be divided into two categories: non-blind deblurring, where the blur kernel is known, and blind deblurring, where it is unknown. Traditional motion-deblurring networks ignore the non-uniformity of motion-blurred images and cannot effectively recover high-frequency details or remove artifacts. In this paper, we propose a new generative adversarial network based on a multi-feature fusion strategy for motion image deblurring. An adaptive residual module composed of a deformable convolution module and a channel attention module is constructed in the generator network. The deformable convolution module learns the shape variables of motion-blurred image features and can dynamically adjust the shape and size of the convolution kernel according to the deformation information of the image, improving the network's ability to adapt to image deformation. The channel attention module adjusts the extracted deformation features to obtain more high-frequency features and enhance the texture details of the restored image. Experimental results on the publicly available GOPRO dataset show that the proposed algorithm improves the peak signal-to-noise ratio (PSNR) and reconstructs high-quality images with rich texture details compared with other motion-deblurring methods.
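The channel attention idea described above, reweighting feature channels to emphasise informative detail, follows the squeeze-and-excitation pattern. A minimal numpy sketch with made-up random weights, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style gating: global-average-pool each
    channel, pass the descriptor through a small bottleneck MLP, and
    rescale the channels by the resulting sigmoid gates."""
    squeeze = feat.mean(axis=(1, 2))               # (C,) channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # (C,) gates in (0, 1)
    return feat * gates[:, None, None], gates

feat = rng.standard_normal((16, 8, 8))   # C=16 feature maps of size 8x8
w1 = rng.standard_normal((16, 4)) * 0.1  # bottleneck with reduction ratio 4
w2 = rng.standard_normal((4, 16)) * 0.1
out, gates = channel_attention(feat, w1, w2)
```

In the paper's setting the gates are learned end-to-end inside the generator; here they only demonstrate the per-channel rescaling mechanics.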
6

Bai, Chongxin, Kewen Liu, Shi Chen, Zhao Li, Weida Xie, Qingjia Bao, and Chaoyang Liu. "Dual-domain unsupervised network for removing motion artifact related to Gadoxetic acid-enhanced MRI." Journal of Physics: Conference Series 2258, no. 1 (April 1, 2022): 012037. http://dx.doi.org/10.1088/1742-6596/2258/1/012037.

Abstract:
Transient severe motion may cause severe image degradation during gadoxetic acid-enhanced arterial-phase imaging. This work proposes a new dual-domain unsupervised motion-artifact disentanglement network for motion correction in gadoxetic acid-enhanced MRI. We assume that motion-free images and motion-corrupted images belong to different domains, so that motion correction becomes an image-to-image translation problem. Image-to-image translation within the same domain is designed to constrain autoencoders to learn the feature representation, while cross-domain translation exploits cycle consistency in the absence of paired images. Experimental results demonstrate that our method can effectively reduce artifacts in gadoxetic acid-enhanced images.
7

Chen, Chen, and Daohui Bi. "A Motion Image Pose Contour Extraction Method Based on B-Spline Wavelet." International Journal of Antennas and Propagation 2021 (October 26, 2021): 1–8. http://dx.doi.org/10.1155/2021/4553143.

Abstract:
To improve the accuracy of traditional motion image pose contour extraction and shorten the extraction time, a motion image pose contour extraction method based on B-spline wavelets is proposed. Moving images are acquired through the vision system, and an information fusion process is used to perform statistical analysis on the images containing motion information and determine the location of the motion area. Convolutional neural network technology is used to preprocess the initial motion image pose contour, B-spline wavelet theory is used to detect the preprocessed contour, and a heuristic search obtains the pose contour points, completing the extraction. Simulation results show that the proposed method extracts motion image pose contours with higher accuracy and in a shorter time.
8

Fan, Yu, and Xue Feng Wu. "Study on Motion Blur Image Restoration Algorithms." Advanced Materials Research 753-755 (August 2013): 2976–79. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.2976.

Abstract:
A restoration algorithm for images blurred by motion is proposed, and a mathematical model of the motion-blur system is constructed. The point spread function of the motion blur is given. According to the characteristics of blurred images, the parameters of the point spread function are estimated, and three methods are introduced for image restoration: inverse filtering, Lucy-Richardson restoration, and Wiener restoration. The principles of the three methods are analyzed, and a restoration experiment on a motion-blurred image is carried out. The results show that the visibility of the image is improved and that the restoration is more stable.
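The Wiener restoration step described above can be sketched in a few lines of numpy. This is an illustrative frequency-domain version with a known horizontal motion PSF under a circular-convolution model, not the authors' implementation:

```python
import numpy as np

def motion_psf(shape, length):
    """Horizontal motion-blur PSF of the given length, zero-padded
    to the image shape (circular-convolution model)."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener filter: H* / (|H|^2 + K)."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(1)
image = rng.random((64, 64))
psf = motion_psf(image.shape, length=9)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
```

The constant `k` stands in for the noise-to-signal power ratio; with noisy data it trades ringing against residual blur.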
9

Kumar, Ankur N., Kurt W. Short, and David W. Piston. "A Motion Correction Framework for Time Series Sequences in Microscopy Images." Microscopy and Microanalysis 19, no. 2 (February 15, 2013): 433–50. http://dx.doi.org/10.1017/s1431927612014250.

Abstract:
With the advent of in vivo laser scanning fluorescence microscopy techniques, time series and three-dimensional volumes of living tissue and vessels at micron scales can be acquired to analyze vessel architecture and blood flow. Analysis of a large number of image stacks to extract architecture and track blood flow manually is cumbersome and prone to observer bias. Thus, an automated framework to accomplish these analytical tasks is imperative. The first step toward such a framework is to compensate for the motion artifacts manifest in these microscopy images, which are caused by respiratory motion, heart beats, and other motions of the specimen. Consequently, the amount of motion present in these images can be large and hinders further analysis. In this article, an algorithmic framework for the correction of time-series images is presented. The automated algorithm comprises a rigid and a nonrigid registration step based on shape contexts. The framework performs considerably well on time-series image sequences of the islets of Langerhans and provides the pivotal step of motion correction for further automatic analysis of microscopy images.
10

Li, Qi Shen, and Jian Gong Chen. "PSF Estimation and Image Restoration for Motion Blurred Images." Advanced Materials Research 562-564 (August 2012): 2124–27. http://dx.doi.org/10.4028/www.scientific.net/amr.562-564.2124.

Abstract:
Point spread function (PSF) estimation and image-restoration algorithms are hot spots in research on motion-blurred image restoration. To improve the efficacy of image restoration, an improved algorithm, the quadric transform (QT) method, is proposed in this paper by analyzing the restoration process of motion-blurred images. Firstly, a Fourier transform and a homomorphic transform are applied to the original motion-blurred image, and then applied again to the resulting spectrum image. Secondly, the motion-blur direction is estimated by a Radon transform. Thirdly, the motion-blur length is found by differential autocorrelation operations. Finally, using the estimated blur direction and blur length, the motion-blurred image is restored by Wiener filtering. Experimental results show that the proposed QT method estimates motion-blur angles more accurately than the primary transform (PT) method, in which the Fourier and homomorphic transforms are applied only once, and yields better restored images in terms of peak signal-to-noise ratio (PSNR).
11

Jiang, Xue, Peng Sun, and Hui Wang. "An Algorithm for Electronic Image Stabilization." Applied Mechanics and Materials 738-739 (March 2015): 590–93. http://dx.doi.org/10.4028/www.scientific.net/amm.738-739.590.

Abstract:
In most video systems, the camera carrier experiences various motions that make the video blurry and unstable. The gray projection matching algorithm can usually stabilize only the translational motion between frames, not the rotational motion of frames in a video sequence. By dividing the image into a few small regions, the rotational motion within each region can be ignored; the local movement vector is then acquired by the gray projection matching method and used to compensate the movement between frames. The rotational movement is then stabilized by compensation based on circular projection matching.
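The gray projection matching step can be sketched in numpy: each frame is collapsed to its row and column intensity sums, and the shift that best aligns those 1-D profiles is taken as the inter-frame translation. A generic illustration with made-up sizes and search range, not the paper's implementation:

```python
import numpy as np

def gray_projection_shift(ref, cur, max_shift=10):
    """Estimate the (dy, dx) translation between two frames by
    matching their row and column gray-level projections."""
    def best_shift(p_ref, p_cur):
        errors = []
        for s in range(-max_shift, max_shift + 1):
            a = p_ref[max_shift:-max_shift]
            b = p_cur[max_shift + s:len(p_cur) - max_shift + s]
            errors.append(np.sum((a - b) ** 2))
        return np.argmin(errors) - max_shift
    dy = best_shift(ref.sum(axis=1), cur.sum(axis=1))
    dx = best_shift(ref.sum(axis=0), cur.sum(axis=0))
    return dy, dx

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(ref, (3, -2), axis=(0, 1))   # simulate camera translation
dy, dx = gray_projection_shift(ref, cur)   # -> (3, -2)
```

As the abstract notes, this handles translation only; rotation needs the separate circular-projection step.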
12

Katsulai, Hiroshi. "Extraction and Motion Estimation of 3D Objects from Stereo Image Sequence." Journal of Robotics and Mechatronics 8, no. 6 (December 20, 1996): 561–70. http://dx.doi.org/10.20965/jrm.1996.p0561.

Abstract:
It is important to estimate motion parameters and structures of 3D objects from the image sequence, and a variety of methods to do so have been proposed so far. In this paper, a method is described of estimating the motion and structure of a 3D object from the stereo image sequence. In this method, based on the correspondence of 2D segments in each stereo pair, using a newly devised similarity measure, 3D segments are reconstructed, then grouped into rigid objects, making use of the correspondence of 3D segments in successive images, and parameters of rigid motions are computed for each group.
13

Song, Yu Xiang, and Yan Mei Zhang. "Parameter Estimation and Restoration of Motion Blurred Image." Applied Mechanics and Materials 608-609 (October 2014): 855–59. http://dx.doi.org/10.4028/www.scientific.net/amm.608-609.855.

Abstract:
Addressing the restoration of real motion-blurred images, and analyzing the differences between simulated and real blurred images, this paper proposes a degradation-parameter estimation method applicable to real images. First, the cepstrum of the degraded image is calculated and binarized, using the minimum absolute gray value as the threshold, and the central cross-shaped bright line is removed. The point-to-line distances along the bright fringe of the binary image then give the motion-blur direction. The degraded image is rotated about its center so that the blur direction is horizontal, and the blur length is calculated by a divided autocorrelation method. The estimated point spread function is fed into a Wiener filtering algorithm to recover the image; the restoration results confirm that the parameter estimates are correct.
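The cepstral idea above, that the log-spectrum of a motion-blurred image shows periodic dips whose spacing encodes the blur length, can be illustrated in 1-D. This is a simplified, hypothetical sketch with the blur already rotated to horizontal and an assumed circular blur model, not the paper's pipeline:

```python
import numpy as np

def row_cepstrum(image):
    """Average real cepstrum of the image rows, IFFT(log|FFT(row)|),
    averaged across rows to suppress scene content."""
    spectra = np.abs(np.fft.fft(image, axis=1))
    log_mag = np.log(spectra + 1e-8)
    ceps = np.real(np.fft.ifft(log_mag, axis=1))
    return ceps.mean(axis=0)

rng = np.random.default_rng(3)
length, n = 10, 128
image = rng.random((64, n))
psf = np.zeros(n)
psf[:length] = 1.0 / length
# Circular horizontal motion blur applied row by row.
blurred = np.real(np.fft.ifft(np.fft.fft(image, axis=1) * np.fft.fft(psf)))
ceps = row_cepstrum(blurred)
# The strongest negative peak away from the origin sits near the blur length.
est = np.argmin(ceps[2:n // 2]) + 2
```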
14

Tekwani, Hema, and Krishna Raj. "Correlation-Based Template Tracking Of Moving Object." International journal of electrical and computer engineering systems 13, no. 3 (April 19, 2022): 175–82. http://dx.doi.org/10.32985/ijeces.13.3.2.

Abstract:
This paper presents a correlation-based motion estimation technique for the 3D displacement of objects. Two high-speed cameras are configured as a stereovision system and synchronized in real time. Finger and hand motions are captured in the form of digital images at 1500 fps and 2000 fps, respectively. The complete motion acquisition system is calibrated to determine the intrinsic and extrinsic parameters, which are later used in the correlation algorithm. The grayscale image frames acquired from the cameras are correlated using square templates of 10x10 pixels created from the reference image. The finger and hand motions are discussed with varying camera speed as a measure of brightness inconsistency. The observations of the correlation coefficient indicate that the proposed algorithm is efficient up to 20 and 50 templates for the finger and hand motion cases, respectively. The correlation coefficient for finger motion increased to 0.987 and 0.972 for the left and right cameras, respectively, while the correlation coefficient for hand motion was 0.924 and 0.898. The proposed algorithm is developed in MATLAB and validated by tracing the sinusoidal motion of a solid rectangular element with the image correlation technique and an accelerometer sensor mounted on the block.
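Template tracking with the correlation coefficient, as used above, can be sketched as a brute-force zero-normalized cross-correlation search. An illustrative numpy version (the frame and template sizes are made up; the authors' code is in MATLAB):

```python
import numpy as np

def match_template(image, template):
    """Brute-force zero-normalized cross-correlation: return the
    top-left (row, col) of the best-matching patch and its score."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt(np.sum(t ** 2))
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt(np.sum(p ** 2))
            if denom == 0:
                continue
            score = np.sum(p * t) / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

rng = np.random.default_rng(4)
frame = rng.random((40, 40))
template = frame[12:22, 5:15].copy()   # a 10x10 template, as in the paper
pos, score = match_template(frame, template)
```

The zero-mean normalization is what gives some robustness to the brightness inconsistency the abstract discusses.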
15

Mitsugami, Ikuhisa. "Bundler: Structure from motion for unordered image collections." Journal of The Institute of Image Information and Television Engineers 65, no. 4 (2011): 479–82. http://dx.doi.org/10.3169/itej.65.479.

16

Hadar, Ofer. "Image motion restoration from a sequence of images." Optical Engineering 35, no. 10 (October 1, 1996): 2898. http://dx.doi.org/10.1117/1.600959.

17

Kim, Baeck Oon, and Hong Yeon Cho. "Image processing for video images of buoy motion." Ocean Science Journal 40, no. 4 (December 2005): 213–20. http://dx.doi.org/10.1007/bf03023521.

18

Chandramouli, Paramanand, Meiguang Jin, Daniele Perrone, and Paolo Favaro. "Plenoptic Image Motion Deblurring." IEEE Transactions on Image Processing 27, no. 4 (April 2018): 1723–34. http://dx.doi.org/10.1109/tip.2017.2775062.

19

Cafforio, C., F. Rocca, and S. Tubaro. "Motion compensated image interpolation." IEEE Transactions on Communications 38, no. 2 (1990): 215–22. http://dx.doi.org/10.1109/26.47856.

20

Chung, Susana T. L., and Harold E. Bedell. "Congenital Nystagmus Image Motion." Optometry and Vision Science 74, no. 5 (May 1997): 266–72. http://dx.doi.org/10.1097/00006324-199705000-00021.

21

Rieger, J. H., and D. T. Lawton. "Processing differential image motion." Journal of the Optical Society of America A 2, no. 2 (February 1, 1985): 354. http://dx.doi.org/10.1364/josaa.2.000354.

22

Wu, Xue Feng, and Yu Fan. "A Research on the Optimization of Fuzzy Image." Applied Mechanics and Materials 409-410 (September 2013): 1593–96. http://dx.doi.org/10.4028/www.scientific.net/amm.409-410.1593.

Abstract:
A restoration algorithm for images blurred by motion is proposed, and a mathematical model of the motion-blur system is constructed. The point spread function of the motion blur is given. According to the characteristics of blurred images, the parameters of the point spread function are estimated, and three methods are introduced for image restoration: inverse filtering, Lucy-Richardson restoration, and Wiener restoration. The principles of the three methods are analyzed, and a restoration experiment on a motion-blurred image is carried out. The results show that the visibility of the image is improved and that the restoration is more stable.
23

Michelini, Mario, and Helmut Mayer. "EFFICIENT WIDE BASELINE STRUCTURE FROM MOTION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 99–106. http://dx.doi.org/10.5194/isprsannals-iii-3-99-2016.

Abstract:
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed formulating image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
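Ranking by Hamming distance over binarized descriptors, as in the image-similarity step above, can be sketched as follows. This uses a generic random-projection sign embedding, not the paper's learned one, and the descriptor and code sizes are made up:

```python
import numpy as np

def embed(descriptors, projection):
    """Project real-valued descriptors and binarize by sign,
    packing the bits into uint8 words for compact storage."""
    bits = (descriptors @ projection) > 0
    return np.packbits(bits, axis=1)

def hamming_rank(query_code, codes):
    """Indices of codes sorted by ascending Hamming distance."""
    xor = np.bitwise_xor(codes, query_code)
    dist = np.unpackbits(xor, axis=1).sum(axis=1)
    return np.argsort(dist, kind="stable")

rng = np.random.default_rng(5)
proj = rng.standard_normal((128, 64))              # 128-D -> 64-bit codes
db = rng.standard_normal((100, 128))               # descriptor database
codes = embed(db, proj)
query = db[42] + 0.01 * rng.standard_normal(128)   # near-duplicate of item 42
ranking = hamming_rank(embed(query[None, :], proj)[0], codes)
```

Packed codes make the distance a cheap XOR-and-popcount, which is what makes this ranking fast enough to prune candidate pairs before expensive wide-baseline matching.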
24

Michelini, Mario, and Helmut Mayer. "EFFICIENT WIDE BASELINE STRUCTURE FROM MOTION." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences III-3 (June 3, 2016): 99–106. http://dx.doi.org/10.5194/isprs-annals-iii-3-99-2016.

Abstract:
This paper presents a Structure from Motion approach for complex unorganized image sets. To achieve high accuracy and robustness, image triplets are employed and (an approximate) camera calibration is assumed to be known. The focus lies on a complete linking of images even in the case of large image distortions, e.g., caused by wide baselines, as well as weak baselines. A method for embedding image descriptors into Hamming space is proposed for fast image similarity ranking. The latter is employed to limit the number of pairs to be matched by a wide baseline method. An iterative graph-based approach is proposed formulating image linking as the search for a terminal Steiner minimum tree in a line graph. Finally, additional links are determined and employed to improve the accuracy of the pose estimation. By this means, loops in long image sequences are implicitly closed. The potential of the proposed approach is demonstrated by results for several complex image sets, also in comparison with VisualSFM.
25

Mansukhbhai, Shersiya Trupti. "Motion Blurred Image Restoration for Text Image." International Journal for Research in Applied Science and Engineering Technology 9, no. 5 (May 31, 2021): 110–15. http://dx.doi.org/10.22214/ijraset.2021.34053.

26

Ge, Shishun, and Chunhong Zhu. "Application of Motion Video Analysis System Based on SISA Neural Network in Sports Training." Advances in Multimedia 2022 (July 7, 2022): 1–11. http://dx.doi.org/10.1155/2022/3692902.

Abstract:
This paper studies motion video image classification and recognition: it extracts motion-target image features, designs the image classification process, and establishes a neural-network image classification model to complete image recognition. To handle different viewing angles of the same element, classification and recognition of motion video images under the neural network are completed using the error back-propagation algorithm. The performance of the proposed method is verified by simulation experiments. Experimental results show that the proposed method has a high recognition rate for moving video images, with an accuracy rate above 98%, and that the image recognition and classification are comprehensive. The proposed method can classify the elements in a motion video image, solving the problem that traditional methods cannot identify unclear images and have low recognition accuracy.
27

Senseman, David M., and Kay A. Robbins. "High-Speed VSD Imaging of Visually Evoked Cortical Waves: Decomposition Into Intra- and Intercortical Wave Motions." Journal of Neurophysiology 87, no. 3 (March 1, 2002): 1499–514. http://dx.doi.org/10.1152/jn.00475.2001.

Abstract:
In the pond turtle, Pseudemys scripta elegans, visually evoked cortical waves propagate at different velocities within the primary visual area compared with waves that pass into the secondary visual area. In an effort to separate intra- and intercortical wave motions, movies of visually evoked cortical waves recorded by high-speed voltage-sensitive dye (VSD) imaging were subjected to Karhunen-Loève (KL) decomposition. This procedure decomposes the VSD movies into a series of basis images that capture different spatial patterns of coherent activity. Most of the energy of the compound wave motion (>95%) was captured by the three largest basis images, M1,1, M1,2, and M2,1. Based on visual comparison with maps of wave front latency, KL basis image M1,2 appears to capture the spread of depolarization within the primary visual area, whereas KL basis image M2,1 appears to capture the spread of depolarization from the primary into the secondary visual area. The contribution of different basis images to the intra- and intercortical wave motions was tested by reconstructing the response using different combinations of KL basis images. Only KL basis images M1,1 and M1,2 were needed to reconstruct intracortical wave motion, while basis images M1,1 and M2,1 were needed to reconstruct intercortical wave motion. It was also found that the direction and speed of wave propagation could be deduced by visual inspection of the basis image projections onto the original data set. The relative advantage of KL decomposition for the analysis of complex wave motions captured by VSD imaging is discussed.
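Computationally, the KL decomposition used above is an SVD of the mean-centred frames-by-pixels matrix: the right singular vectors are the basis images and the projections give the time courses. A hypothetical numpy sketch on a synthetic two-pattern "movie" (not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "movie": T frames of H*W pixels built from two coherent
# spatial patterns plus noise, standing in for the VSD recordings.
T, H, W = 50, 16, 16
yy, xx = np.mgrid[0:H, 0:W]
pattern_a = np.sin(2 * np.pi * xx / W).ravel()
pattern_b = np.cos(2 * np.pi * yy / H).ravel()
t = np.arange(T)
movie = (np.outer(np.sin(t / 5.0), pattern_a)
         + np.outer(np.cos(t / 7.0), pattern_b)
         + 0.01 * rng.standard_normal((T, H * W)))

# KL decomposition: rows of Vt are the basis images; U*S gives the
# projections of each frame onto those basis images.
U, S, Vt = np.linalg.svd(movie - movie.mean(axis=0), full_matrices=False)
basis_images = Vt.reshape(-1, H, W)
energy = S ** 2 / np.sum(S ** 2)

# Reconstruction from the two largest basis images only.
rank2 = (U[:, :2] * S[:2]) @ Vt[:2]
```

With two coherent source patterns, nearly all the energy lands in the first two basis images, mirroring the paper's finding that a handful of basis images capture >95% of the compound wave motion.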
28

Iwamura, Kiyohiko, Jun Younes Louhi Kasahara, Alessandro Moro, Atsushi Yamashita, and Hajime Asama. "Image Captioning Using Motion-CNN with Object Detection." Sensors 21, no. 4 (February 10, 2021): 1270. http://dx.doi.org/10.3390/s21041270.

Abstract:
Automatic image captioning has many important applications, such as the depiction of visual contents for visually impaired people or the indexing of images on the internet. Recently, deep learning-based image captioning models have been researched extensively. For caption generation, they learn the relation between image features and words included in the captions. However, image features might not be relevant for certain words such as verbs. Therefore, our earlier reported method included the use of motion features along with image features for generating captions including verbs. However, all the motion features were used. Since not all motion features contributed positively to the captioning process, unnecessary motion features decreased the captioning accuracy. As described herein, we use experiments with motion features for thorough analysis of the reasons for the decline in accuracy. We propose a novel, end-to-end trainable method for image caption generation that alleviates the decreased accuracy of caption generation. Our proposed model was evaluated using three datasets: MSR-VTT2016-Image, MSCOCO, and several copyright-free images. Results demonstrate that our proposed method improves caption generation performance.
29

Yoshida, Nobukiyo, Hajime Kageyama, Hiroyuki Akai, Koichiro Yasaka, Haruto Sugawara, Yukinori Okada, and Akira Kunimatsu. "Motion correction in MR image for analysis of VSRAD using generative adversarial network." PLOS ONE 17, no. 9 (September 14, 2022): e0274576. http://dx.doi.org/10.1371/journal.pone.0274576.

Abstract:
Voxel-based specific region analysis systems for Alzheimer’s disease (VSRAD) are clinically used to measure the atrophied hippocampus captured by magnetic resonance imaging (MRI). However, motion artifacts during acquisition of images may distort the results of the analysis. This study aims to evaluate the usefulness of the Pix2Pix network in motion correction for the input image of VSRAD analysis. Seventy-three patients examined with MRI were divided into a training group (n = 51) and a test group (n = 22). To create artifact images, the k-space images were manipulated. Supervised deep learning was employed to obtain a Pix2Pix that generates motion-corrected images, with artifact images as the input data and original images as the reference data. The results of the VSRAD analysis (severity of voxel-of-interest (VOI) atrophy, extent of gray matter (GM) atrophy, and extent of VOI atrophy) were recorded for artifact images and motion-corrected images, and were then compared with the original images. For comparison, the image quality of the Pix2Pix-generated motion-corrected images was also compared with that of U-Net. The Bland-Altman analysis showed that the mean of the limits of agreement was smaller for the motion-corrected images compared to the artifact images, suggesting successful motion correction by the Pix2Pix. The Spearman’s rank correlation coefficients between original and motion-corrected images were almost perfect for all results (severity of VOI atrophy: 0.87–0.99, extent of GM atrophy: 0.88–0.98, extent of VOI atrophy: 0.90–1.00). Pix2Pix generated motion-corrected images that showed generally improved quantitative and qualitative image quality compared with the U-Net-generated motion-corrected images. Our findings suggest that motion correction using Pix2Pix is a useful method for VSRAD analysis.
APA, Harvard, Vancouver, ISO, and other styles
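As a toy illustration of the two statistics this abstract reports (not code from the study), the snippet below computes Spearman's rank correlation and Bland-Altman limits of agreement in plain Python; the atrophy scores are made-up values for demonstration only.

```python
def ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    d = [a - b for a, b in zip(x, y)]
    mean = sum(d) / len(d)
    sd = (sum((v - mean) ** 2 for v in d) / (len(d) - 1)) ** 0.5
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

original  = [1.2, 2.5, 0.8, 3.1, 1.9, 2.2]   # hypothetical VOI atrophy scores
corrected = [1.3, 2.4, 0.9, 3.0, 2.0, 2.1]

rho = spearman(original, corrected)
bias, lo, hi = bland_altman(original, corrected)
print(rho, bias)
```

With these invented scores the ranks agree perfectly (rho = 1.0) and the bias is essentially zero; the study's 0.87–1.00 range comes from real patient data.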
30

Chen, Chih Ho, and Sheng-Min Hsieh. "The Research of Aesthetics of Type as Image in Motion-Targeting Video Poetics at Type Motion: Type as Image in Motion Exhibition in Taiwan." Advances in Social Sciences Research Journal 7, no. 6 (June 22, 2020): 190–207. http://dx.doi.org/10.14738/assrj.76.8387.

Full text
Abstract:
Characters are signs and symbols that record our thoughts and feelings and allow the documentation of events and history. Later, the appearance of motion images marked a new milestone in the use and application of characters. Not only were the original functions of characters improved and enhanced, text that integrates sound and images is also able to communicate much more diverse and abundant information. This technique is commonly found in cinema, television, advertisement, and animation. Thanks to technological advances, the combination of characters, texts, or types and images once again changed how we read and created new meaning for our time. Today, the type image seems to have achieved an aesthetic autonomy of its own, with a profound impact on image and art creation and on human communication. The emergence of cinema art in the late 19th century brought motion into written media and greatly expanded the possibilities of art. In today’s world of instant communication media, text and images face unprecedented changes. Chinese characters are one of the most ancient writing systems in human history. Unlike Western alphabets, each Chinese character has its own form, sound, and meaning, making Chinese characters a highly figurative cultural element. This essay takes Chinese characters and the works featured in the concrete poetry/sound poetry and fragment poetry categories of the National Taiwan Museum of Fine Arts “Type Motion: Type as Image in Motion” exhibition as its subject of study to examine the history of text and media and changes in the way we deliver information and communicate. It also analyzes the relationship between text and motion image and the interdependency between culture, technology, and media. The connections and differences between Chinese characters across different times and spaces are also investigated to highlight the uniqueness of the characters as a medium, their application in motion writing techniques, and their aesthetic forms. This essay focuses on the following four topics: (1) artistic expression and styles related to the development of type as image in motion; (2) video poetics: the association between poetics and video images, poetic framework, and analysis of film poetry; (3) structure, format, characteristics, and presentation of meaning in concrete poetry/sound poetry and fragment poetry; and (4) how Chinese characters are used in Taiwan and the aesthetic features of type in the exhibited works.
APA, Harvard, Vancouver, ISO, and other styles
31

Abolhasannejad, Vahid, Xiaoming Huang, and Nader Namazi. "Developing an Optical Image-Based Method for Bridge Deformation Measurement Considering Camera Motion." Sensors 18, no. 9 (August 21, 2018): 2754. http://dx.doi.org/10.3390/s18092754.

Full text
Abstract:
Since camera vibration introduces errors into deformation estimation, it is necessary to remove global image motion before computing the real bridge deformation. In this study, a combination of an image motion correction algorithm and a 2D image-based deformation measurement technique was utilized to address the issue of camera motion during image data acquisition for bridge deformation measurement. Based on the proposed methodology, the image motion parameters were estimated by defining an effective sub-image in the image and using an Iterative Affine Motion Estimator. The estimated parameters were then applied to all pixels of each captured image to remove the motion effect. Finally, the corrected images were analyzed with a 2D image-based deformation measurement technique in order to extract and measure the real bridge deformation by tracking artificial or natural targets. The proposed methodology was validated by two experiments, in lab and field environments. The achieved results show the accuracy and reliability of the proposed methodology.
APA, Harvard, Vancouver, ISO, and other styles
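The correction step described above, mapping every pixel back through the inverse of an estimated affine global-motion model so that only true structural deformation remains, can be sketched as follows. This is our own minimal illustration, not the authors' code; the parameterization (x, y) → (a·x + b·y + tx, c·x + d·y + ty) and the numeric values are assumptions.

```python
import math

def invert_affine(a, b, c, d, tx, ty):
    """Inverse of the 2D affine map (x,y) -> (a*x + b*y + tx, c*x + d*y + ty)."""
    det = a * d - b * c
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    # inverse translation is -M^{-1} t
    itx = -(ia * tx + ib * ty)
    ity = -(ic * tx + id_ * ty)
    return ia, ib, ic, id_, itx, ity

def apply_affine(p, params):
    a, b, c, d, tx, ty = params
    x, y = p
    return a * x + b * y + tx, c * x + d * y + ty

# camera motion modelled as a 1-degree rotation plus a translation (made up)
th = math.radians(1.0)
cam = (math.cos(th), -math.sin(th), math.sin(th), math.cos(th), 2.0, -1.5)

target = (100.0, 50.0)                     # true target position
moved = apply_affine(target, cam)          # where camera motion puts it
restored = apply_affine(moved, invert_affine(*cam))
print(restored)
```

Applying the inverse map recovers the original coordinates; in the paper this is done for every pixel before the 2D deformation measurement runs on the corrected frames.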
32

Chan, Jacky C. P., and Edmond S. L. Ho. "Emotion Transfer for 3D Hand and Full Body Motion Using StarGAN." Computers 10, no. 3 (March 22, 2021): 38. http://dx.doi.org/10.3390/computers10030038.

Full text
Abstract:
In this paper, we propose a new data-driven framework for 3D hand and full-body motion emotion transfer. Specifically, we formulate the motion synthesis task as an image-to-image translation problem. By presenting a motion sequence as an image representation, the emotion can be transferred by our framework using StarGAN. To evaluate our proposed method’s effectiveness, we first conducted a user study to validate the perceived emotion from the captured and synthesized hand motions. We further evaluate the synthesized hand and full body motions qualitatively and quantitatively. Experimental results show that our synthesized motions are comparable to the captured motions and those created by an existing method in terms of naturalness and visual quality.
APA, Harvard, Vancouver, ISO, and other styles
33

Yoon, Chiho, Changyeop Lee, Keecheol Shin, and Chulhong Kim. "Motion Compensation for 3D Multispectral Handheld Photoacoustic Imaging." Biosensors 12, no. 12 (November 29, 2022): 1092. http://dx.doi.org/10.3390/bios12121092.

Full text
Abstract:
Three-dimensional (3D) handheld photoacoustic (PA) and ultrasound (US) imaging performed using mechanical scanning are more useful than conventional 2D PA/US imaging for obtaining local volumetric information and reducing operator dependence. In particular, 3D multispectral PA imaging can capture vital functional information, such as hemoglobin concentrations and hemoglobin oxygen saturation (sO2), of epidermal, hemorrhagic, ischemic, and cancerous diseases. However, the accuracy of PA morphology and physiological parameters is hampered by motion artifacts during image acquisition. The aim of this paper is to apply appropriate correction to remove the effect of such motion artifacts. We propose a new motion compensation method that corrects PA images in both axial and lateral directions based on structural US information. 3D PA/US imaging experiments are performed on a tissue-mimicking phantom and a human wrist to verify the effects of the proposed motion compensation mechanism and the consequent spectral unmixing results. The structural motions and sO2 values are confirmed to be successfully corrected by comparing the motion-compensated images with the original images. The proposed method is expected to be useful in various clinical PA imaging applications (e.g., breast cancer, thyroid cancer, and carotid artery disease) that are susceptible to motion contamination during multispectral PA image analysis.
APA, Harvard, Vancouver, ISO, and other styles
34

Haneishi, Hideaki, Masayuki Kanai, Yoshitaka Tamai, Atsushi Sakohira, and Kazuyoshi Suga. "Registration and Summation of Respiratory-Gated or Breath-Hold PET Images Based on Deformation Estimation of Lung from CT Image." Computational and Mathematical Methods in Medicine 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/9713280.

Full text
Abstract:
Lung motion due to respiration causes image degradation in medical imaging, especially in nuclear medicine, which requires long acquisition times. We have developed a method for correcting respiratory-gated (RG) PET images acquired in different respiration phases or breath-hold (BH) PET images acquired in inconsistent respiration phases. In the method, the RG or BH-PET images in different respiration phases are deformed under two criteria: similarity of the image intensity distribution and smoothness of the estimated motion vector field (MVF). However, these criteria alone may cause unnatural motion estimation of the lung. In this paper, assuming the use of a PET-CT scanner, we add another criterion: similarity to the motion direction estimated from inhalation and exhalation CT images. The proposed method was first applied to the numerical phantom XCAT with tumors and then applied to BH-PET image data for seven patients. The resultant tumor contrasts and the estimated motion vector fields were compared with those obtained by our previous method. Through these experiments we confirmed that the proposed method provides improved and more stable image quality for both RG and BH-PET images.
APA, Harvard, Vancouver, ISO, and other styles
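The first two criteria named in this abstract (intensity similarity after warping, plus smoothness of the motion vector field) can be illustrated with a toy 1D cost function. This is our own construction under assumed forms of the two terms (sum of squared differences and squared first differences of the MVF), not the authors' implementation.

```python
def cost(target, source, mvf, lam=1.0):
    """SSD between the warped source and the target, plus lam * MVF roughness."""
    n = len(target)
    warped = []
    for i in range(n):
        # nearest-neighbour warp: sample source at i + mvf[i], clamped to range
        j = min(max(int(round(i + mvf[i])), 0), n - 1)
        warped.append(source[j])
    ssd = sum((w - t) ** 2 for w, t in zip(warped, target))
    rough = sum((mvf[i + 1] - mvf[i]) ** 2 for i in range(n - 1))
    return ssd + lam * rough

source = [0, 0, 1, 4, 1, 0, 0, 0]   # a toy "tumour" intensity profile
target = [0, 0, 0, 1, 4, 1, 0, 0]   # the same profile shifted right by one

smooth_shift = [-1.0] * len(source)                       # matches true motion
ragged = [-1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]     # physically unnatural

print(cost(target, source, smooth_shift), cost(target, source, ragged))
```

The smooth field that matches the true one-pixel shift scores zero, while the ragged field is penalized by both terms; the paper's third criterion further constrains the field toward the CT-derived motion direction.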
35

Wang, Haiying. "Three-Dimensional Image Recognition of Athletes' Wrong Motions Based on Edge Detection." Journal Européen des Systèmes Automatisés 53, no. 5 (November 15, 2020): 733–38. http://dx.doi.org/10.18280/jesa.530516.

Full text
Abstract:
Traditional 3D visual motion amplitude tracking algorithms cannot acquire complete contour features, let alone correct wrong motions in sports training. To solve this problem, this paper designs a 3D visual image recognition method based on contourlet-domain edge detection and applies it to the recognition of athletes' wrong motions in sports training. Firstly, visual reconstruction and feature analysis of human motions were carried out, and the edge detection features were extracted with an edge detection algorithm. Then, a 3D visual motion amplitude tracking method was proposed based on improved inverse kinematics. The simulation results show that the proposed algorithm can effectively recognize 3D visual images of athlete motions and improve the correction and judgment of athlete motions.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Huihong, and Shiming Li. "Simulation of 3D Image Reconstruction in Rigid body Motion." MATEC Web of Conferences 232 (2018): 02002. http://dx.doi.org/10.1051/matecconf/201823202002.

Full text
Abstract:
3D image reconstruction under rigid body motion is affected by the motion itself and by visual displacement, which leads to low reconstruction quality and increased noise. To improve the quality of 3D image reconstruction under rigid body motion, this paper proposes a 3D image reconstruction technique based on corner detection and edge contour feature extraction. Region scanning and point scanning are combined to scan the image of the moving rigid body. The wavelet denoising method is used to reduce the noise of the 3D image, and the edge contour features of the image are extracted. The sparse edge pixel fusion method is used to decompose the features of the 3D image under rigid body motion, and the irregular triangulation method is used to extract and reconstruct the information features of the rigid body 3D images. The reconstructed feature points are accurately calibrated with the corner detection method to achieve effective reconstruction of the 3D images. The simulation results show that the method yields good reconstruction quality, a high SNR of the output image, and a high registration rate of the reconstructed feature points.
APA, Harvard, Vancouver, ISO, and other styles
37

Oktay, Tugrul, Harun Celik, and Ilke Turkmen. "Constrained control of helicopter vibration to reduce motion blur." Aircraft Engineering and Aerospace Technology 90, no. 9 (November 14, 2018): 1326–36. http://dx.doi.org/10.1108/aeat-02-2017-0068.

Full text
Abstract:
Purpose: The purpose of this paper is to examine the success of constrained control in reducing the motion blur that occurs as a result of helicopter vibration. Design/methodology/approach: Constrained controllers are designed to reduce the motion blur in images taken by helicopter. Helicopter vibrations under tight and soft constrained controllers are modeled and added to images to show the performance of the controllers in reducing blur. Findings: The blur caused by vibration can be reduced via constrained control of the helicopter. Research limitations/implications: The motion of the camera is modeled and assumed to be the same as the motion of the helicopter. In the model of image exposure, image noise is neglected, and blur is considered the only distorting effect on the image. Practical implications: Tighter constrained controllers can be implemented to take higher-quality images from helicopters. Social implications: Recently, aerial vehicles have been widely used for aerial photography. Images taken by helicopters mostly suffer from motion blur; reducing it allows users to take higher-quality images. Originality/value: Helicopter control is performed to reduce motion blur in images for the first time. A control-oriented, physics-based model of the helicopter is used. The helicopter vibration that causes motion blur is modeled as a blur kernel to show its effect on the captured images. Tight and soft constrained controllers are designed and compared to demonstrate their performance in reducing motion blur. It is proved that images taken by helicopter can be protected from motion blur by controlling the helicopter tightly.
APA, Harvard, Vancouver, ISO, and other styles
38

Fan, Chao, Tie Jun Yang, Hong Liang Fu, and Yi Tao Liang. "Research on the Measurement Method of the Sub-Pixel Image Motion Based on Optical Correlator." Advanced Materials Research 271-273 (July 2011): 7–12. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.7.

Full text
Abstract:
To measure the sub-pixel image motion of image sequences captured by a high-speed camera at a high frame rate with poor SNR, the joint transform correlator (JTC) is used. The relative image motion of two adjacent images can be measured by inputting them into the correlator. The principle of this method is described, and the effect of defocusing of the CCD in the JTC on the accuracy is analyzed. To study the measurement performance of the JTC concretely, an experimental platform was set up. Using different images with different image motions, the results showed that the JTC can fully measure sub-pixel image motion even for images with SNR = 1, and that the measurement error follows a normal distribution with zero mean and an RMS of no more than 0.12 pixels at a significance level of 0.05.
APA, Harvard, Vancouver, ISO, and other styles
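A JTC locates the correlation peak optically, but the final sub-pixel readout step can be sketched digitally. Below is our own toy example (not the paper's method): a 3-point parabolic fit refines the integer peak location of a sampled correlation function to sub-pixel precision.

```python
def subpixel_peak(corr):
    """Location of the maximum of `corr`, refined by parabolic interpolation."""
    i = max(range(len(corr)), key=lambda k: corr[k])
    if i == 0 or i == len(corr) - 1:
        return float(i)                  # cannot interpolate at the border
    ym, y0, yp = corr[i - 1], corr[i], corr[i + 1]
    denom = ym - 2 * y0 + yp
    if denom == 0:
        return float(i)                  # flat neighbourhood, keep integer peak
    # vertex of the parabola through the three samples
    return i + 0.5 * (ym - yp) / denom

# a synthetic correlation profile whose true peak sits at 5.3 "pixels"
corr = [-(k - 5.3) ** 2 for k in range(11)]
shift = subpixel_peak(corr)
print(shift)
```

For an exactly parabolic peak the fit is exact; for real correlation peaks it is an approximation, which is one reason errors at the 0.1-pixel level, like the 0.12-pixel RMS reported here, are typical.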
39

Mack, Arien, James Hill, and Steven Kahn. "Motion Aftereffects and Retinal Motion." Perception 18, no. 5 (October 1989): 649–55. http://dx.doi.org/10.1068/p180649.

Full text
Abstract:
Two experiments are described in which it was investigated whether the adaptation on which motion aftereffects (MAEs) are based is a response to retinal image motion alone or to the motion signal derived from the process which combines the image motion signal with information about eye movement (corollary discharge). In both experiments observers either fixated a stationary point or tracked a vertically moving point while a pattern (in experiment 1, a grating; in experiment 2, a random-dot pattern) drifted horizontally across the field. In the tracking condition the adapting retinal motion was oblique. In the fixation condition it was horizontal. In every case in both conditions the MAE was horizontal, in the direction opposite to that of pattern motion. These results are consistent with the hypothesis that the adaptation is a response to the motion signal derived from the comparison of eye and image motion rather than to retinal motion per se. An alternative explanation is discussed.
APA, Harvard, Vancouver, ISO, and other styles
40

Xie, Jianwen, Ruiqi Gao, Zilong Zheng, Song-Chun Zhu, and Ying Nian Wu. "Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 12442–51. http://dx.doi.org/10.1609/aaai.v34i07.6931.

Full text
Abstract:
Dynamic patterns are characterized by complex spatial and motion patterns. Understanding dynamic patterns requires a disentangled representational model that separates the factorial components. A commonly used model for dynamic patterns is the state space model, where the state evolves over time according to a transition model and generates the observed image frames according to an emission model. To model the motions explicitly, it is natural for the model to be based on the motions, or displacement fields, of the pixels. Thus, in the emission model, we let the hidden state generate the displacement field, which warps the trackable component in the previous image frame to generate the next frame, while adding a simultaneously emitted residual image to account for the change that cannot be explained by the deformation. The warping of the previous image accounts for the trackable part of the change between frames, while the residual image accounts for the intrackable part. We use a maximum likelihood algorithm to learn the model parameters that iterates between inferring the latent noise vectors that drive the transition model and updating the parameters given the inferred latent vectors. Meanwhile, we adopt a regularization term that penalizes the norms of the residual images, encouraging the model to explain the change of image frames by trackable motion. Unlike existing methods on dynamic patterns, we learn our model in an unsupervised setting, without ground truth displacement fields or optical flows. In addition, our model defines a notion of intrackability through the separation of the warped component and the residual component in each image frame. We show that our method can synthesize realistic dynamic patterns and disentangle appearance, trackable, and intrackable motions. The learned models can be useful for motion transfer, and it is natural to adopt them to define and measure the intrackability of a dynamic pattern.
APA, Harvard, Vancouver, ISO, and other styles
41

Xiao, Yifan, Peter Veelaert, and Wilfried Philips. "Deep HDR Deghosting by Motion-Attention Fusion Network." Sensors 22, no. 20 (October 16, 2022): 7853. http://dx.doi.org/10.3390/s22207853.

Full text
Abstract:
Multi-exposure image fusion (MEF) methods for high dynamic range (HDR) imaging suffer from ghosting artifacts when dealing with moving objects in dynamic scenes. The state-of-the-art methods use optical flow to align low dynamic range (LDR) images before merging, introducing distortion into the aligned LDR images from inaccurate motion estimation due to large motion and occlusion. In place of pre-alignment, attention-based methods calculate the correlation between the reference LDR image and non-reference LDR images, thus excluding misaligned regions in LDR images. Nevertheless, they also exclude the saturated details at the same time. Taking advantage of both the alignment and attention-based methods, we propose an efficient Deep HDR Deghosting Fusion Network (DDFNet) guided by optical flow and image correlation attentions. Specifically, the DDFNet estimates the optical flow of the LDR images by a motion estimation module and encodes that optical flow as a flow feature. Additionally, it extracts correlation features between the reference LDR and other non-reference LDR images. The optical flow and correlation features are employed to adaptably combine information from LDR inputs in an attention-based fusion module. Following the merging of features, a decoder composed of Dense Networks reconstructs the HDR image without ghosting. Experimental results indicate that the proposed DDFNet achieves state-of-the-art image fusion performance on different public datasets.
APA, Harvard, Vancouver, ISO, and other styles
42

Beintema, J. A., and M. Lappe. "Perception of biological motion without local image motion." Proceedings of the National Academy of Sciences 99, no. 8 (April 16, 2002): 5661–63. http://dx.doi.org/10.1073/pnas.082483699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Yue, Ronggang, Humei Wang, Ting Jin, Yuting Gao, Xiaofeng Sun, Tingfei Yan, Jie Zang, Ke Yin, and Shitao Wang. "Image Motion Measurement and Image Restoration System Based on an Inertial Reference Laser." Sensors 21, no. 10 (May 11, 2021): 3309. http://dx.doi.org/10.3390/s21103309.

Full text
Abstract:
Satellites have many high-, medium-, and low-frequency micro-vibration sources that cause optical-axis jitter of the optical load and subsequently degrade remote sensing image quality. To address this problem, this paper develops an image motion detection and restoration method based on an inertial reference laser, and describes its principle and key components. To verify the feasibility and performance of this method, we also built an image motion measurement and restoration system based on an inertial reference laser, which comprised a camera (including the inertial reference laser unit and a Hartmann wavefront sensor), an integrating sphere, a simulated image target, a parallel light pipe, a vibration isolation platform, a vibration generator, and a 6-degrees-of-freedom platform. The image restoration principle is also described. The background noise in the experimental environment was measured, and an image motion measurement accuracy experiment was performed. Verification experiments on image restoration were also conducted under various working conditions. The experimental results showed that the error of image motion detection based on the inertial reference laser was less than 0.12 pixels (root mean square). By using the image motion data to improve image quality, the modulation transfer function (MTF) of the restored image was increased to 1.61–1.88 times that of the original image. The image motion data can also be fed back to the fast steering mirror to compensate for satellite jitter in real time and to directly obtain high-quality images.
APA, Harvard, Vancouver, ISO, and other styles
44

Tian, Dapeng, Yutang Wang, Zhongshi Wang, Fuchao Wang, and Huijun Gao. "Long Integral Time Continuous Panorama Scanning Imaging Based on Bilateral Control with Image Motion Compensation." Remote Sensing 11, no. 16 (August 17, 2019): 1924. http://dx.doi.org/10.3390/rs11161924.

Full text
Abstract:
Urban remote sensing with moving carriers enables comprehensive monitoring of an urban area. High spatial resolution and wide coverage are always required to improve the performance and efficiency of remote sensing, and continuous scanning imaging is a feasible solution. However, image motion degrades the performance of a remote sensing system: rotating motion leads to the loss of key urban morphology information in panoramic imaging, and image translation results in blurry images. For high spatial resolution and high efficiency imaging under low illumination conditions, such as imaging at dusk, a long-focus lens and a long integral time must also be used, which makes the problem more severe. In this paper, a novel image motion compensation method is proposed to compensate for image rotation and image translation simultaneously. A quantitative description of image motion, including both image rotation and image translation, is first developed based on the principles of geometrical optics and then analyzed in detail through numerical simulations. Furthermore, a comprehensive image rotation compensation method is developed based on four-channel bilateral control with a sliding mode controller, while image translation compensation is performed according to the quantitative relationship between the motion of the scan mirror and the image translation compensator. The experimental results show that the proposed method provides effective compensation for image rotation and image translation, enabling the acquisition of high spatial resolution urban panoramic images.
APA, Harvard, Vancouver, ISO, and other styles
45

Liu, Shanshan, Qingbin Huang, and Minghui Wang. "Multi-Frame Blind Super-Resolution Based on Joint Motion Estimation and Blur Kernel Estimation." Applied Sciences 12, no. 20 (October 20, 2022): 10606. http://dx.doi.org/10.3390/app122010606.

Full text
Abstract:
Multi-frame super-resolution makes up for the deficiency of sensor hardware and significantly improves image resolution by using inter-frame and intra-frame image information. Inaccurate blur kernel estimation enlarges the distortion of the estimated high-resolution image; multi-frame blind super-resolution with an unknown blur kernel is therefore more challenging. To reduce the impact of inaccurate motion estimation and blur kernel estimation on the super-resolved image, we propose a novel method combining motion estimation, blur kernel estimation, and super-resolution. The confidence weights of the low-resolution images and the parameter values of the motion model obtained during image reconstruction are fed into the modified motion estimation and blur kernel estimation. At the same time, a Jacobian matrix, which can better describe the motion change, is introduced to further correct the error of motion estimation. The results of experiments on synthetic and real data demonstrate the superiority of the proposed method over others. The reconstructed high-resolution image effectively retains image details, and artifacts are greatly reduced.
APA, Harvard, Vancouver, ISO, and other styles
46

Song, Yue Hong, and Peng Fei Wang. "Frequency Characteristics Analysis of Simulated Horizontal and Vertical Motion Blurred Image." Applied Mechanics and Materials 200 (October 2012): 694–97. http://dx.doi.org/10.4028/www.scientific.net/amm.200.694.

Full text
Abstract:
Based on the degradation model of images blurred by uniform rectilinear motion, this paper simulates horizontal and vertical motion-blurred images. The Fourier transform method is employed to analyze the frequency characteristics of the blurred images. The simulation test results show that, ideally, the motion parameters, including distance and direction, can be obtained from the frequency characteristics of the blurred images. This can be of great use for image restoration in many blurring circumstances, such as camera and printing images.
APA, Harvard, Vancouver, ISO, and other styles
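The frequency signature this abstract exploits can be demonstrated numerically: the spectrum of a uniform linear motion blur of length L has periodic zeros, and the spacing of those zeros reveals L. The sketch below is our own toy 1D version with an assumed blur length, not the paper's simulation.

```python
import cmath

N, L = 64, 8                               # signal length and assumed blur length
psf = [1.0 / L] * L + [0.0] * (N - L)      # uniform linear motion blur kernel

def dft_mag(x):
    """Magnitude of the naive discrete Fourier transform of x."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

mag = dft_mag(psf)
# frequency bins where the blur transfer function (numerically) vanishes
zeros = [k for k in range(1, N // 2) if mag[k] < 1e-9]
estimated_L = N // zeros[0]                # zero spacing is N/L bins
print(zeros, estimated_L)
```

The zeros fall at multiples of N/L, so the first zero alone recovers the blur distance; in 2D, the orientation of the dark stripes in the spectrum gives the motion direction as well.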
47

Wang, Guanglei, Pengyu Wang, Yan Li, Tianqi Su, Xiuling Liu, and Hongrui Wang. "A Motion Artifact Reduction Method in Cerebrovascular DSA Sequence Images." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 08 (April 8, 2018): 1854022. http://dx.doi.org/10.1142/s0218001418540228.

Full text
Abstract:
Digital Subtraction Angiography (DSA) can be used for diagnosing pathologies of the vascular system, including systemic vascular disease, coronary heart disease, arrhythmia, valvular disease, and congenital heart disease. Previous studies have provided image enhancement algorithms for DSA images; however, these are not suitable for automated processing of huge amounts of data, and few algorithms address the corruption of image contrast after artifact removal. In this paper, we propose a fully automatic method for artifact removal in cerebrovascular DSA sequence images based on rigid registration and a guided filter. The guided filtering method is applied to fuse the original DSA image and the registered DSA image; the result preserves clear vessel boundaries from the original DSA image and removes the artifacts introduced by the registration procedure. An experimental evaluation with 40 DSA sequence images shows that the proposed method increases the contrast index by 24.1%, improving the quality of DSA images compared with other image enhancement methods, and that it can be implemented as a fully automatic procedure.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Yixiong, Mingliang Tao, Kewei Yang, and Zhenmiao Deng. "Video Superresolution Reconstruction Using Iterative Back Projection with Critical-Point Filters Based Image Matching." Advances in Multimedia 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/285969.

Full text
Abstract:
To improve the spatial resolution of reconstructed images/videos, this paper proposes a super-resolution (SR) reconstruction algorithm based on iterative back projection. In the proposed algorithm, image matching using critical-point filters (CPF) is employed to improve the accuracy of image registration. First, a sliding window is used to segment the video sequence. CPF-based image matching is then performed between frames in the window to obtain pixel-level motion fields. Finally, high-resolution (HR) frames are reconstructed from the motion fields using the iterative back projection (IBP) algorithm. The CPF-based registration algorithm can adapt to various types of motion in real video scenes. Experimental results demonstrate that, compared to optical flow based image matching with the IBP algorithm, the proposed algorithm obtains a subjective quality improvement and an average PSNR gain of 0.53 dB when applied to video sequences.
APA, Harvard, Vancouver, ISO, and other styles
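The iterative back projection loop named in this abstract can be sketched in 1D: simulate the low-resolution observation from the current high-resolution guess, back-project the error, and repeat. This is our own minimal toy (single frame, 2x averaging as the assumed downsampling model, nearest-neighbour upsampling), not the paper's CPF-based pipeline.

```python
def downsample(hr):
    """Known degradation model: 2x decimation by pairwise averaging."""
    return [(hr[2 * i] + hr[2 * i + 1]) / 2 for i in range(len(hr) // 2)]

def upsample(lr):
    """Nearest-neighbour 2x upsampling (also used to back-project errors)."""
    out = []
    for v in lr:
        out += [v, v]
    return out

def ibp(lr, iters=50, beta=0.5):
    hr = [0.0] * (2 * len(lr))                 # initial HR guess
    for _ in range(iters):
        err = [a - b for a, b in zip(lr, downsample(hr))]
        hr = [h + beta * e for h, e in zip(hr, upsample(err))]
    return hr

truth = [1.0, 3.0, 2.0, 6.0]
lr = downsample(truth)                          # observed low-res frame
hr = ibp(lr)
print(lr, [round(v, 6) for v in hr])
```

The loop converges to an HR estimate that reproduces the observation exactly; with a single frame the answer is not unique, which is why the paper registers multiple frames via CPF matching before back-projecting.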
49

Barbieux, Kévin, Olivier Hautecoeur, Maurizio De Bartolomei, Manuel Carranza, and Régis Borde. "The Sentinel-3 SLSTR Atmospheric Motion Vectors Product at EUMETSAT." Remote Sensing 13, no. 9 (April 28, 2021): 1702. http://dx.doi.org/10.3390/rs13091702.

Full text
Abstract:
Atmospheric Motion Vectors (AMVs) are an important input to many Numerical Weather Prediction (NWP) models. EUMETSAT derives AMVs from several of its orbiting satellites, including the geostationary satellites (Meteosat), and its Low-Earth Orbit (LEO) satellites. The algorithm extracting the AMVs uses pairs or triplets of images, and tracks the motion of clouds or water vapour features from one image to another. Currently, EUMETSAT LEO satellite AMVs are retrieved from georeferenced images from the Advanced Very-High-Resolution Radiometer (AVHRR) on board the Metop satellites. EUMETSAT is currently preparing the operational release of an AMV product from the Sea and Land Surface Temperature Radiometer (SLSTR) on board the Sentinel-3 satellites. The main innovation in the processing, compared with AVHRR AMVs, lies in the co-registration of pairs of images: the images are first projected on an equal-area grid, before applying the AMV extraction algorithm. This approach has multiple advantages. First, individual pixels represent areas of equal sizes, which is crucial to ensure that the tracking is consistent throughout the processed image, and from one image to another. Second, this allows features that would otherwise leave the frame of the reference image to be tracked, thereby allowing more AMVs to be derived. Third, the same framework could be used for every LEO satellite, allowing an overall consistency of EUMETSAT AMV products. In this work, we present the results of this method for SLSTR by comparing the AMVs to the forecast model. We validate our results against AMVs currently derived from AVHRR and the Spinning Enhanced Visible and InfraRed Imager (SEVIRI). The release of the operational SLSTR AMV product is expected in 2022.
APA, Harvard, Vancouver, ISO, and other styles
50

Quan, Qian Qian. "A Motion Blurred Image Restoration Method Based on Wiener Filtering." Advanced Materials Research 403-408 (November 2011): 1664–67. http://dx.doi.org/10.4028/www.scientific.net/amr.403-408.1664.

Full text
Abstract:
To address the deficiencies of traditional methods for handling motion-blurred images, a motion-blurred image restoration method based on Wiener filtering is studied in this paper. The formation factors of motion-blurred images and the imaging process are analyzed, and a motion blur degradation model is established. The working principle of Wiener filtering is introduced, and the steps of blurred image restoration are described in detail. Experimental testing and data analysis are also conducted. Experimental results show that the method achieves good performance.
APA, Harvard, Vancouver, ISO, and other styles
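The restoration principle this abstract describes can be sketched in 1D: blur a signal with a uniform motion kernel in the frequency domain, then apply the Wiener filter G = H* / (|H|^2 + K), where K stands in for the noise-to-signal power ratio. Everything below (the naive DFT, the point-target signal, the constant K) is our own toy illustration, not the paper's implementation.

```python
import cmath

def dft(x, sign=-1):
    """Naive DFT; sign=-1 is the forward transform, sign=+1 the inverse."""
    n = len(x)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return out if sign == -1 else [v / n for v in out]

N, L = 16, 4
signal = [0.0] * N
signal[5] = 1.0                                # a bright point target
psf = [1.0 / L] * L + [0.0] * (N - L)          # uniform motion blur of length L

# degrade: circular convolution, done in the frequency domain
S, H = dft(signal), dft(psf)
blurred_f = [s * h for s, h in zip(S, H)]

# Wiener restoration: G = conj(H) / (|H|^2 + K)
K = 1e-4                                       # assumed noise-to-signal ratio
restored_f = [bf * h.conjugate() / (abs(h) ** 2 + K)
              for bf, h in zip(blurred_f, H)]
restored = [v.real for v in dft(restored_f, sign=+1)]

peak = max(range(N), key=lambda i: restored[i])
print(peak, round(restored[peak], 3))
```

Unlike a plain inverse filter, the K term keeps the filter finite at the zeros of H, which is exactly where inverse filtering of motion blur blows up; the point target reappears at its original position even though those frequencies are lost.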