Academic literature on the topic 'Sliding-Window fMR image reconstruction'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sliding-Window fMR image reconstruction.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Sliding-Window fMR image reconstruction":

1. Zhang, Yixiong, Mingliang Tao, Kewei Yang, and Zhenmiao Deng. "Video Superresolution Reconstruction Using Iterative Back Projection with Critical-Point Filters Based Image Matching." Advances in Multimedia 2015 (2015): 1–10. http://dx.doi.org/10.1155/2015/285969.

Abstract:
To improve the spatial resolution of reconstructed images/videos, this paper proposes a Superresolution (SR) reconstruction algorithm based on iterative back projection. In the proposed algorithm, image matching using critical-point filters (CPF) is employed to improve the accuracy of image registration. First, a sliding window is used to segment the video sequence. CPF based image matching is then performed between frames in the window to obtain pixel-level motion fields. Finally, high-resolution (HR) frames are reconstructed based on the motion fields using the iterative back projection (IBP) algorithm. The CPF based registration algorithm can adapt to various types of motions in real video scenes. Experimental results demonstrate that, compared to optical flow based image matching with the IBP algorithm, a subjective quality improvement and an average PSNR gain of 0.53 dB are obtained by the proposed algorithm when applied to video sequences.
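
As a rough illustration of the two ingredients this abstract combines (sliding-window segmentation of the video and iterative back projection), here is a minimal Python sketch. It assumes the frames inside a window are already registered, standing in for the CPF-based matching, and uses a simple resampling operator in place of the paper's imaging model; the window size and iteration count are arbitrary.

```python
import numpy as np
from scipy.ndimage import zoom

def sliding_windows(frames, size=5, step=1):
    """Yield overlapping windows of consecutive video frames."""
    for start in range(0, len(frames) - size + 1, step):
        yield frames[start:start + size]

def ibp_upscale(lr_frames, scale=2, iters=10):
    """Toy iterative back projection for one window of already-registered LR frames."""
    hr = zoom(lr_frames[0].astype(float), scale, order=3)        # initial HR estimate
    for _ in range(iters):
        for lr in lr_frames:
            simulated = zoom(hr, 1.0 / scale, order=3)           # simulate the LR observation
            residual = lr.astype(float) - simulated              # observation error
            hr += zoom(residual, scale, order=3) / len(lr_frames)  # back-project the residual
    return hr

frames = [np.random.rand(32, 32) for _ in range(8)]              # stand-in video sequence
hr_frames = [ibp_upscale(window) for window in sliding_windows(frames)]
```
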
2. Xu, Zhongbiao, Huihui Ye, Mengye Lyu, Hongjian He, Jianhui Zhong, Yingjie Mei, Zhifeng Chen, et al. "Rigid motion correction for magnetic resonance fingerprinting with sliding-window reconstruction and image registration." Magnetic Resonance Imaging 57 (April 2019): 303–12. http://dx.doi.org/10.1016/j.mri.2018.11.001.

3. Kim, Jae-In, Chang-Uk Hyun, Hyangsun Han, and Hyun-cheol Kim. "Evaluation of Matching Costs for High-Quality Sea-Ice Surface Reconstruction from Aerial Images." Remote Sensing 11, no. 9 (May 4, 2019): 1055. http://dx.doi.org/10.3390/rs11091055.

Abstract:
Satellite remote sensing can be used effectively with a wide coverage and repeatability in large-scale Arctic sea-ice analysis. To produce reliable sea-ice information, satellite remote-sensing methods should be established and validated using accurate field data, but obtaining field data on Arctic sea-ice is very difficult due to limited accessibility. In this situation, digital surface models derived from aerial images can be a good alternative to topographical field data. However, to achieve this, we should discuss an additional issue, i.e., that low-textured surfaces on sea-ice can reduce the matching accuracy of aerial images. The matching performance is dependent on the matching cost and search window size used. Therefore, in order to generate high-quality sea-ice surface models, we first need to examine the influence of matching costs and search window sizes on the matching performance on low-textured sea-ice surfaces. For this reason, in this study, we evaluate the performance of matching costs in relation to changes of the search window size, using acquired aerial images of Arctic sea-ice. The evaluation concerns three factors. The first is the robustness of matching to low-textured surfaces. Matching costs for generating sea-ice surface models should have a high discriminatory power on low-textured surfaces, even with small search windows. To evaluate this, we analyze the accuracy, uncertainty, and optimal window size in terms of template matching. The second is the robustness of positioning to low-textured surfaces. One of the purposes of image matching is to determine the positions of object points that constitute digital surface models. From this point of view, we analyze the accuracy and uncertainty in terms of positioning object points. The last is the processing speed. Since the computation complexity is also an important performance indicator, we analyze the elapsed time for each of the processing steps. The evaluation results showed that the image domain costs were more effective for low-textured surfaces than the frequency domain costs. In terms of matching robustness, the image domain costs showed a better performance, even with smaller search windows. In terms of positioning robustness, the image domain costs also performed better because of the lower uncertainty. Lastly, in terms of processing speed, the PC (phase correlation) of the frequency domain showed the best performance, but the image domain costs, except MI (mutual information), were not far behind. From the evaluation results, we concluded that, among the compared matching costs, ZNCC (zero-mean normalized cross-correlation) is the most effective for sea-ice surface model generation. In addition, we found that it is necessary to adjust search window sizes properly, according to the number of textures required for reliable image matching on sea-ice surfaces, and that various uncertainties due to low-textured surfaces should be considered to determine the positions of object points.
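
For readers unfamiliar with the matching cost the authors end up recommending, here is a small, self-contained sketch of ZNCC template matching over a square search window; the template and search sizes are arbitrary, and none of the sea-ice processing chain is reproduced.

```python
import numpy as np

def zncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-sized patches."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match(template, image, center, search_radius=8):
    """Scan a square search window around `center` and return the best ZNCC offset."""
    h, w = template.shape
    cy, cx = center
    best_score, best_offset = -np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0:
                continue
            patch = image[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue
            score = zncc(patch, template)
            if score > best_score:
                best_score, best_offset = score, (dy, dx)
    return best_offset, best_score
```
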
4. Chen, Yanqiao, Yangyang Li, Licheng Jiao, Cheng Peng, Xiangrong Zhang, and Ronghua Shang. "Adversarial Reconstruction-Classification Networks for PolSAR Image Classification." Remote Sensing 11, no. 4 (February 18, 2019): 415. http://dx.doi.org/10.3390/rs11040415.

Abstract:
Polarimetric synthetic aperture radar (PolSAR) image classification has become more and more widely used in recent years. It is well known that PolSAR image classification is a dense prediction problem. The recently proposed fully convolutional networks (FCN) model, which is very good at dealing with the dense prediction problem, has great potential in resolving the task of PolSAR image classification. Nevertheless, for FCN, there are some problems to solve in PolSAR image classification. Fortunately, Li et al. proposed the sliding window fully convolutional networks (SFCN) model to tackle the problems of FCN in PolSAR image classification. However, only when the labeled training sample is sufficient, can SFCN achieve good classification results. To address the above mentioned problem, we propose adversarial reconstruction-classification networks (ARCN), which is based on SFCN and introduces reconstruction-classification networks (RCN) and adversarial training. The merit of our method is threefold: (i) A single composite representation that encodes information for supervised image classification and unsupervised image reconstruction can be constructed; (ii) By introducing adversarial training, the higher-order inconsistencies between the true image and reconstructed image can be detected and revised. Our method can achieve impressive performance in PolSAR image classification with fewer labeled training samples. We have validated its performance by comparing it against several state-of-the-art methods. Experimental results obtained by classifying three PolSAR images demonstrate the efficiency of the proposed method.
5. Feng, Lulu, Dongdong Weng, and Bin Liang. "A Deep Facial BRDF Estimation Method Based on Image Translation." Journal of Physics: Conference Series 2363, no. 1 (November 1, 2022): 012011. http://dx.doi.org/10.1088/1742-6596/2363/1/012011.

Abstract:
The reconstruction of photorealistic 3D face geometry, textures and reflectance (BRDF) is one of the most popular fields in computer vision, graphics and machine learning. However, the acquisition of facial reflectance remains a challenge. In this article, we propose a method for estimating the facial reflection properties of a single portrait image based on image translation. From an RGB face image, we obtain the BRDF with a large amount of detail. To achieve this, we perform reverse engineering, rendering face images with the obtained texture map to form training data pairs based on the Blinn-Phong illumination model. We also apply random rotate-and-crop and sliding-window-crop to augment the data, and optimize the network weights by minimizing the generated adversarial loss and reconstruction loss. As demonstrated in a chain of quantitative and qualitative experiments, our method achieves superior performance compared to the state-of-the-art methods.
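
The two augmentations named in the abstract (random rotate-and-crop and sliding-window-crop) are easy to picture with a short sketch; the crop size, stride, and the restriction to 90-degree rotations below are simplifying assumptions, not the authors' settings.

```python
import numpy as np

def sliding_window_crops(image, crop=256, stride=128):
    """Yield overlapping crops by sliding a fixed-size window across the image."""
    h, w = image.shape[:2]
    for y in range(0, h - crop + 1, stride):
        for x in range(0, w - crop + 1, stride):
            yield image[y:y + crop, x:x + crop]

def random_rotate_and_crop(image, crop=256, rng=np.random.default_rng()):
    """Rotate by a random multiple of 90 degrees, then take a random crop.

    (Assumes the image is at least crop x crop; an arbitrary-angle rotation would
    need an interpolating rotate instead of np.rot90.)
    """
    image = np.rot90(image, k=int(rng.integers(4)))
    h, w = image.shape[:2]
    y = int(rng.integers(h - crop + 1))
    x = int(rng.integers(w - crop + 1))
    return image[y:y + crop, x:x + crop]
```
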
6. Lee, Mun Bae, Geon-Ho Jahng, Hyung Joong Kim, and Oh-In Kwon. "High-frequency conductivity at Larmor-frequency in human brain using moving local window multilayer perceptron neural network." PLOS ONE 16, no. 5 (May 20, 2021): e0251417. http://dx.doi.org/10.1371/journal.pone.0251417.

Abstract:
Magnetic resonance electrical properties tomography (MREPT) aims to visualize the internal high-frequency conductivity distribution at Larmor frequency using the B1 transceive phase data. From the magnetic field perturbation by the electrical field associated with the radiofrequency (RF) magnetic field, the high-frequency conductivity and permittivity distributions inside the human brain have been reconstructed based on Maxwell's equations. Starting from Maxwell's equations, the complex permittivity can be described as a second order elliptic partial differential equation. The established reconstruction algorithms have focused on simplifying and/or regularizing the elliptic partial differential equation to reduce the noise artifact. Using the nonlinear relationship between Maxwell's equations, the measured magnetic field, and the conductivity distribution, we design a deep learning model to visualize the high-frequency conductivity in the brain, directly derived from the measured magnetic flux density. The designed moving local window multi-layer perceptron (MLW-MLP) neural network, by sliding a local window consisting of neighboring voxels around each voxel, predicts the high-frequency conductivity distribution in each local window. The designed MLW-MLP uses a family of multiple groups, consisting of the gradients and Laplacian of measured B1 phase data, as the input layer in a local window. The output layer of MLW-MLP returns the conductivity values in each local window. By taking a non-local mean filtering approach in the local window, we reconstruct a noise-suppressed conductivity image while maintaining spatial resolution. To verify the proposed method, we used B1 phase datasets acquired from eight human subjects (five subjects for the training procedure and three subjects for predicting the conductivity in the brain).
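
To make the moving-local-window idea concrete, here is a toy sketch that builds one feature vector per voxel from its immediate neighborhood and regresses the target value with an off-the-shelf MLP; the window radius, the raw-intensity features, and the network size are placeholders rather than the authors' MLW-MLP design (which uses gradients and Laplacians of the B1 phase).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def local_window_features(volume, radius=1):
    """Stack each voxel's (2r+1)^3 neighborhood into one feature row (edges skipped)."""
    r = radius
    feats, coords = [], []
    for z in range(r, volume.shape[0] - r):
        for y in range(r, volume.shape[1] - r):
            for x in range(r, volume.shape[2] - r):
                window = volume[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                feats.append(window.ravel())
                coords.append((z, y, x))
    return np.array(feats), coords

# Hypothetical data: a phase-derived volume and a target conductivity map.
phase = np.random.rand(16, 16, 16)
sigma = np.random.rand(16, 16, 16)

X, coords = local_window_features(phase)
y = np.array([sigma[c] for c in coords])

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
mlp.fit(X, y)                      # one prediction per local window
sigma_pred = mlp.predict(X)
```
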
7. Serag, Ahmed, Gillian Macnaught, Fiona C. Denison, Rebecca M. Reynolds, Scott I. Semple, and James P. Boardman. "Histograms of Oriented 3D Gradients for Fully Automated Fetal Brain Localization and Robust Motion Correction in 3 T Magnetic Resonance Images." BioMed Research International 2017 (2017): 1–8. http://dx.doi.org/10.1155/2017/3956363.

Abstract:
Fetal brain magnetic resonance imaging (MRI) is a rapidly emerging diagnostic imaging tool. However, automated fetal brain localization is one of the biggest obstacles in expediting and fully automating large-scale fetal MRI processing. We propose a method for automatic localization of fetal brain in 3 T MRI when the images are acquired as a stack of 2D slices that are misaligned due to fetal motion. First, the Histogram of Oriented Gradients (HOG) feature descriptor is extended from 2D to 3D images. Then, a sliding window is used to assign a score to all possible windows in an image, depending on the likelihood of it containing a brain, and the window with the highest score is selected. In our evaluation experiments using a leave-one-out cross-validation strategy, we achieved 96% of complete brain localization using a database of 104 MRI scans at gestational ages between 34 and 38 weeks. We carried out comparisons against template matching and random forest based regression methods and the proposed method showed superior performance. We also showed the application of the proposed method in the optimization of fetal motion correction and how it is essential for the reconstruction process. The method is robust and does not rely on any prior knowledge of fetal brain development.
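
The exhaustive sliding-window search described here is simple to sketch; the scoring function below is only a placeholder for the 3D HOG descriptor plus classifier, and the window and step sizes are invented for illustration.

```python
import numpy as np

def score_window(window):
    """Placeholder for the 3D HOG descriptor + classifier score."""
    return float(window.mean())    # hypothetical score: brighter regions win

def localize(volume, win=(32, 32, 8), step=(8, 8, 2)):
    """Slide a 3D window over the volume and keep the highest-scoring position."""
    best_score, best_pos = -np.inf, None
    for z in range(0, volume.shape[0] - win[0] + 1, step[0]):
        for y in range(0, volume.shape[1] - win[1] + 1, step[1]):
            for x in range(0, volume.shape[2] - win[2] + 1, step[2]):
                s = score_window(volume[z:z + win[0], y:y + win[1], x:x + win[2]])
                if s > best_score:
                    best_score, best_pos = s, (z, y, x)
    return best_pos, best_score

stack = np.random.rand(128, 128, 24)          # stand-in stack of 2D slices
print(localize(stack))
```
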
8. Bu, Dongdong, Shuxiang Guo, Jin Guo, He Li, and Hanze Wang. "Low-Density sEMG-Based Pattern Recognition of Unrelated Movements Rejection for Wrist Joint Rehabilitation." Micromachines 14, no. 3 (February 27, 2023): 555. http://dx.doi.org/10.3390/mi14030555.

Abstract:
sEMG-based pattern recognition commonly assumes a limited number of target categories, and the classifiers often predict each target category depending on probability. In wrist rehabilitation training, the patients may make movements that do not belong to the target category unconsciously. However, most pattern recognition methods can only identify limited patterns and are prone to be disturbed by abnormal movement, especially for wrist joint movements. To address the above problem, a sEMG-based rejection method for unrelated movements is proposed to identify wrist joint unrelated movements using center loss. In this paper, the sEMG signal collected by the Myo armband is used as the input of the sEMG control method. First, the sEMG signal is processed by a sliding signal window and image coding. Then, the CNN with center loss and softmax loss is used to describe the spatial information from the sEMG image to extract discriminative features and perform target movement recognition. Finally, the deep spatial information is used to train the AE to reject unrelated movements based on the reconstruction loss. The results show that the proposed method can realize target movement recognition and reject unrelated movements with an F-score of 93.4% and a rejection accuracy of 95% when the recall is 0.9, which reveals the effectiveness of the proposed method.
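
A minimal sketch of the front end described in the abstract, i.e. segmenting the sEMG stream with a sliding signal window and coding each window as an image; the window length, stride, and normalization used for image coding are assumptions, and the CNN/center-loss/AE stages are not shown.

```python
import numpy as np

def sliding_signal_windows(emg, win_len=200, stride=50):
    """Segment a (samples, channels) sEMG recording into overlapping windows."""
    n = emg.shape[0]
    return [emg[s:s + win_len] for s in range(0, n - win_len + 1, stride)]

def to_image(window):
    """Simple 'image coding': min-max normalize a window into an 8-bit image."""
    w = window - window.min()
    scale = w.max() if w.max() > 0 else 1.0
    return (255 * w / scale).astype(np.uint8)      # shape: (win_len, channels)

emg = np.random.randn(2000, 8)                     # hypothetical 8-channel recording
images = [to_image(w) for w in sliding_signal_windows(emg)]
```
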
9. Khan, S., and P. K. Gupta. "COMPARITIVE STUDY OF TREE COUNTING ALGORITHMS IN DENSE AND SPARSE VEGETATIVE REGIONS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-5 (November 19, 2018): 801–8. http://dx.doi.org/10.5194/isprs-archives-xlii-5-801-2018.

Abstract:
Tree counting can be a challenging and time consuming task, especially if done manually. This study proposes and compares three different approaches for automatic detection and counting of trees in different vegetative regions. The first approach marks extended minima and extended maxima along with morphological reconstruction operations on an image for delineation and tree crown segmentation. To separate two touching crowns, a marker controlled watershed algorithm is used. The second approach uses color segmentation for tree identification, starting with the conversion of an RGB image to HSV color space, then filtering, enhancing and thresholding to isolate trees from non-tree elements, followed by the watershed algorithm to separate touching tree crowns. The third approach involves a deep learning method for classification of tree and non-tree, using approximately 2268 positive and 1172 negative samples. Each segment of an image is then classified, and a sliding window algorithm is used to locate each tree crown. Experimentation shows that the first approach is well suited for classification of trees in dense vegetation, whereas the second approach is more suitable for detecting trees in sparse vegetation. Deep learning classification accuracy lies in between these two approaches and gave an accuracy of 92% on validation data. The study shows that deep learning can be used as a quick and effective tool to ascertain the count of trees from airborne optical imagery.
10. Zou, Yajing, Amr Eldemiry, Yaxin Li, and Wu Chen. "Robust RGB-D SLAM Using Point and Line Features for Low Textured Scene." Sensors 20, no. 17 (September 2, 2020): 4984. http://dx.doi.org/10.3390/s20174984.

Abstract:
Three-dimensional (3D) reconstruction using RGB-D camera with simultaneous color image and depth information is attractive as it can significantly reduce the cost of equipment and time for data collection. Point feature is commonly used for aligning two RGB-D frames. Due to lacking reliable point features, RGB-D simultaneous localization and mapping (SLAM) is easy to fail in low textured scenes. To overcome the problem, this paper proposes a robust RGB-D SLAM system fusing both points and lines, because lines can provide robust geometry constraints when points are insufficient. To comprehensively fuse line constraints, we combine 2D and 3D line reprojection error with point reprojection error in a novel cost function. To solve the cost function and filter out wrong feature matches, we build a robust pose solver using the Gauss–Newton method and Chi-Square test. To correct the drift of camera poses, we maintain a sliding-window framework to update the keyframe poses and related features. We evaluate the proposed system on both public datasets and real-world experiments. It is demonstrated that it is comparable to or better than state-of-the-art methods in consideration with both accuracy and robustness.
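
The sliding-window framework mentioned for drift correction boils down to keeping only the most recent keyframes in the optimization; a bare-bones sketch of that bookkeeping is shown below (the window size and stored attributes are illustrative, and the actual marginalization and solving are omitted).

```python
from collections import deque

class SlidingWindow:
    """Keep only the most recent N keyframes; older ones are dropped (marginalized)."""

    def __init__(self, size=10):
        self.size = size
        self.keyframes = deque()

    def add_keyframe(self, pose, points, lines):
        if len(self.keyframes) == self.size:
            self.keyframes.popleft()               # marginalize the oldest keyframe
        self.keyframes.append({"pose": pose, "points": points, "lines": lines})

    def states_to_optimize(self):
        """Poses currently inside the window, i.e. what the solver would update."""
        return [kf["pose"] for kf in self.keyframes]
```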

Dissertations / Theses on the topic "Sliding-Window fMR image reconstruction":

1. Amor, Zaineb. "Non-Cartesian Sparkling encoding for High spatio-temporal resolution functional Magnetic Resonance Imaging (fMRI) at 7 Tesla and beyond." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPAST032.

Abstract:
Functional MRI (fMRI) is currently one of the most commonly used functional neuroimaging techniques to probe brain activity non-invasively through the blood oxygen level-dependent (BOLD) contrast that reflects neurovascular coupling. It offers an interesting trade-off between spatial and temporal resolution in order to study the whole brain as an aggregation of intrinsic functional systems. The quest for higher spatial and/or temporal resolution in fMRI while preserving a sufficient temporal signal-to-noise ratio (tSNR) has generated a tremendous amount of methodological contributions in the last decade, ranging from Cartesian vs. non-Cartesian readouts, 2D vs. 3D acquisition strategies, parallel imaging and/or compressed sensing (CS) accelerations and simultaneous multi-slice acquisitions, to cite a few. In this work, we focus on the use of CS in fMRI; more specifically, we consider the Spreading Projection Algorithm for Rapid K-space sampLING (SPARKLING) encoding scheme. The main goal of this thesis is the evaluation of 3D-SPARKLING as a viable acquisition scheme for high-resolution whole-brain fMRI. In this regard, we initially compared its capabilities with the state-of-the-art 3D-EPI. After observing higher sensitivity to static and dynamic magnetic field imperfections in 3D-SPARKLING data, we established an experimental protocol to correct them. Finally, we studied the capabilities and limitations of employing a sliding-window reconstruction in combination with the SPARKLING encoding scheme to retrospectively enhance temporal resolution during image reconstruction in fMRI. A simulation study in which the ground truth is controlled was conducted and demonstrated the possibility of detecting high-frequency oscillations in the BOLD signal and separating physiological noise from neural activity.
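
The retrospective gain in temporal resolution from a sliding-window reconstruction comes from regrouping the same stream of k-space shots into overlapping windows, one reconstructed volume per window. The sketch below only shows that regrouping; the shot counts are invented and the reconstruction call is a placeholder for the thesis's actual non-Cartesian pipeline.

```python
import numpy as np

def sliding_window_frames(shots, shots_per_frame=48, stride=12):
    """Group consecutive k-space shots into overlapping reconstruction windows.

    With stride < shots_per_frame, successive volumes share shots, so a volume is
    produced every `stride` shots instead of every `shots_per_frame` shots.
    """
    return [shots[s:s + shots_per_frame]
            for s in range(0, len(shots) - shots_per_frame + 1, stride)]

def reconstruct(window_of_shots):
    """Placeholder for the real reconstruction (e.g. adjoint NUFFT + CS prior)."""
    return np.mean(window_of_shots, axis=0)

# Hypothetical stream of complex non-Cartesian k-space shots.
shots = [np.random.randn(2048) + 1j * np.random.randn(2048) for _ in range(480)]
volumes = [reconstruct(w) for w in sliding_window_frames(shots)]
```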

Book chapters on the topic "Sliding-Window fMR image reconstruction":

1. Khan, Jamal, Sarah Moharem-Elgamal, and Francesca Pugliese. "Image reconstruction, postprocessing, and fundamentals of image analysis." In EACVI Handbook of Cardiovascular CT, edited by Oliver Gaemperli, Pal Maurovich-Horvat, Koen Nieman, Gianluca Pontone, and Francesca Pugliese, 43–C1.6.S11. Oxford University Press, Oxford, 2022. http://dx.doi.org/10.1093/med/9780192884459.003.0006.

Abstract:
The image reconstruction process produces diagnostic images from raw CT data and uses two main approaches: filtered back projection and iterative reconstruction. A CT image consists of a matrix of pixels, typically a square matrix of 512 × 512 pixels. Pixels are assigned a CT number (Hounsfield units (HUs)) that reflects the mean attenuation of the tissues included in the corresponding voxel. HUs are also affected by the energy of the X-ray beam and are normalized by the attenuation coefficient of water. The CT number scale in the image is mapped to a visual grey scale. Window settings (level/width) allow adjustment of the way CT numbers are mapped to the grey scale. Window settings determine image brightness and contrast, and are modifiable by the user. Although coronary CT angiography raw data are first reconstructed in a series of axial images, image data are isotropic and coronary artery stenosis analysis is based on multiplanar reconstructions (MPRs) and curved MPRs constructed along the vessel of interest. Maximum intensity projection and volume-rendered views display anatomical relationships between structures and are not primarily intended for fine coronary stenosis assessment.
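
The level/width mapping described in the chapter is a one-line formula; the sketch below applies it to a toy HU image (the level and width values are illustrative, roughly in the range used for angiographic viewing, not a recommendation from the chapter).

```python
import numpy as np

def apply_window(hu, level=300.0, width=800.0):
    """Map CT numbers (HU) to an 8-bit grey scale using window level/width.

    Values below level - width/2 are shown black, above level + width/2 white.
    """
    low = level - width / 2.0
    grey = (hu - low) / width                      # 0..1 inside the window
    return (np.clip(grey, 0.0, 1.0) * 255).astype(np.uint8)

hu_image = np.linspace(-1000, 2000, 512 * 512).reshape(512, 512)   # toy HU image
display = apply_window(hu_image, level=300, width=800)
```
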
2. Kellogg, Susan. "Of Warriors and Working Women: Gender in Later Prehispanic Mesoamerica and the Andes." In Weaving the Past, 18–52. Oxford University Press, New York, NY, 2005. http://dx.doi.org/10.1093/oso/9780195123814.003.0002.

Abstract:
The visual imagery of female deities (such as the decapitated Mexica goddess Coyolxauhqui; see fig. 2.1) is captivating and has drawn much attention from those interested in the later prehistory of indigenous women before the arrival of Europeans. Such complex, often contradictory, images offer scholars rich but often puzzling materials for interpreting and understanding women's lives, their statuses, and gender ideologies in the complex cultures and civilizations of Mesoamerica and the Andes. In many cultures, such as those of the pre-Inka Andes, visual materials constitute some of the few pieces of information available about women in the prehistoric past, and finding voices, reconstructing individual identities, or reading agency from representations and remains presents interpretive challenges. Nevertheless, such images, often dramatic in nature, offer a window onto the gendered worlds of the Latin American prehispanic past, especially those of the Classic and Post-classic periods for Mesoamerica and the Middle Horizon through Late Horizon periods for the Andes, when urban, class-based societies developed.

Conference papers on the topic "Sliding-Window fMR image reconstruction":

1. Tani, Jacopo, Sandipan Mishra, and John T. Wen. "On Derivative Sampling From Image Blur for Reconstruction of Band-Limited Signals." In ASME 2014 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/dscc2014-6180.

Abstract:
Image sensors are typically characterized by slow sampling rates, which limit their efficacy in signal reconstruction applications. Their integrative nature, though, produces image blur when the exposure window is long enough to capture motion of the observed object relative to the sensor. Image blur contains more information on the observed dynamics than the typically used centroids, i.e., time averages of the motion within the exposure window. Parameters characterizing the observed motion, such as the signal derivatives at specified sampling instants, can be used for signal reconstruction through the derivative sampling extension of the known sampling theorem. Using slow image-based sensors as derivative samplers allows for reconstruction of faster signals, overcoming Nyquist limitations. In this manuscript, we present an algorithm to extract values of a signal and its derivatives from blurred image measurements at specified sampling instants, i.e., the centers of the exposure windows, show its application in two signal reconstruction numerical examples, and provide a numerical study on the sensitivity of the extracted values to significant problem parameters.
2. Huh, Sam S., W. L. Rogers, and Neal H. Clinthorne. "On-line sliding-window list-mode PET image reconstruction for a surgical PET imaging probe." In 2008 IEEE Nuclear Science Symposium and Medical Imaging Conference (2008 NSS/MIC). IEEE, 2008. http://dx.doi.org/10.1109/nssmic.2008.4774493.

3. Kobayashi, Seiji, and Demetri Psaltis. "Novel method for computer-generated holography based on neural networks." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1990. http://dx.doi.org/10.1364/oam.1990.mff6.

Abstract:
Since Lohman et al. invented computer-generated holography, several methods have been tried to improve its image quality. We describe a method based on a feedback neural network that is designed to find the optimum (minimum-error) hologram by minimizing its error function. In this network, neurons are allowed to take only two output states, which directly represent the binary states of the corresponding pixels on the hologram. Thus, the number of neurons in the network equals the number of pixels in the hologram. The threshold value for each neuron and the connection strength between the neurons are determined by the Fourier transform of the desired wave front and the window function, respectively. The window function specifies the area of interest on the reconstruction plane, inside of which the image quality is to be evaluated. We have calculated several holograms by this network; starting from a random initial state and a 512 × 512 pixel hologram, the neural network typically converged after 2 hours of calculation (on a Sun SPARC station). The resultant holograms were recorded on an optical disk. The optically reconstructed images have excellent signal-to-noise ratios inside the specified reconstruction area.
4. Conchello, Jose A., J. Peter Zelten, Frank C. Miele, Bruce H. Davis, and Eric W. Hansen. "Enhanced 3-D reconstruction from confocal microscope images." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1988. http://dx.doi.org/10.1364/oam.1988.thff1.

Abstract:
Confocal scanning microscopes are known to possess superior optical sectioning capabilities compared to conventional microscopes. Out-of-focus contributions in a through-focus series of images are significantly reduced by the confocal geometry but not completely removed. This paper reports our initial investigations into a posteriori image processing (i.e., deconvolution) for further improvement of depth resolution in confocal microscopy. This project is part of a larger effort in laser scanning fluorescence microscopy for biological and biophysical analyses in living cells. The instrument is built around a standard inverted microscope stand, enabling the use of standard optics, micromanipulation apparatus, and conventional (including video) microscopy in conjunction with laser scanning. The beam is scanned across the specimen by a pair of galvanometer-mounted mirrors driven by a programmable controller which can operate in three modes: full raster scan; region of interest; and random-access (point-hopping). After taking a scout image with laser scanning or video, the user will select isolated points or regions of interest for further analysis via a graphic user interface implemented on the system's host computer. Experimental parameters such as detector integration times are set up with a window-style menu.
5. Jiang, Peng, Lihui Peng, and Deyun Xiao. "Characterization of Window Functions for Regularization of Electrical Capacitance Tomography Image Reconstruction." In MULTIPHASE FLOW: THE ULTIMATE MEASUREMENT CHALLENGE: Proc. of The 5th Int. Symp. on Measurement Techniques for Multiphase Flows (5th ISMTMF); 2nd Int. Wrkshp. on Process Tomography (IWPT-2) (As a part of ISMTMF); 5th ISMTMF/IWPT-2, 2006-Macau/Zhuhai). AIP, 2007. http://dx.doi.org/10.1063/1.2747517.

6. Shi, Zhe, Wenwu Zhou, Tianlun Zhang, Yingzheng Liu, Bo Meng, Yihong He, and Fei Zeng. "A 3-D reconstruction approach for the IR-camera-based turbine blade temperature measurements." In GPPS Hong Kong24. GPPS, 2023. http://dx.doi.org/10.33737/gpps23-tc-035.

Abstract:
The reconstruction of 3D turbine blade surface temperature is significant to the engine thermal diagnosis and cooling design. The infrared camera is widely used, and the pose of the camera is vital for restoring the 3-D temperature field from the 2-D IR images. However, most pose estimation methods are designed for RGB images and are not suitable for IR images. Therefore, an approach based on Perspective-n-Point (PnP) algorithm and optimization method is proposed. The PnP algorithm is applied to obtain a rough pose, then the optimization method is adopted to improve accuracy. This approach was validated on a turbine cascade test bench. A turbine blade was heated by hot air and photographed by an IR camera. The corners of the optical window on the test bench were used as feature points for the PnP algorithm. The average reprojection error of the edge points in the IR images served as the cost function of the optimization method. The results show that the optimization method can significantly reduce the average edge point reprojection error and improve the accuracy of reconstruction. This approach is beneficial to IR-camera-based studies.
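
A minimal version of the two-step pose estimation described here (a rough PnP solution refined by minimizing reprojection error) can be written with OpenCV and SciPy; in this sketch the refinement reuses the same point correspondences as the PnP step for simplicity, whereas the paper uses edge points from the IR images, and the camera intrinsics and distortion are assumed to be given.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def reprojection_error(params, obj_pts, img_pts, K, dist):
    """Mean reprojection error for a pose packed as (rvec, tvec)."""
    rvec, tvec = params[:3].reshape(3, 1), params[3:].reshape(3, 1)
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
    return float(np.mean(np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1)))

def estimate_pose(obj_pts, img_pts, K, dist):
    # Step 1: rough pose from PnP (e.g. using the optical-window corners).
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    # Step 2: refine the pose by minimizing the mean reprojection error.
    x0 = np.concatenate([rvec.ravel(), tvec.ravel()])
    res = minimize(reprojection_error, x0, args=(obj_pts, img_pts, K, dist),
                   method="Nelder-Mead")
    return res.x[:3], res.x[3:]
```
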
7. Park, Hong-Seok, Chintal Shah, and Rima Shah. "An Enhanced Development of 3D Intra-Oral Scanner Using Fringe-Projection Technique for Dentistry." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-66427.

Abstract:
3D shape recognition techniques have advanced rapidly over the last decade in the fields of manufacturing, computer science, entertainment and medical technology. Due to the restricted size and area of the oral cavity, it is challenging to develop such non-contact optical technologies for scanning. 3D digitization technology plays a vital role in the field of dentistry, benefiting dentists and patients by eliminating lengthy procedures for making the prosthesis/abutment and resulting in greater comfort. In this paper, we describe the design of a prototype three-dimensional intra-oral scanner using the principle of fringe projection and the active triangulation method. Light from an LED source passes through a liquid crystal on silicon (LCoS) device, which splits it into three colors; it then strikes the collimating lens assembly and passes through the optical deflectors. Once the light strikes the object through the scanning window, it is guided back through the flat reflectors, and the fringe pattern on the object is stored using the gray encoding plate. All these images are captured with a camera. After image acquisition, the phase distribution is first calculated using a four-step phase-shifting algorithm, and the wrapped phase is then unwrapped, which yields accurate images. The scanned oral cavity is then displayed on the computer screen. A phase-height mapping algorithm has been implemented for real-time 3D reconstruction of the scanned oral cavity, which enables fast scanning with accurate data. The novel combination of an LED light source and an LCoS display that splits the light into three different colors allows dental surfaces in the patient's mouth to be registered more accurately, and the scanner's sleek design reduces pain for patients with limited mouth opening. An experiment was performed on a denture prototype; using the proposed method we achieved an accuracy of 25 μm, and a full-arch scan of the lower oral cavity took around 180 s. The scanned data were checked using dental CAD/CAM software and compared with the denture prototype data. Furthermore, the scan can be used to produce a prosthesis/abutment directly using a 3D printing machine or a milling machine. Thus, an abutment or prosthesis obtained with this method is of high quality and eliminates long conventional procedures, which reduces patient discomfort and allows dentists to treat more patients in less time.
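
The four-step phase-shifting step mentioned in the abstract has a standard closed form, sketched below on synthetic fringes; the fringe amplitude and offset, and the use of skimage for spatial unwrapping, are illustrative choices, and the scanner-specific phase-to-height calibration is not included.

```python
import numpy as np
from skimage.restoration import unwrap_phase

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi and 3*pi/2.

    With I_k = A + B*cos(phi + k*pi/2):  I4 - I2 = 2B*sin(phi), I1 - I3 = 2B*cos(phi).
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: build four shifted fringe patterns from a known phase ramp.
x = np.linspace(0, 6 * np.pi, 256)
phi_true = np.tile(x, (256, 1))
fringes = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = unwrap_phase(four_step_phase(*fringes))      # recovers phi_true up to an offset
```
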
8. Whitney, W. T., B. J. Feldman, and Michael T. Duignan. "Phase conjugation of 2.91-μm HF laser radiation via stimulated Brillouin scattering." In International Laser Science Conference. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/ils.1986.thj6.

Abstract:
We report the results of experiments demonstrating stimulated Brillouin scattering (SBS) and phase conjugation of a single-line HF laser. The 2.91-μm line was selected by an intracavity grating, yielding up to 3 J in a 1.5-μs pulse. The flat-flat optical cavity was carefully aligned to maximize far-field intensity, and the resulting spot diameter was found to be near the diffraction limit. The magnified far-field intensity profiles were imaged with film burns and with a linear pyroelectric array. This high spatial quality beam was intentionally distorted by passing it through a roughened NaCl window, and the distortion was verified by examining the far-field intensity profile. This distorted beam was then focused into a cell containing xenon at 45 atm. The resulting SBS reflection passed back through the NaCl aberrator, and the diffraction-limited far-field intensity profile of this reflected beam was found to be restored. This demonstration of random phase aberration and reconstruction of the laser beam provides evidence that the SBS reflection is the phase conjugate of the input beam.
