A selection of scholarly literature on the topic "MRI IMAGE"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "MRI IMAGE".

Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding fields are available in the metadata.

Journal articles on the topic "MRI IMAGE"

1

Zhang, Huixian, Hailong Li, Jonathan R. Dillman, Nehal A. Parikh, and Lili He. "Multi-Contrast MRI Image Synthesis Using Switchable Cycle-Consistent Generative Adversarial Networks." Diagnostics 12, no. 4 (March 26, 2022): 816. http://dx.doi.org/10.3390/diagnostics12040816.

Full text of the source
Abstract:
Multi-contrast MRI images use different echo and repetition times to highlight different tissues. However, not all desired image contrasts may be available due to scan-time limitations, suboptimal signal-to-noise ratio, and/or image artifacts. Deep learning approaches have brought revolutionary advances in medical image synthesis, enabling the generation of unacquired image contrasts (e.g., T1-weighted MRI images) from available image contrasts (e.g., T2-weighted images). In particular, CycleGAN is an advanced technique for image synthesis using unpaired images. However, it requires two separate image generators, demanding more training resources and computation. Recently, a switchable CycleGAN has been proposed to address this limitation and was successfully implemented using CT images. However, it remains unclear whether switchable CycleGAN can be applied to cross-contrast MRI synthesis, and whether it is able to outperform the original CycleGAN on cross-contrast MRI image synthesis is still an open question. In this paper, we developed a switchable CycleGAN model for image synthesis between multi-contrast brain MRI images using a large set of publicly accessible pediatric structural brain MRI images. We conducted extensive experiments to compare switchable CycleGAN with the original CycleGAN both quantitatively and qualitatively. Experimental results demonstrate that switchable CycleGAN is able to outperform the original CycleGAN model on pediatric brain MRI image synthesis.
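The cycle-consistency idea at the heart of CycleGAN can be sketched in a few lines: translating an image to the other contrast and back should recover the input. The toy "generators" below are simple intensity inversions standing in for trained networks; all names here are illustrative, not from the paper.

```python
import numpy as np

# Toy "generators": in a real (switchable) CycleGAN these are neural networks;
# here simple invertible intensity maps stand in for T1->T2 and T2->T1 translation.
def g_t1_to_t2(img):
    return 1.0 - img  # pretend contrast inversion

def g_t2_to_t1(img):
    return 1.0 - img

def cycle_consistency_loss(img_t1):
    """L1 cycle loss: translating T1 -> T2 -> back to T1 should recover the input."""
    reconstructed = g_t2_to_t1(g_t1_to_t2(img_t1))
    return float(np.mean(np.abs(reconstructed - img_t1)))

img = np.random.rand(64, 64)
loss = cycle_consistency_loss(img)
print(loss)  # essentially zero: the toy generators invert each other (up to float rounding)
```

In training, this loss is added to the adversarial losses; the switchable variant shares one generator body for both directions, which is the resource saving the paper exploits.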
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Huan, Pengjiang Qian, and Chao Fan. "An Indirect Multimodal Image Registration and Completion Method Guided by Image Synthesis." Computational and Mathematical Methods in Medicine 2020 (June 30, 2020): 1–10. http://dx.doi.org/10.1155/2020/2684851.

Full text of the source
Abstract:
Multimodal registration is a challenging task due to the significant variations exhibited from images of different modalities. CT and MRI are two of the most commonly used medical images in clinical diagnosis, since MRI with multicontrast images, together with CT, can provide complementary auxiliary information. The deformable image registration between MRI and CT is essential to analyze the relationships among different modality images. Here, we proposed an indirect multimodal image registration method, i.e., sCT-guided multimodal image registration and problematic image completion method. In addition, we also designed a deep learning-based generative network, Conditional Auto-Encoder Generative Adversarial Network, called CAE-GAN, combining the idea of VAE and GAN under a conditional process to tackle the problem of synthetic CT (sCT) synthesis. Our main contributions in this work can be summarized into three aspects: (1) We designed a new generative network called CAE-GAN, which incorporates the advantages of two popular image synthesis methods, i.e., VAE and GAN, and produced high-quality synthetic images with limited training data. (2) We utilized the sCT generated from multicontrast MRI as an intermediary to transform multimodal MRI-CT registration into monomodal sCT-CT registration, which greatly reduces the registration difficulty. (3) Using normal CT as guidance and reference, we repaired the abnormal MRI while registering the MRI to the normal CT.
APA, Harvard, Vancouver, ISO, and other styles
3

Destyningtias, Budiani, Andi Kurniawan Nugroho, and Sri Heranurweni. "Analisa Citra Medis Pada Pasien Stroke dengan Metoda Peregangan Kontras Berbasis ImageJ." eLEKTRIKA 10, no. 1 (June 19, 2019): 15. http://dx.doi.org/10.26623/elektrika.v10i1.1105.

Full text of the source
Abstract:
This study aims to develop medical image processing technology, in particular for medical images from CT scans of stroke patients. Doctors determining the severity of a stroke usually rely on CT scan images and have difficulty interpreting the extent of bleeding. Contrast stretching is used as a solution to distinguish cell tissue, skull bone, and the type of bleeding. This study applies contrast stretching to CT scan images, first converting the DICOM image into a JPEG image with the help of the ImageJ program. The results showed that the histogram equalization method and statistical texture analysis could be used to distinguish normal MRI from abnormal MRI in which stroke is detected.
Keywords: Stroke, MRI, DICOM, JPEG, ImageJ, Contrast Stretching
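The contrast stretching the abstract refers to is a linear remapping of the intensity range onto the full output range. A minimal sketch (not the ImageJ implementation):

```python
import numpy as np

def contrast_stretch(img, out_min=0.0, out_max=255.0):
    """Linear contrast stretching: map [img.min(), img.max()] onto [out_min, out_max],
    widening intensity differences between tissue, bone, and bleeding."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:  # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=np.float64)
    return (img - lo) * (out_max - out_min) / (hi - lo) + out_min

slice_ = np.array([[50, 100], [150, 200]], dtype=np.float64)
stretched = contrast_stretch(slice_)
print(stretched)  # maps to [[0, 85], [170, 255]]
```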
APA, Harvard, Vancouver, ISO, and other styles
4

Bellam, Kiranmai, N. Krishnaraj, T. Jayasankar, N. B. Prakash, and G. R. Hemalakshmi. "Adaptive Multimodal Image Fusion with a Deep Pyramidal Residual Learning Network." Journal of Medical Imaging and Health Informatics 11, no. 8 (August 1, 2021): 2135–43. http://dx.doi.org/10.1166/jmihi.2021.3763.

Full text of the source
Abstract:
Multimodal medical imaging is an indispensable requirement in the treatment of various pathologies to accelerate care. Rather than discrete images, a composite image combining complementary features from multimodal images is highly informative for clinical examinations, surgical planning, and progress monitoring. In this paper, a deep learning fusion model is proposed for the fusion of medical multimodal images. Based on pyramidal and residual learning units, the proposed model, strengthened with adaptive fusion rules, is tested on image pairs from a standard dataset. The potential of the proposed model for enhanced image exams is shown by fusion studies, with deep network images and quantitative output metrics, of magnetic resonance imaging and positron emission tomography (MRI/PET) and magnetic resonance imaging and single-photon emission computed tomography (MRI/SPECT). Testing is performed on 20 pairs of MRI/SPECT and MRI/PET images. The proposed fusion model achieves Structural Similarity Index Measure (SSIM) values of 0.9502 and 0.8103 for the MRI/SPECT and MRI/PET image sets, respectively, signifying the perceptual visual consistency of the fused images. Similarly, Mutual Information (MI) values of 2.7455 and 2.7776 were obtained for the MRI/SPECT and MRI/PET image sets, indicating the model's ability to carry the information content of the source images into the composite image. Further, the proposed model allows the deployment of variants that introduce refinements on the basic model, suitable for the fusion of low- and high-resolution medical images of diverse modalities.
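The Mutual Information values quoted above can be estimated from a joint intensity histogram of two images. A rough sketch, assuming a simple histogram estimator rather than the authors' exact implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram;
    used as a fusion-quality metric (how much source information survives)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

img = np.random.rand(128, 128)
print(mutual_information(img, img))                        # maximal: MI(X, X) = H(X)
print(mutual_information(img, np.random.rand(128, 128)))   # near 0 for independent images
```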
APA, Harvard, Vancouver, ISO, and other styles
5

Schramm, Georg, and Claes Nøhr Ladefoged. "Metal artifact correction strategies in MRI-based attenuation correction in PET/MRI." BJR|Open 1, no. 1 (November 2019): 20190033. http://dx.doi.org/10.1259/bjro.20190033.

Full text of the source
Abstract:
In hybrid positron emission tomography (PET) and MRI systems, attenuation correction for PET image reconstruction is commonly based on processing of dedicated MR images. The image quality of the latter is strongly affected by metallic objects inside the body, such as e.g. dental implants, endoprostheses, or surgical clips which all lead to substantial artifacts that propagate into MRI-based attenuation images. In this work, we review publications about metal artifact correction strategies in MRI-based attenuation correction in PET/MRI. Moreover, we also give an overview about publications investigating the impact of MRI-based attenuation correction metal artifacts on the reconstructed PET image quality and quantification.
APA, Harvard, Vancouver, ISO, and other styles
6

Matkar, Swapnali. "IMAGE SEGMENTATION METHODS FOR BRAIN MRI IMAGES." International Journal of Research in Engineering and Technology 04, no. 03 (March 25, 2015): 263–66. http://dx.doi.org/10.15623/ijret.2015.0403045.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Singh, Ram, and Lakhwinder Kaur. "Noise-residue learning convolutional network model for magnetic resonance image enhancement." Journal of Physics: Conference Series 2089, no. 1 (November 1, 2021): 012029. http://dx.doi.org/10.1088/1742-6596/2089/1/012029.

Full text of the source
Abstract:
Magnetic Resonance Imaging (MRI) is an important medical image acquisition technique used to acquire high-contrast images of human body anatomical structures and soft-tissue organs. Unlike X-ray and computerized tomography (CT) imaging, MRI does not use any harmful ionizing radiation. High-resolution MRI is desirable in many clinical applications such as tumor segmentation, image registration, edge and boundary detection, and image classification. During MRI acquisition, many practical constraints limit the MRI quality by introducing random Gaussian noise and other artifacts caused by the thermal energy of the patient's body, random scanner voltage fluctuations, body motion, impulse noise from electronic circuits, etc. High-resolution MRI can be acquired by increasing scan time, but considering patient comfort, this is not preferred in practice. Hence, post-acquisition image processing techniques are used to filter noise and enhance MRI quality to make it fit for further image analysis tasks. The main motive of MRI enhancement is to reconstruct a high-quality MRI while improving and retaining its important features. New deep learning denoising and artifact removal methods have shown tremendous potential for high-quality image reconstruction from noise-degraded MRI while preserving useful image information. This paper presents a noise-residue learning convolutional neural network (CNN) model to denoise and enhance the quality of noise-corrupted low-resolution MR images. The proposed technique shows better performance in comparison with other conventional MRI enhancement methods. The reconstructed image quality is evaluated with the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics, while information loss in the reconstructed MRI is minimized in terms of the mean squared error (MSE) metric.
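The PSNR metric used to score the denoised reconstructions is a simple function of the mean squared error, PSNR = 10·log10(MAX² / MSE); a minimal sketch:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio of a test image against a clean reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.full((64, 64), 128.0)
noisy = clean + np.random.normal(0, 10, clean.shape)  # sigma=10 Gaussian noise
print(round(psnr(clean, noisy), 1))  # around 28 dB for sigma=10 on an 8-bit range
```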
APA, Harvard, Vancouver, ISO, and other styles
8

Yan, Rong. "The Value of Convolutional-Neural-Network-Algorithm-Based Magnetic Resonance Imaging in the Diagnosis of Sports Knee Osteoarthropathy." Scientific Programming 2021 (July 2, 2021): 1–11. http://dx.doi.org/10.1155/2021/2803857.

Full text of the source
Abstract:
The application value of the convolutional neural network (CNN) algorithm in the diagnosis of sports knee osteoarthropathy was investigated in this study. A network model was constructed for image analysis of magnetic resonance imaging (MRI) data. Then, 100 patients with sports knee osteoarthropathy and 50 healthy volunteers were selected. Digital radiography (DR) images and MRI images of all subjects were collected after the inclusion of the two groups. The important physiological representations were extracted from the image data and the hidden complex relationships were learned; the convolutional network then computed predictions for unseen inputs. On this basis, the diagnostic efficiency of traditional DR images and of CNN-based MRI images for patients with sports knee osteoarthropathy was analyzed. The results showed that MRI images analyzed by the CNN model displayed some non-bone changes of osteoarthritis more clearly than DR images. The correlation coefficient between MRI image rating and the visual analog scale (VAS) was 0.865, higher than the 0.713 of DR image rating, and statistically significant (P < 0.01). For cases with mild lesions, the numbers of cases detected by CNN-based MRI in the 0–4 image ratings were 15, 18, 10, 6, and 7, respectively, markedly better than for DR images. In short, MRI examination based on the CNN image analysis model could extract important physiological representations from the image data, learn the hidden complex relationships, and predict outcomes for unseen inputs. Moreover, it showed high overall diagnostic efficiency and grading diagnostic efficiency for patients with sports knee osteoarthropathy, which is of great significance in clinical practice.
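The correlation coefficient between image ratings and VAS scores reported above is the standard Pearson r; a minimal sketch, with made-up sample data (the 0.865 and 0.713 values come from the study's own cohort):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

# Hypothetical paired scores: image rating (0-4) vs VAS pain score (0-10)
ratings = [0, 1, 2, 3, 4, 2, 3]
vas     = [1, 2, 3, 5, 7, 4, 5]
print(round(pearson_r(ratings, vas), 3))  # ≈ 0.983 for this toy data
```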
APA, Harvard, Vancouver, ISO, and other styles
9

Odusami, Modupe, Rytis Maskeliūnas, and Robertas Damaševičius. "Pareto Optimized Adaptive Learning with Transposed Convolution for Image Fusion Alzheimer’s Disease Classification." Brain Sciences 13, no. 7 (July 8, 2023): 1045. http://dx.doi.org/10.3390/brainsci13071045.

Full text of the source
Abstract:
Alzheimer’s disease (AD) is a neurological condition that gradually weakens the brain and impairs cognition and memory. Multimodal imaging techniques have become increasingly important in the diagnosis of AD because they provide a more complete picture of the changes that occur in the brain over time, helping to monitor disease progression. Medical image fusion is crucial in that it combines data from various image modalities into a single, better-understood output. The present study explores the feasibility of employing Pareto-optimized deep learning methodologies to integrate Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images through the utilization of pre-existing models, namely the Visual Geometry Group (VGG) 11, VGG16, and VGG19 architectures. Morphological operations are carried out on MRI and PET images using Analyze 14.0 software, after which PET images are manipulated to the desired angle of alignment with the MRI image using the GNU Image Manipulation Program (GIMP). To enhance the network’s performance, a transposed convolution layer is incorporated into the previously extracted feature maps before image fusion. This process generates the feature maps and fusion weights that facilitate the fusion process. This investigation assesses the efficacy of the three VGG models in capturing significant features from the MRI and PET data. The hyperparameters of the models are tuned using Pareto optimization. The models’ performance is evaluated on the ADNI dataset utilizing the Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean-Square Error (MSE), and Entropy (E). Experimental results show that VGG19 outperforms VGG16 and VGG11, with average SSIM values of 0.668, 0.802, and 0.664 for the CN, AD, and MCI stages from ADNI (MRI modality), respectively, and likewise averages of 0.669, 0.815, and 0.660 SSIM for the CN, AD, and MCI stages (PET modality).
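Entropy (E), one of the fusion metrics listed above, is the Shannon entropy of the intensity histogram: higher entropy indicates more information content in the fused image. A minimal sketch, not the study's implementation:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

flat = np.full((32, 32), 100.0)
print(image_entropy(flat))  # 0.0: a constant image carries no information
varied = np.random.randint(0, 256, (32, 32))
print(image_entropy(varied))  # close to 8 bits for roughly uniform intensities
```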
APA, Harvard, Vancouver, ISO, and other styles
10

Wu, Hongliang, Guocheng Chen, Guibao Zhang, and Minghua Dai. "Application of Multimodal Fusion Technology in Image Analysis of Pretreatment Examination of Patients with Spinal Injury." Journal of Healthcare Engineering 2022 (April 12, 2022): 1–10. http://dx.doi.org/10.1155/2022/4326638.

Full text of the source
Abstract:
As one of the most common imaging screening techniques for spinal injuries, MRI is of great significance for the pretreatment examination of patients with spinal injuries. With the rapid iterative update of imaging technology, techniques such as diffusion-weighted magnetic resonance imaging (DWI), dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), and magnetic resonance spectroscopy are frequently used in the clinical diagnosis of spinal injuries. Multimodal medical image fusion technology can obtain richer lesion information by combining medical images in multiple modalities. Focusing on the DCE-MRI and DWI modalities of MRI images of spinal injuries, fusing the image data of the two modalities yields more abundant lesion information for diagnosing spinal injuries. The research content includes the following: (1) A registration study based on DCE-MRI and DWI image data. To improve registration accuracy, the VGG-16 network structure is selected as the basic registration network, and an iterative VGG-16 network framework is proposed to realize the registration of DWI and DCE-MRI images. The experimental results show that the iterative VGG-16 network structure is more suitable for the registration of DWI and DCE-MRI image data. (2) A fusion study based on DCE-MRI and DWI image data. For the registered DCE-MRI and DWI images, this paper uses a fusion method combining the feature level and the decision level to classify spine images. The simple classifiers decision tree, SVM, and KNN were used to predict the damage diagnosis classification of DCE-MRI and DWI images, respectively. By comparing and analyzing the classification results of the experiments, the performance of multimodal image fusion in the auxiliary diagnosis of spinal injuries was evaluated.
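Decision-level fusion as described can be as simple as a majority vote over the per-modality classifier outputs; an illustrative sketch (the labels and the three-classifier setup below are hypothetical, not the paper's code):

```python
from collections import Counter

def decision_fusion(predictions):
    """Decision-level fusion: combine per-modality classifier outputs
    (e.g. decision tree, SVM, KNN on DCE-MRI and DWI) by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels from three classifiers for one spine image:
print(decision_fusion(["injury", "injury", "normal"]))  # injury
```

Feature-level fusion, by contrast, would concatenate the per-modality feature vectors before a single classifier sees them.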
APA, Harvard, Vancouver, ISO, and other styles

Dissertations on the topic "MRI IMAGE"

1

Al-Abdul, Salam Amal. "Image quality in MRI." Thesis, University of Exeter, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288250.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Cui, Xuelin. "Joint CT-MRI Image Reconstruction." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/86177.

Full text of the source
Abstract:
Modern clinical diagnoses and treatments have been increasingly reliant on medical imaging techniques. In return, medical images are required to provide more accurate and detailed information than ever. Aside from the evolution of hardware and software, multimodal imaging techniques offer a promising solution to produce higher quality images by fusing medical images from different modalities. This strategy utilizes more structural and/or functional image information, thereby allowing clinical results to be more comprehensive and better interpreted. Since their inception, multimodal imaging techniques have received a great deal of attention for achieving enhanced imaging performance. In this work, a novel joint reconstruction framework using sparse computed tomography (CT) and magnetic resonance imaging (MRI) data is developed and evaluated. The method proposed in this study is part of the planned joint CT-MRI system which assembles CT and MRI subsystems into a single entity. The CT and MRI images are synchronously acquired and registered from the hybrid CT-MRI platform. However, since their image data are highly undersampled, analytical methods, such as filtered backprojection, are unable to generate images of sufficient quality. To overcome this drawback, we resort to compressed sensing techniques, which employ sparse priors that result from an application of L1-norm minimization. To utilize multimodal information, a projection distance is introduced and is tuned to tailor the texture and pattern of final images. Specifically CT and MRI images are alternately reconstructed using the updated multimodal results that are calculated at the latest step of the iterative optimization algorithm. This method exploits the structural similarities shared by the CT and MRI images to achieve better reconstruction quality. 
The improved performance of the proposed approach is demonstrated using a pair of undersampled CT-MRI body images and a pair of undersampled CT-MRI head images. These images are tested using joint reconstruction, analytical reconstruction, and independent reconstruction without using multimodal imaging information. Results show that the proposed method improves about 5dB in signal-to-noise ratio (SNR) and nearly 10% in structural similarity measurements compared to independent reconstruction methods. It offers a similar quality as fully sampled analytical reconstruction, yet requires as few as 25 projections for CT and a 30% sampling rate for MRI. It is concluded that structural similarities and correlations residing in images from different modalities are useful to mutually promote the quality of image reconstruction.
Ph. D.
Medical imaging techniques play a central role in modern clinical diagnoses and treatments. Consequently, there is a constant demand to increase the overall quality of medical images. Since their inception, multimodal imaging techniques have received a great deal of attention for achieving enhanced imaging performance. Multimodal imaging techniques can provide more detailed diagnostic information by fusing medical images from different imaging modalities, thereby allowing clinical results to be more comprehensive and easier to interpret. A new form of multimodal imaging technique, which combines the imaging procedures of computed tomography (CT) and magnetic resonance imaging (MRI), is known as "omni-tomography." Both computed tomography and magnetic resonance imaging are among the most commonly used medical imaging techniques today, and their intrinsic properties are complementary. For example, computed tomography performs well for bones, whereas magnetic resonance imaging excels at contrasting soft tissues. Therefore, a multimodal imaging system built upon the fusion of these two modalities can potentially bring much more information to improve clinical diagnoses. However, the planned omni-tomography systems face enormous challenges, such as the limited ability to perform image reconstruction due to mechanical and hardware restrictions that result in significant undersampling of the raw data. Image reconstruction is a procedure required by both computed tomography and magnetic resonance imaging to convert raw data into final images. A general condition required to produce an image of decent quality is that the number of samples of raw data must be sufficient and abundant. Therefore, undersampling on the omni-tomography system can cause significant degradation of image quality or artifacts after image reconstruction.
To overcome this drawback, we resort to compressed sensing techniques, which exploit the sparsity of the medical images, to perform iterative based image reconstruction for both computed tomography and magnetic resonance imaging. The sparsity of the images is found by applying sparse transform such as discrete gradient transform or wavelet transform in the image domain. With the sparsity and undersampled raw data, an iterative algorithm can largely compensate for the data inadequacy problem and it can reconstruct the final images from the undersampled raw data with minimal loss of quality. In addition, a novel “projection distance” is created to perform a joint reconstruction which further promotes the quality of the reconstructed images. Specifically, the projection distance exploits the structural similarities shared between the image of computed tomography and magnetic resonance imaging such that the insufficiency of raw data caused by undersampling is further accounted for. The improved performance of the proposed approach is demonstrated using a pair of undersampled body images and a pair of undersampled head images, each of which consists of an image of computed tomography and its magnetic resonance imaging counterpart. These images are tested using the proposed joint reconstruction method in this work, the conventional reconstructions such as filtered backprojection and Fourier transform, and reconstruction strategy without using multimodal imaging information (independent reconstruction). The results from this work show that the proposed method addressed these challenges by significantly improving the image quality from highly undersampled raw data. In particular, it improves about 5dB in signal-to-noise ratio and nearly 10% in structural similarity measurements compared to other methods. It achieves similar image quality by using less than 5% of the X-ray dose for computed tomography and 30% sampling rate for magnetic resonance imaging. 
It is concluded that, by using compressed sensing techniques and exploiting structural similarities, the planned joint computed tomography and magnetic resonance imaging system can perform outstanding imaging tasks with highly undersampled raw data.
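The L1-norm minimization used for the compressed-sensing reconstruction above is typically solved by iterative shrinkage, whose core step is the soft-thresholding (proximal) operator applied to sparse-transform coefficients; a minimal sketch, not the dissertation's code:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrink coefficients toward zero by lam,
    zeroing the small (noise-dominated) ones -- the core step of ISTA-style
    compressed-sensing reconstruction."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([-3.0, -0.5, 0.0, 0.2, 2.5])
print(soft_threshold(coeffs, 1.0))  # [-2.   0.   0.   0.   1.5]
```

In a full reconstruction loop this operator alternates with a data-consistency (gradient) step on the undersampled CT/MRI measurements.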
APA, Harvard, Vancouver, ISO, and other styles
3

Carmo, Bernardo S. "Image processing in echography and MRI." Thesis, University of Southampton, 2005. https://eprints.soton.ac.uk/194557/.

Full text of the source
Abstract:
This work deals with image processing for three medical imaging applications: speckle detection in 3D ultrasound, left ventricle detection in cardiac magnetic resonance imaging (MRI) and flow feature visualisation in velocity MRI. For speckle detection, a learning from data approach was taken using pattern recognition principles and low-level image features, including signal-to-noise ratio, co-occurrence matrix, asymmetric second moment, homodyned k-distribution and a proposed specklet detector. For left ventricle detection, template matching was used. For vortex detection, a data processing framework is presented that consists of three main steps: restoration, abstraction and tracking. This thesis addresses the first two steps, implementing restoration with a total variation first order Lagrangian method, and abstraction with clustering and local linear expansion.
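The total variation regularizer used in the restoration step penalizes the summed gradient magnitude, smoothing noise while preserving edges; a minimal (anisotropic) sketch:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute horizontal and
    vertical intensity differences. Minimizing it under a data-fidelity
    term removes noise while keeping sharp edges."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float(np.abs(dx).sum() + np.abs(dy).sum())

step = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
print(total_variation(step))  # 2.0: one unit jump per row, no vertical variation
```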
APA, Harvard, Vancouver, ISO, and other styles
4

Gu, Wei Q. "Automated tracer-independent MRI/PET image registration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29596.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Ivarsson, Magnus. "Evaluation of 3D MRI Image Registration Methods." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139075.

Full text of the source
Abstract:
Image registration is the process of geometrically deforming a template image into a reference image. This technique is important and widely used within the field of medical IT. The purpose could be to detect image variations, pathological development or, in the company AMRA's case, to quantify fat tissue in various parts of the human body. From an MRI (Magnetic Resonance Imaging) scan, a water and fat tissue image is obtained. Currently, AMRA is using the Morphon algorithm to register and segment the water image in order to quantify fat and muscle tissue. During the first part of this master thesis, two alternative registration methods were evaluated. The first algorithm was Free Form Deformation, which is a non-linear parametric based method. The second algorithm was a non-parametric optical flow based method known as the Demon algorithm. During the second part of the thesis, the Demon algorithm was used to evaluate the effect of using the fat images for registrations.
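The Demon algorithm evaluated in the thesis drives each voxel by an optical-flow-like force computed from the intensity difference and the fixed-image gradient. A 1D sketch of Thirion's update step (real use is 3D, iterated, with Gaussian regularization of the displacement field):

```python
import numpy as np

def demons_force(fixed, moving):
    """Thirion's demons update: per-voxel displacement
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2), zero where undefined."""
    diff = moving - fixed
    grad = np.gradient(fixed)
    denom = grad ** 2 + diff ** 2
    with np.errstate(divide="ignore", invalid="ignore"):
        u = np.where(denom > 0, diff * grad / denom, 0.0)
    return u

fixed = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
moving = np.array([0.0, 0.0, 0.0, 1.0, 1.0])  # same edge, shifted by one voxel
print(demons_force(fixed, moving))  # pull of -0.4 at the mismatched edge voxel
```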
APA, Harvard, Vancouver, ISO, and other styles
6

Lin, Xiangbo. "Knowledge-based image segmentation using deformable registration: application to brain MRI images." Reims, 2009. http://theses.univ-reims.fr/exl-doc/GED00001121.pdf.

Full text of the source
Abstract:
The research goal of this thesis is a contribution to intra-modality, inter-subject non-rigid medical image registration and to the segmentation of 3D brain MRI images in the normal case. The well-known Demons non-rigid algorithm is studied, in which image intensities are used as matching features. A new force computation equation is proposed to solve the mismatch problem in some regions. Its efficiency is shown through numerous evaluations on simulated and real data. For intensity-based inter-subject registration, normalizing the image intensities is important to satisfy the intensity correspondence requirements. A non-rigid registration method combining both intensity and spatial normalization is proposed. Topology constraints are introduced into the deformable model to preserve an expected property in homeomorphic target registration. The solution comes from the correction of displacement points with negative Jacobian determinants. Based on the registration, a segmentation method for the internal brain structures is studied. The basic principle is an ontology of prior shape knowledge of the target internal structure. The shapes are represented by a unified distance map computed from the atlas and the deformed atlas, and then integrated into the similarity metric of the cost function. A balance parameter is used to adjust the contributions of the intensity and shape measures. The influence of the different parameters of the method was studied, and comparisons with other registration methods were performed. Very good results are obtained on the segmentation of different internal structures of the brain, such as the central nuclei and the hippocampus.
APA, Harvard, Vancouver, ISO, and other styles
7

Soltaninejad, Mohammadreza. "Supervised learning-based multimodal MRI brain image analysis." Thesis, University of Lincoln, 2017. http://eprints.lincoln.ac.uk/30883/.

Full text of the source
Abstract:
Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities, widely used in brain tumour analysis, and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape, and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing field of research. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images. In this thesis, firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features, including intensity-based features, Gabor textons, fractal analysis, and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. Extremely randomised trees (ERT) classify each superpixel into tumour and non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI data set. This is then followed by a random forests (RF) classifier that classifies each supervoxel into tumour core, oedema, or healthy brain tissue. The information from the advanced protocols of diffusion tensor imaging (DTI), i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy.
Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions is used as a feature map, which is learned from the multimodal MRI training dataset using the FCN. The machine-learned features, along with the hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissues and different parts of tumour. The methods are evaluated on two datasets: 1) a clinical dataset, and 2) the publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 datasets. The experimental results demonstrate the high detection and segmentation performance of the single-modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. 
The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively. The methods demonstrate promising results in the segmentation of brain tumours. This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons have demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy has also been largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce more accurate segmentation. The hand-designed features from the shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from the deep network (with trainable filters) learn the intrinsic features. Combining global and local information from these two types of networks improves the segmentation accuracy.
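The Dice overlap and detection sensitivity figures quoted in this abstract are standard overlap metrics between a binary segmentation mask and the expert ground truth. A minimal NumPy sketch of the two metrics (an illustration of the definitions, not the thesis's own code):

```python
import numpy as np

def dice_overlap(seg, gt):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    # Two empty masks agree perfectly by convention.
    return 1.0 if denom == 0 else 2.0 * np.logical_and(seg, gt).sum() / denom

def sensitivity(seg, gt):
    """Detection sensitivity: fraction of ground-truth voxels recovered."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    return np.logical_and(seg, gt).sum() / gt.sum()
```

A Dice of 0.91, as reported for the clinical FLAIR data, means twice the overlap volume equals 91% of the two mask volumes summed.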
8

Daga, P. "Towards efficient neurosurgery : image analysis for interventional MRI." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1449559/.

Abstract:
Interventional magnetic resonance imaging (iMRI) is being increasingly used for performing image-guided neurosurgical procedures. Intermittent imaging through iMRI can help a neurosurgeon visualise the target and eloquent brain areas during neurosurgery and lead to better patient outcomes. MRI plays an important role in planning and performing neurosurgical procedures because it can provide high-resolution anatomical images that can be used to discriminate between healthy and diseased tissue, as well as to identify the location and extent of functional areas. This is of significant clinical utility as it helps the surgeons maximise target resection and avoid damage to functionally important brain areas. There is clinical interest in propagating the pre-operative surgical information to the intra-operative image space, as this allows the surgeons to utilise the pre-operatively generated surgical plans during surgery. The current state-of-the-art neuronavigation systems achieve this by performing rigid registration of pre-operative and intra-operative images. As the brain undergoes non-linear deformations after craniotomy (brain shift), the rigidly registered pre-operative images no longer align accurately with the intra-operative images acquired during surgery. This limits the accuracy of these neuronavigation systems and hampers the surgeon's ability to perform more aggressive interventions. In addition, intra-operative images are typically of lower quality, with susceptibility artefacts inducing severe geometric and intensity distortions around areas of resection in echo planar MRI images, significantly reducing their utility in the intra-operative setting. This thesis focuses on the development of novel methods for an image processing workflow that aims to maximise the utility of iMRI in neurosurgery. 
I present a fast, non-rigid registration algorithm that can leverage information from both structural and diffusion-weighted MRI images to localise target lesions and a critical white matter tract, the optic radiation, during surgical management of temporal lobe epilepsy. A novel method for correcting susceptibility artefacts in echo planar MRI images is also developed, which combines fieldmap- and image registration-based correction techniques. The work developed in this thesis has been validated and successfully integrated into the surgical workflow at the National Hospital for Neurology and Neurosurgery in London and is being clinically used to inform surgical decisions.
9

Chi, Wenjun. "MRI image analysis for abdominal and pelvic endometriosis." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:27efaa89-85cd-4f8b-ab67-b786986c42e3.

Abstract:
Endometriosis is an oestrogen-dependent gynaecological condition defined as the presence of endometrial tissue outside the uterus cavity. The condition is predominantly found in women in their reproductive years and is associated with significant pelvic and abdominal chronic pain and infertility. A recent study suggests the disease affects approximately 33% of women. Currently, surgical intervention, often laparoscopic surgery, is the gold standard for diagnosing the disease, and it remains an effective and common treatment method for all stages of endometriosis. Magnetic resonance imaging (MRI) of the patient is performed before surgery in order to locate any endometriosis lesions and to determine whether a multidisciplinary surgical team meeting is required. In this dissertation, our goal is to use image processing techniques to aid surgical planning. Specifically, we aim to improve the quality of the existing images, and to automatically detect bladder endometriosis lesions in MR images in the form of bladder wall thickening. One of the main problems posed by abdominal MRI is the sparse anisotropic frequency sampling process. As a consequence, the resulting images consist of thick slices and have gaps between those slices. We have devised a method to fuse multi-view MRI consisting of axial/transverse, sagittal and coronal scans, in an attempt to restore an isotropic, densely sampled frequency plane in the fused image. In addition, the proposed fusion method is steerable and is able to fuse component images in any orientation. To achieve this, we apply the Riesz transform for image decomposition and reconstruction in the frequency domain, and we propose an adaptive fusion rule to fuse multiple Riesz components of images in different orientations. 
The adaptive fusion is parameterised and switches between combining frequency components via the mean and maximum rules, which is effectively a trade-off between smoothing the intrinsically noisy images and retaining the sharp delineation of features. We first validate the method using simulated images, and compare it with another fusion scheme using the discrete wavelet transform. The results show that the proposed method is better in both accuracy and computational time. Improvements of fused clinical images over unfused raw images are also illustrated. For the segmentation of the bladder wall, we investigate the level set approach. While traditional gradient-based feature detection is prone to intensity non-uniformity, we present a novel way to compute phase congruency as a reliable feature representation. In order to avoid the phase-wrapping problem with inverse trigonometric functions, we devise a mathematically elegant and efficient way to combine multi-scale image features via geometric algebra. As opposed to the original phase congruency, the proposed method is more robust against noise and hence more suitable for clinical data. To address the practical issues in segmenting the bladder wall, we suggest two coupled level set frameworks to utilise information in two different MRI sequences of the same patients - the T2- and T1-weighted images. The results demonstrate a dramatic decrease in the number of failed segmentations compared with using a single kind of image. The resulting automated segmentations are finally validated by comparison to manual segmentations done in 2D.
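The mean/maximum trade-off described in this abstract can be illustrated with a toy frequency-domain fusion of two co-registered views. This is a deliberately simplified sketch using a plain FFT rather than the thesis's steerable Riesz decomposition, with hypothetical function and variable names:

```python
import numpy as np

def fuse_views(a, b, rule="max"):
    """Fuse two co-registered, same-size images in the frequency domain.
    'mean' averages coefficients (suppresses uncorrelated noise);
    'max' keeps the larger-magnitude coefficient per frequency
    (preserves sharp features present in either view)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    if rule == "mean":
        F = 0.5 * (A + B)
    else:
        # Per-coefficient selection by magnitude.
        F = np.where(np.abs(A) >= np.abs(B), A, B)
    return np.real(np.fft.ifft2(F))
```

An adaptive rule, as in the thesis, would blend between these two extremes per coefficient rather than committing globally to one, trading noise suppression against edge preservation.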
10

Hagio, Tomoe. "Parametric Mapping and Image Analysis in Breast MRI." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621809.

Abstract:
Breast cancer is the most common and the second most fatal cancer among women in the U.S. Current knowledge indicates that there is a relationship between high breast density (measured by mammography) and increased breast cancer risk. However, the biology behind this relationship is not well understood. This may be due to the limited information provided by mammography, which only yields information on the relative amount of fibroglandular to adipose tissue in the breast. In our studies, breast density is assessed using quantitative MRI, in which MRI-based tissue-dependent parameters are derived voxel-wise by mathematically modeling the acquired MRI signals. Specifically, we use data from a radial gradient- and spin-echo imaging technique, previously developed in our group, to assess fat fraction and the T₂ of the water component in relation to breast density. In addition, we use diffusion-weighted imaging to obtain another parameter, the apparent diffusion coefficient (ADC) of the water component in the breast. Each parametric map provides a different type of information: the fat fraction gives the amount of fat present in the voxel, the T₂ of water spin relaxation is sensitive to the water component in the tissue, and the ADC of water yields yet another type of information, such as tissue cellularity. The challenge in deriving these parameters from breast MRI data is the presence of abundant fat in the breast, which can cause artifacts in the images and can also affect the parameter estimation. We approached this problem by modifying the imaging sequence (as in the case of diffusion-weighted imaging) and by exploring new signal models that describe the MRI signal while accounting for the presence of fat. In this work, we present the improvements made in the imaging sequence and in the parametric mapping algorithms using simulation and phantom experiments. We also present preliminary results in vivo in the context of breast density-related tissue characterization.
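Once water and fat signal components have been separated, the per-voxel signal fat fraction is a simple ratio of the fat signal to the total signal. A minimal NumPy sketch of that final step (hypothetical names, not the dissertation's code, and ignoring the relaxation corrections a full quantitative map would need):

```python
import numpy as np

def fat_fraction(water, fat):
    """Voxel-wise signal fat fraction FF = F / (W + F) from separated
    water/fat magnitude images; voxels with no signal are left at 0."""
    water = np.asarray(water, dtype=float)
    fat = np.asarray(fat, dtype=float)
    total = water + fat
    ff = np.zeros_like(total)
    # Divide only where there is signal, avoiding 0/0 warnings.
    np.divide(fat, total, out=ff, where=total > 0)
    return ff
```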

Books on the topic "MRI IMAGE"

1

Brant, William E. Body MRI cases. New York: Oxford University Press, 2013.

2

Song, In-chʻan. MRI ŭi hwajil pʻyŏngka kisul kaebal =: Technology development of MRI image quality evaluation. [Seoul]: Sikpʻum Ŭiyakpʻum Anjŏnchʻŏng, 2007.

3

Shah, Lubdha M., and Jared A. Nielsen, eds. Specialty imaging: Functional MRI. Salt Lake City, Utah: Amirsys, 2014.

4

Brain imaging with MRI and CT: An image pattern approach. Cambridge: Cambridge University Press, 2012.

5

Contrast-enhanced MRI of the breast. Basel: Karger, 1990.

6

Beck, R., ed. Contrast-enhanced MRI of the breast. 2nd ed. Berlin: Springer, 1996.

7

MRI of the lumbar spine: A practical approach to image interpretation. Thorofare, N.J: Slack, 1987.

8

Bancroft, Laura W., and Mellena D. Bridges, eds. MRI normal variants and pitfalls. Philadelphia, PA: Lippincott Williams and Wilkins, 2009.

9

Poldrack, Russell A. Handbook of functional MRI data analysis. Cambridge: Cambridge University Press, 2011.

10

Ciulla, Carlo. Improved signal and image interpolation in biomedical applications: The case of magnetic resonance imaging (MRI). Hershey PA: Medical Information Science Reference, 2009.


Book chapters on the topic "MRI IMAGE"

1

Ashburner, J., and K. J. Friston. "Image Registration." In Functional MRI, 285–99. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-58716-0_26.

2

Zeng, Gengsheng Lawrence. "MRI Reconstruction." In Medical Image Reconstruction, 175–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-05368-9_7.

3

Rajan, Sunder S. "Image Contrast and Pulse Sequences." In MRI, 40–65. New York, NY: Springer New York, 1998. http://dx.doi.org/10.1007/978-1-4612-1632-2_4.

4

English, Philip T., and Christine Moore. "Image Production." In MRI for Radiographers, 37–43. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3403-9_4.

5

English, Philip T., and Christine Moore. "Image Quality." In MRI for Radiographers, 45–50. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3403-9_5.

6

English, Philip T., and Christine Moore. "Image Artifacts." In MRI for Radiographers, 51–70. London: Springer London, 1995. http://dx.doi.org/10.1007/978-1-4471-3403-9_6.

7

Murray, Rachel, and Natasha Werpy. "Image interpretation and artefacts." In Equine MRI, 101–45. Chichester, UK: John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118786574.ch4.

8

Qu, Liangqiong, Yongqin Zhang, Zhiming Cheng, Shuang Zeng, Xiaodan Zhang, and Yuyin Zhou. "Multimodality MRI Synthesis." In Medical Image Synthesis, 163–87. Boca Raton: CRC Press, 2023. http://dx.doi.org/10.1201/9781003243458-14.

9

Weishaupt, Dominik, Victor D. Köchli, and Borut Marincek. "Image Contrast." In How does MRI work?, 11–20. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-662-07805-1_3.

10

Ahrar, Kamran, and R. Jason Stafford. "MRI-Guided Biopsy." In Percutaneous Image-Guided Biopsy, 49–63. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-8217-8_5.


Conference papers on the topic "MRI IMAGE"

1

Singh, Upasana, and Manoj Kumar Choubey. "A Review: Image Enhancement on MRI Images." In 2021 5th International Conference on Information Systems and Computer Networks (ISCON). IEEE, 2021. http://dx.doi.org/10.1109/iscon52037.2021.9702464.

2

Otazo, Ricardo, Ramiro Jordan, Fa-Hsuan Lin, and Stefan Posse. "Superresolution Parallel MRI." In 2007 IEEE International Conference on Image Processing. IEEE, 2007. http://dx.doi.org/10.1109/icip.2007.4379269.

3

Faghihpirayesh, Razieh, Davood Karimi, Deniz Erdogmus, and Ali Gholipour. "Automatic brain pose estimation in fetal MRI." In Image Processing, edited by Ivana Išgum and Olivier Colliot. SPIE, 2023. http://dx.doi.org/10.1117/12.2647613.

4

Carlos, Justin Bernard A., Francisco Emmanuel T. Munsayac, Nilo T. Bugtai, and Renann G. Baldovino. "MRI Knee Image Enhancement using Image Processing." In 2021 IEEE 13th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management (HNICEM). IEEE, 2021. http://dx.doi.org/10.1109/hnicem54116.2021.9732053.

5

Wang, Jiacheng, Hao Li, Han Liu, Dewei Hu, Daiwei Lu, Keejin Yoon, Kelsey Barter, Francesca Bagnato, and Ipek Oguz. "SSL2: Self-Supervised Learning meets semi-supervised learning: multiple sclerosis segmentation in 7T-MRI from large-scale 3T-MRI." In Image Processing, edited by Ivana Išgum and Olivier Colliot. SPIE, 2023. http://dx.doi.org/10.1117/12.2654522.

6

Manduca, Armando, David S. Lake, Natalia Khaylova, and Richard L. Ehman. "Image-space automatic motion correction for MRI images." In Medical Imaging 2004, edited by J. Michael Fitzpatrick and Milan Sonka. SPIE, 2004. http://dx.doi.org/10.1117/12.532952.

7

Devadas, Prathima, G. Kalaiarasi, and M. Selvi. "Intensity based Image Registration on Brain MRI Images." In 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA). IEEE, 2020. http://dx.doi.org/10.1109/icirca48905.2020.9183191.

8

DSouza, Adora M., Lele Chen, Yue Wu, Anas Z. Abidin, Chenliang Xu, and Axel Wismüller. "MRI tumor segmentation with densely connected 3D CNN." In Image Processing, edited by Elsa D. Angelini and Bennett A. Landman. SPIE, 2018. http://dx.doi.org/10.1117/12.2293394.

9

Plassard, Andrew J., L. Taylor Davis, Allen T. Newton, Susan M. Resnick, Bennett A. Landman, and Camilo Bermudez. "Learning implicit brain MRI manifolds with deep learning." In Image Processing, edited by Elsa D. Angelini and Bennett A. Landman. SPIE, 2018. http://dx.doi.org/10.1117/12.2293515.

10

Noothout, Julia, Elbrich Postma, Sanne Boesveldt, Bob D. de Vos, Paul Smeets, and Ivana Išgum. "Automatic segmentation of the olfactory bulbs in MRI." In Image Processing, edited by Bennett A. Landman and Ivana Išgum. SPIE, 2021. http://dx.doi.org/10.1117/12.2580354.


Organizational reports on the topic "MRI IMAGE"

1

Yang, Xiaofeng, Tian Liu, Jani Ashesh, Hui Mao, and Walter Curran. Fusion of Ultrasound Tissue-Typing Images with Multiparametric MRI for Image-guided Prostate Cancer Radiation Therapy. Fort Belvoir, VA: Defense Technical Information Center, October 2014. http://dx.doi.org/10.21236/ada622473.

2

Baumgaertel, Jessica A., Paul A. Bradley, and Ian L. Tregillis. 65036 MMI data matched qualitatively by RAGE (with mix) synthetic MMI images. Office of Scientific and Technical Information (OSTI), February 2014. http://dx.doi.org/10.2172/1122056.

3

Grossberg, Stephen. A MURI Center for Intelligent Biomimetic Image Processing and Classification. Fort Belvoir, VA: Defense Technical Information Center, November 2007. http://dx.doi.org/10.21236/ada474727.

4

Garrett, A. J. Ground truth measurements plan for the Multispectral Thermal Imager (MTI) satellite. Office of Scientific and Technical Information (OSTI), January 2000. http://dx.doi.org/10.2172/752199.

5

Kurdziel, Karen, Michael Hagan, Jeffrey Williamson, Donna McClish, Panos Fatouros, Jerry Hirsch, Rhonda Hoyle, Kristin Schmidt, Dorin Tudor, and Jie Liu. Multimodality Image-Guided HDR/IMRT in Prostate Cancer: Combined Molecular Targeting Using Nanoparticle MR, 3D MRSI, and 11C Acetate PET Imaging. Fort Belvoir, VA: Defense Technical Information Center, August 2005. http://dx.doi.org/10.21236/ada446542.

6

Martin, Kathi, Nick Jushchyshyn, and Claire King. James Galanos Evening Gown c. 1957. Drexel Digital Museum, 2018. http://dx.doi.org/10.17918/jkyh-1b56.

Abstract:
The URL links to a website page in the Drexel Digital Museum (DDM) fashion image archive containing a 3D interactive panorama of an evening gown by American fashion designer James Galanos with related text. This evening gown is from Galanos' Fall 1957 collection. It is embellished with polychrome glass beads in a red and green tartan plaid pattern on a base of silk. It was a gift of Mrs. John Thouron and is in The James G. Galanos Archive at Drexel University. The panorama is an HTML5-formatted version of an ultra-high resolution ObjectVR created from stitched tiles captured with GigaPan technology. It is representative of the ongoing research of the DDM, an international, interdisciplinary group of researchers focused on the production, conservation and dissemination of new media for the exhibition of historic fashion.
7

Martin, Kathi, Nick Jushchyshyn, and Daniel Caulfield-Sriklad. 3D Interactive Panorama Jessie Franklin Turner Evening Gown c. 1932. Drexel Digital Museum, 2015. http://dx.doi.org/10.17918/9zd6-2x15.

Abstract:
The 3D Interactive Panorama provides multiple views and zoom in details of a bias cut evening gown by Jessie Franklin Turner, an American woman designer in the 1930s. The gown is constructed from pink 100% silk charmeuse with piping along the bodice edges and design lines. It has soft tucks at the neckline and small of back, a unique strap detail in the back and a self belt. The Interactive is part of the Drexel Digital Museum, an online archive of fashion images. The original gown is part of the Fox Historic Costume, Drexel University, a Gift of Mrs. Lewis H. Pearson 64-59-7.
8

Marcot, Bruce, M. Jorgenson, Thomas Douglas, and Patricia Nelsen. Photographic aerial transects of Fort Wainwright, Alaska. Engineer Research and Development Center (U.S.), August 2022. http://dx.doi.org/10.21079/11681/45283.

Abstract:
This report presents the results of low-altitude photographic transects conducted over the training areas of US Army Garrison Fort Wainwright, in the boreal biome of central Alaska, to document baseline land-cover conditions. Flights were conducted via a Cessna™ 180 on two flight paths over portions of the Tanana Flats, Yukon, and Donnelly Training Areas and covered 486 mi (782 km) while documenting GPS waypoints. Nadir photographs were made with two GoPro™ cameras operating at 5 sec time-lapse intervals and with a handheld digital camera for oblique imagery. This yielded 6,063 GoPro photos and 706 oblique photos. Each image was intersected with a land-cover-classification map, collectively representing 38 of the 44 cover categories.
9

Lamontagne, M., K. B. S. Burke, and L. Olson. Felt reports and impact of the November 25, 1988, magnitude 5.9 Saguenay, Quebec, earthquake sequence. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328194.

Abstract:
The November 25, 1988, moment magnitude 5.9 (Mw) Saguenay earthquake is one of the largest eastern Canadian earthquakes of the 20th century. It was preceded by a magnitude (MN) 4.7 foreshock and followed by very few aftershocks considering the magnitude of the main shock. The largest aftershock was a magnitude (MN) 4.3 event. This Open File (OF) Report presents a variety of documents (including original and interpreted felt information, images, newspaper clippings, various engineering reports on the damage, and mass movements). This OF updates the report of Cajka and Drysdale (1994) with additional material, including descriptions of the foreshock and largest aftershock. Most of the felt report information comes from replies to a questionnaire sent to postmasters in more than 2000 localities in Canada and in the United States. Images of the original felt reports from Canada are included. The OF also includes information gathered in damage assessments and newspaper accounts. For each locality, the interpreted information is presented in a digital table. The fields include the name, latitude and longitude of the municipality and the interpreted intensity on the Modified Mercalli Intensity (MMI) scale (most of which are the interpretations of Cajka and Drysdale, 1996). When available or significant, excerpts of the felt reports are added. This OF Report also includes images from contemporary newspapers that describe the impact. In addition, information contained in post-earthquake reports is discussed together with pictures of damage and mass movements. Finally, a GoogleEarth kmz file is added for viewing the felt information reports within a spatial tool.
10

Martin, Kathi, Nick Jushchyshyn, and Claire King. Christian Lacroix Evening gown c.1990. Drexel Digital Museum, 2017. http://dx.doi.org/10.17918/wq7d-mc48.

Abstract:
The URL links to a website page in the Drexel Digital Museum (DDM) fashion image archive containing a 3D interactive panorama of an evening gown by French fashion designer Christian Lacroix with related text. This evening gown by Christian Lacroix is from his Fall 1990 collection. It is constructed from silk plain weave, printed with an abstract motif in the bright, deep colors of the local costumes of Lacroix's native Arles, France, and embellished with diamanté and insets of handkerchief-edged silk chiffon. Ruffles of pleated silk organza in a neutral bird feather print, also finished with a handkerchief edge, accentuate the asymmetrical draping of the gown. Ruching, controlled by internal drawstrings and ties, creates volume and a slight pouf, a nod to 'le pouf' silhouette Lacroix popularized in his collection for Patou in 1986. Decorative boning on the front of the bodice reflects Lacroix's early education as a costume historian and his sartorial reinterpretation of historic corsets. It is from the private collection of Mari Shaw. The panorama is an HTML5-formatted version of an ultra-high resolution ObjectVR created from stitched tiles captured with GigaPan technology. It is representative of the ongoing research of the DDM, an international, interdisciplinary group of researchers focused on the production, conservation and dissemination of new media for the exhibition of historic fashion.