Ready-made bibliography on the topic "Multi-modal imaging"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Multi-modal imaging".

An "Add to bibliography" button sits next to every work in the bibliography. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.

Journal articles on the topic "Multi-modal imaging"

1

Mohankumar, Arthi, and Roshni Mohan. "Multi-Modal imaging of torpedo maculopathy". TNOA Journal of Ophthalmic Science and Research 61, no. 1 (2023): 143. http://dx.doi.org/10.4103/tjosr.tjosr_9_22.

2

Alilet, Mona, Julien Behr, Jean-Philippe Nueffer, Benoit Barbier-Brion and Sébastien Aubry. "Multi-modal imaging of the subscapularis muscle". Insights into Imaging 7, no. 6 (October 17, 2016): 779–91. http://dx.doi.org/10.1007/s13244-016-0526-1.

3

Watkin, Kenneth L., and Michael A. McDonald. "Multi-Modal Contrast Agents". Academic Radiology 9, no. 2 (February 2002): S285–S289. http://dx.doi.org/10.1016/s1076-6332(03)80205-2.

4

Merkle, Arno, Leah L. Lavery, Jeff Gelb and Nicholas Piché. "Fusing Multi-scale and Multi-modal 3D Imaging and Characterization". Microscopy and Microanalysis 20, S3 (August 2014): 820–21. http://dx.doi.org/10.1017/s1431927614005820.

5

Blinowska, Katarzyna, Gernot Müller-Putz, Vera Kaiser, Laura Astolfi, Katrien Vanderperren, Sabine Van Huffel and Louis Lemieux. "Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration". Computational Intelligence and Neuroscience 2009 (2009): 1–10. http://dx.doi.org/10.1155/2009/813607.

Abstract:
Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship.
6

Dong, Di, Jie Tian, Yakang Dai, Guorui Yan, Fei Yang and Ping Wu. "Unified reconstruction framework for multi-modal medical imaging". Journal of X-Ray Science and Technology 19, no. 1 (2011): 111–26. http://dx.doi.org/10.3233/xst-2010-0281.

7

Bansal, Reema, Nitin Kumar and Monika Balyan. "Multi-modal imaging in benign familial fleck retina". Indian Journal of Ophthalmology 69, no. 6 (2021): 1641. http://dx.doi.org/10.4103/ijo.ijo_633_21.

8

Merk, Vivian, Johan Decelle, Si Chen and Derk Joester. "Multi-modal correlative chemical imaging of aquatic microorganisms". Microscopy and Microanalysis 27, S1 (July 30, 2021): 298–300. http://dx.doi.org/10.1017/s1431927621001641.

9

Cole, Laura M., Joshua Handley, Emmanuelle Claude, Catherine J. Duckett, Hardeep S. Mudhar, Karen Sisley and Malcolm R. Clench. "Multi-Modal Mass Spectrometric Imaging of Uveal Melanoma". Metabolites 11, no. 8 (August 23, 2021): 560. http://dx.doi.org/10.3390/metabo11080560.

Abstract:
Matrix assisted laser desorption ionisation mass spectrometry imaging (MALDI-MSI) was used to obtain images of lipid and metabolite distribution in formalin-fixed, paraffin-embedded (FFPE) whole eye sections containing primary uveal melanomas (UM). Using this technique, it was possible to obtain images of lysophosphatidylcholine (LPC) type lipid distribution that highlighted the tumour regions. Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) images acquired from UM sections showed increases in copper within the tumour periphery and intratumoural zinc in tissue from patients with poor prognosis. These preliminary data indicate that multi-modal MSI has the potential to provide insights into the role of trace metals and cancer metastasis.
10

Beckus, Andre, Alexandru Tamasan and George K. Atia. "Multi-Modal Non-Line-of-Sight Passive Imaging". IEEE Transactions on Image Processing 28, no. 7 (July 2019): 3372–82. http://dx.doi.org/10.1109/tip.2019.2896517.


Doctoral dissertations on the topic "Multi-modal imaging"

1

Kachatkou, Anton S. "Instrumentation for multi-dimensional multi-modal imaging in microscopy". Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.509391.

2

Blattmann, Marc [author], Hans [academic supervisor] Zappe, Çağlar [academic supervisor] Ataman and Andreas [academic supervisor] Seifert. "Concept for a multi-modal endoscopic imaging system". Freiburg: Universität, 2017. http://d-nb.info/1148929363/34.

3

Hoffman, David. "Hybrid PET/MRI Nanoparticle Development and Multi-Modal Imaging". VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3253.

Abstract:
The development of hybrid PET/MRI imaging systems needs to be paralleled by the development of hybrid intrinsic PET/MRI probes. The aim of this work was to develop and validate a novel radio-superparamagnetic nanoparticle (r-SPNP) for hybrid PET/MRI imaging. This was achieved with the synthesis of superparamagnetic iron oxide nanoparticles (SPIONs) that intrinsically incorporated 59Fe and manganese iron oxide nanoparticles (MIONs) that intrinsically incorporated 52Mn. Both [59Fe]-SPIONs and [52Mn]-MIONs were produced through thermal decomposition synthesis. The physicochemical characteristics of the r-SPNPs were assessed with TEM, DLS, and zeta-potential measurements, as well as in imaging phantom studies. The [59Fe]-SPIONs were evaluated in vivo with biodistribution and MR imaging studies. The biodistribution studies of [59Fe]-SPIONs showed uptake in the liver. This corresponded with major MR signal contrast measured in the liver. 52Mn was produced on natural chromium through the 52Cr(p,n)52Mn reaction. The manganese radionuclides were separated from the target material through a liquid-liquid extraction. The αVβ3 integrin binding of [52Mn]-MION-cRGDs was evaluated with αVβ3 integrin solid phase assays, and the expression of αVβ3 integrin in U87MG xenograft tumors was characterized with fluorescence flow cytometry. [52Mn]-MION-cRGDs were used for in vivo PET and MR imaging of U87MG xenograft tumor-bearing mice. PET data showed increased [52Mn]-MION-cRGD uptake compared with untargeted [52Mn]-MIONs. ROI analysis of PET and MRI data showed that MR contrast corresponded with PET signal. Future work will utilize [52Mn]-MION-cRGDs in other tumor models and with hybrid PET/MRI imaging systems.
4

Halai, Ajay Devshi. "Multi-modal imaging of brain networks subserving speech comprehension". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/multimodal-imaging-of-brain-networks-subserving-speech-comprehension(8f1b55b1-6d06-452e-8efc-8f1bb89fd481).html.

Abstract:
Neurocognitive models of speech comprehension generally outline either the spatial or temporal organisation of speech processing and rarely consider combining the two to provide a more complete model. Simultaneous EEG-fMRI recordings have the potential to link these domains, due to the complementary high spatial (fMRI) and temporal (EEG) sensitivities. Although the neural basis of speech comprehension has been investigated intensively during the past few decades there are still some important outstanding questions. For instance, there is considerable evidence from neuropsychology and other convergent sources that the anterior temporal lobe (ATL) should play an important role in accessing meaning. However, fMRI studies do not usually highlight this area, possibly because magnetic susceptibility artefacts cause severe signal loss within the ventral ATL (vATL). In this thesis EEG and fMRI were used to refine the spatial and temporal components of neurocognitive models of speech comprehension, and to attempt to provide a combined spatial and temporal model. Chapter 2 describes an EEG study that was conducted while participants listened to intelligible and unintelligible single words. A two-pass processing framework best explained the results, which showed comprehension to proceed in a somewhat hierarchical manner; however, top-down processes were involved during the early stages. These early processes were found to originate from the mid-superior temporal gyrus (STG) and inferior frontal gyrus (IFG), while the late processes were found within ATL and IFG regions. Chapter 3 compared two novel fMRI methods known to overcome signal loss within vATL: dual-echo and spin-echo fMRI. The results showed dual-echo fMRI outperformed spin-echo fMRI in vATL regions, as well as extra temporal regions. Chapter 4 harnessed the dual-echo method to investigate a speech comprehension task (sentences). Intelligibility related activation was found in bilateral STG, left vATL and left IFG. 
This is consistent with converging evidence implicating the vATL in semantic processing. Chapter 5 describes how simultaneous EEG-fMRI was used to investigate word comprehension. The results showed activity in superior temporal sulcus (STS), vATL and IFG. The temporal profile showed that these nodes were most active around 400 ms (specifically the anterior STS and vATL), while the vATL was consistently active across the whole epoch. Overall, these studies suggest that models of speech comprehension need to be updated to include the vATL region, as a way of accessing semantic meaning. Furthermore, the temporal evolution is best explained within a two-pass framework. The early top-down influence of vATL regions attempt to map speech-like sounds onto semantic representations. Successful mapping, and therefore comprehension, is achieved around 400 ms in the vATL and anterior STS.
5

Wang, Xue. "An Integrated Multi-modal Registration Technique for Medical Imaging". FIU Digital Commons, 2017. https://digitalcommons.fiu.edu/etd/3512.

Abstract:
Registration of medical imaging is essential for aligning in time and space different modalities and hence consolidating their strengths for enhanced diagnosis and for the effective planning of treatment or therapeutic interventions. The primary objective of this study is to develop an integrated registration method that is effective for registering both brain and whole-body images. We seek in the proposed method to combine in one setting the excellent registration results that FMRIB Software Library (FSL) produces with brain images and the excellent results of Statistical Parametric Mapping (SPM) when registering whole-body images. To assess attainment of these objectives, the following registration tasks were performed: (1) FDG_CT with FLT_CT images, (2) pre-operation MRI with intra-operation CT images, (3) brain only MRI with corresponding PET images, and (4) MRI T1 with T2, T1 with FLAIR, and T1 with GE images. The results of the proposed method were then compared to those obtained using existing state-of-the-art registration methods such as SPM and FSL. Initially, three slices were chosen from the reference image, and the normalized mutual information (NMI) was calculated between each of them and every slice in the moving image. The three pairs with the highest NMI values were chosen. The wavelet decomposition method is applied to minimize the computational requirements. An initial search applying a genetic algorithm is conducted on the three pairs to obtain three sets of registration parameters. The Powell method is applied to the reference and moving images to validate the three sets of registration parameters. A linear interpolation method is then used to obtain the registration parameters for all remaining slices. Finally, the registered image was displayed alongside the reference image to compare the performance of the three methods, namely the proposed method, SPM, and FSL, by gauging the average NMI values obtained in the registration results.
Visual observations are also provided in support of these NMI values. For comparative purposes, tests using different multi-modal imaging platforms are performed.
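The slice-matching step in the abstract above ranks slice pairs by normalized mutual information (NMI). As a rough illustration only (the thesis code is not available; the function name, the bin count, and the specific definition NMI = (H(A) + H(B)) / H(A, B) are assumptions of this sketch), the measure can be estimated from a joint grey-level histogram:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint
    grey-level histogram of two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability table
    px = pxy.sum(axis=1)               # marginal distribution of image a
    py = pxy.sum(axis=0)               # marginal distribution of image b

    def entropy(p):
        p = p[p > 0]                   # skip empty bins (0 * log 0 := 0)
        return -np.sum(p * np.log2(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

With this definition NMI is maximal (2.0) for identical images and approaches 1.0 for statistically independent ones, which is why selecting the slice pairs with the highest NMI values picks out the best-matched slices.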
6

Li, Lin. "Multi-scale spectral embedding representation registration (MSERg) for multi-modal imaging registration". Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1467902012.

7

Mera-Pirttijarvi, Ross Jalmari. "Targeted multi-modal imaging : using the Ugi reaction with metals". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/targeted-multimodal-imaging-using-the-ugi-reaction-with-metals(00ca616e-b8bd-466a-86dc-d1799851fbd1).html.

Abstract:
The current 'gold standard method' of detecting cancer relies on microscopic examination by specialised pathologists. However, there are risks associated with surgery and biopsies and so the ability to diagnose cancer and other diseases in a non-invasive manner is highly attractive. There are many imaging techniques suitable for this, each with their own advantages and disadvantages, which can be improved by the use of contrast agents. The incorporation of targeting vectors allows for the specific imaging of desired tissues. Further to this, the incorporation of more than one contrast agent into one imaging agent allows for multi-modal imaging of cancerous tissue and other diseases. This allows for the advantages of different techniques to be used simultaneously and is an emerging field. The methods for the synthesis of these drugs can be synthetically demanding and low yielding due to linear synthetic strategies. The use of multi-component reactions would be a major benefit and the Ugi reaction is particularly attractive due to the incorporation of four components and the biocompatible bis-amide motif of Ugi products. This work serves as an extension to previous work based on Ugi reactions of metal complexes, which showed that amine and carboxylic acid appended lanthanide and carboxylic acid appended d-metal complexes can be used as stable building blocks in the formation of mono-metallic complexes. This work presents the synthesis of aldehyde appended lanthanide complexes and their use in Wittig and Ugi chemistry in the synthesis of mono-metallic complexes. The previously synthesised amine appended lanthanide complexes 1, 3, 4 were also synthesised to be used as a feedstock in subsequent Ugi reactions. A number of carboxylic acid appended d-metal complexes and cyanine dyes were synthesised according to literature procedures. Both the bis-acid appended d-metal complexes and cyanine dyes were used unsuccessfully in the Ugi reaction.
However, the mono-acid d-metal complexes were used successfully in the Ugi reaction in keeping with previous reports. These were used as the third feedstock for the synthesis of trimetallic complexes along with the aldehyde and amine appended lanthanide complexes via the Ugi reaction. In addition, a number of Ugi reactions were performed on organic compounds. The use of p-toluic acid gave five Ugi compounds, which were characterised and gave the expected results. However, the use of biotin as the carboxylic acid component gave four compounds that were complex to characterise and suggested that the incorporated biotin may not serve as a viable targeting vector. One of the p-toluic acid Ugi products was reacted further and a biotin moiety was incorporated with a (CH2)6 spacer. Spectroscopic evidence suggested that the biotin would still act as a viable targeting vector. Overall, this work serves to set the scene for the synthesis of targeted tri-metallic multi-modal imaging agents using stable metal complexes as building blocks in the Ugi reaction.
8

Chan, Ho-Ming. "A supervised learning framework for multi-modal rigid registration with applications to angiographic images /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20CHAN.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 60-62). Also available in electronic version. Access restricted to campus users.
9

Yao, Nailin, and 姚乃琳. "Visual hallucinations in Parkinson's disease : a multi-modal MRI investigation". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/196477.

Abstract:
Background Visual hallucinations (VH) are an important non-motor complication of Parkinson’s disease (PD) which carries a negative prognosis, but their biological basis is unclear. Multi-modal magnetic resonance imaging (MRI) can be used to evaluate structural and functional brain mechanisms underpinning VH in PD. Methods To assess cerebral microstructure and resting functional activities in patients with idiopathic PD and VH, I compared PD patients with VH (PDVH) and PD patients without VH (PDnonVH), while healthy controls (HC) were also recruited for comparison. Diffusion tensor imaging was used to calculate mean diffusivity (MD) and fractional anisotropy (FA). Structural MRI was used to calculate voxel-based intensity of grey matter (GM) and white matter (WM) across the entire brain and compared among groups. Furthermore, functional magnetic resonance imaging of the brain, acquired during rest, was processed to calculate the amplitude of low-frequency fluctuations (ALFF) and functional connectivity (FC) to inform a model of VH. In addition, hippocampal volume, shape, mean diffusivity and FC across the whole brain were further examined. Hippocampal-dependent visual spatial memory performance was compared between groups, and predicted correlations with hippocampal microstructural indices and VH severity were tested. Results In the first study, PDVH had lower FA than both PDnonVH and HC in the right occipital lobe and left parietal lobe, but increased FA in the right infero-medial fronto-occipital fasciculus and posterior inferior longitudinal fasciculus. Moreover, PDVH patients showed less GM volume compared to PDnonVH in the right lingual gyrus of the occipital lobe. In the second study, PDVH patients compared to non-hallucinators showed lower ALFF in occipital lobes, with greater ALFF in temporo-parietal region, limbic lobe and right cerebellum. The PDVH group also showed alteration in functional connectivity between occipital region and corticostriatal regions.
Finally, in the third study, although there were no gross hippocampal volume and shape differences across groups, individuals with PDVH had higher diffusivity in the hippocampus than PDnonVH and HC. Both PD groups had significantly poorer visuospatial memory compared to HC. Poorer visuospatial memory was correlated with higher hippocampal diffusivity in HC and more severe VH in the PDVH group. FC between hippocampus and primary visual cortex, dorsal/ventral visual pathways was also lower in PDVH than other groups, whereas FC between hippocampus and default mode network regions was greater in the PDVH group compared to others. Conclusion Compared to PDnonVH groups, the PDVH group had multiple structural deficits in primary and associative visual cortices. In terms of hemodynamic activity, the PDVH group had lower ALFF in the occipital lobe, but greater ALFF in regions that comprise the dorsal visual pathway. Moreover, this lower ALFF in the primary visual cortex was accompanied by lower functional connectivity across components of the ventral/dorsal visual pathway in the PDVH group compared to the PDnonVH group. Moreover, evidence supporting a specific role for the hippocampus in PDVH was obtained. In the absence of gross macrostructural anomalies, hippocampal microstructure and functional connectivity was compromised in PDVH. I observed an association between visuospatial memory and hippocampal integrity and suggest that hippocampal pathology and consequent disruption in visuospatial memory makes a key contribution to VH in PD. Thus, in the PDVH group, "bottom-up" primary visual cortex and "top-down" visual association pathways and attentional networks appear to be disrupted.
10

Petersen, Steffen E. "Insights into cardiac remodelling by multi-modal magnetic resonance imaging and spectroscopy". Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.419318.


Books on the topic "Multi-modal imaging"

1

Dynamic brain imaging: Multi-modal methods and in vivo applications. Totowa, NJ: Humana, 2009.

2

Hyder, Fahmeed. Dynamic Brain Imaging: Multi-Modal Methods and in Vivo Applications. Humana Press, 2014.

3

Wu, Min, Lu Yang, Dong-Hyun Kim, Changqiang Wu and Peng Mi, eds. Bottom-Up Approach: a Route for Effective Multi-modal Imaging of Tumors. Frontiers Media SA, 2022. http://dx.doi.org/10.3389/978-2-88974-453-4.

4

Planning Coverage of Points of Interest via Multiple Imaging Surveillance Assets: A Multi-Modal Approach. Storming Media, 2003.


Book chapters on the topic "Multi-modal imaging"

1

Langmann, Benjamin. "Multi-Modal Background Subtraction". In Wide Area 2D/3D Imaging, 63–73. Wiesbaden: Springer Fachmedien Wiesbaden, 2014. http://dx.doi.org/10.1007/978-3-658-06457-0_5.

2

Sossi, Vesna. "Multi-modal Imaging and Image Fusion". In Small Animal Imaging, 293–314. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-12945-2_22.

3

Zhou, Ziqi, Xinna Guo, Wanqi Yang, Yinghuan Shi, Luping Zhou, Lei Wang and Ming Yang. "Cross-Modal Attention-Guided Convolutional Network for Multi-modal Cardiac Segmentation". In Machine Learning in Medical Imaging, 601–10. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32692-0_69.

4

Zaman, Akib, Lu Zhang, Jingwen Yan and Dajiang Zhu. "Multi-modal Image Prediction via Spatial Hybrid U-Net". In Multiscale Multimodal Medical Imaging, 1–9. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37969-8_1.

5

Krmicek, Vojtech, and Michèle Sebag. "Functional Brain Imaging with Multi-objective Multi-modal Evolutionary Optimization". In Parallel Problem Solving from Nature - PPSN IX, 382–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11844297_39.

6

Pöschl, Christiane, and Otmar Scherzer. "Distance Measures and Applications to Multi-Modal Variational Imaging". In Handbook of Mathematical Methods in Imaging, 111–38. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-0-387-92920-0_4.

7

Chartsias, Agisilaos, Thomas Joyce, Rohan Dharmakumar and Sotirios A. Tsaftaris. "Adversarial Image Synthesis for Unpaired Multi-modal Cardiac Data". In Simulation and Synthesis in Medical Imaging, 3–13. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68127-6_1.

8

Tong, Tong, Katherine Gray, Qinquan Gao, Liang Chen and Daniel Rueckert. "Nonlinear Graph Fusion for Multi-modal Classification of Alzheimer’s Disease". In Machine Learning in Medical Imaging, 77–84. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24888-2_10.

9

Ge, Hongkun, Guorong Wu, Li Wang, Yaozong Gao and Dinggang Shen. "Hierarchical Multi-modal Image Registration by Learning Common Feature Representations". In Machine Learning in Medical Imaging, 203–11. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-24888-2_25.

10

Zhang, Sen, Changzheng Zhang, Lanjun Wang, Cixing Li, Dandan Tu, Rui Luo, Guojun Qi and Jiebo Luo. "MSAFusionNet: Multiple Subspace Attention Based Deep Multi-modal Fusion Network". In Machine Learning in Medical Imaging, 54–62. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-32692-0_7.


Conference papers on the topic "Multi-modal imaging"

1

Prümmer, M., J. Hornegger, M. Pfister and A. Dörfler. "Multi-modal 2D-3D non-rigid registration". In Medical Imaging, edited by Joseph M. Reinhardt and Josien P. W. Pluim. SPIE, 2006. http://dx.doi.org/10.1117/12.652321.

2

Duric, Neb, Cuiping Li, Peter Littrup, Carri Glide-Hurst, Lianjie Huang, Jessica Lupinacci, Steven Schmidt, Olsi Rama, Lisa Bey-Knight and Yang Xu. "Multi-modal breast imaging with ultrasound tomography". In Medical Imaging, edited by Stephen A. McAleavey and Jan D'hooge. SPIE, 2008. http://dx.doi.org/10.1117/12.772203.

3

Méndez, Carlos Andrés, Paul Summers and Gloria Menegaz. "A multi-view approach to multi-modal MRI cluster ensembles". In SPIE Medical Imaging, edited by Sebastien Ourselin and Martin A. Styner. SPIE, 2014. http://dx.doi.org/10.1117/12.2042327.

4

Vetter, Christoph, Christoph Guetter, Chenyang Xu and Rüdiger Westermann. "Non-rigid multi-modal registration on the GPU". In Medical Imaging, edited by Josien P. W. Pluim and Joseph M. Reinhardt. SPIE, 2007. http://dx.doi.org/10.1117/12.709629.

5

Caban, Jesus J., David Liao, Jianhua Yao, Daniel J. Mollura, Bernadette Gochuico and Terry Yoo. "Enhancing image classification models with multi-modal biomarkers". In SPIE Medical Imaging, edited by Ronald M. Summers and Bram van Ginneken. SPIE, 2011. http://dx.doi.org/10.1117/12.878084.

6

Alic, Lejla, Joost C. Haeck, Stefan Klein, Karin Bol, Sandra T. van Tiel, Piotr A. Wielopolski, Magda Bijster, et al. "Multi-modal image registration: matching MRI with histology". In SPIE Medical Imaging, edited by Robert C. Molthen and John B. Weaver. SPIE, 2010. http://dx.doi.org/10.1117/12.844123.

7

Hyman, Alexandra, Lingling Zhao and Xavier Intes. "Multi-modal Imaging Cassette for Small Animal Molecular Imaging". In 2013 39th Annual Northeast Bioengineering Conference (NEBEC). IEEE, 2013. http://dx.doi.org/10.1109/nebec.2013.25.

8

Larson-Prior, Linda, John Zempel and Abraham Snyder. "Imaging Across Scale: the Promise of Multi-modal Imaging". In 2006 IEEE/NLM Life Science Systems and Applications Workshop. IEEE, 2006. http://dx.doi.org/10.1109/lssa.2006.250435.

9

Li, Xia, Thomas E. Yankeelov, Glenn Rosen, John C. Gore and Benoit M. Dawant. "Multi-modal inter-subject registration of mouse brain images". In Medical Imaging, edited by Joseph M. Reinhardt and Josien P. W. Pluim. SPIE, 2006. http://dx.doi.org/10.1117/12.652407.

10

Kim, Eun Young, and Hans Johnson. "Multi-structure segmentation of multi-modal brain images using artificial neural networks". In SPIE Medical Imaging, edited by Benoit M. Dawant and David R. Haynor. SPIE, 2010. http://dx.doi.org/10.1117/12.844613.


Organizational reports on the topic "Multi-modal imaging"

1

Wong, Stephen T., and Jared C. Gilliam. Mobile, Multi-modal, Label-Free Imaging Probe Analysis of Choroidal Oximetry and Retinal Hypoxia. Fort Belvoir, VA: Defense Technical Information Center, October 2015. http://dx.doi.org/10.21236/ada624123.
