Contents
A selection of scientific literature on the topic "Segmentation Multimodale"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of recent articles, books, theses, reports, and other scholarly sources on the topic "Segmentation Multimodale."
Next to every work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication in PDF format and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Segmentation Multimodale"
Nai, Ying-Hwey, Bernice W. Teo, Nadya L. Tan, Koby Yi Wei Chua, Chun Kit Wong, Sophie O’Doherty, Mary C. Stephenson et al. "Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images." Computational and Mathematical Methods in Medicine 2020 (October 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/8861035.
Sun, Qixuan, Nianhua Fang, Zhuo Liu, Liang Zhao, Youpeng Wen, and Hongxiang Lin. "HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation." Journal of Healthcare Engineering 2021 (October 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/7467261.
Pan, Mingyuan, Yonghong Shi, and Zhijian Song. "Segmentation of Gliomas Based on a Double-Pathway Residual Convolution Neural Network Using Multi-Modality Information." Journal of Medical Imaging and Health Informatics 10, no. 11 (November 1, 2020): 2784–94. http://dx.doi.org/10.1166/jmihi.2020.3216.
Desser, Dmitriy, Francisca Assunção, Xiaoguang Yan, Victor Alves, Henrique M. Fernandes, and Thomas Hummel. "Automatic Segmentation of the Olfactory Bulb." Brain Sciences 11, no. 9 (August 28, 2021): 1141. http://dx.doi.org/10.3390/brainsci11091141.
Jain, Raunak, Faith Lee, Nianhe Luo, Harpreet Hyare, and Anand S. Pandit. "A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation." NeuroSci 5, no. 3 (August 2, 2024): 265–75. http://dx.doi.org/10.3390/neurosci5030021.
Zhu, Yuchang, and Nanfeng Xiao. "Simple Scalable Multimodal Semantic Segmentation Model." Sensors 24, no. 2 (January 22, 2024): 699. http://dx.doi.org/10.3390/s24020699.
Farag, A. A., A. S. El-Baz, and G. Gimel'farb. "Precise segmentation of multimodal images." IEEE Transactions on Image Processing 15, no. 4 (April 2006): 952–68. http://dx.doi.org/10.1109/tip.2005.863949.
You, Siming. "Deep learning in autonomous driving: Advantages, limitations, and innovative solutions." Applied and Computational Engineering 75, no. 1 (July 5, 2024): 147–53. http://dx.doi.org/10.54254/2755-2721/75/20240528.
Zuo, Qiang, Songyu Chen, and Zhifang Wang. "R2AU-Net: Attention Recurrent Residual Convolutional Neural Network for Multimodal Medical Image Segmentation." Security and Communication Networks 2021 (June 10, 2021): 1–10. http://dx.doi.org/10.1155/2021/6625688.
Zhang, Yong, Yu-mei Zhou, Zhen-hong Liao, Gao-yuan Liu, and Kai-can Guo. "Artificial Intelligence-Guided Subspace Clustering Algorithm for Glioma Images." Journal of Healthcare Engineering 2021 (February 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/5573010.
Der volle Inhalt der QuelleDissertationen zum Thema "Segmentation Multimodale"
Bricq, Stéphanie. "Segmentation d’images IRM anatomiques par inférence bayésienne multimodale et détection de lésions." Université Louis Pasteur (Strasbourg) (1971-2008), 2008. https://publication-theses.unistra.fr/public/theses_doctorat/2008/BRICQ_Stephanie_2008.pdf.
Medical imaging provides a growing amount of data. Automatic segmentation has become a fundamental step in the quantitative analysis of these images for many brain diseases, such as multiple sclerosis (MS). We focused our study on brain MRI segmentation and MS lesion detection. First, we proposed a method of brain tissue segmentation based on hidden Markov chains that takes neighbourhood information into account. This method can also include prior information provided by a probabilistic atlas, and it accounts for the artefacts appearing in MR images. We then extended this method to detect MS lesions using a robust estimator and prior information provided by a probabilistic atlas. We also developed a 3D MRI segmentation method based on statistical active contours to refine the lesion segmentation. The results were compared with other existing segmentation methods and with manual expert segmentations.
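As an illustration of the kind of intensity-based tissue classification this abstract describes, here is a minimal sketch: a three-class Gaussian mixture fitted by EM, with a majority vote over each voxel's neighbourhood standing in for spatial regularisation. The class count, the filter, and all function names are assumptions for illustration; the thesis's hidden Markov chain model and atlas prior are considerably more sophisticated.

```python
# Minimal sketch of brain tissue segmentation: a 3-class Gaussian mixture
# (e.g. CSF / grey matter / white matter) fitted by EM on voxel intensities,
# followed by a crude 3x3x3 majority vote as a stand-in for the Markovian
# spatial prior used in the thesis. Illustrative only.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def segment_tissues(volume, brain_mask, n_classes=3, seed=0):
    """Assign a tissue label to every voxel inside the brain mask."""
    intensities = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    labels = gmm.fit_predict(intensities) + 1  # reserve 0 for background

    label_map = np.zeros(volume.shape, dtype=np.int8)
    label_map[brain_mask] = labels

    def majority(window):
        # majority label among non-background voxels in the window
        window = window[window > 0].astype(int)
        return np.bincount(window).argmax() if window.size else 0

    smoothed = ndimage.generic_filter(label_map, majority, size=3)
    return np.where(brain_mask, smoothed, 0)
```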
Bricq, Stéphanie, Christophe Collet, and Jean-Paul Armspach. "Segmentation d'images IRM anatomiques par inférence bayésienne multimodale et détection de lésions." Strasbourg: Université de Strasbourg, 2009. http://eprints-scd-ulp.u-strasbg.fr:8080/1143/01/BRICQ_Stephanie_2008-protege.pdf.
Toulouse, Tom. "Estimation par stéréovision multimodale de caractéristiques géométriques d’un feu de végétation en propagation." Thesis, Corte, 2015. http://www.theses.fr/2015CORT0009/document.
This thesis presents the measurement of geometrical characteristics of spreading vegetation fires with multimodal stereovision systems. Image processing and 3D registration are used to obtain a three-dimensional model of the fire at each instant of image acquisition and then to compute fire-front characteristics such as its position, rate of spread, height, width, inclination, surface, and volume. The first important contribution of this thesis is fire pixel detection. Fire pixel detection algorithms from the literature, along with those developed in this thesis, were benchmarked on a database of 500 visible-spectrum vegetation fire images characterized according to the fire properties in the image (color, smoke, luminosity). Five fire pixel detection algorithms based on the fusion of data from visible- and near-infrared-spectrum images were also developed and tested on another database of 100 multimodal images. The second important contribution of this thesis is the use of image fusion to maximize the number of matching points between the multimodal stereo images. The third important contribution is the registration method for the 3D fire points obtained with the stereovision systems; it uses information collected from a housing containing a GPS and an IMU card mounted on each stereovision system. Building on this registration, a method was developed to extract the geometrical characteristics while the fire is spreading. The estimation device was evaluated on a car of known dimensions, and the results confirm its good accuracy. Results obtained on vegetation fires are also presented.
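As a toy illustration of visible-spectrum fire pixel detection of the kind benchmarked in this thesis, the sketch below applies a common channel-ordering heuristic (flame pixels tend to satisfy R > G > B with a hot red channel). The rule and the thresholds are assumptions, not the algorithms the thesis evaluates.

```python
# Toy rule-based fire-pixel detector for visible-spectrum RGB images.
# Thresholds are illustrative assumptions.
import numpy as np

def detect_fire_pixels(rgb, r_min=180, rg_margin=20, gb_margin=10):
    """Return a boolean mask of candidate fire pixels in an HxWx3 RGB image."""
    # cast to int16 so channel differences cannot wrap around uint8 range
    r, g, b = (rgb[..., i].astype(np.int16) for i in range(3))
    return (r >= r_min) & (r - g >= rg_margin) & (g - b >= gb_margin)
```

In a stereovision pipeline, such a mask would be computed on each image of a stereo pair before matching fire points across views.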
Kijak, Ewa. "Structuration multimodale des vidéos de sport par modèles stochastiques." PhD thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00532944.
Gauthier, Gervais. "Applications de la morphologie mathématique fonctionnelle : analyse des textures en niveaux de gris et segmentation par approche multimodale." Caen, 1995. http://www.theses.fr/1995CAEN2050.
Pham, Quoc Cuong. "Segmentation et mise en correspondance en imagerie cardiaque multimodale conduites par un modèle anatomique bi-cavités du coeur." Grenoble INPG, 2002. http://www.theses.fr/2002INPG0153.
Irace, Zacharie. "Modélisation statistique et segmentation d'images TEP : application à l'hétérogénéité et au suivi de tumeurs." PhD thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/12201/1/irace.pdf.
Toulouse, Tom. "Estimation par stéréovision multimodale de caractéristiques géométriques d'un feu de végétation en propagation." Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26472.
Keywords: wildland fire, stereovision, image processing, segmentation, multimodal.
Baban a Erep, Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne." Electronic thesis or dissertation, Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.
Malnutrition, including under- and overnutrition, is a global health challenge affecting billions of people. It impacts all organ systems and is a significant risk factor for noncommunicable diseases such as cardiovascular diseases, diabetes, and some cancers. Assessing food intake is crucial for preventing malnutrition but remains challenging. Traditional methods for dietary assessment are labor-intensive and prone to bias. Advancements in AI have made Vision-Based Dietary Assessment (VBDA) a promising solution for automatically analyzing food images to estimate portions and nutrition. However, food image segmentation in VBDA faces challenges due to food's non-rigid structure, high intra-class variation (the same dish can look very different), inter-class resemblance (different foods can appear similar), and the scarcity of publicly available datasets. Almost all food segmentation research has focused on Asian and Western foods, with no datasets for African cuisines; African dishes, however, often involve mixed food classes, making accurate segmentation challenging. Additionally, research has largely focused on RGB images, which provide color and texture but may lack geometric detail. To address this, RGB-D segmentation combines depth data with RGB images. Depth images provide crucial geometric details that complement RGB data, improve object discrimination, and are robust to factors like illumination and fog. Despite its success in other fields, RGB-D segmentation for food is underexplored because food depth images are difficult to collect. This thesis makes key contributions by developing new deep learning models for RGB (mid-DeepLabv3+) and RGB-D (ESeNet-D) image segmentation and by introducing the first food segmentation datasets focused on African food images. Mid-DeepLabv3+ is based on DeepLabv3+, featuring a simplified ResNet backbone with an added skip layer (middle layer) in the decoder and a SimAM attention mechanism. This model offers an optimal balance between performance and efficiency, matching DeepLabv3+'s performance while cutting the computational load in half. ESeNet-D consists of two encoder branches using EfficientNetV2 as the backbone, with a fusion block for multi-scale integration and a decoder employing self-calibrated convolution and learned interpolation for precise segmentation. ESeNet-D outperforms many RGB and RGB-D benchmark models while having fewer parameters and FLOPs. Our experiments show that, when properly integrated, depth information can significantly improve food segmentation accuracy. We also present two new datasets: AfricaFoodSeg for "food/non-food" segmentation, with 3,067 images (2,525 for training, 542 for validation), and CamerFood, focusing on Cameroonian cuisine. The CamerFood datasets include CamerFood10, with 1,422 images from ten food classes, and CamerFood15, an enhanced version with 15 food classes, 1,684 training images, and 514 validation images. Finally, we address the challenge of scarce depth data in RGB-D food segmentation by demonstrating that Monocular Depth Estimation (MDE) models can help generate effective depth maps for RGB-D datasets.
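For readers unfamiliar with two-branch RGB-D architectures like the ESeNet-D design described above, the following schematic PyTorch sketch shows the general pattern: one encoder per modality, feature concatenation as a placeholder fusion step, and a 1x1 classification head. Every layer size, the fusion rule, and the class names are assumptions for illustration; the actual model uses EfficientNetV2 backbones, a multi-scale fusion block, and a more elaborate decoder.

```python
# Schematic two-branch RGB-D segmentation network (illustrative sketch).
import torch
import torch.nn as nn

def down_block(c_in, c_out):
    # stride-2 convolution halves the spatial resolution
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class TwoBranchRGBD(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.rgb_enc = nn.Sequential(down_block(3, 32), down_block(32, 64))
        self.depth_enc = nn.Sequential(down_block(1, 32), down_block(32, 64))
        self.fuse = nn.Conv2d(128, 64, 1)   # mix the concatenated features
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, rgb, depth):
        feats = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)
        logits = self.head(self.fuse(feats))
        # restore the input resolution (1/4 after two stride-2 convs)
        return nn.functional.interpolate(
            logits, scale_factor=4, mode="bilinear", align_corners=False)

# e.g.: TwoBranchRGBD(16)(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
```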
Ercolessi, Philippe. "Extraction multimodale de la structure narrative des épisodes de séries télévisées." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2056/.
Our contributions concern the extraction of the structure of TV series episodes at two hierarchical levels. The first level of structuring is finding scene transitions, based on the analysis of color information and of the speakers involved in the scenes. We show that speaker analysis improves the result of a color-based segmentation into scenes. It is common to see several stories (or lines of action) told in parallel in a single TV series episode; thus, the second level of structuring is clustering scenes into stories. We seek to de-interlace the stories in order to visualize the different lines of action independently. The main difficulty is determining the most relevant descriptors for grouping scenes that belong to the same story. We explore descriptors from three different modalities and propose methods to combine them. To address the variability of the narrative structure of TV series episodes, we propose a method that adapts to each episode: it can automatically select the most relevant clustering method among the various methods we propose. Finally, we developed StoViz, a tool for visualizing the structure of a TV series episode (scenes and stories). It allows easy browsing of each episode, revealing the different stories told in parallel, playback of an episode story by story, and viewing a summary of the episode through a short overview of each story.
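As a rough illustration of multimodal scene grouping, the sketch below averages per-modality scene-similarity matrices and clusters the result hierarchically. The equal weighting and the average-linkage choice are assumptions; the thesis instead selects the most relevant clustering method per episode.

```python
# Illustrative sketch: cluster scenes into stories from combined similarities.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_scenes(color_sim, speaker_sim, text_sim, n_stories):
    """Cluster scenes given symmetric similarity matrices with values in [0, 1]."""
    combined = (color_sim + speaker_sim + text_sim) / 3.0
    dist = 1.0 - combined                 # turn similarity into a distance
    np.fill_diagonal(dist, 0.0)           # squareform expects zero self-distance
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=n_stories, criterion="maxclust")  # labels 1..n_stories
```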
Books on the topic "Segmentation Multimodale"
Menze, Bjoern, and Spyridon Bakas, eds. Multimodal Brain Tumor Segmentation and Beyond. Frontiers Media SA, 2021. http://dx.doi.org/10.3389/978-2-88971-170-3.
Der volle Inhalt der QuelleBuchteile zum Thema "Segmentation Multimodale"
Poulisse, Gert-Jan, and Marie-Francine Moens. "Multimodal News Story Segmentation." In Proceedings of the First International Conference on Intelligent Human Computer Interaction, 95–101. New Delhi: Springer India, 2009. http://dx.doi.org/10.1007/978-81-8489-203-1_7.
Shah, Rajiv, and Roger Zimmermann. "Lecture Video Segmentation." In Multimodal Analysis of User-Generated Multimedia Content, 173–203. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61807-4_6.
Wang, Yaping, Hongjun Jia, Pew-Thian Yap, Bo Cheng, Chong-Yaw Wee, Lei Guo, and Dinggang Shen. "Groupwise Segmentation Improves Neuroimaging Classification Accuracy." In Multimodal Brain Image Analysis, 185–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33530-3_16.
Dielmann, Alfred, and Steve Renals. "Multistream Dynamic Bayesian Network for Meeting Segmentation." In Machine Learning for Multimodal Interaction, 76–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30568-2_7.
Zhang, Daoqiang, Qimiao Guo, Guorong Wu, and Dinggang Shen. "Sparse Patch-Based Label Fusion for Multi-Atlas Segmentation." In Multimodal Brain Image Analysis, 94–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33530-3_8.
Soldea, Octavian, Trung Doan, Andrew Webb, Mark van Buchem, Julien Milles, and Radu Jasinschi. "Simultaneous Brain Structures Segmentation Combining Shape and Pose Forces." In Multimodal Brain Image Analysis, 143–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24446-9_18.
Poot, Dirk H. J., Marleen de Bruijne, Meike W. Vernooij, M. Arfan Ikram, and Wiro J. Niessen. "Improved Tissue Segmentation by Including an MR Acquisition Model." In Multimodal Brain Image Analysis, 152–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24446-9_19.
Yu, Hao, Jie Zhao, and Li Zhang. "Vessel Segmentation via Link Prediction of Graph Neural Networks." In Multiscale Multimodal Medical Imaging, 34–43. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_4.
Wang, Yi-Qing, and Giovanni Palma. "Liver Segmentation Quality Control in Multi-sequence MR Studies." In Multiscale Multimodal Medical Imaging, 54–62. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_6.
Cárdenes, Rubén, Meritxell Bach, Ying Chi, Ioannis Marras, Rodrigo de Luis, Mats Anderson, Peter Cashman, and Matthieu Bultelle. "Multimodal Evaluation for Medical Image Segmentation." In Computer Analysis of Images and Patterns, 229–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74272-2_29.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Segmentation Multimodale"
Wang, Zheng, Xinliang Zhang, and Junkun Zhao. "Scribble Supervised Multimodal Medical Image Segmentation." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–9. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650603.
Xia, Zhuofan, Dongchen Han, Yizeng Han, Xuran Pan, Shiji Song, and Gao Huang. "GSVA: Generalized Segmentation via Multimodal Large Language Models." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3858–69. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00370.
Ahmad, Nisar, and Yao-Tien Chen. "3D Brain Tumor Segmentation in Multimodal MRI Images." In 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 543–44. IEEE, 2024. http://dx.doi.org/10.1109/icce-taiwan62264.2024.10674099.
Dong, Shaohua, Yunhe Feng, Qing Yang, Yan Huang, Dongfang Liu, and Heng Fan. "Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 14196–203. IEEE, 2024. https://doi.org/10.1109/iros58592.2024.10801872.
Awudong, Buhailiqiemu, and Qi Li. "Improved Brain Tumor Segmentation Framework Based on Multimodal MRI and Cascaded Segmentation Strategy." In 2024 International Conference on Intelligent Computing and Data Mining (ICDM), 58–61. IEEE, 2024. http://dx.doi.org/10.1109/icdm63232.2024.10762056.
Xu, Rongtao, Changwei Wang, Duzhen Zhang, Man Zhang, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang. "DefFusion: Deformable Multimodal Representation Fusion for 3D Semantic Segmentation." In 2024 IEEE International Conference on Robotics and Automation (ICRA), 7732–39. IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610465.
Sankar, Shreeram, D. V. Santhosh Kumar, P. Kumar, and M. Rakesh Kumar. "Multimodal Fusion for Brain Medical Image Segmentation using MMSegNet." In 2024 4th International Conference on Intelligent Technologies (CONIT), 1–11. IEEE, 2024. http://dx.doi.org/10.1109/conit61985.2024.10627205.
Han, Siyuan, Yao Wang, and Qian Wang. "Multimodal Medical Image Segmentation Algorithm Based on Convolutional Neural Networks." In 2024 Second International Conference on Networks, Multimedia and Information Technology (NMITCON), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/nmitcon62075.2024.10698930.
Sun, Yue, Zelong Zhang, Hong Shangguan, Jie Yang, Xiong Zhang, and Yuhuan Zhang. "A Multiscale Attention Multimodal Cooperative Learning Stroke Lesion Segmentation Network." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP), 1084–87. IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743876.
Huang, Chao, Weichao Cai, Qiuping Jiang, and Zhihua Wang. "Multimodal Representation Distribution Learning for Medical Image Segmentation." In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI-24). California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/459.