Academic literature on the topic 'Segmentation Multimodale'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Segmentation Multimodale.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Journal articles on the topic "Segmentation Multimodale"
Nai, Ying-Hwey, Bernice W. Teo, Nadya L. Tan, Koby Yi Wei Chua, Chun Kit Wong, Sophie O’Doherty, Mary C. Stephenson, et al. "Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images." Computational and Mathematical Methods in Medicine 2020 (October 20, 2020): 1–12. http://dx.doi.org/10.1155/2020/8861035.
Sun, Qixuan, Nianhua Fang, Zhuo Liu, Liang Zhao, Youpeng Wen, and Hongxiang Lin. "HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation." Journal of Healthcare Engineering 2021 (October 1, 2021): 1–10. http://dx.doi.org/10.1155/2021/7467261.
Pan, Mingyuan, Yonghong Shi, and Zhijian Song. "Segmentation of Gliomas Based on a Double-Pathway Residual Convolution Neural Network Using Multi-Modality Information." Journal of Medical Imaging and Health Informatics 10, no. 11 (November 1, 2020): 2784–94. http://dx.doi.org/10.1166/jmihi.2020.3216.
Desser, Dmitriy, Francisca Assunção, Xiaoguang Yan, Victor Alves, Henrique M. Fernandes, and Thomas Hummel. "Automatic Segmentation of the Olfactory Bulb." Brain Sciences 11, no. 9 (August 28, 2021): 1141. http://dx.doi.org/10.3390/brainsci11091141.
Jain, Raunak, Faith Lee, Nianhe Luo, Harpreet Hyare, and Anand S. Pandit. "A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation." NeuroSci 5, no. 3 (August 2, 2024): 265–75. http://dx.doi.org/10.3390/neurosci5030021.
Zhu, Yuchang, and Nanfeng Xiao. "Simple Scalable Multimodal Semantic Segmentation Model." Sensors 24, no. 2 (January 22, 2024): 699. http://dx.doi.org/10.3390/s24020699.
Farag, A. A., A. S. El-Baz, and G. Gimel'farb. "Precise segmentation of multimodal images." IEEE Transactions on Image Processing 15, no. 4 (April 2006): 952–68. http://dx.doi.org/10.1109/tip.2005.863949.
You, Siming. "Deep learning in autonomous driving: Advantages, limitations, and innovative solutions." Applied and Computational Engineering 75, no. 1 (July 5, 2024): 147–53. http://dx.doi.org/10.54254/2755-2721/75/20240528.
Zuo, Qiang, Songyu Chen, and Zhifang Wang. "R2AU-Net: Attention Recurrent Residual Convolutional Neural Network for Multimodal Medical Image Segmentation." Security and Communication Networks 2021 (June 10, 2021): 1–10. http://dx.doi.org/10.1155/2021/6625688.
Zhang, Yong, Yu-mei Zhou, Zhen-hong Liao, Gao-yuan Liu, and Kai-can Guo. "Artificial Intelligence-Guided Subspace Clustering Algorithm for Glioma Images." Journal of Healthcare Engineering 2021 (February 26, 2021): 1–9. http://dx.doi.org/10.1155/2021/5573010.
Dissertations / Theses on the topic "Segmentation Multimodale"
Bricq, Stéphanie. "Segmentation d’images IRM anatomiques par inférence bayésienne multimodale et détection de lésions." Université Louis Pasteur (Strasbourg) (1971-2008), 2008. https://publication-theses.unistra.fr/public/theses_doctorat/2008/BRICQ_Stephanie_2008.pdf.
Medical imaging provides a growing amount of data. Automatic segmentation has become a fundamental step in the quantitative analysis of these images for many brain diseases such as multiple sclerosis (MS). We focused our study on brain MRI segmentation and MS lesion detection. First, we proposed a brain tissue segmentation method based on hidden Markov chains that takes neighbourhood information into account. This method can also include prior information provided by a probabilistic atlas and accounts for the artefacts appearing in MR images. We then extended this method to detect MS lesions using a robust estimator and prior information provided by a probabilistic atlas. We also developed a 3D MRI segmentation method based on statistical active contours to refine the lesion segmentation. The results were compared with other existing segmentation methods and with manual expert segmentations.
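The abstract above describes labelling brain tissues with a hidden Markov chain so that neighbouring voxels influence each other's labels. A minimal sketch of that idea is Viterbi decoding of tissue labels along a 1-D chain of voxel intensities; the Gaussian class parameters and "sticky" transition matrix below are illustrative toy values, not the thesis' estimated models.

```python
import numpy as np

def viterbi_segment(intensities, means, stds, trans, prior):
    """Most likely label sequence for a 1-D intensity chain (log-domain Viterbi)."""
    n, k = len(intensities), len(means)
    # Log-likelihood of each intensity under each Gaussian class model
    # (constant terms dropped, as they do not affect the argmax).
    ll = -0.5 * ((intensities[:, None] - means) / stds) ** 2 - np.log(stds)
    delta = np.log(prior) + ll[0]
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + np.log(trans)   # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + ll[t]
    labels = np.empty(n, dtype=int)
    labels[-1] = delta.argmax()
    for t in range(n - 1, 0, -1):                 # backtrace
        labels[t - 1] = back[t, labels[t]]
    return labels

# Toy example with three hypothetical tissue classes (e.g. CSF, grey, white).
means = np.array([30.0, 100.0, 170.0])
stds = np.array([15.0, 15.0, 15.0])
trans = np.array([[0.9, 0.05, 0.05],
                  [0.05, 0.9, 0.05],
                  [0.05, 0.05, 0.9]])   # sticky: neighbours tend to share labels
prior = np.array([1 / 3] * 3)
chain = np.array([28, 35, 95, 105, 102, 168, 175.0])
print(viterbi_segment(chain, means, stds, trans, prior))   # → [0 0 1 1 1 2 2]
```

The diagonal-heavy transition matrix is what encodes the neighbourhood prior: an isolated noisy voxel is pulled toward its neighbours' label unless its intensity strongly disagrees.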
Bricq, Stéphanie, Christophe Collet, and Jean-Paul Armspach. "Segmentation d'images IRM anatomiques par inférence bayésienne multimodale et détection de lésions." Strasbourg : Université de Strasbourg, 2009. http://eprints-scd-ulp.u-strasbg.fr:8080/1143/01/BRICQ_Stephanie_2008-protege.pdf.
Toulouse, Tom. "Estimation par stéréovision multimodale de caractéristiques géométriques d’un feu de végétation en propagation." Thesis, Corte, 2015. http://www.theses.fr/2015CORT0009/document.
This thesis presents the measurement of the geometrical characteristics of spreading vegetation fires with multimodal stereovision systems. Image processing and 3D registration are used to obtain a three-dimensional model of the fire at each instant of image acquisition and then to compute fire front characteristics such as its position, rate of spread, height, width, inclination, surface and volume. The first important contribution of this thesis is fire pixel detection. A benchmark of fire pixel detection algorithms, including those developed in this thesis, was carried out on a database of 500 visible-spectrum vegetation fire images characterized according to the fire properties in the image (color, smoke, luminosity). Five fire pixel detection algorithms based on the fusion of data from visible and near-infrared spectrum images were also developed and tested on another database of 100 multimodal images. The second important contribution of this thesis is the use of image fusion to maximize the number of matching points between the multimodal stereo images. The third important contribution is the registration method for the 3D fire points obtained with the stereovision systems. It uses information collected from a housing containing a GPS and an IMU card positioned on each stereovision system. With this registration, a method was developed to extract the geometrical characteristics while the fire is spreading. The geometrical characteristics estimation device was evaluated on a car of known dimensions, and the results obtained confirm its good accuracy. The results obtained from vegetation fires are also presented.
Kijak, Ewa. "Structuration multimodale des vidéos de sport par modèles stochastiques." Phd thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00532944.
Gauthier, Gervais. "Applications de la morphologie mathématique fonctionnelle : analyse des textures en niveaux de gris et segmentation par approche multimodale." Caen, 1995. http://www.theses.fr/1995CAEN2050.
Pham, Quoc Cuong. "Segmentation et mise en correspondance en imagerie cardiaque multimodale conduites par un modèle anatomique bi-cavités du coeur." Grenoble INPG, 2002. http://www.theses.fr/2002INPG0153.
Irace, Zacharie. "Modélisation statistique et segmentation d'images TEP : application à l'hétérogénéité et au suivi de tumeurs." Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/12201/1/irace.pdf.
Toulouse, Tom. "Estimation par stéréovision multimodale de caractéristiques géométriques d'un feu de végétation en propagation." Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26472.
This thesis presents the measurement of the geometrical characteristics of spreading vegetation fires with multimodal stereovision systems. Image processing and 3D registration are used to obtain a three-dimensional model of the fire at each instant of image acquisition and then to compute fire front characteristics such as its position, rate of spread, height, width, inclination, surface and volume. The first important contribution of this thesis is fire pixel detection. A benchmark of fire pixel detection algorithms from the literature, as well as those developed in this thesis, was carried out on a database of 500 visible-spectrum vegetation fire images characterized according to the fire properties in the image (color, smoke, luminosity). Five fire pixel detection algorithms based on the fusion of data from visible and near-infrared spectrum images were also developed and tested on another database of 100 multimodal images. The second important contribution of this thesis is the use of image fusion to maximize the number of matching points between the multimodal stereo images. The third important contribution is the registration method for the 3D fire points obtained with the stereovision systems. It uses information collected from a housing containing a GPS and an IMU card positioned on each stereovision system. With this registration, a method was developed to extract the geometrical characteristics while the fire is spreading. The geometrical characteristics estimation device was evaluated on a car of known dimensions, and the results obtained confirm its good accuracy. The results obtained from vegetation fires are also presented. Key words: wildland fire, stereovision, image processing, segmentation, multimodal.
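The abstract mentions fire pixel detection algorithms that fuse visible and near-infrared data. A minimal sketch of such rule-based fusion is below; the colour rule (warm, red-dominant pixels) and the NIR threshold are illustrative stand-ins, not the detectors benchmarked in the thesis.

```python
import numpy as np

def fire_mask(rgb, nir, r_min=180, nir_min=200):
    """Boolean fire mask from an RGB image (H, W, 3) and a NIR image (H, W)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    colour_rule = (r >= g) & (g >= b) & (r >= r_min)   # flame-like colours
    nir_rule = nir >= nir_min                          # hot in near-infrared
    return colour_rule & nir_rule                      # fuse both modalities

# Two pixels: a flame-coloured hot one, and a cool bluish one.
rgb = np.array([[[220, 140, 40], [60, 120, 200]]], dtype=np.uint8)
nir = np.array([[230, 80]], dtype=np.uint8)
print(fire_mask(rgb, nir))   # → [[ True False]]
```

Requiring both cues to agree is the point of the fusion: a red jacket fails the NIR test, and a sun-heated rock fails the colour test.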
Baban, a. erep Thierry Roland. "Contribution au développement d'un système intelligent de quantification des nutriments dans les repas d'Afrique subsaharienne." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP100.
Malnutrition, including under- and overnutrition, is a global health challenge affecting billions of people. It impacts all organ systems and is a significant risk factor for noncommunicable diseases such as cardiovascular diseases, diabetes, and some cancers. Assessing food intake is crucial for preventing malnutrition but remains challenging. Traditional methods for dietary assessment are labor-intensive and prone to bias. Advancements in AI have made Vision-Based Dietary Assessment (VBDA) a promising solution for automatically analyzing food images to estimate portions and nutrition. However, food image segmentation in VBDA faces challenges due to food's non-rigid structure, high intra-class variation (where the same dish can look very different), inter-class resemblance (where different foods appear similar) and the scarcity of publicly available datasets. Almost all food segmentation research has focused on Asian and Western foods, with no datasets for African cuisines. However, African dishes often involve mixed food classes, making accurate segmentation challenging. Additionally, research has largely focused on RGB images, which provide color and texture but may lack geometric detail. To address this, RGB-D segmentation combines depth data with RGB images. Depth images provide crucial geometric details that enhance RGB data, improve object discrimination, and are robust to factors like illumination and fog. Despite its success in other fields, RGB-D segmentation for food is underexplored due to the difficulty of collecting food depth images. This thesis makes key contributions by developing new deep learning models for RGB (mid-DeepLabv3+) and RGB-D (ESeNet-D) image segmentation and introducing the first food segmentation datasets focused on African food images. Mid-DeepLabv3+ is based on DeepLabv3+, featuring a simplified ResNet backbone with an added skip layer (middle layer) in the decoder and a SimAM attention mechanism.
This model offers an optimal balance between performance and efficiency, matching DeepLabv3+'s performance while cutting the computational load in half. ESeNet-D consists of two encoder branches using EfficientNetV2 as the backbone, with a fusion block for multi-scale integration and a decoder employing self-calibrated convolution and learned interpolation for precise segmentation. ESeNet-D outperforms many RGB and RGB-D benchmark models while having fewer parameters and FLOPs. Our experiments show that, when properly integrated, depth information can significantly improve food segmentation accuracy. We also present two new datasets: AfricaFoodSeg for “food/non-food” segmentation with 3,067 images (2,525 for training, 542 for validation), and CamerFood, focusing on Cameroonian cuisine. The CamerFood datasets include CamerFood10, with 1,422 images from ten food classes, and CamerFood15, an enhanced version with 15 food classes, 1,684 training images, and 514 validation images. Finally, we address the challenge of scarce depth data in RGB-D food segmentation by demonstrating that Monocular Depth Estimation (MDE) models can aid in generating effective depth maps for RGB-D datasets.
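The abstract argues that depth adds a complementary geometric channel to the colour input. The simplest way to see the mechanics is early fusion: normalise a depth map and stack it as a fourth channel. This is only an illustrative sketch of that idea; ESeNet-D itself uses two encoder branches with a dedicated fusion block rather than channel concatenation.

```python
import numpy as np

def fuse_rgbd(rgb, depth):
    """rgb: (H, W, 3) uint8, depth: (H, W) float -> (H, W, 4) float32 tensor."""
    rgb_n = rgb.astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    # Scale depth to [0, 1] so it is commensurate with the colour channels.
    d_n = (d - d.min()) / (d.max() - d.min() + 1e-8)
    return np.concatenate([rgb_n, d_n[..., None]], axis=-1)

rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.linspace(0.5, 2.0, 16).reshape(4, 4)  # e.g. metres from the camera
x = fuse_rgbd(rgb, depth)
print(x.shape)   # → (4, 4, 4)
```

The same fusion works whether the depth map comes from an RGB-D sensor or, as the abstract suggests, from a monocular depth estimation model applied to the RGB image.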
Ercolessi, Philippe. "Extraction multimodale de la structure narrative des épisodes de séries télévisées." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2056/.
Our contributions concern the extraction of the structure of TV series episodes at two hierarchical levels. The first level of structuring is to find scene transitions based on the analysis of color information and of the speakers involved in the scenes. We show that speaker analysis improves the result of a color-based segmentation into scenes. It is common to see several stories (or lines of action) told in parallel in a single TV series episode. Thus, the second level of structuring is to cluster scenes into stories. We seek to deinterlace the stories in order to visualize the different lines of action independently. The main difficulty is determining the most relevant descriptors for grouping scenes belonging to the same story. We explore the use of descriptors from the three different modalities described above, and we also propose methods to combine these modalities. To address the variability of the narrative structure of TV series episodes, we propose a method that adapts to each episode: it automatically selects the most relevant clustering method among the various methods we propose. Finally, we developed StoViz, a tool for visualizing the structure of a TV series episode (scenes and stories). It allows easy browsing of each episode, revealing the different stories told in parallel. It also allows playback of an episode story by story, and viewing a summary of the episode through a short overview of each story.
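One of the cues the abstract describes for clustering scenes into stories is colour similarity between scenes. A minimal sketch of that idea is greedy grouping by cosine similarity of colour histograms; the histograms, the threshold, and the greedy assignment are toy choices for illustration, not the thesis' actual descriptors or clustering methods.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two histogram vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def group_scenes(histograms, threshold=0.9):
    """Greedily assign each scene to the first story with a similar histogram."""
    stories = []          # one representative histogram per story
    labels = []
    for h in histograms:
        sims = [cosine(h, rep) for rep in stories]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))   # join the closest story
        else:
            stories.append(h)                     # open a new story
            labels.append(len(stories) - 1)
    return labels

hists = [np.array([0.8, 0.1, 0.1]),    # scene 1: story A palette
         np.array([0.1, 0.8, 0.1]),    # scene 2: story B palette
         np.array([0.75, 0.15, 0.1])]  # scene 3: back to story A
print(group_scenes(hists))   # → [0, 1, 0]
```

The interleaving in the output ([0, 1, 0]) is exactly what "deinterlacing" undoes: replaying the scenes grouped by label yields each line of action on its own.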
Books on the topic "Segmentation Multimodale"
Menze, Bjoern, and Spyridon Bakas, eds. Multimodal Brain Tumor Segmentation and Beyond. Frontiers Media SA, 2021. http://dx.doi.org/10.3389/978-2-88971-170-3.
Book chapters on the topic "Segmentation Multimodale"
Poulisse, Gert-Jan, and Marie-Francine Moens. "Multimodal News Story Segmentation." In Proceedings of the First International Conference on Intelligent Human Computer Interaction, 95–101. New Delhi: Springer India, 2009. http://dx.doi.org/10.1007/978-81-8489-203-1_7.
Shah, Rajiv, and Roger Zimmermann. "Lecture Video Segmentation." In Multimodal Analysis of User-Generated Multimedia Content, 173–203. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61807-4_6.
Wang, Yaping, Hongjun Jia, Pew-Thian Yap, Bo Cheng, Chong-Yaw Wee, Lei Guo, and Dinggang Shen. "Groupwise Segmentation Improves Neuroimaging Classification Accuracy." In Multimodal Brain Image Analysis, 185–93. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33530-3_16.
Dielmann, Alfred, and Steve Renals. "Multistream Dynamic Bayesian Network for Meeting Segmentation." In Machine Learning for Multimodal Interaction, 76–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30568-2_7.
Zhang, Daoqiang, Qimiao Guo, Guorong Wu, and Dinggang Shen. "Sparse Patch-Based Label Fusion for Multi-Atlas Segmentation." In Multimodal Brain Image Analysis, 94–102. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33530-3_8.
Soldea, Octavian, Trung Doan, Andrew Webb, Mark van Buchem, Julien Milles, and Radu Jasinschi. "Simultaneous Brain Structures Segmentation Combining Shape and Pose Forces." In Multimodal Brain Image Analysis, 143–51. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24446-9_18.
Poot, Dirk H. J., Marleen de Bruijne, Meike W. Vernooij, M. Arfan Ikram, and Wiro J. Niessen. "Improved Tissue Segmentation by Including an MR Acquisition Model." In Multimodal Brain Image Analysis, 152–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24446-9_19.
Yu, Hao, Jie Zhao, and Li Zhang. "Vessel Segmentation via Link Prediction of Graph Neural Networks." In Multiscale Multimodal Medical Imaging, 34–43. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_4.
Wang, Yi-Qing, and Giovanni Palma. "Liver Segmentation Quality Control in Multi-sequence MR Studies." In Multiscale Multimodal Medical Imaging, 54–62. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-18814-5_6.
Cárdenes, Rubén, Meritxell Bach, Ying Chi, Ioannis Marras, Rodrigo de Luis, Mats Anderson, Peter Cashman, and Matthieu Bultelle. "Multimodal Evaluation for Medical Image Segmentation." In Computer Analysis of Images and Patterns, 229–36. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74272-2_29.
Full textConference papers on the topic "Segmentation Multimodale"
Wang, Zheng, Xinliang Zhang, and Junkun Zhao. "Scribble Supervised Multimodal Medical Image Segmentation." In 2024 International Joint Conference on Neural Networks (IJCNN), 1–9. IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650603.
Xia, Zhuofan, Dongchen Han, Yizeng Han, Xuran Pan, Shiji Song, and Gao Huang. "GSVA: Generalized Segmentation via Multimodal Large Language Models." In 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 3858–69. IEEE, 2024. http://dx.doi.org/10.1109/cvpr52733.2024.00370.
Ahmad, Nisar, and Yao-Tien Chen. "3D Brain Tumor Segmentation in Multimodal MRI Images." In 2024 International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 543–44. IEEE, 2024. http://dx.doi.org/10.1109/icce-taiwan62264.2024.10674099.
Dong, Shaohua, Yunhe Feng, Qing Yang, Yan Huang, Dongfang Liu, and Heng Fan. "Efficient Multimodal Semantic Segmentation via Dual-Prompt Learning." In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 14196–203. IEEE, 2024. https://doi.org/10.1109/iros58592.2024.10801872.
Awudong, Buhailiqiemu, and Qi Li. "Improved Brain Tumor Segmentation Framework Based on Multimodal MRI and Cascaded Segmentation Strategy." In 2024 International Conference on Intelligent Computing and Data Mining (ICDM), 58–61. IEEE, 2024. http://dx.doi.org/10.1109/icdm63232.2024.10762056.
Xu, Rongtao, Changwei Wang, Duzhen Zhang, Man Zhang, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang. "DefFusion: Deformable Multimodal Representation Fusion for 3D Semantic Segmentation." In 2024 IEEE International Conference on Robotics and Automation (ICRA), 7732–39. IEEE, 2024. http://dx.doi.org/10.1109/icra57147.2024.10610465.
Sankar, Shreeram, D. V. Santhosh Kumar, P. Kumar, and M. Rakesh Kumar. "Multimodal Fusion for Brain Medical Image Segmentation using MMSegNet." In 2024 4th International Conference on Intelligent Technologies (CONIT), 1–11. IEEE, 2024. http://dx.doi.org/10.1109/conit61985.2024.10627205.
Han, Siyuan, Yao Wang, and Qian Wang. "Multimodal Medical Image Segmentation Algorithm Based on Convolutional Neural Networks." In 2024 Second International Conference on Networks, Multimedia and Information Technology (NMITCON), 1–5. IEEE, 2024. http://dx.doi.org/10.1109/nmitcon62075.2024.10698930.
Sun, Yue, Zelong Zhang, Hong Shangguan, Jie Yang, Xiong Zhang, and Yuhuan Zhang. "A Multiscale Attention Multimodal Cooperative Learning Stroke Lesion Segmentation Network." In 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP), 1084–87. IEEE, 2024. http://dx.doi.org/10.1109/icsp62122.2024.10743876.
Huang, Chao, Weichao Cai, Qiuping Jiang, and Zhihua Wang. "Multimodal Representation Distribution Learning for Medical Image Segmentation." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/459.