Journal articles on the topic 'Visualisation et segmentation en 3D'

Consult the top 50 journal articles for your research on the topic 'Visualisation et segmentation en 3D.'

1

Gaifas, Lorenzo, Moritz A. Kirchner, Joanna Timmins, and Irina Gutsche. "Blik is an extensible 3D visualisation tool for the annotation and analysis of cryo-electron tomography data." PLOS Biology 22, no. 4 (April 30, 2024): e3002447. http://dx.doi.org/10.1371/journal.pbio.3002447.

Abstract:
Powerful, workflow-agnostic and interactive visualisation is essential for the ad hoc, human-in-the-loop workflows typical of cryo-electron tomography (cryo-ET). While several tools exist for visualisation and annotation of cryo-ET data, they are often integrated as part of monolithic processing pipelines, or focus on a specific task, offering limited reusability and extensibility. With each software suite presenting its own pros and cons, and tools tailored to address specific challenges, seamless integration between available pipelines is often difficult. As part of the effort to enable such flexibility and move the software ecosystem towards a more collaborative and modular approach, we developed blik, an open-source napari plugin for visualisation and annotation of cryo-ET data (source code: https://github.com/brisvag/blik). blik offers fast, interactive, and user-friendly 3D visualisation thanks to napari, and is built with extensibility and modularity at the core. Data are handled and exposed through well-established scientific Python structures such as numpy arrays and pandas dataframes. Reusable components (such as data structures, file read/write, and annotation tools) are developed as independent Python libraries to encourage reuse and community contribution. By easily integrating with established image analysis tools, even outside of the cryo-ET world, blik provides a versatile platform for interacting with cryo-ET data. On top of core visualisation features (interactive and simultaneous visualisation of tomograms, particle picks, and segmentations), blik provides an interface for interactive tools such as manual, surface-based and filament-based particle picking, and image segmentation, as well as simple filtering tools. Additional self-contained napari plugins developed as part of this work also implement interactive plotting and selection based on particle features, and label interpolation for easier segmentation. Finally, we highlight the differences with existing software and showcase blik's applicability in biological research.
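The interoperability the abstract describes, particle picks exposed as plain pandas dataframes rather than through a bespoke API, can be illustrated with a short, self-contained sketch. The column names and the score filter below are invented for illustration and are not blik's actual schema.

```python
import numpy as np
import pandas as pd

# Hypothetical particle-pick table; blik's real schema may differ.
rng = np.random.default_rng(0)
picks = pd.DataFrame({
    "x": rng.uniform(0, 512, 100),   # voxel coordinates
    "y": rng.uniform(0, 512, 100),
    "z": rng.uniform(0, 200, 100),
    "score": rng.uniform(0, 1, 100), # picker confidence
})

# Because picks are a plain DataFrame, generic tooling can filter and
# transform them without going through a dedicated plugin API.
confident = picks[picks["score"] > 0.5].copy()
confident[["x", "y", "z"]] -= confident[["x", "y", "z"]].mean()  # recentre
print(len(picks), len(confident))
```

The point the abstract makes about building on numpy and pandas is this interoperability, not any specific set of columns.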
2

Clement, Alice M., Richard Cloutier, Jing Lu, Egon Perilli, Anton Maksimenko, and John Long. "A fresh look at Cladarosymblema narrienense, a tetrapodomorph fish (Sarcopterygii: Megalichthyidae) from the Carboniferous of Australia, illuminated via X-ray tomography." PeerJ 9 (December 10, 2021): e12597. http://dx.doi.org/10.7717/peerj.12597.

Abstract:
Background. The megalichthyids are one of several clades of extinct tetrapodomorph fish that lived throughout the Devonian–Permian periods. They are advanced “osteolepidid-grade” fishes that lived in freshwater swamp and lake environments, with some taxa growing to very large sizes. They bear cosmine-covered bones and a large premaxillary tusk that lies lingually to a row of small teeth. Diagnosis of the family remains controversial with various authors revising it several times in recent works. There are fewer than 10 genera known globally, and only one member definitively identified from Gondwana. Cladarosymblema narrienense Fox et al. 1995 was described from the Lower Carboniferous Raymond Formation in Queensland, Australia, on the basis of several well-preserved specimens. Despite this detailed work, several aspects of its anatomy remain undescribed. Methods. Two especially well-preserved 3D fossils of Cladarosymblema narrienense, including the holotype specimen, are scanned using synchrotron or micro-computed tomography (µCT), and 3D modelled using specialist segmentation and visualisation software. New anatomical detail, in particular internal anatomy, is revealed for the first time in this taxon. A novel phylogenetic matrix, adapted from other recent work on tetrapodomorphs, is used to clarify the interrelationships of the megalichthyids and confirm the phylogenetic position of C. narrienense. Results. Never before seen morphological details of the palate, hyoid arch, basibranchial skeleton, pectoral girdle and axial skeleton are revealed and described. Several additional features are confirmed or updated from the original description. Moreover, the first full, virtual cranial endocast of any tetrapodomorph fish is presented and described, giving insight into the early neural adaptations in this group. Phylogenetic analysis confirms the monophyly of the Megalichthyidae with seven genera included (Askerichthys, Cladarosymblema, Ectosteorhachis, Mahalalepis, Megalichthys, Palatinichthys, and Sengoerichthys). The position of the megalichthyids as sister group to canowindrids, crownward of “osteolepidids” (e.g., Osteolepis and Gogonasus), but below “tristichopterids” such as Eusthenopteron is confirmed, but our findings suggest further work is required to resolve megalichthyid interrelationships.
3

Leahey, Lucy G., Ralph E. Molnar, Kenneth Carpenter, Lawrence M. Witmer, and Steven W. Salisbury. "Cranial osteology of the ankylosaurian dinosaur formerly known as Minmi sp. (Ornithischia: Thyreophora) from the Lower Cretaceous Allaru Mudstone of Richmond, Queensland, Australia." PeerJ 3 (December 8, 2015): e1475. http://dx.doi.org/10.7717/peerj.1475.

Abstract:
Minmi is the only known genus of ankylosaurian dinosaur from Australia. Seven specimens are known, all from the Lower Cretaceous of Queensland. Only two of these have been described in any detail: the holotype specimen of Minmi paravertebra from the Bungil Formation near Roma, and a near-complete skeleton from the Allaru Mudstone on Marathon Station near Richmond, preliminarily referred to a possible new species of Minmi. The Marathon specimen represents one of the world’s most complete ankylosaurian skeletons and the best-preserved dinosaurian fossil from eastern Gondwana. Moreover, among ankylosaurians, its skull is one of only a few in which the majority of sutures have not been obliterated by dermal ossifications or surface remodelling. Recent preparation of the Marathon specimen has revealed new details of the palate and narial regions, permitting a comprehensive description and thus providing new insights into the cranial osteology of a basal ankylosaurian. The skull has also undergone computed tomography, digital segmentation and 3D computer visualisation, enabling the reconstruction of its nasal cavity and endocranium. The airways of the Marathon specimen are more complicated than those of non-ankylosaurian dinosaurs but less so than those of derived ankylosaurians. The cranial (brain) endocast is superficially similar to those of other ankylosaurians but is strongly divergent in many important respects. The inner ear is extremely large and unlike that of any dinosaur yet known. Based on a high number of diagnostic differences between the skull of the Marathon specimen and other ankylosaurians, we consider it prudent to assign this specimen to a new genus and species of ankylosaurian. Kunbarrasaurus ieversi gen. et sp. nov. represents the second genus of ankylosaurian from Australia and is characterised by an unusual melange of both primitive and derived characters, shedding new light on the evolution of the ankylosaurian skull.
4

Jung, Y., H. Kim, B. Park, H. Lee, B. Kim, M. Bang, J. Lee, M. Oh, and G. Cho. "EP02.14: The new 3D‐based fetal segmentation and visualisation method." Ultrasound in Obstetrics & Gynecology 62, S1 (October 2023): 107. http://dx.doi.org/10.1002/uog.26634.

5

Kang, Hanwen, and Chao Chen. "Fruit detection, segmentation and 3D visualisation of environments in apple orchards." Computers and Electronics in Agriculture 171 (April 2020): 105302. http://dx.doi.org/10.1016/j.compag.2020.105302.

6

Colombo, E., T. Fick, G. Esposito, M. Germans, L. Regli, and T. van Doormaal. "Segmentation techniques of cerebral arteriovenous malformations for 3D visualisation: a systematic review." Brain and Spine 2 (2022): 101415. http://dx.doi.org/10.1016/j.bas.2022.101415.

7

Petitpas, Laurent, and Hugo Harter. "Aide de l’imagerie 3D pour le diagnostic d’une Classe II asymétrique." Revue d'Orthopédie Dento-Faciale 55, no. 3 (September 2021): 371–82. http://dx.doi.org/10.1051/odf/2021024.

Abstract:
For several years now, we have been able to rely on digital 3D imaging tools to refine an orthodontic diagnosis that is becoming ever more precise. These various 3D tools make it possible to highlight dysmorphoses more clearly, in particular by visualising the seat of numerous asymmetries, thanks to 3D superimposition of optical impressions and CBCT (Cone Beam Computed Tomography) records. This article demonstrates numerous possibilities for visualising a virtualised 3D patient presenting a dental Class II asymmetry and various dysmorphoses.
8

Andary, Antoine, Alexis Guedon, and Odile Plaisant. "Le cingulum de Déjerine à nos jours et sa visualisation 3D." Morphologie 105, no. 350 (September 2021): S25. http://dx.doi.org/10.1016/j.morpho.2021.05.074.

9

Luo, Tess X. H., Wallace W. L. Lai, and Zhanzhan Lei. "Intensity Normalisation of GPR C-Scans." Remote Sensing 15, no. 5 (February 27, 2023): 1309. http://dx.doi.org/10.3390/rs15051309.

Abstract:
The three-dimensional (3D) ground-penetrating radar (GPR) has been widely applied in subsurface surveys and imaging, and the quality of the resulting C-scan images is determined by the spatial resolution and visualisation contrast. Previous studies have standardised the suitable spatial resolution of GPR C-scans; however, their measurement normalisation remains arbitrary. Human bias is inevitable in C-scan interpretation because different visualisation algorithms lead to different interpretation results. Therefore, an objective scheme for mapping GPR signals after standard processing to the visualisation contrast should be established. Focusing on two typical scenarios, a reinforced concrete structure and an urban underground, this study illustrated that the essential parameters were greyscale thresholding and transformation mapping. By quantifying the normalisation performance with the integration of image segmentation and structural similarity index measure, a greyscale threshold was developed in which the normalised standard deviation of the unit intensity of any surveyed object was two. A transformation function named “bipolar” was also shown to balance the maintenance of real reflections at the target objects. By providing academia/industry with an object-based approach, this study contributes to solving the final unresolved issue of 3D GPR imaging (i.e., image contrast) to better eliminate the interfering noise and better mitigate human bias for any one-off/touch-based imaging and temporal change detection.
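The thresholded greyscale mapping this abstract studies can be sketched in a few lines: clip amplitudes at a fixed number of standard deviations, then rescale to 8-bit greyscale. This is a hedged illustration of the general idea only; the function name, parameters, and the n-sigma rule below are ours, not the paper's calibrated scheme.

```python
import numpy as np

def to_greyscale(c_scan, n_sigma=2.0):
    """Map C-scan amplitudes to 8-bit greyscale by clipping at
    +/- n_sigma standard deviations around the mean and linearly
    rescaling the clipped range to [0, 255]."""
    mu, sigma = c_scan.mean(), c_scan.std()
    lo, hi = mu - n_sigma * sigma, mu + n_sigma * sigma
    clipped = np.clip(c_scan, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

# Synthetic stand-in for a GPR C-scan slice.
rng = np.random.default_rng(1)
scan = rng.normal(0.0, 1.0, size=(64, 64))
img = to_greyscale(scan)
print(img.min(), img.max(), img.dtype)
```

Changing `n_sigma` changes which reflections saturate, which is exactly why the paper argues the threshold must be standardised rather than left to each interpreter.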
10

Patekar, Rahul, Prashant Shukla Kumar, Hong-Seng Gan, and Muhammad Hanif Ramlee. "Automated Knee Bone Segmentation and Visualisation Using Mask RCNN and Marching Cube: Data From The Osteoarthritis Initiative." ASM Science Journal 17 (April 13, 2022): 1–7. http://dx.doi.org/10.32802/asmscj.2022.968.

Abstract:
In this work, an automated knee bone segmentation model is proposed. A mask region-based convolutional neural network (Mask R-CNN) algorithm is developed to segment the bone, which is then reconstructed into a 3D object using the Marching Cubes algorithm. The proposed method is divided into two stages. First, Mask R-CNN is introduced to segment the subchondral knee bone from the input MRI sequence. In the second stage, the segmented output from Mask R-CNN is fed as input to the Marching Cubes algorithm for 3D reconstruction of the subchondral knee bone. The proposed method achieved high Dice similarity scores of 95.35% for the femur, 95.3% for the tibia and 94.40% for the patella using Mask R-CNN with a ResNet-50 backbone architecture. Improved Dice similarity scores of 97.11% for the femur, 97.33% for the tibia and 97.05% for the patella were obtained with a ResNet-101 backbone. It is noted that the Mask R-CNN framework has demonstrated efficient and accurate subchondral knee bone detection and segmentation for input MRI sequences.
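The Dice similarity score this abstract reports is straightforward to compute. A minimal sketch, with synthetic masks standing in for real MRI segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: twice the
    overlap divided by the total mask area (1.0 means identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True      # synthetic "ground truth" bone mask
pred = np.zeros_like(gt)
pred[10:24, 8:24] = True   # synthetic prediction missing two rows
print(round(dice(gt, pred), 4))  # → 0.9333
```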
11

Koribalski, Bärbel S. "Source Finding and Visualisation." Publications of the Astronomical Society of Australia 29, no. 3 (2012): 213. http://dx.doi.org/10.1071/asv29n3_pr.

Abstract:
Large radio surveys such as those proposed for several SKA pathfinder and precursor telescopes require (automated) source-finding algorithms that are highly reliable, complete, and fast. Similarly, sophisticated visualisation tools are needed to explore the resulting survey data together with their multi-wavelength counterparts. In this PASA Special Issue on Source Finding and Visualisation, several advanced source-finding algorithms, including novel methods for radio continuum, polarisation and spectral line surveys, are described, tested and compared. The process of finding sources can be considered one of many important steps (e.g., pre-processing, source finding and characterisation, cataloguing/post-processing) in the production of astronomical source catalogues, on which much of the survey science is based. The Australian SKA Pathfinder (ASKAP), equipped with novel Chequerboard phased array feeds, will be a powerful 21-cm survey machine. For the large volumes of ASKAP data, a special version of Duchamp, called Selavy, is being developed in consultation with the ASKAP survey science teams (Whiting & Humphreys 2012). Extensive testing and comparisons of existing and new source-finding algorithms were carried out for this PASA Special Issue and are presented by Westmeier, Popping & Serra (2012), Westerlund, Harris & Westmeier (2012), Popping et al. (2012), Huynh et al. (2012), Hollitt & Johnston-Hollitt (2012), Walsh et al. (2012), George, Stil & Keller (2012), Marsh & Jarrett (2012), Allison, Sadler & Whiting (2012) and Jurek & Brown (2012). An overview of spectral line source finding and visualisation is given by Koribalski (2012), while Hassan, Fluke & Barnes (2012) look into real-time 3D volume rendering of large (terabyte) astronomical data cubes.
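As a toy illustration of what threshold-based source finding involves (this is not Duchamp, Selavy, or any of the algorithms surveyed; the image, threshold, and injected sources are invented):

```python
import numpy as np
from scipy import ndimage

# Synthetic noise image with two injected point sources.
rng = np.random.default_rng(2)
image = rng.normal(0.0, 1.0, size=(128, 128))
image[30:34, 40:44] += 8.0    # injected source 1
image[90:93, 100:103] += 8.0  # injected source 2

# Detect islands of emission above a 5-sigma threshold, then label
# connected pixels so each island counts as one candidate source.
mask = image > 5.0 * image.std()
labels, n_sources = ndimage.label(mask)
print(n_sources)
```

Real survey pipelines elaborate heavily on each step of this sketch, which is why reliability and completeness testing, the focus of the special issue, matters.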
12

Medved, M. S., S. D. Rud, G. E. Trufanov, and D. S. Lebedev. "The intraoperative visualisation technique during lead implantation into the cardiac conductive system: aspects of computed tomography: prospective study." Diagnostic radiology and radiotherapy 14, no. 3 (October 5, 2023): 46–52. http://dx.doi.org/10.22328/2079-5343-2023-14-3-46-52.

Abstract:
INTRODUCTION: Lead implantation into the cardiac conduction system (CCS) is currently the most physiological method of pacing. "The method of intraoperative visualization and control of the lead position for permanent electrocardiostimulation during implantation of the lead in the CCS" was developed to reduce the number of non-targeted implantations. The method is based on integrating into the angiography system a 3D reconstruction of the heart, derived from computed tomography (CT), displayed as a mask over fluoroscopy. CT is an important stage of the intraoperative visualisation technique (IVT). OBJECTIVE: The aim of the study was to adapt the contrast-enhanced cardiac CT protocol for constructing a partially segmented 3D reconstruction of the heart on an angiographic complex, for subsequent use during lead implantation in the CCS within the framework of the authors' IVT. MATERIALS AND METHODS: As part of the development of the IVT, 21 cardiac CT studies were selected from our own database, with a density-difference gradient step for contrasted blood of about 10 HU and a left ventricle (LV) to right ventricle (RV) densitometric difference ranging from 0 HU to 200 HU. A further 11 cardiac CT studies were selected, with a gradient step of about 10 HU for the difference between contrasted blood in the RV cavity and the myocardium, ranging from 0 HU to 100 HU. All CT scans were loaded in turn into the angiograph, followed by the creation of a 3D model of the heart using the basic software. RESULTS: To perform partial segmentation of the left and right chambers of a 3D heart model on an angiographic complex without a specialised segmentation module, the contrast of the LV cavity must exceed that of the RV cavity by at least 80 HU. With a smaller gradient, a substantial part of the LV cavity disappears when the RV cavity is suppressed. The minimum "ventricular cavity to myocardium" gradient is at least 20 HU; with a smaller contrast gradient, the boundaries of the right ventricular edge of the interventricular septum (IVS) are not visualised, which is important for determining the insertion site of the lead into the IVS. CONCLUSION: To perform partial segmentation of the left and right chambers of a 3D heart model on an angiographic complex without a specialised segmentation module, the contrast of the LV cavity must exceed that of the RV cavity by at least 80 HU, and that of the RV cavity must exceed the myocardium by at least 20 HU.
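The contrast margins in the conclusion (LV at least 80 HU above RV, RV at least 20 HU above myocardium) are what make threshold-only chamber separation possible. A toy numpy sketch with invented HU values and geometry, not the authors' protocol or software:

```python
import numpy as np

# Invented phantom: myocardium/background ~50 HU, contrasted RV ~170 HU,
# contrasted LV ~250 HU, so the LV-RV gap (80 HU) respects the minimum
# margin the study reports; the RV-myocardium gap here is made generous.
volume = np.full((40, 40, 40), 50.0)
volume[5:20, 5:35, 5:35] = 250.0    # LV blood pool
volume[25:35, 5:35, 5:35] = 170.0   # RV blood pool
volume += np.random.default_rng(3).normal(0.0, 5.0, volume.shape)  # CT noise

lv_mask = volume > 210.0                 # threshold midway between LV and RV
rv_mask = (volume > 110.0) & ~lv_mask    # threshold between RV and myocardium
print(int(lv_mask.sum()), int(rv_mask.sum()))
```

With gaps much smaller than these, the thresholds would sit within the noise and the chambers could no longer be split cleanly, which is the failure mode the authors describe.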
13

Essabah, Mouna, Samir Otmane, Guillaume Bouyer, Joan Hérisson, and Malik Mallem. "Analyse des systèmes de visualisation et d’interaction 3D pour la biologie moléculaire." Techniques et sciences informatiques 31, no. 2 (February 2012): 187–214. http://dx.doi.org/10.3166/tsi.31.187-214.

14

Pouletaut, P., I. Claude, R. Winzenrieth, M. C. Ho Ba Tho, and G. Sebag. "3D1 Osteochondrite primitive de hanche : visualisation 3D et caracterisation geometrique de l’articulation." Journal de Radiologie 85, no. 9 (September 2004): 1471. http://dx.doi.org/10.1016/s0221-0363(04)77531-9.

15

Forte, Mari Nieves Velasco, Tarique Hussain, Arno Roest, Gorka Gomez, Monique Jongbloed, John Simpson, Kuberan Pushparajah, Nick Byrne, and Israel Valverde. "Living the heart in three dimensions: applications of 3D printing in CHD." Cardiology in the Young 29, no. 06 (June 2019): 733–43. http://dx.doi.org/10.1017/s1047951119000398.

Abstract:
Advances in biomedical engineering have led to three-dimensional (3D)-printed models being used for a broad range of different applications. Teaching medical personnel, communicating with patients and relatives, planning complex heart surgery, or designing new techniques for repair of CHD via cardiac catheterisation are now options available using patient-specific 3D-printed models. The management of CHD can be challenging owing to the wide spectrum of morphological conditions and the differences between patients. Direct visualisation and manipulation of the patients’ individual anatomy has opened new horizons in personalised treatment, providing the possibility of performing the whole procedure in vitro beforehand, thus anticipating complications and possible outcomes. In this review, we discuss the workflow to implement 3D printing in clinical practice, the imaging modalities used for anatomical segmentation, the applications of this emerging technique in patients with structural heart disease, and its limitations and future directions.
16

Dury, Richard, Rob Dineen, Anbarasu Lourdusamy, and Richard Grundy. "Semi-automated medulloblastoma segmentation and influence of molecular subgroup on segmentation quality." Neuro-Oncology 21, Supplement_4 (October 2019): iv14. http://dx.doi.org/10.1093/neuonc/noz167.060.

Abstract:
Medulloblastoma is the most common malignant brain tumour in children. Segmenting the tumour from the surrounding tissue on MRI scans has been shown to be useful for neurosurgical planning, allowing a better understanding of the tumour margin through 3D visualisation. However, manual segmentation of medulloblastoma is time-consuming and prone to bias and inter-observer discrepancies. Here we propose a semi-automatic, patient-based segmentation pipeline with little sensitivity to tumour location and minimal user input. Using SPM12 “Segment” as a base, an additional tissue component describing the medulloblastoma is included in the algorithm. The user is required to define the centre of mass and a single surface point of the tumour, creating an approximate enclosing sphere. The calculated volume is confined to the cerebellum to minimise misclassification of other intracranial structures. This process typically takes 5 minutes from start to finish. The method was applied to 97 T2-weighted scans of paediatric medulloblastoma (7 WNT, 6 SHH, 17 Gr3, 26 Gr4, 41 unknown subtype); the resulting segmented volumes were compared to manual segmentations. An average Dice coefficient of 0.85±0.07 was found, with the Group 4 subtype demonstrating significantly higher similarity with manual segmentation than other subgroups (0.88±0.04). When visually assessing the 10 cases with the lowest Dice coefficients, misclassification of oedema was found to be the most common source of error. As this method is independent of image contrast, segmentation could be improved by applying it to images that are less sensitive to oedema, such as T1.
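The two clicks the pipeline asks for (tumour centre of mass plus one surface point) define the enclosing sphere that seeds the segmentation. A minimal sketch of that initialisation with invented coordinates, not the SPM12-based implementation itself:

```python
import numpy as np

# Invented grid and click positions; the real pipeline works on MRI volumes.
shape = (64, 64, 64)
centre = np.array([32.0, 30.0, 34.0])         # user click 1: centre of mass
surface_point = np.array([32.0, 42.0, 34.0])  # user click 2: surface point
radius = np.linalg.norm(surface_point - centre)

# Voxels within `radius` of the centre form the approximate enclosing sphere.
zz, yy, xx = np.indices(shape)
dist = np.sqrt((zz - centre[0])**2 + (yy - centre[1])**2 + (xx - centre[2])**2)
sphere_mask = dist <= radius
print(float(radius), int(sphere_mask.sum()))
```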
17

Martins, B., A. Smith, Z. Jing, A. Lazare, and J. M. Artonne. "Mammographie 3D : reduction de la dose et visualisation des structures a faible contraste." Journal de Radiologie 87, no. 10 (October 2006): 1271. http://dx.doi.org/10.1016/s0221-0363(06)86944-1.

18

Chen, Shuo-Tsung, Tzung-Dau Wang, Wen-Jeng Lee, Tsai-Wei Huang, Pei-Kai Hung, Cheng-Yu Wei, Chung-Ming Chen, and Woon-Man Kung. "Coronary Arteries Segmentation Based on the 3D Discrete Wavelet Transform and 3D Neutrosophic Transform." BioMed Research International 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/798303.

Abstract:
Purpose. Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study aimed to propose a novel segmentation method for coronary arteries to allow for the automatic and accurate detection of coronary pathologies. Methods. The proposed segmentation method included two parts. First, 3D region growing was applied to give the initial segmentation of the coronary arteries. Next, the location of vessel information, the HHH subband coefficients of the 3D DWT, was detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation could accurately detect the coronary arteries. Results. Each subbranch of the segmented coronary arteries was segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from the commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. Results indicate that the proposed method is more efficient. Conclusion. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
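The 3D region-growing step that produces the initial segmentation can be sketched in a few lines. This naive 6-connected flood fill on a synthetic volume is a stand-in for illustration, not the paper's implementation; the fixed intensity tolerance is an assumption:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tol):
    """Naive 3D region growing: starting from `seed`, absorb 6-connected
    voxels whose intensity is within `tol` of the seed intensity."""
    target = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                    and not mask[n] and abs(volume[n] - target) <= tol:
                mask[n] = True
                queue.append(n)
    return mask

vol = np.zeros((16, 16, 16))
vol[4:9, 4:9, 4:9] = 1.0          # bright "vessel" block in a dark volume
grown = region_grow_3d(vol, (6, 6, 6), tol=0.5)
print(int(grown.sum()))  # → 125, the 5x5x5 bright block
```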
19

Gende, Mateo, Joaquim De Moura, Jorge Novo, Pablo Charlon, and Marcos Ortega. "Automatic Segmentation and Intuitive Visualisation of the Epiretinal Membrane in 3D OCT Images Using Deep Convolutional Approaches." IEEE Access 9 (2021): 75993–6004. http://dx.doi.org/10.1109/access.2021.3082638.

20

Paparoditis, Nicolas, Jean-Pierre Papelard, Bertrand Cannelle, Alexandre Devaux, Bahman Soheilian, Nicolas David, and Erwan Houzay. "Stereopolis II: A multi-purpose and multi-sensor 3D mobile mapping system for street visualisation and 3D metrology." Revue Française de Photogrammétrie et de Télédétection, no. 200 (April 19, 2014): 69–79. http://dx.doi.org/10.52638/rfpt.2012.63.

Abstract:
In this article we present a hybrid laser-image 3D mobile mapping system for acquiring spatial data infrastructures that meet the needs of applications ranging from immersive multimedia navigation to 3D metrology over the web. We detail the design of the system, its sensors, its architecture and its calibration, as well as a web service offering 3D plotting through a SaaS (Software as a Service) tool, allowing anyone to enrich their own databases according to their needs. We also address data anonymisation, namely the detection and blurring of licence plates, an unavoidable step before distributing such data on the Internet through consumer applications.
21

Girod, Luc, and Marc Pierrot-Deseilligny. "L'Égalisation radiométrique de nuages de points 3D issus de corrélation dense." Revue Française de Photogrammétrie et de Télédétection, no. 206 (June 19, 2014): 3–14. http://dx.doi.org/10.52638/rfpt.2014.90.

Abstract:
While colorimetric problems in image mosaicking have been studied in depth in the past and are now largely solved, the same cannot be said for the equalisation of non-planar scenes and the associated 3D photogrammetric products. Indeed, some photogrammetric products are not images but purely 3D products, such as point clouds or textured surfaces. Colorimetric consistency nevertheless remains highly important in these cases for smoother visualisation of the results. This article therefore explores colorimetric correction algorithms to be applied to point clouds whose colour comes from several images, and their implementation in IGN's MicMac library. Two points are addressed here: on the one hand, the correction of image vignetting, a defect that causes intra-image homogeneity problems; on the other hand, inter-image equalisation.
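The first of the two problems, vignetting, is commonly modelled as a radial fall-off in image brightness. A hedged sketch of the idea with an invented polynomial gain; MicMac's actual model and coefficients may differ:

```python
import numpy as np

# Model the fall-off as a radial gain g(r) = 1 + a*r^2 + b*r^4
# (coefficients invented) and divide it out of the image.
h, w = 100, 120
yy, xx = np.indices((h, w))
cy, cx = (h - 1) / 2, (w - 1) / 2
r2 = ((yy - cy)**2 + (xx - cx)**2) / (cy**2 + cx**2)  # normalised radius^2

a, b = -0.4, -0.1
gain = 1.0 + a * r2 + b * r2**2        # darker toward the corners
flat = np.full((h, w), 200.0)          # ideal flat-field image
vignetted = flat * gain                # what the camera records
corrected = vignetted / gain           # correction recovers the flat field
print(float(np.abs(corrected - flat).max()))
```

Estimating the gain coefficients from real images, rather than assuming them, is the hard part the article addresses.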
22

Zakani, F. R., M. Bouksim, K. Arhid, M. Aboulfatah, and T. Gadi. "Segmentation of 3D meshes combining the artificial neural network classifier and the spectral clustering." Computer Optics 42, no. 2 (July 24, 2018): 312–19. http://dx.doi.org/10.18287/2412-6179-2018-42-2-312-319.

Abstract:
3D mesh segmentation has become an essential step in many applications in 3D shape analysis. In this paper, a new segmentation method is proposed based on a learning approach using an artificial neural network classifier and spectral clustering for segmentation. First, a training step is performed using an artificial neural network trained on existing segmentations, taken from the ground truth segmentations (produced by human operators) available in the benchmark proposed by Chen et al., to extract the candidate boundaries of a given 3D model based on a set of geometric criteria. Then, we use the resulting knowledge to construct a new connectivity of the mesh and apply the spectral clustering method to segment the 3D mesh into significant parts. Our approach was evaluated using different evaluation metrics. The experiments confirm that the proposed method yields good results and outperforms several competitive segmentation methods in the literature.
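The spectral-clustering step can be illustrated on a toy graph: build a reweighted adjacency, form the graph Laplacian, and cut along the sign of the Fiedler vector. The six-node graph below is hand-made for illustration; it is not a mesh connectivity learned by the network:

```python
import numpy as np

# Two triangles joined by one weak edge; the weak edge plays the role of
# a learned candidate boundary with low connection weight.
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    w = 0.1 if (i, j) == (2, 3) else 1.0   # weak bridge between the parts
    A[i, j] = A[j, i] = w

D = np.diag(A.sum(axis=1))
L = D - A                                  # unnormalised graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                    # eigenvector of 2nd-smallest eigenvalue
labels = (fiedler > 0).astype(int)         # sign of entries gives the 2-way cut
print(labels)
```

The cut lands on the low-weight edge, splitting nodes {0, 1, 2} from {3, 4, 5}, which is why down-weighting candidate boundaries steers the clustering toward meaningful parts.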
23

Nakkabi, Ismail, Mohammed Ridal, Najib Benmansour, Karim Nadour, Ali El Boukhari, and Mohammed Noureddine El Amine El Alami. "ANATOMIE DE LOREILLE INTERNE : LES CANAUX SEMI-CIRCULAIRE DAPRES UNE MODELISATION TRIDIMENTIONNELLE." International Journal of Advanced Research 10, no. 06 (June 30, 2022): 288–94. http://dx.doi.org/10.21474/ijar01/14886.

Abstract:
The present work concerns the anatomy of the inner ear, and in particular that of the semicircular canals. It is a study combining high-resolution imaging (CT of the petrous bone) with simple computing tools freely available on the Internet. This made possible the visualisation and precise study of certain anatomical parameters. We obtained graphical and numerical results. The graphical results, which show a 3D rendering of these structures, can have several practical applications, notably for teaching purposes. The numerical results demonstrate an inter-individual variability that may explain each person's response to diagnostic and therapeutic tests for vertigo.
24

Pacheco-Gutierrez, Salvador, Hanlin Niu, Ipek Caliskanelli, and Robert Skilton. "A Multiple Level-of-Detail 3D Data Transmission Approach for Low-Latency Remote Visualisation in Teleoperation Tasks." Robotics 10, no. 3 (July 14, 2021): 89. http://dx.doi.org/10.3390/robotics10030089.

Abstract:
In robotic teleoperation, knowledge of the state of the remote environment in real time is paramount. Advances in the development of highly accurate 3D cameras able to provide high-quality point clouds appear to be a feasible solution for generating live, up-to-date virtual environments. Unfortunately, the exceptional accuracy and high density of these data represent a burden for communications, requiring a large bandwidth and affecting setups where the local and remote systems are particularly geographically distant. This paper presents a multiple level-of-detail (LoD) compression strategy for 3D data based on tree-like codification structures capable of compressing a single data frame at multiple resolutions using dynamically configured parameters. The level of compression (resolution) of objects is prioritised based on: (i) placement in the scene; and (ii) the type of object. For the former, classical point cloud fitting and segmentation techniques are implemented; for the latter, user-defined prioritisation is considered. The results obtained are compared against a single-LoD (whole-scene) compression technique previously proposed by the authors. Results showed a considerable improvement in transmitted data size and update frame rate while maintaining low distortion after decompression.
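The basic operation behind tree-structured multi-resolution codecs of this kind is quantising points to voxel grids of different sizes: coarse cells for low-priority regions, fine cells for objects of interest. A simplified numpy sketch, not the authors' codec; the voxel sizes are invented:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Quantise points to a voxel grid and keep one representative
    (the voxel centroid) per occupied cell."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()              # normalise shape across numpy versions
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)       # accumulate points per voxel
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]          # per-voxel centroids

rng = np.random.default_rng(4)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))
coarse = voxel_downsample(cloud, 0.25)   # low priority: few, large voxels
fine = voxel_downsample(cloud, 0.05)     # high priority: many, small voxels
print(len(coarse), len(fine), len(cloud))
```

Transmitting `coarse` for the background and `fine` only for prioritised objects is the bandwidth trade the paper dynamically configures.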
25

Ge, Ting, Tianming Zhan, Qinfeng Li, and Shanxiang Mu. "Optimal Superpixel Kernel-Based Kernel Low-Rank and Sparsity Representation for Brain Tumour Segmentation." Computational Intelligence and Neuroscience 2022 (June 24, 2022): 1–12. http://dx.doi.org/10.1155/2022/3514988.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Given the need for quantitative measurement and 3D visualisation of brain tumours, more and more attention has been paid to the automatic segmentation of tumour regions from brain tumour magnetic resonance (MR) images. In view of the uneven grey distribution of MR images and the fuzzy boundaries of brain tumours, a representation model based on the joint constraints of kernel low-rank and sparsity (KLRR-SR) is proposed to mine the characteristics and structural prior knowledge of brain tumour image in the spectral kernel space. In addition, the optimal kernel based on superpixel uniform regions and multikernel learning (MKL) is constructed to improve the accuracy of the pairwise similarity measurement of pixels in the kernel space. By introducing the optimal kernel into KLRR-SR, the coefficient matrix can be solved, which allows brain tumour segmentation results to conform with the spatial information of the image. The experimental results demonstrate that the segmentation accuracy of the proposed method is superior to several existing methods under different indicators and that the sparsity constraint for the coefficient matrix in the kernel space, which is integrated into the kernel low-rank model, has certain effects in preserving the local structure and details of brain tumours.
26

Petitpas, Laurent, and Frédérick Van Meer. "L’utilisation de fichiers 3D pour la création d’un clone virtuel." Revue d'Orthopédie Dento-Faciale 55, no. 1 (February 2021): 53–72. http://dx.doi.org/10.1051/odf/2021005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
While many practitioners are equipped with intraoral optical scanners to take their 3D digital impressions, fewer use the 3D files produced by CBCT (Cone Beam Computed Tomography) volumetric imaging, and fewer still use 3D facial scans of their patients. All these 3D images, whose colour rendering is visually appealing, already allow an interesting immediate analysis of the patient. But can we go further? Are these 3D files from the different technologies interfaceable and connectable? The 3D files generated by the various acquisition systems each correspond to a virtualised part of the patient; despite sometimes differing file formats, it is possible to combine them to obtain a complete virtual patient: the "virtual twin". Several 3D graphic modelling software packages can import, convert and use files from the different types of 3D acquisition. Of course, using this software requires some initial learning, but in the end the digital procedures are simple. The aim of this article is therefore to familiarise you with these techniques for using digital 3D imaging.
27

Geerlings-Batt, Jade, Carley Tillett, Ashu Gupta, and Zhonghua Sun. "Enhanced Visualisation of Normal Anatomy with Potential Use of Augmented Reality Superimposed on Three-Dimensional Printed Models." Micromachines 13, no. 10 (October 10, 2022): 1701. http://dx.doi.org/10.3390/mi13101701.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Anatomical knowledge underpins the practice of many healthcare professions. While cadaveric specimens are generally used to demonstrate realistic anatomy, high cost, ethical considerations and limited accessibility can often impede their suitability for use as teaching tools. This study aimed to develop an alternative to traditional teaching methods: a novel teaching tool using augmented reality (AR) and three-dimensional (3D) printed models to accurately demonstrate normal ankle and foot anatomy. The open-source software 3D Slicer was used to segment a high-resolution magnetic resonance imaging (MRI) dataset of a healthy volunteer ankle and produce virtual bone and musculature objects. Bone and musculature were segmented using seed-planting and interpolation functions, respectively. Virtual models were imported into Unity 3D, which was used to develop the user interface and achieve interactivity prior to export to the Microsoft HoloLens 2. Three life-size models of bony anatomy were printed in yellow polylactic acid and thermoplastic polyurethane, with another model printed in white Visijet SL Flex with a supporting base attached to its plantar aspect. An interactive user interface with functional toggle switches was developed. Object recognition did not function as intended, with adequate tracking and AR superimposition not achieved. The models accurately demonstrate bony foot and ankle anatomy in relation to the associated musculature. Although segmentation outcomes were sufficient, the process was highly time consuming, and effective object recognition tools remain relatively inaccessible. This may limit the reproducibility of augmented reality learning tools on a larger scale. Research is required to determine the extent to which this tool accurately demonstrates anatomy and to ascertain whether its use improves learning outcomes and is effective for teaching anatomy.
28

Lee, Yee Sye, Ali Rashidi, Amin Talei, and Daniel Kong. "Innovative Point Cloud Segmentation of 3D Light Steel Framing System through Synthetic BIM and Mixed Reality Data: Advancing Construction Monitoring." Buildings 14, no. 4 (March 30, 2024): 952. http://dx.doi.org/10.3390/buildings14040952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In recent years, mixed reality (MR) technology has gained popularity in construction management due to its real-time visualisation capability to facilitate on-site decision-making tasks. The semantic segmentation of building components provides an attractive solution towards digital construction monitoring, reducing workloads through automation techniques. Nevertheless, data shortages remain an issue in maximizing the performance potential of deep learning segmentation methods. The primary aim of this study is to address this issue through synthetic data generation using Building Information Modelling (BIM) models. This study presents a point-cloud-based deep learning segmentation approach to a 3D light steel framing (LSF) system through synthetic BIM models and as-built data captured using MR headsets. A standardisation workflow between BIM and MR models was introduced to enable seamless data exchange across both domains. A total of five different experiments were set up to identify the benefits of synthetic BIM data in supplementing actual as-built data for model training. The results showed that the average testing accuracy using solely as-built data stood at 82.88%. Meanwhile, the introduction of synthetic BIM data into the training dataset led to an improved testing accuracy of 86.15%. A hybrid dataset also enabled the model to segment both the BIM and as-built data captured using an MR headset at an average accuracy of 79.55%. These findings indicate that synthetic BIM data have the potential to supplement actual data, reducing the costs associated with data acquisition. In addition, this study demonstrates that deep learning has the potential to automate construction monitoring tasks, aiding in the digitization of the construction industry.
29

Torregrosa-Fuentes, David, Yolanda Spairani Berrio, José Antonio Huesca Tortosa, Jaime Cuevas González, and Adrián José Torregrosa Fuentes. "Aplicación de la fotogrametría automatizada y de técnicas de iluminación con herramientas SIG para la visualización y el análisis de una piedra con relieves antropomorfos." Virtual Archaeology Review 9, no. 19 (July 20, 2018): 114. http://dx.doi.org/10.4995/var.2018.9531.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
<p><strong>Extended Abstract:</strong></p><p>We present a methodological approach for the representation, visualisation and analysis of three-dimensional (3D) models of meaningful details in stone reliefs provided by digital documentation tools and subsequent processing. For this aim, anthropomorphous shapes engraved on a flat stone slab found in Sierra de Fontcalent (Alicante) are studied. The object under consideration was located near two archaeological sites, Cova del Fum–a cave with presence of Chalcolithic material (López, 2010)–and the archaeological site of Fontcalent, with remains from different phases of occupation spanning from the 7th–6th centuries BC to the 20th century (Ximénez, 2012).</p><p>In the last few years, the use of digital tools provided by new technologies and software development has left traditional work methodology behind (De Reu et al., 2014) while enabling the development of new approaches to both minimise heritage alteration and provide objective and accurate information (Lopez-Menchero, Marchante, Vincent, Cárdenas, &amp; Onrubia, 2017). 3D documentation allows recording of cultural heritage at a reasonable cost with precision and quality through digital photography and SfM (Structure from Motion) photogrammetry with specialised software (De Reu et al., 2013).</p><p>In this project, recording and documentation with digital photography and automated photogrammetric techniques are applied to the Fontcalent stone slab for its digitisation and subsequent 3D representation. From the resulting model, a two-fold line of study is obtained.
On the one hand, a Digital Elevation Model (DEM) is generated to study the microtopographies of the stone with geographic analysis techniques provided by Geographic Information Systems (GIS) under different lighting conditions and surface reflections, which are calculated by hillshading or LRM (Local Relief Model) for the interpretation of the object (Carrero-Pazos, Vilas, Romaní, &amp; Rodríguez, 2014; Gawior, Rutkiewicz, Malik &amp; Wistuba, 2017). On the other hand, from both the 3D model and the point cloud, the study is completed with the application of methods of analysis and visualisation based on the Morphological Residue Model (MRM), which highlights every single detail of the surface morphology of the object (Caninas, Pires, Henriques, &amp; Chambino, 2016; Correia, Pires, &amp; Sousa, 2014). Further visualisations are based on Reflectance Transformation Imaging (RTI), which provides different shadows and reflections over the object through the application of multidirectional illumination (Happa et al., 2010; Malzbender, Gelb, Wolters, &amp; Zuckerman, 2000; Mudge et al., 2010).</p><p>The results thus obtained for the Fontcalent stone slab allow us to visualise several characteristic elements. The anthropomorphous figure awakening interest is complemented by a figure revealed by the different visualisations applied with GIS techniques, which may resemble a zoomorph. The use of the visualisation techniques shown in this study has been fundamental in recognising the latter element. The composition reveals a zigzag line already appreciated before the study, so it is interesting to check whether visualisations based on GIS techniques are able to highlight it despite its shallow incisions. In our experience with this study, visualisation using the hillshading technique shows a greater level of 3D detail than the sky-view factor technique, which offers a flatter view.
However, the former technique may occasionally show shadows which hide other details, unlike the latter technique which plots the entire slab surface illuminated while differentiating the associated microtopography on the basis of its marks. The use of shaders in combination with hillshading and particularly combined with high pass filtering, contributes to improving the visualisation and accuracy of shadowed areas. As a result, we conclude that the results obtained in this work by lighting techniques with GIS add a greater level of detail in comparison to those provided by the mesh or the point cloud.</p><p>The study of the Fontcalent stone slab paves the way for two working hypotheses to be developed: on the one hand, its anthropological origin possibly related to the Chalcolithic, and on the other hand, its study as natural geological formations with ichnofossils.</p><p>The digitisation of cultural heritage with available 3D technologies should be a mandatory requirement when facing any study, analysis or intervention. With the current development of such techniques, we have verified their contribution to fundamental characteristics in the corresponding stages of visualisation and study. Thus, the proposed methodology is presented as an accurate and complete alternative for the study and analysis of the existing cultural heritage, and opens new ways for the revision, reinterpretation and revaluation of the previously evaluated heritage through traditional techniques.</p>
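Hillshading, used above to reveal microtopography, derives shaded relief from a DEM by combining the local slope and aspect with a chosen light direction. A minimal sketch following the standard (ESRI-style) hillshade formula; the default azimuth and altitude are conventional values, not parameters taken from the paper:

```python
import math

def hillshade(dem, cell_size, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief for the interior cells of a small DEM grid.

    dem: 2D list of elevations; returns values in [0, 1] (borders left at 0)."""
    az = math.radians(360.0 - azimuth_deg + 90.0)  # geographic -> math angle
    zenith = math.radians(90.0 - altitude_deg)
    rows, cols = len(dem), len(dem[0])
    shade = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # central differences approximate the surface gradient
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2 * cell_size)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2 * cell_size)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            v = (math.cos(zenith) * math.cos(slope)
                 + math.sin(zenith) * math.sin(slope) * math.cos(az - aspect))
            shade[r][c] = max(0.0, v)
    return shade
```

Rendering the same DEM with several azimuths, as the authors do, helps reveal incisions that a single light direction would leave in shadow.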
30

Capellini, Katia, Vincenzo Positano, Michele Murzi, Pier Andrea Farneti, Giovanni Concistrè, Luigi Landini, and Simona Celi. "A Decision-Support Informatics Platform for Minimally Invasive Aortic Valve Replacement." Electronics 11, no. 12 (June 17, 2022): 1902. http://dx.doi.org/10.3390/electronics11121902.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Minimally invasive aortic valve replacement is performed by mini-sternotomy (MS) or less invasive right anterior mini-thoracotomy (RT). The possibility of adopting RT is assessed by anatomical criteria derived from manual 2D image analysis. We developed a semi-automatic tool (RT-PLAN) to assess the criteria of RT, extract other parameters of surgical interest and generate a view of the anatomical region in a 3D space. Twenty-five 3D CT images from a dataset were retrospectively evaluated. The methodology starts with segmentation to reconstruct 3D surface models of the aorta and anterior rib cage. Secondly, the RT criteria and geometric information from these models are automatically and quantitatively evaluated. A comparison is made between the values of the parameters measured by the standard manual 2D procedure and our tool. The RT-PLAN procedure was feasible in all cases. Strong agreement was found between RT-PLAN and the standard manual 2D procedure. There was no difference between the RT-PLAN and the standard procedure when selecting patients for the RT technique. The tool developed is able to effectively perform the assessment of the RT criteria, with the addition of a realistic visualisation of the surgical field through virtual reality technology.
31

Herráez, Borja Javier, and Eduardo Vendrell. "Segmentación de mallas 3d de edificios históricos para levantamiento arquitectónico." Virtual Archaeology Review 9, no. 18 (January 10, 2018): 66. http://dx.doi.org/10.4995/var.2018.5858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
<p>Advances in three-dimensional (3D) acquisition systems have introduced this technology to more fields of study, such as archaeology and architecture. In the architectural field, scanning a building is one of the first possible steps from which a 3D model can be obtained and later used for visualisation and/or feature analysis, thanks to computer-based pattern recognition tools. The automation of these tools saves time and has become a strong aid for professionals, so more and more methods are developed with this objective. In this article, a method for 3D mesh segmentation focused on the representation of historic buildings is proposed. This type of building is characterised by singularities and features in façades, such as doors or windows. The main objective is to recognise these features, understanding them as those parts of the model that differ from the main structure of the building. The idea is to use a recognition algorithm for planar faces that allows users to create a graph showing the connectivity between them, thus reflecting the shape of the 3D model. At a later step, this graph is matched against pre-defined graphs that represent the patterns to look for. Each coincidence between both graphs indicates the position of one of the characteristics sought. The developed method has proved effective for feature detection and suitable for inclusion in architectural surveying applications.</p>
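The matching step described in this abstract, locating a predefined pattern graph (e.g. a window template) inside the connectivity graph of planar faces, can be illustrated with a naive subgraph search. A brute-force sketch, adequate only for small pattern graphs; all names are hypothetical and real tools would use a proper subgraph-isomorphism algorithm:

```python
from itertools import permutations

def find_pattern(graph, pattern):
    """Return all mappings of `pattern` nodes onto `graph` nodes that
    preserve every pattern edge. Both arguments are adjacency dicts
    mapping a node to the set of its neighbours."""
    g_nodes, p_nodes = list(graph), list(pattern)
    matches = []
    for combo in permutations(g_nodes, len(p_nodes)):
        mapping = dict(zip(p_nodes, combo))
        # every edge (a, b) of the pattern must exist between mapped nodes
        if all(mapping[b] in graph[mapping[a]]
               for a in pattern for b in pattern[a]):
            matches.append(mapping)
    return matches
```

Each returned mapping corresponds to one "coincidence" in the paper's terms, i.e. one candidate location of the sought feature on the façade.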
32

Kharroubi, A., R. Hajji, R. Billen, and F. Poux. "CLASSIFICATION AND INTEGRATION OF MASSIVE 3D POINTS CLOUDS IN A VIRTUAL REALITY (VR) ENVIRONMENT." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (November 29, 2019): 165–71. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-165-2019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. With the increasing volume of 3D applications using immersive technologies such as virtual, augmented and mixed reality, it is very interesting to create better ways to integrate unstructured 3D data such as point clouds as a source of data. Indeed, this can lead to an efficient workflow from 3D capture to 3D immersive environment creation without the need to derive a 3D model or run lengthy optimization pipelines. In this paper, the main focus is on the direct classification and integration of massive 3D point clouds in a virtual reality (VR) environment. The emphasis is put on leveraging open-source frameworks for easy replication of the findings. First, we develop a semi-automatic segmentation approach to provide semantic descriptors (mainly classes) to groups of points. We then build an octree data structure leveraged through out-of-core algorithms to load in real time and continuously only the points that are in the VR user's field of view. Then, we provide an open-source solution using Unity with a user interface for VR point cloud interaction and visualisation. Finally, we provide a full semantic VR data integration enhanced through developed shaders for future spatio-semantic queries. We tested our approach on several datasets, one of which is a point cloud composed of 2.3 billion points representing the heritage site of the castle of Jehay (Belgium). The results underline the efficiency and performance of the solution for visualising classified massive point clouds in virtual environments at more than 100 frames per second.
33

Ottom, Mohammad Ashraf, Hanif Abdul Rahman, Iyad M. Alazzam, and Ivo D. Dinov. "Multimodal Stereotactic Brain Tumor Segmentation Using 3D-Znet." Bioengineering 10, no. 5 (May 11, 2023): 581. http://dx.doi.org/10.3390/bioengineering10050581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Stereotactic brain tumor segmentation based on 3D neuroimaging data is a challenging task due to the complexity of the brain architecture, extreme heterogeneity of tumor malformations, and the extreme variability of intensity signal and noise distributions. Early tumor diagnosis can help medical professionals to select optimal medical treatment plans that can potentially save lives. Artificial intelligence (AI) has previously been used for automated tumor diagnostics and segmentation models. However, the model development, validation, and reproducibility processes are challenging. Often, cumulative efforts are required to produce a fully automated and reliable computer-aided diagnostic system for tumor segmentation. This study proposes an enhanced deep neural network approach, the 3D-Znet model, based on the variational autoencoder–autodecoder Znet method, for segmenting 3D MR (magnetic resonance) volumes. The 3D-Znet artificial neural network architecture relies on fully dense connections to enable the reuse of features on multiple levels to improve model performance. It consists of four encoders and four decoders along with the initial input and the final output blocks. Encoder–decoder blocks in the network include double convolutional 3D layers, 3D batch normalization, and an activation function. These are followed by size normalization between inputs and outputs and network concatenation across the encoding and decoding branches. The proposed deep convolutional neural network model was trained and validated using a multimodal stereotactic neuroimaging dataset (BraTS2020) that includes multimodal tumor masks. Evaluation of the pretrained model resulted in the following dice coefficient scores: Whole Tumor (WT) = 0.91, Tumor Core (TC) = 0.85, and Enhanced Tumor (ET) = 0.86. The performance of the proposed 3D-Znet method is comparable to other state-of-the-art methods. 
Our protocol demonstrates the importance of data augmentation to avoid overfitting and enhance model performance.
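The Dice coefficient reported above measures the overlap between a predicted segmentation mask and the ground truth, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch of the computation for binary masks:

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks (flat sequences of 0/1):
    2 * |P ∩ T| / (|P| + |T|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # both masks empty counts as perfect agreement by convention
    return 2.0 * intersection / total if total else 1.0
```

In multi-class settings such as BraTS, the score is computed separately per sub-compartment (WT, TC, ET) by binarising each label in turn.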
34

Bellin, M. F., J. Nauroy, D. Castaing, I. Ewenczyk, R. Adam, D. Azoulay, D. Samuel, M. P. Bralet, C. Guettier, and A. Osorio. "DIG21 Segmentation 3D et mesure du volume du foie et des lésions hépatiques : intérêt avant chirurgie hépatique." Journal de Radiologie 86, no. 10 (October 2005): 1482. http://dx.doi.org/10.1016/s0221-0363(05)75977-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bareja, Rohan, Marwa Ismail, Douglas Martin, Ameya Nayate, Benita Tamrazi, Ralph Salloum, Ashley Margol, et al. "NIMG-88. A TRANSFER LEARNING APPROACH FOR AUTOMATIC SEGMENTATION OF TUMOR SUB-COMPARTMENTS IN PEDIATRIC MEDULLOBLASTOMA USING MULTIPARAMETRIC MRI: PRELIMINARY FINDINGS." Neuro-Oncology 24, Supplement_7 (November 1, 2022): vii185—vii186. http://dx.doi.org/10.1093/neuonc/noac209.706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract PURPOSE Superior outcomes for medulloblastoma (MB) require precise surgical resection, which can be guided by tumor segmentation. We present the first attempt at automatic segmentation of MB tumors via a hierarchical transfer-learning model that (1) segments the entire tumor habitat (enhancing tumor (ET), necrosis/non-enhancing tumor (NET), edema), followed by (2) training separate models for each of the sub-compartments. Transfer learning from adult brain tumors is used to optimize segmentation of tumor sub-compartments for pediatric MB. METHODS We evaluated 300 adult glioma studies (BRATS) and 49 pediatric MB studies (2-18 years old), both consisting of Gd-T1w, T2w, FLAIR sequences. The MB cohort was collected from Children's Hospital of Los Angeles (N = 19) and Cincinnati Children's Hospital Medical Center (N = 30). Scans were registered to age-specific pediatric atlases, followed by bias correction and skull-stripping. Ground truth for the tumor sub-compartments was generated via consensus across two experts. We employed a 3D nn-Unet segmentation model on the BRATS dataset using an initial learning rate of 0.01, stochastic gradient descent as the optimizer, and an average of dice loss and cross-entropy loss as the loss function. A hierarchical transfer learning model with Models Genesis was then applied, which allowed for fine-tuning every layer on the pediatric MB dataset, across 5-fold cross-validation. Dice score was used as the performance metric, such that a perfect overlap between ground truth and prediction would yield a Dice score of 1. RESULTS Our 3D hierarchical segmentation model yielded mean dice scores of 0.85±0.03 for the entire tumor habitat, 0.77±0.048 for ET, 0.73±0.09 for edema, and 0.56±0.09 for NET + necrosis segmentation, across cross-validation runs. Overall, tumor outline and segmentation matched well with the ground truth, especially for the entire tumor and enhancing tumor sub-compartments.
CONCLUSIONS Our segmentation approach holds promise for accurate automated delineation of the tumor sub-compartments in pediatric Medulloblastoma.
36

Wu, Qian, Yuyao Pei, Zihao Cheng, Xiaopeng Hu, and Changqing Wang. "SDS-Net: A lightweight 3D convolutional neural network with multi-branch attention for multimodal brain tumor accurate segmentation." Mathematical Biosciences and Engineering 20, no. 9 (2023): 17384–406. http://dx.doi.org/10.3934/mbe.2023773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
<abstract> <p>The accurate and fast segmentation of tumor regions in brain Magnetic Resonance Imaging (MRI) is significant for clinical diagnosis, treatment and monitoring, given the aggressiveness and high mortality rate of brain tumors. However, due to their computational complexity, convolutional neural networks (CNNs) face challenges in being efficiently deployed on resource-limited devices, which restricts their popularity in practical medical applications. To address this issue, we propose a lightweight and efficient 3D convolutional neural network, SDS-Net, for multimodal brain tumor MRI image segmentation. SDS-Net combines depthwise separable convolution and traditional convolution to construct the 3D lightweight backbone blocks, lightweight feature extraction (LFE) and lightweight feature fusion (LFF) modules, which effectively utilize the rich local features in multimodal images and enhance the segmentation performance of sub-tumor regions. In addition, 3D shuffle attention (SA) and 3D self-ensemble (SE) modules are incorporated into the encoder and decoder of the network. The SA helps to capture high-quality spatial and channel features from the modalities, and the SE acquires more refined edge features by gathering information from each layer. The proposed SDS-Net was validated on the BRATS datasets. Dice coefficients of 92.7, 80.0 and 88.9% were achieved for whole tumor (WT), enhancing tumor (ET) and tumor core (TC), respectively, on the BRATS 2020 dataset. On the BRATS 2021 dataset, the Dice coefficients were 91.8, 82.5 and 86.8% for WT, ET and TC, respectively. Compared with other state-of-the-art methods, SDS-Net achieved superior segmentation performance with fewer parameters and less computational cost, at 2.52 M parameters and 68.18 G FLOPs.</p> </abstract>
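The parameter savings that make SDS-Net lightweight come from replacing standard 3D convolutions with depthwise separable ones: a per-channel k×k×k filter followed by a 1×1×1 pointwise convolution. The two parameter counts can be compared with a short sketch (bias terms omitted for simplicity):

```python
def standard_conv3d_params(c_in, c_out, k):
    """Parameters of a standard 3D convolution: one k^3 kernel per
    (input channel, output channel) pair."""
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in, c_out, k):
    """Depthwise separable 3D convolution: one k^3 depthwise filter per
    input channel, plus a 1x1x1 pointwise convolution mixing channels."""
    return c_in * k ** 3 + c_in * c_out
```

For a 64-to-64-channel 3×3×3 layer this works out to roughly a 19× reduction, which is why separable convolutions dominate lightweight segmentation architectures.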
37

Lê, Than Vu, and Mauro Gaio. "Visualisation 3D de terrain texturé. Préservation au niveau du pixel des qualités géométriques et colorimétriques, une méthode temps réel, innovante et simple." Revue internationale de géomatique 22, no. 3 (September 30, 2012): 461–84. http://dx.doi.org/10.3166/rig.22.461-484.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Avena, M., E. Colucci, G. Sammartano, and A. Spanò. "HBIM MODELLING FOR AN HISTORICAL URBAN CENTRE." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 831–38. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-831-2021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. Research on geospatial data structuring and format interoperability is a crucial task for creating a 3D geodatabase at the urban scale. Both geometric and semantic data structuring should be considered, mainly regarding the interoperability of objects and formats generated outside the geographical space. Current reflections on 3D database generation based on geospatial data are mostly related to visualisation issues and context-related applications. The purposes and scale of representation according to LoDs require some reflection, particularly for the transmission of semantic information. This contribution adopts and develops the integration of tools to derive object-oriented modelling in the HBIM environment, at both the urban and architectural scales, from point clouds obtained by UAV (Unmanned Aerial Vehicle) photogrammetry. One of the paper's objectives is retracing the analysis phases of the point clouds acquired by the UAV photogrammetry technique and their suitability for multiscale modelling. Starting from UAV clouds, through optimisation and segmentation, the proposed workflow triggers the modelling of objects according to the LoDs, comparing the one coming from CityGML with the one in use in the BIM community. The experimentation proposed is focused on the case study of the city of Norcia, which, like many other historic centres across central Italy, was deeply damaged by the 2016-17 earthquake.
39

Xu, S., and Z. Zhang. "JSMNET: IMPROVING INDOOR POINT CLOUD SEMANTIC AND INSTANCE SEGMENTATION THROUGH SELF-ATTENTION AND MULTISCALE FUSION." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (December 13, 2023): 195–201. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-195-2023.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. The semantic understanding of indoor 3D point cloud data is crucial for a range of subsequent applications, including indoor service robots, navigation systems, and digital twin engineering. Global features are crucial for achieving high-quality semantic and instance segmentation of indoor point clouds, as they provide essential long-range context information. To this end, we propose JSMNet, which combines a multi-layer network with a global feature self-attention module to jointly segment three-dimensional point cloud semantics and instances. To better express the characteristics of indoor targets, we have designed a multi-resolution feature adaptive fusion module that takes into account the differences in point cloud density caused by varying scanner distances from the target. Additionally, we propose a framework for joint semantic and instance segmentation by integrating semantic and instance features to achieve superior results. We conduct experiments on S3DIS, which is a large three-dimensional indoor point cloud dataset. Our proposed method is compared against other methods, and the results show that it outperforms existing methods in semantic and instance segmentation and provides better results in target local area segmentation. Specifically, our proposed method outperforms PointNet (Qi et al., 2017a) by 16.0% and 26.3% in terms of semantic segmentation mIoU in S3DIS (Area 5) and instance segmentation mPre, respectively. Additionally, it surpasses ASIS (Wang et al., 2019) by 6.0% and 4.6%, respectively, as well as JSPNet (Chen et al., 2022) by a margin of 3.3% for semantic segmentation mIoU and a slight improvement of 0.3% for instance segmentation mPre.
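The mIoU metric used in the semantic-segmentation comparison above averages the per-class intersection-over-union between predicted and ground-truth labels. A minimal sketch for flat label vectors (illustrative only):

```python
def miou(pred, truth, num_classes):
    """Mean intersection-over-union over all classes that appear in
    either the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:  # skip classes absent from both, so they don't skew the mean
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

Instance-segmentation precision (mPre in the abstract) is computed differently, by matching predicted instances to ground-truth instances above an IoU threshold, but it builds on the same overlap measure.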
40

Ramilison, Eloi, Axel Legouge, Michel Lucciano, Catherine Masson, and Arnaud Deveze. "Caractérisation acoustique de conduits auditifs externes normaux : de l’humain aux modèles imprimés 3D." Audiology Direct, no. 4 (2020): 5. http://dx.doi.org/10.1051/audiodir/202004005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Objective: To create and validate a 3D-printed model of the normal external auditory canal (EAC) that is bio-faithful to human EACs. Methods: We harvested ten human EACs from anatomical specimens. After volume acquisition on a conventional CT scanner, the digital 3D design of the EACs involved segmentation and the addition of a tympanic support. We used PLA for printing and adhesive tape to simulate an artificial tympanic membrane. Umbo velocimetry was measured with a laser coupled to a dedicated acoustic stimulation-and-recording chain. Results: Compared with human EACs, the models showed statistically identical responses. A second peak was observed at 5 kHz in the pattern of the printed EACs. The high frequencies showed a more chaotic profile. Conclusion: 3D-printed EACs are valid models, bio-faithful to human EACs. Normalising the observed amplification yields a model useful for optimising hearing amplification or protection devices.
41

Vandergucht, David. "ZFOREST : UN PROTOTYPE DE PLATEFORME WEB DE COVISUALISATION LIDAR, RASTER ET VECTEUR À GRANDE ÉCHELLE." Revue Française de Photogrammétrie et de Télédétection 1, no. 211-212 (December 30, 2020): 129–42. http://dx.doi.org/10.52638/rfpt.2015.551.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In research as in forestry operations, airborne Lidar data provides keys to understanding the structure of the terrain and the forest and, by extension, information on above-ground biomass. But to be useful, this Lidar data must cover vast areas while remaining highly resolved spatially. These characteristics translate into large data volumes that are very difficult to visualise, manipulate, and study without very expensive software. As part of the ANR FORESEE project, we developed a web application for mixed visualisation of Lidar point clouds / 3D surfaces derived from a Digital Terrain Model / maps / aerial and terrestrial photographs / vector data: the zForest platform. This software, aimed at remote sensing researchers and, eventually, forest managers, enables large-scale navigation through massive data and its exploration, from the broadest level of detail (the region) down to the finest (the tree). The tool supports measurement, annotation, and data extraction. It also offers a web programming interface (API) allowing other tools on the market to use its source data. As zForest is a web platform, it runs without installation on all recent web browsers, facilitating its accessibility and deployment.
42

Traxer, O., S. Merran, A. Osorio, J. Atif, and X. Ripoche. "Segmentation 3D et mesure instantanee du volume des lithiases coraliformes : application a la nephrolithotomie percutanee." Journal de Radiologie 85, no. 9 (September 2004): 1614. http://dx.doi.org/10.1016/s0221-0363(04)78061-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Santarossa, Monty, Ayse Tatli, Claus von der Burchard, Julia Andresen, Johann Roider, Heinz Handels, and Reinhard Koch. "Chronological Registration of OCT and Autofluorescence Findings in CSCR: Two Distinct Patterns in Disease Course." Diagnostics 12, no. 8 (July 22, 2022): 1780. http://dx.doi.org/10.3390/diagnostics12081780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Optical coherence tomography (OCT) and fundus autofluorescence (FAF) are important imaging modalities for the assessment and prognosis of central serous chorioretinopathy (CSCR). However, setting the findings from both into spatial and temporal contexts as desirable for disease analysis remains a challenge due to both modalities being captured in different perspectives: sparse three-dimensional (3D) cross sections for OCT and two-dimensional (2D) en face images for FAF. To bridge this gap, we propose a visualisation pipeline capable of projecting OCT labels to en face image modalities such as FAF. By mapping OCT B-scans onto the accompanying en face infrared (IR) image and then registering the IR image onto the FAF image by a neural network, we can directly compare OCT labels to other labels in the en face plane. We also present a U-Net inspired segmentation model to predict segmentations in unlabeled OCTs. Evaluations show that both our networks achieve high precision (0.853 Dice score and 0.913 Area under Curve). Furthermore, medical analysis performed on exemplary, chronologically arranged CSCR progressions of 12 patients visualized with our pipeline indicates that, on CSCR, two patterns emerge: subretinal fluid (SRF) in OCT preceding hyperfluorescence (HF) in FAF and vice versa.
44

Petitpas, Laurent. "De l’utilisation des technologies 3D numériques pour l’analyse, la planification et le rétrocontrôle d’un traitement orthodontique de troubles fonctionnels temporo-mandibulaires." Revue d'Orthopédie Dento-Faciale 53, no. 3 (September 2019): 297–315. http://dx.doi.org/10.1051/odf/2019027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article, we present an instructive clinical case of an adolescent girl in which digital analyses were used throughout the management of her retreatment. The advent of cone-beam computed tomography (CBCT) and 3D software allows orthodontists to provide more precise diagnoses, simulations, and treatments. From an ethical standpoint, it is not acceptable to treat patients without using the methods most beneficial to them. Indeed, CBCT DICOM files contain a wealth of information we did not previously have. Segmentation techniques using threshold-based selection software allow us to visualise root and bone relationships precisely in 3D. Accurate knowledge of the position of the tooth roots and the bony bases improves the assessment of orthodontic treatment success through closer monitoring. Nowadays, given the speed of technological development, a combination of intraoral scanners, 3D digital records, individualised multi-bracket appliances, customised archwires, digital indirect bonding, and finishing aligners will soon become a standard of orthodontic care.
45

Petitpas, Laurent. "De l’utilisation des technologies 3D numériques pour l’analyse, la planification et le rétrocontrôle d’un traitement orthodontique de troubles fonctionnels temporo-mandibulaires." Revue d'Orthopédie Dento-Faciale 54, no. 3 (September 2020): 331–48. http://dx.doi.org/10.1051/odf/2020034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this article, we present an instructive clinical case of an adolescent girl in which digital analyses were used throughout the management of her retreatment. The advent of cone-beam computed tomography (CBCT) and 3D software allows orthodontists to provide more precise diagnoses, simulations, and treatments. From an ethical standpoint, it is not acceptable to treat patients without using the methods most beneficial to them. Indeed, CBCT DICOM files contain a wealth of information we did not previously have. Segmentation techniques using threshold-based selection software allow us to visualise root and bone relationships precisely in 3D. Accurate knowledge of the position of the tooth roots and the bony bases improves the assessment of orthodontic treatment success through closer monitoring. Nowadays, given the speed of technological development, a combination of intraoral scanners, 3D digital records, individualised multi-bracket appliances, customised archwires, digital indirect bonding, and finishing aligners will soon become a standard of orthodontic care.
46

Peltier, A., K. Narahari, and R. Van Velthoven. "Visualisation 3D du trajet réel de la biopsie dans la prostate et son impact clinique en routine diagnostique du cancer." Progrès en Urologie 22, no. 13 (November 2012): 770. http://dx.doi.org/10.1016/j.purol.2012.08.071.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Purwani, Sri, Julita Nahar, and Carole Twining. "Brain Image Segmentation with Gradient Information." International Journal of Engineering & Technology 7, no. 4.38 (December 3, 2018): 1392. http://dx.doi.org/10.14419/ijet.v7i4.38.27882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Segmentation is the process of extracting structures within images. The purpose is to simplify the representation of the image into something meaningful and easier to analyse. A magnetic resonance (MR) brain image can be represented as three main tissues, i.e. cerebrospinal fluid (CSF), grey matter, and white matter. Although various segmentation methods have been developed, such images are generally segmented by modelling the intensity histogram with a Gaussian Mixture Model (GMM). However, the standard use of a 1D histogram sometimes fails to find the means of the Gaussians. We therefore address this by including gradient information, using a 2D intensity and intensity-gradient histogram. We applied our method to real 2D MR brain images. We then compared it with the previously published method of Petrovic et al. on their dataset, as well as on our larger datasets extracted from the same database of 3D MR brain images, where ground-truth annotations are available. This shows that our method performs better than the previous method.
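The general idea behind GMM-based tissue segmentation with a joint intensity/gradient feature can be sketched as follows. This is a minimal illustration of the technique, not the authors' implementation; the synthetic "image", class means, and component count are assumptions for the example (using NumPy and scikit-learn).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "brain slice": three horizontal bands with different mean
# intensities, loosely standing in for CSF, grey matter and white matter.
image = np.concatenate([
    rng.normal(0.2, 0.03, 2000),
    rng.normal(0.5, 0.03, 2000),
    rng.normal(0.8, 0.03, 2000),
]).reshape(60, 100)

# Gradient magnitude adds a second feature dimension: boundaries between
# tissues have high gradient, tissue interiors have low gradient.
gy, gx = np.gradient(image)
grad = np.hypot(gx, gy)

# Stack (intensity, gradient) per pixel and fit a 3-component GMM,
# then label each pixel by its most likely component.
features = np.column_stack([image.ravel(), grad.ravel()])
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
labels = gmm.predict(features).reshape(image.shape)
```

With well-separated classes, the fitted component means along the intensity axis recover the three tissue intensities.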
48

Wang, Bin, Yuanyuan Zhang, Chunyan Wu, and Fen Wang. "Multimodal MRI Analysis of Cervical Cancer on the Basis of Artificial Intelligence Algorithm." Contrast Media & Molecular Imaging 2021 (November 8, 2021): 1–11. http://dx.doi.org/10.1155/2021/1673490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The purpose of this study is to explore the application value of artificial intelligence algorithms in multimodal MRI image diagnosis of cervical cancer. Based on the traditional convolutional neural network (CNN), an artificial intelligence 3D-CNN algorithm is designed according to the characteristics of cervical cancer. 70 patients with cervical cancer were selected as the experimental group, and 10 healthy people were selected as the reference group. The 3D-CNN algorithm was applied to the diagnosis of clinical cervical cancer multimodal MRI images. The value of the algorithm was comprehensively evaluated by image quality and diagnostic accuracy. The results showed that compared with the traditional CNN algorithm, the convergence rate of the loss curve of the artificial intelligence 3D-CNN algorithm was accelerated, and the segmentation accuracy of whole-area tumors (WT), core tumor areas (CT), and enhanced tumor areas (ET) was significantly improved. In addition, the clarity of the multimodal MRI image and the recognition performance of the lesion were significantly improved. Under the artificial intelligence 3D-CNN algorithm, the Dice values of the WT, ET, and CT regions were 0.78, 0.71, and 0.64, respectively. The sensitivity values were 0.92, 0.91, and 0.88, respectively. The specificity values were 0.93, 0.92, and 0.91, respectively. The Hausdorff (Haus) distances were 0.93, 0.92, and 0.90, respectively. All of these indicators were significantly better than those of the traditional CNN algorithm (P < 0.05). In addition, the diagnostic accuracy of the artificial intelligence 3D-CNN algorithm was 93.11 ± 4.65%, which was also significantly higher than that of the traditional CNN algorithm (82.45 ± 7.54%) (P < 0.05). In summary, the recognition and segmentation ability of multimodal MRI images based on the artificial intelligence 3D-CNN algorithm for cervical cancer lesions was significantly improved, which can significantly enhance the clinical diagnosis rate of cervical cancer.
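The evaluation metrics quoted above (Dice, sensitivity, specificity) have standard definitions for binary segmentation masks. A minimal reference implementation of those standard formulas — not the paper's code, and the toy masks are invented for illustration:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice, sensitivity and specificity for two boolean segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy example: prediction overlaps 3 of 4 true pixels, with 1 false positive.
truth = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
pred  = np.array([[1, 1, 0], [1, 0, 1], [0, 0, 0]])
d, sens, spec = seg_metrics(pred, truth)
```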
49

Naldi, Giovanni, Barbara Avuzzi, Simona Fantini, Mauro Carrara, Ester Orlandi, Elisa Massafra, and Stefano Tomatis. "A SEGMENTATION PROBLEM IN QUANTITATIVE ASSESSMENT OF ORGAN DISPOSITION IN RADIOTHERAPY." Image Analysis & Stereology 30, no. 3 (November 1, 2011): 179. http://dx.doi.org/10.5566/ias.v30.p179-186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Radiotherapeutic treatment of cancer is best conducted if the prescribed dose is delivered to the tumor while the surrounding normal tissues are maximally spared. With the aim of meeting these requirements, the complexity of radiotherapy techniques has steadily increased under a strong technological impulse, especially in the last decades. One problem involves rating the particular disposition of the structures of interest in a patient. Recently the authors (Tomatis et al., 2010; 2011) proposed a computational approach to represent quantitatively the geometrical features of organs at risk, summarized in characteristics of distance, shape, and orientation of such organs with respect to the target. A basic problem to solve before computing the risk index is the segmentation of the organs involved in the radiotherapy planning. Here we describe a 3D segmentation method using the clinical computed tomography (CT) data of the patients. Our algorithm consists of several steps: a preprocessing phase where a nonlinear diffusion filter is applied; a level-set-based method to extract 2D contours; and a postprocessing reconstruction of the 3D volume from the segmented 2D slices. Comparisons with segmentations manually traced by clinical experts are provided.
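The nonlinear diffusion preprocessing mentioned in this abstract is commonly realised with a Perona-Malik-style filter, which smooths noise inside homogeneous regions while preserving strong edges. The following is a generic sketch of that family of filters, not the authors' algorithm; the edge-stopping function, `kappa`, and iteration count are assumptions for the example:

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Explicit Perona-Malik diffusion with an exponential edge-stopping
    function. Boundaries are handled periodically via np.roll (a sketch
    simplification). dt <= 0.25 keeps the 4-neighbour scheme stable."""
    img = np.asarray(img, float).copy()
    c = lambda d: np.exp(-(d / kappa) ** 2)  # small diffusion across edges
    for _ in range(n_iter):
        # differences to the four neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        img += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img
```

On a noisy step image, the filter reduces the noise in the flat regions while leaving the step edge essentially intact, which is exactly the behaviour wanted before contour extraction.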
50

Scandurra, S., M. Capone, and D. Palomba. "FAST AND SMART 3D MODELLING: AN ALGORITHMIC TOOL BASED ON CHURCH TYPOLOGY." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W4-2024 (February 14, 2024): 389–96. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w4-2024-389-2024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Abstract. Increasingly advanced technological development and the broad possibilities introduced by computer graphics and parametric-semantic modelling force us to reflect on the concept of smart models, especially in relation to the purposes for which the models themselves are created, directing research towards in-depth studies linked to the type of information the model is intended to convey. This research presents the results obtained in the development and experimentation of a generative modelling algorithm dedicated to the rapid, semi-automatic modelling of churches. The work stems from the need to develop a tool for preparing, in a short time, an intelligent database - graphic and informative - that can be rapidly visualised, dedicated to the management of churches in post-earthquake emergency conditions. The main objective is to make the processes of data management and visualisation based on the seismic damage assessment sheets [D.P.C.M. 23 February 2006 (G.U. 7.3.2006, no. 55)] more efficient, through procedures capable of expanding the available information and, at the same time, optimising documentation and intervention times, costs, and resource management (Chevrier, et al., 2009). The procedure for building the parametric model is based on the concept of shape grammar: it makes it possible to generate different types of churches from the modification of basic shapes prepared according to the macro-element concept (Lanzara, et al., 2021). The algorithm was tested on several case studies to evaluate its effectiveness and future implementations.

To the bibliography