Journal articles on the topic "Images 2D - Modèles 3D"

Follow this link to see other types of publications on the topic: Images 2D - Modèles 3D.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 journal articles for your research on the topic "Images 2D - Modèles 3D".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever the abstract is included in the metadata.

Browse journal articles from a wide variety of disciplines and compile a correct bibliography.

1

Djroh, Simon Pierre, Ehui Beh Jean Constantin Aka, Yacouba Ouattara, Serge P. Dégine Gnoleba, Yaba Mariana Aimée Ahade and Loukou Nicolas Kouame. "Tomographie électrique et estimation des réserves de Granite pour une exploitation de carrière à Brofodoume, Sud-Est de la Côte d’Ivoire". Journal of the Cameroon Academy of Sciences 18, no. 2 (24 October 2022): 437–46. http://dx.doi.org/10.4314/jcas.v18i2.4.

Abstract:
Quantification of granitic materials is essential for rational quarrying to provide the aggregates needed for infrastructure works for the expansion of the city of Abidjan. This study, carried out in Brofodoumé, presents an alternative approach to evaluating the economic potential of a granitic (felsic) pluton using 2D electrical resistivity tomography, a technique that explores the subsurface by measuring electrical resistivity contrasts with a multi-step poly-pole configuration. The resulting 2D image sections show that the depth of the granitic roof lies between 0.2 and 60 metres. They also reveal several fractures, notably a NW-SE discontinuity at ~20 m depth with hydrogeological potential that could limit exploitation. The three-dimensional (3D) model shows an irregularly shaped, sub-outcropping granite body on the western edge of the prospect and puts its exploitable potential at ~7 million tonnes.
2

Wang, Yong Sheng. "Fast 3D Human Face Modeling Method Based on Multiple View 2D Images". Applied Mechanics and Materials 273 (January 2013): 796–99. http://dx.doi.org/10.4028/www.scientific.net/amm.273.796.

Abstract:
This paper presents a novel approach for quickly modeling a 3D human face from multiple-view 2D images. The proposed method comprises three steps: 1) face recognition in the 2D images; 2) conversion of the 2D images into 3D data; 3) modeling of the 3D human face. To extract visual features from both the 2D and 3D images, the 3D visual features are described by Point Signatures, while the 2D visual features are represented by Gabor filter responses. The 3D model is then obtained by combining the multiple-view 2D images through computation of the projection and translation vectors. Experimental results show that the method can model a 3D human face with high accuracy and efficiency.
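
As a concrete illustration of the 2D feature step mentioned in the abstract, the sketch below samples Gabor filter responses at landmark points of a face image (often called "Gabor jets"). It is a minimal, assumed rendering of that idea using OpenCV, not the authors' code; kernel parameters and landmark positions are illustrative.

```python
import cv2
import numpy as np

def gabor_bank(ksize=31, wavelengths=(4.0, 8.0), orientations=4):
    """Build a small bank of Gabor kernels over wavelengths and orientations."""
    kernels = []
    for lambd in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations  # filter orientation
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                                              theta=theta, lambd=lambd,
                                              gamma=0.5, psi=0.0))
    return kernels

def gabor_features(gray, points, kernels):
    """Sample all filter responses at given (x, y) landmark points."""
    responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
                 for k in kernels]
    return np.array([[r[y, x] for r in responses] for (x, y) in points])

face = np.random.randint(0, 255, (128, 128), dtype=np.uint8)  # stand-in image
feats = gabor_features(face, [(64, 64), (40, 50)], gabor_bank())
print(feats.shape)  # (num_landmarks, num_kernels)
```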
3

Hirano, Daisuke, Yusuke Funayama and Takashi Maekawa. "3D Shape Reconstruction from 2D Images". Computer-Aided Design and Applications 6, no. 5 (January 2009): 701–10. http://dx.doi.org/10.3722/cadaps.2009.701-710.
4

Szymczyk, Piotr. "Obtaining 3D information from 2D images". ELEKTRONIKA - KONSTRUKCJE, TECHNOLOGIE, ZASTOSOWANIA 1, no. 6 (5 June 2014): 49–52. http://dx.doi.org/10.15199/ele-2014-041.
5

Holzleitner, Iris J., Alex L. Jones, Kieran J. O’Shea, Rachel Cassar, Vanessa Fasolt, Victor Shiramizu, Benedict C. Jones and Lisa M. DeBruine. "Do 3D Face Images Capture Cues of Strength, Weight, and Height Better than 2D Face Images do?" Adaptive Human Behavior and Physiology 7, no. 3 (26 August 2021): 209–19. http://dx.doi.org/10.1007/s40750-021-00170-8.

Abstract:
Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As the equipment required for 3D face images is considerably more expensive than that required for 2D face images, we investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare. Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height and weight were rated by 66, 59 and 52 raters respectively, who viewed both 2D and 3D images. Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than from 2D images. Conclusion: Our results suggest that physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
6

Delvit, Jean-Marc, and Céline L'Helguen. "Observer la Terre en 3D avec Pléiades-HR". Revue Française de Photogrammétrie et de Télédétection, no. 209 (29 January 2015): 11–16. http://dx.doi.org/10.52638/rfpt.2015.155.

Abstract:
Knowledge of digital elevation models is essential for a large number of remote-sensing applications, particularly at very high resolution. This elevation information can be derived from stereoscopic image pairs by combining photogrammetric techniques with correlation-type techniques. The improved resolution of spaceborne Earth-observation systems such as Pléiades-HR (70 cm ground sampling distance), together with their growing capacity for stereoscopic and multi-stereoscopic acquisition, makes it possible to generate digital terrain models more systematically, also accounting for fine above-ground features (buildings, vehicles, vegetation), with a restitution accuracy on the order of one metre. A generic, highly parallelizable method for "true" 3D model restitution is used here, allowing 2 to N images of the same area to be processed simultaneously.
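
At the heart of the correlation technique mentioned above is a window-matching search: for a pixel in one image of the stereo pair, find the horizontal shift that maximizes normalized cross-correlation in the other image; the disparities then convert to elevations. The sketch below is deliberately naive and synthetic; operational Pléiades-HR processing adds epipolar resampling, multi-view fusion and regularization.

```python
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9)

def disparity_row(left, right, row, half=3, max_disp=8):
    """Best NCC disparity for each pixel of one image row."""
    h, w = left.shape
    out = np.zeros(w)
    for x in range(half + max_disp, w - half):
        ref = left[row - half:row + half + 1, x - half:x + half + 1]
        scores = [ncc(ref, right[row - half:row + half + 1,
                                 x - d - half:x - d + half + 1])
                  for d in range(max_disp)]
        out[x] = np.argmax(scores)
    return out

left = np.random.rand(32, 64)
right = np.roll(left, -4, axis=1)              # synthetic pair: 4-pixel shift
print(disparity_row(left, right, row=16)[30])  # 4.0
```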
7

Nomura, Kosuke, Mitsuru Kaise, Daisuke Kikuchi, Toshiro Iizuka, Yumiko Fukuma, Yasutaka Kuribayashi, Masami Tanaka et al. "Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study". Gastroenterology Research and Practice 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/4561468.

Abstract:
Aim: To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods: We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 first evaluated only 2D images and group 2 evaluated 3D images; after an interval of 2 weeks, group 1 evaluated the 3D images and group 2 the 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results: The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of improvement showed the following trend: novices > trainees > experts. Conclusions: With conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise.
8

Sun, Haoran. "A Review of 3D-2D Registration Methods and Applications based on Medical Images". Highlights in Science, Engineering and Technology 35 (11 April 2023): 200–224. http://dx.doi.org/10.54097/hset.v35i.7055.

Abstract:
The registration of preoperative three-dimensional (3D) medical images with intraoperative two-dimensional (2D) data is a key technology for image-guided radiotherapy, minimally invasive surgery, and interventional procedures. In this paper, we review 3D-2D registration methods that use computed tomography (CT) and magnetic resonance imaging (MRI) as preoperative 3D images and ultrasound, X-ray, and visible-light images as intraoperative 2D images. The 3D-2D registration techniques are classified into intensity-based, structure-based, and gradient-based according to the registration features used. In addition, we survey the application scenarios of this registration technology in clinical treatment, which can be divided into disease diagnosis, surgical guidance and postoperative evaluation, and we review how the quality of a 3D-2D registration is evaluated.
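
The intensity-based family mentioned above can be illustrated in a few lines: simulate a projection of the 3D volume, compare it with the 2D image under a similarity metric, and optimize the pose. The toy sketch below makes strong assumptions (parallel projection, a single in-plane rotation parameter, synthetic data); real systems use full 6-DoF poses and proper DRR ray casting.

```python
import numpy as np
from scipy import ndimage, optimize

def drr(volume, angle_deg):
    """Crude DRR: rotate the volume in-plane, then integrate along one axis."""
    rotated = ndimage.rotate(volume, angle_deg, axes=(1, 2),
                             reshape=False, order=1)
    return rotated.sum(axis=0)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9)

volume = np.zeros((32, 32, 32))
volume[10:22, 12:20, 8:24] = 1.0   # synthetic CT-like object
target = drr(volume, 7.0)          # "intraoperative" image at an unknown pose

res = optimize.minimize_scalar(lambda a: -ncc(drr(volume, a), target),
                               bounds=(-20.0, 20.0), method="bounded")
print(round(res.x, 2))  # recovered rotation, close to 7.0
```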
9

Logadottir, A., S. Korreman and P. M. Petersen. "COMPARISON OF PROSTATE LOCALIZATION WITH 2D-2D AND 3D IMAGES". Radiotherapy and Oncology 92 (August 2009): S179–S180. http://dx.doi.org/10.1016/s0167-8140(12)73061-x.
10

Brownhill, Daniel, Yachin Chen, Barbara A. K. Kreilkamp, Christophe de Bezenac, Christine Denby, Martyn Bracewell, Shubhabrata Biswas, Kumar Das, Anthony G. Marson and Simon S. Keller. "Automated subcortical volume estimation from 2D MRI in epilepsy and implications for clinical trials". Neuroradiology 64, no. 5 (18 October 2021): 935–47. http://dx.doi.org/10.1007/s00234-021-02811-x.

Abstract:
Purpose: Most techniques used for automatic segmentation of subcortical brain regions are developed for three-dimensional (3D) MR images. MRIs obtained in non-specialist hospitals may be non-isotropic and two-dimensional (2D). Automatic segmentation of 2D images may be challenging and represents a lost opportunity to perform quantitative image analysis. We determine the performance of a modified subcortical segmentation technique applied to 2D images in patients with idiopathic generalised epilepsy (IGE). Methods: Volume estimates were derived from 2D (0.4 × 0.4 × 3 mm) and 3D (1 × 1 × 1 mm) T1-weighted acquisitions in 31 patients with IGE and 39 healthy controls. 2D image segmentation was performed using a modified FSL FIRST (FMRIB Integrated Registration and Segmentation Tool) pipeline requiring additional image reorientation, cropping, interpolation and brain extraction prior to conventional FIRST segmentation. Consistency between segmentations was assessed using Dice coefficients, and volumes from both approaches were compared between patients and controls. The influence of slice thickness on consistency was further assessed using 2D images with slice thickness increased to 6 mm. Results: All average Dice coefficients showed excellent agreement between 2D and 3D images across subcortical structures (0.86–0.96). Most 2D volumes were consistently slightly lower than the 3D volumes. 2D images with increased slice thickness showed lower agreement with 3D images, with lower Dice coefficients (0.55–0.83). Significant volume reduction of the left and right thalamus and putamen was observed in patients relative to controls in both 2D and 3D images. Conclusion: Automated subcortical volume estimation from 2D images with a resolution of 0.4 × 0.4 × 3 mm using a modified FIRST pipeline is consistent with volumes derived from 3D images, although this consistency decreases with increased slice thickness. Thalamic and putamen atrophy has previously been reported in patients with IGE. Automated subcortical volume estimation from 2D images is feasible, is most reliable for in-plane resolutions finer than 1 mm × 1 mm, and provides an opportunity to perform quantitative image analysis studies in clinical trials.
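
The agreement measure used above is the Dice coefficient: twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch on synthetic volumes standing in for the two pipelines' outputs:

```python
import numpy as np

def dice(seg_a, seg_b):
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

a = np.zeros((64, 64, 64), dtype=bool)
a[20:40, 20:40, 20:40] = True    # e.g., a thalamus mask from the 3D pipeline
b = np.zeros_like(a)
b[22:42, 20:40, 20:40] = True    # slightly shifted mask from the 2D pipeline
print(round(dice(a, b), 3))      # ~0.9, "excellent" on the paper's scale
```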
11

Park, Minsoo, Hang-Nga Mai, Mai Yen Mai, Thaw Thaw Win, Du-Hyeong Lee and Cheong-Hee Lee. "Intra- and Interrater Agreement of Face Esthetic Analysis in 3D Face Images". BioMed Research International 2023 (10 April 2023): 1–7. http://dx.doi.org/10.1155/2023/3717442.

Abstract:
The use of three-dimensional (3D) facial scans for facial analysis is increasing in maxillofacial treatment. The aim of this study was to investigate the consistency of two-dimensional (2D) and 3D facial analyses performed by multiple raters. Six men and four women (25–36 years old) participated in this study. 2D images of the smiling and resting faces in the frontal and sagittal planes were obtained. The 3D facial and intraoral scans were merged to generate virtual 3D faces. Ten clinicians performed facial analyses by investigating 14 indices of the 2D and 3D faces. Intra- and interrater agreement of the results of the 2D and 3D facial analyses within and among the participants was evaluated. The intrarater agreement between the 2D and 3D facial analyses varied according to the indices. The highest and lowest agreements were found for the dental crowding index (0.94) and the smile line curvature index (0.56) in the frontal plane, and for the Angle's classification (canine) index (0.98) and the occlusal plane angle index (0.55) in the profile plane. In the frontal plane, the interrater agreements were generally higher for the 3D images than for the 2D images, while in the profile plane the interrater agreement was high for the Angle's classification (canine) index but low for the other indices. Several occlusion-related indices were missing in the 2D images because the posterior teeth were not visible. Esthetic analysis results can therefore differ between 2D and 3D face images according to the evaluation indices. The use of 3D faces is recommended over 2D images to increase the reliability of facial analyses, as they allow both esthetic and occlusion-related indices to be fully assessed.
12

Falah .K, Rasha, and Rafeef Mohammed .H. "Convert 2D shapes in to 3D images". Journal of Al-Qadisiyah for computer science and mathematics 9, no. 2 (20 August 2017): 19–23. http://dx.doi.org/10.29304/jqcm.2017.9.2.146.

Abstract:
Several complex programs with difficult techniques are used to convert 2D images into 3D models. This paper introduces a useful technique, based on simple tools and a simple language, for converting 2D shapes to 3D images: a three-dimensional projection is built from three images of the same shape, and the three-dimensional image is displayed from different sides. The work is implemented with visual programming and the 3Dtruevision engine, which gives acceptable results in a short time, and the approach can be used in the field of engineering drawing.
13

Kim, Jeong Joo. "Capturing 3D macromolecule structure in 2D images". Trends in Biochemical Sciences 48, no. 3 (March 2023): 305–6. http://dx.doi.org/10.1016/j.tibs.2023.01.002.
14

Kim, Hyungsuk, Chang Hyun Yoo, Soo Bin Park and Hyun Seok Song. "Difference in glenoid retroversion between two-dimensional axial computed tomography and three-dimensional reconstructed images". Clinics in Shoulder and Elbow 23, no. 2 (1 June 2020): 71–79. http://dx.doi.org/10.5397/cise.2020.00122.

Abstract:
Background: The glenoid version of the shoulder joint correlates with the stability of the glenohumeral joint and the clinical results of total shoulder arthroplasty. We sought to analyze and compare the glenoid version measured on traditional axial two-dimensional (2D) computed tomography (CT) and on three-dimensional (3D) reconstructed images at different levels. Methods: A total of 30 cases, including 15 male and 15 female patients who underwent 3D shoulder CT imaging, were randomly selected and matched by sex consecutively at one hospital. The angular difference between the scapular body axis and the 2D CT slice axis was measured. The glenoid version was assessed at three levels (midpoint, upper one-third, and center of the lower circle of the glenoid) using Friedman's method in the axial plane on the 2D CT images, and at the same levels in three different transverse planes on the 3D reconstructed image. Results: The mean difference between the scapular body axis on the 3D reconstructed image and the 2D CT slice axis was 38.4°. At the level of the midpoint of the glenoid, the measurements were 1.7° ± 4.9° on the 2D CT images and −1.8° ± 4.1° on the 3D reconstructed image. At the level of the center of the lower circle, the measurements were 2.7° ± 5.2° on the 2D CT images and −0.5° ± 4.8° on the 3D reconstructed image. A statistically significant difference was found between the 2D CT and 3D reconstructed images at all three levels. Conclusions: The glenoid version is measured differently on axial 2D CT and on 3D reconstructed images, and it differs across levels. 3D reconstructed imaging can provide a more accurate glenoid version profile than 2D CT.
15

Sudjai, Narumol, Palanan Siriwanarangsun, Nittaya Lektrakul, Pairash Saiviroonporn, Sorranart Maungsomboon, Rapin Phimolsarnti, Apichat Asavamongkolkul and Chandhanarat Chandhanayingyong. "Robustness of Radiomic Features: Two-Dimensional versus Three-Dimensional MRI-Based Feature Reproducibility in Lipomatous Soft-Tissue Tumors". Diagnostics 13, no. 2 (10 January 2023): 258. http://dx.doi.org/10.3390/diagnostics13020258.

Abstract:
This retrospective study aimed to compare the intra- and inter-observer manual-segmentation variability in feature reproducibility between two-dimensional (2D) and three-dimensional (3D) magnetic resonance imaging (MRI)-based radiomic features. The study included patients with lipomatous soft-tissue tumors that were diagnosed by histopathology and underwent MRI scans. Tumor segmentation based on the 2D and 3D MRI images was performed by two observers to assess the intra- and inter-observer variability. In both the 2D and the 3D segmentations, the radiomic features were extracted from the normalized images. Regarding feature stability, the intraclass correlation coefficient (ICC) was used to evaluate the intra- and inter-observer segmentation variability; features with ICC > 0.75 were considered reproducible. The degree of feature robustness was classified as low, moderate, or high. Additionally, we compared the efficacy of 2D and 3D contour-focused segmentation in terms of the stable feature rate and the sensitivity, specificity, and diagnostic accuracy of machine learning on the reproducible features. In total, 93 and 107 features were extracted from the 2D and 3D images, respectively. Only 35 features from the 2D images and 63 features from the 3D images were reproducible. The stable feature rate for the 3D segmentation was significantly higher than for the 2D segmentation (58.9% vs. 37.6%, p = 0.002). The majority of the features from the 3D segmentation had moderate-to-high robustness, while 40.9% of the features from the 2D segmentation had low robustness. The diagnostic accuracy of the machine-learning model for the 2D segmentation was close to that for the 3D segmentation (88% vs. 90%). In both the 2D and the 3D segmentation, specificity was 100%; however, sensitivity was lower for the 2D segmentation than for the 3D segmentation (75% vs. 83%). For the combined 2D + 3D radiomic features, the model achieved a diagnostic accuracy of 87% (sensitivity 100%, specificity 80%). Both 2D and 3D MRI-based radiomic features of lipomatous soft-tissue tumors are reproducible, and with its higher stable feature rate, 3D contour-focused segmentation should be selected for the feature-extraction process.
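
A compact sketch of the reproducibility filter described above: compute a two-way random, absolute-agreement ICC(2,1) for each feature across two segmentations and keep features with ICC > 0.75. The data are synthetic, and this is the standard Shrout-Fleiss formula rather than the study's exact statistical pipeline.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random, absolute-agreement, single-rater ICC(2,1).
    x: array of shape (n_subjects, k_raters)."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(1)
feats_obs1 = rng.normal(size=(30, 100))                    # 30 tumors x 100 features
feats_obs2 = feats_obs1 + rng.normal(0, 0.3, (30, 100))    # correlated re-segmentation
icc = np.array([icc_2_1(np.column_stack([feats_obs1[:, j], feats_obs2[:, j]]))
                for j in range(100)])
reproducible = np.flatnonzero(icc > 0.75)  # features kept for modeling
print(len(reproducible))
```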
16

Abousalem, Zib ziab. "3D from 2D for Nano images using images processing methods". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 2 (11 December 2014): 5437–47. http://dx.doi.org/10.24297/ijct.v14i2.2064.

Abstract:
The scanning electron microscope (SEM) remains a main tool for semiconductor and polymer physics, but TEM and AFM are increasingly used for minimum-size features, known as nanomaterials. In addition, some physical properties such as microhardness, grain boundaries and domain structure are observed with optical and polarizing microscopes, which give poor information, so the probability of error in the discussion is high. It is therefore natural to squeeze every possible bit of resolution out of the SEM, optical and polarizing microscopes for the materials under test. In this paper we tackle this problem using different image-processing techniques to obtain clearer and more complete information. We acquire a set of images of samples prepared under different conditions and with different physical properties. These images are analyzed by first converting them (grayscale or colored) into two-dimensional data files (*.dat) programmatically; the 2D data are then converted into 3D data files using FORTRAN programs, and a generated filter algorithm is applied to each 3D data file. After filtering, histograms, contours and 3D surfaces can be established to analyze the image. Another technique, implemented in Visual FORTRAN, applies the steepest descent algorithm (SDA) to produce a vector map of the data. Finally, depth is created and determined from a single still image using the OpenGL library under Visual C++, and texture mapping is performed. The quality of filtering depends on how the data are incorporated into the model, so the data should be treated carefully. With this approach, any part of an image can be analyzed without re-analyzing the whole image, and any image size can be handled; in this paper we take three samples of different sizes (256 × 256, 400 × 400, 510 × 510). The method decreases the cost of hardware and samples.
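
As a rough illustration of the 2D-to-3D step described above (the paper's own chain is FORTRAN data files plus OpenGL rendering), one can treat pixel intensity as height and derive contours and a 3D surface from a single grayscale image; matplotlib stands in here for the original toolchain, and the image is a random placeholder.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection)

img = np.random.rand(64, 64)  # stand-in for a grayscale SEM image
y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]

fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(1, 2, 1)
ax1.contour(x, y, img, levels=10)            # contour analysis of the 2D data
ax2 = fig.add_subplot(1, 2, 2, projection="3d")
ax2.plot_surface(x, y, img, cmap="viridis")  # intensity as height: 2D -> 3D
plt.show()
```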
17

Abousalem, Zib ziab. "3D from 2D for Nano images using images processing methods". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 2 (11 December 2014): 5437–47. http://dx.doi.org/10.24297/ijct.v14i2.2065.

Abstract:
The scanning electron microscope (SEM) remains a main tool for semiconductor and polymer physics, but TEM and AFM are increasingly used for minimum-size features, known as nanomaterials. In addition, some physical properties such as microhardness, grain boundaries and domain structure are observed with optical and polarizing microscopes, which give poor information, so the probability of error in the discussion is high. It is therefore natural to squeeze every possible bit of resolution out of the SEM, optical and polarizing microscopes for the materials under test. In this paper we tackle this problem using different image-processing techniques to obtain clearer and more complete information. We acquire a set of images of samples prepared under different conditions and with different physical properties. These images are analyzed by first converting them (grayscale or colored) into two-dimensional data files (*.dat) programmatically; the 2D data are then converted into 3D data files using FORTRAN programs, and a generated filter algorithm is applied to each 3D data file. After filtering, histograms, contours and 3D surfaces can be established to analyze the image. Another technique, implemented in Visual FORTRAN, applies the steepest descent algorithm (SDA) to produce a vector map of the data. Finally, depth is created and determined from a single still image using the OpenGL library under Visual C++, and texture mapping is performed. The quality of filtering depends on how the data are incorporated into the model, so the data should be treated carefully. With this approach, any part of an image can be analyzed without re-analyzing the whole image, and any image size can be handled; in this paper we take three samples of different sizes (256 × 256, 400 × 400, 510 × 510). The method decreases the cost of hardware and samples.
18

Ding, Y., S. H. Patel, J. Holmes, H. Feng, L. A. McGee, J. C. Rwigema, S. A. Vora et al. "Patient-specific 3D CT Images Reconstruction from 2D KV Images". International Journal of Radiation Oncology*Biology*Physics 118, no. 5 (April 2024): e68–e69. http://dx.doi.org/10.1016/j.ijrobp.2024.01.153.
19

Hättenschwiler, Nicole, Marcia Mendes and Adrian Schwaninger. "Detecting Bombs in X-Ray Images of Hold Baggage: 2D Versus 3D Imaging". Human Factors: The Journal of the Human Factors and Ergonomics Society 61, no. 2 (24 September 2018): 305–21. http://dx.doi.org/10.1177/0018720818799215.

Abstract:
Objective: This study compared the visual inspection performance of airport security officers (screeners) when screening hold baggage with state-of-the-art 3D versus older 2D imaging. Background: 3D imaging based on computed tomography features better automated detection of explosives and higher baggage throughput than older 2D X-ray imaging technology. Nonetheless, some countries and airports hesitate to implement 3D systems due to their lower image quality and the concern that screeners will need extensive and specific training before they can be allowed to work with 3D imaging. Method: Screeners working with 2D imaging (2D screeners) and screeners working with 3D imaging (3D screeners) conducted a simulated hold baggage screening task with both types of imaging. Differences in image quality between the imaging systems were assessed with the standard procedure for 2D imaging. Results: Despite lower image quality, screeners' detection performance with 3D imaging was similar to that with 2D imaging. 3D screeners showed higher detection performance with both types of imaging than 2D screeners. Conclusion: Features of 3D imaging systems (3D image rotation and slicing) seem to compensate for lower image quality. Visual inspection competency acquired with one type of imaging seems to transfer to visual inspection with the other type. Application: Replacing older 2D imaging systems with newer 3D systems can be recommended. 2D screeners do not need extensive and specific training to achieve comparable detection performance with 3D imaging. Current image quality standards for 2D imaging need revision before they can be applied to 3D imaging.
20

Jiao, Yuzhong, Kayton Wai Keung Cheung, Mark Ping Chan Mok and Yiu Kei Li. "Spatial Distance-based Interpolation Algorithm for Computer Generated 2D+Z Images". Electronic Imaging 2020, no. 2 (26 January 2020): 140–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-140.

Abstract:
Computer-generated 2D-plus-depth (2D+Z) images are common input data for 3D displays that use the depth image-based rendering (DIBR) technique. Due to their simplicity, linear interpolation methods are usually used to convert low-resolution images into high-resolution images, not only for the depth maps but also for the 2D RGB images. However, linear methods suffer from zigzag artifacts in both the depth map and the RGB images, which severely affect the 3D visual experience. In this paper, a spatial distance-based interpolation algorithm for computer-generated 2D+Z images is proposed. The method interpolates the RGB images with the help of depth and edge information from the depth maps. The spatial distance from the interpolated pixel to the surrounding available pixels is used to obtain the weight factors of the surrounding pixels. Experimental results show that such spatial distance-based interpolation achieves sharp edges and fewer artifacts in the 2D RGB images, and can naturally improve the performance of 3D display. Since bilinear interpolation is used in homogeneous areas, the proposed algorithm keeps the computational complexity low.
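
A simplified sketch of the idea, with assumed details: upscale the RGB channels of a 2D+Z frame with inverse-distance weights, excluding source pixels whose depth differs too much from the nearest source pixel, so colors are never blended across a depth edge. The tolerance, scale factor and layout below are illustrative, not the paper's.

```python
import numpy as np

def upscale_2dz(rgb, depth, factor=2, depth_tol=8.0):
    """Inverse-distance RGB upscaling that avoids blending across depth edges."""
    rgb = rgb.astype(float)
    h, w, _ = rgb.shape
    out = np.zeros((h * factor, w * factor, 3))
    for yy in range(h * factor):
        for xx in range(w * factor):
            sy, sx = yy / factor, xx / factor
            y0, x0 = int(sy), int(sx)
            ny, nx = min(round(sy), h - 1), min(round(sx), w - 1)  # nearest source
            num, den = np.zeros(3), 0.0
            for dy in (0, 1):
                for dx in (0, 1):
                    y, x = min(y0 + dy, h - 1), min(x0 + dx, w - 1)
                    if abs(float(depth[y, x]) - float(depth[ny, nx])) > depth_tol:
                        continue  # neighbor lies across a depth edge: skip it
                    dist = np.hypot(sy - y, sx - x) + 1e-6
                    num += rgb[y, x] / dist
                    den += 1.0 / dist
            out[yy, xx] = num / den
    return out

rgb = np.random.randint(0, 255, (16, 16, 3))
depth = np.tile(np.repeat([0.0, 100.0], 8), (16, 1))  # one hard depth edge
print(upscale_2dz(rgb, depth).shape)  # (32, 32, 3)
```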
21

Li, Yu, Shaohua Li and Bo Zhang. "Constructing of 3D Fluvial Reservoir Model Based on 2D Training Images". Applied Sciences 13, no. 13 (25 June 2023): 7497. http://dx.doi.org/10.3390/app13137497.

Abstract:
Training images are important input parameters for multipoint geostatistical modeling, and training images that can portray 3D spatial correlations are required to construct 3D models. 3D training images are usually obtained by unconditional simulation using algorithms such as object-based algorithms, but in some cases it is difficult to obtain 3D training images directly, so a series of modeling methods that construct 3D models from 2D training images has been developed. In this paper, a new modeling method is proposed that combines the advantages of the previous methods. Taking the fluvial reservoir modeling of the P oilfield in the Bohai area as an example, a comparative study based on 2D and 3D training images was carried out. Comparison of the variance function, of horizontal and vertical connectivity in the x-, y-, and z-directions, and of style similarity shows that, starting from several mutually perpendicular 2D training images, the proposed method can achieve an effect similar to modeling directly from 3D training images. Where 3D training images are difficult to obtain, the proposed method has good application prospects.
22

Hosoi, Fumiki, Sho Umeyama and Kuangting Kuo. "Estimating 3D Chlorophyll Content Distribution of Trees Using an Image Fusion Method Between 2D Camera and 3D Portable Scanning Lidar". Remote Sensing 11, no. 18 (13 September 2019): 2134. http://dx.doi.org/10.3390/rs11182134.

Abstract:
An image fusion method is proposed for plant images taken with a two-dimensional (2D) camera and a three-dimensional (3D) portable lidar to obtain a 3D distribution of physiological and biochemical plant properties. In this method, a 2D multispectral camera with five bands (475–840 nm) and a 3D high-resolution portable scanning lidar were applied to three sets of sample trees. After producing vegetation index (VI) images from the multispectral images, the 3D lidar point cloud was projected onto the 2D plane by perspective projection, keeping the depth information of each lidar point. The VI images were 2D-registered to the lidar-projected image using a projective transformation, and VI 3D point-cloud images were reconstructed from the depth information. Based on the relationship between the VI values and the chlorophyll contents measured with a soil and plant analysis development (SPAD)-502 Plus chlorophyll meter, 3D distribution images of the chlorophyll contents were produced. A thermal 3D image for a sample was produced in the same way. The resulting chlorophyll distribution images provided vertical and horizontal distributions, and those for each orientation of each sample, showing the spatial variability of the distribution and the differences between the samples.
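
The projection step described above reduces to the pinhole camera model: each lidar point in the camera frame maps to image coordinates while its depth is retained, so per-pixel VI values can later be pushed back into 3D. A minimal sketch; the intrinsics (fx, fy, cx, cy) are assumed, illustrative values.

```python
import numpy as np

def project_points(points_cam, fx=1000.0, fy=1000.0, cx=320.0, cy=240.0):
    """points_cam: (N, 3) lidar points already expressed in the camera frame."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * x / z + cx                 # image column
    v = fy * y / z + cy                 # image row
    return np.stack([u, v, z], axis=1)  # keep z to rebuild the 3D VI cloud

pts = np.array([[0.1, -0.2, 2.0],
                [0.5, 0.3, 4.0]])
print(project_points(pts))
```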
23

Tulunoglu, Ozlem, Elcin Esenlik, Ayse Gulsen and Ibrahim Tulunoglu. "A Comparison of Three-Dimensional and Two-Dimensional Cephalometric Evaluations of Children with Cleft Lip and Palate". European Journal of Dentistry 05, no. 04 (October 2011): 451–58. http://dx.doi.org/10.1055/s-0039-1698918.

Abstract:
Objectives: The aim of this retrospective study was to compare the consistency of orthodontic measurements performed on cephalometric films and on 3D CT images of cleft lip and palate (CLP) patients. Methods: The study was conducted with 2D radiographs and 3D CT images of 9 boys and 6 girls aged 7-12 with CLP. 3D reconstructions were performed using MIMICS software. Results: Frontal analysis found statistically significant differences between 2D and 3D measurements for all parameters except occlusal plane tilt (OcP-tilt); McNamara analysis found differences for all parameters except ANS-Me and Co-Gn; and Steiner analysis found differences for all parameters except SND, SNB and Max1-SN. Intra-group variability in the measurements was also very low for all parameters for both 2D and 3D images. Conclusions: The results indicate significant differences between measurements taken from 2D and 3D images in patients with cleft lip and palate.
24

Poudel, Prabal, Christian Hansen, Julian Sprung and Michael Friebe. "3D segmentation of thyroid ultrasound images using active contours". Current Directions in Biomedical Engineering 2, no. 1 (1 September 2016): 467–70. http://dx.doi.org/10.1515/cdbme-2016-0103.

Abstract:
In this paper, we propose a method to segment the thyroid from a set of 2D ultrasound images. We extended an active contour model in 2D to generate a 3D segmented thyroid volume. First, a preprocessing step is carried out to suppress the noise present in the US data. Second, an active contour is used to segment the thyroid in each of the 2D images. Finally, all the segmented thyroid images are passed to a 3D reconstruction algorithm to obtain a 3D model of the thyroid. We obtained an average segmentation accuracy of 86.7% on six datasets with a total of 703 images.
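
A hedged sketch of the per-slice pipeline: smooth a slice to suppress speckle, fit an active contour from a circular initialization, rasterize it to a mask, and stack the masks into a volume. scikit-image stands in for the authors' implementation; the initialization and all parameters are illustrative.

```python
import numpy as np
from skimage import draw, filters, segmentation

def segment_slice(img, center=(64, 64), radius=40):
    smooth = filters.gaussian(img, sigma=3)  # speckle suppression
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])
    snake = segmentation.active_contour(smooth, init,
                                        alpha=0.015, beta=10.0, gamma=0.001)
    mask = np.zeros(img.shape, dtype=bool)
    rr, cc = draw.polygon(snake[:, 0], snake[:, 1], img.shape)
    mask[rr, cc] = True
    return mask

slices = [np.random.rand(128, 128) for _ in range(5)]  # stand-in US frames
volume = np.stack([segment_slice(s) for s in slices])  # (num_slices, H, W)
print(volume.shape)
```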
25

Mao, Xiaoyang, Tosiyasu Kunii, Issei Fujishiro and Tsukasa Noma. "Hierarchical Representations of 2D/3D Gray-Scale Images and Their 2D/3D Two-Way Conversion". IEEE Computer Graphics and Applications 7, no. 12 (December 1987): 37–44. http://dx.doi.org/10.1109/mcg.1987.276937.
26

Lintz, Francois, Arne Burssens, Alesio Bernasconi, Martin O’Malley, Rémi Raclot, Martinus Richter, Alexej Barg and Cesar de Cesar Netto. "A Case-control Study of 3D versus 2D Weight Bearing CT Measurements of the M1-M2 Intermetatarsal Angle in Hallux Valgus". Foot & Ankle Orthopaedics 3, no. 3 (1 July 2018): 2473011418S0032. http://dx.doi.org/10.1177/2473011418s00321.

Abstract:
Category: Midfoot/Forefoot. Introduction/Purpose: Surgical planning based on angular measurements obtained on conventional radiographs is challenging due to perspective distortion and operator bias. Novel weightbearing CT (WBCT) three-dimensional (3D) measurements using coordinate systems may represent a more reliable and accurate evaluation of this 3D deformity. The objective of this study was to compare the M1-M2 intermetatarsal angle (IMA) obtained manually on WBCT digitally reconstructed 2D radiographs versus a set of coordinates from the full 3D dataset, in patients with hallux valgus (HV) deformity and in healthy controls. We hypothesised that the 3D measurements would be more reliably obtained, demonstrating different values when compared to 2D radiographic measurements. Methods: In this multicenter retrospective comparative study, 83 feet that underwent WBCT of the foot were included (41 HV: mean age 59, 81% female; 42 controls: mean age 52, 80% female). Datasets were analysed by three independent trained foot and ankle surgeons using the same protocol. Coordinates in three planes (x, y, z) of four different landmark points were harvested: the center of the head and the midpoint of the proximal metaphysis of the 1st and 2nd metatarsals. IMA measurements were then performed on the digitally reconstructed radiographic images (DRR-IMA). The collected data were then analyzed by a fourth independent and blinded investigator, who calculated the 3D angle (3D-IMA) and its projection on the weightbearing plane (2D-IMA). Intra-observer reliability was assessed by Pearson/Spearman correlation. Intermethod correlation was evaluated by the intraclass correlation coefficient (ICC). Mean values were compared by one-way ANOVA. P-values of less than 0.05 were considered significant. Results: Intraobserver reliability was excellent for radiographic DRR-IMA (0.95) and for 3D coordinate assessment (0.99). Intermethod correlations between the three imaging modalities (DRR, 2D and 3D), considering bias and interactions, were respectively 0.71 and 0.51 in control and HV patients. IMA values were similar whether measured on DRR, 2D or 3D WBCT images, for both controls and HV patients. Mean values and confidence intervals (CI) for controls were 8.8 degrees (CI, 7.9-9.7) on DRR images, 9.8 degrees (CI, 8.7-10.9) on 2D images and 10.6 degrees (CI, 9.5-11.8) on 3D images. When compared to controls, HV patients demonstrated significantly increased IMA (p<0.05): 13.06 degrees (CI, 11.8-14.3) on DRR images, 12.1 degrees (CI, 10.8-13.3) on 2D images and 13.3 degrees (CI, 12.3-14.3) on 3D images. Conclusion: Similar IMA values were measured on 2D reconstructed radiographs and on WBCT 3D and 2D projected images. When compared to controls, HV patients had increased IMA on all three imaging types (DRR, 2D and 3D). Intermethod correlation was higher for IMA performed in controls. Intraobserver reliability was excellent for both radiographic IMA measurements and WBCT 3D coordinates. This is the first study to evaluate measurements of the 3D-IMA in HV and control patients. Further investigation is required before guidelines for its clinical use can be formulated.
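
The measurement itself is simple vector geometry: with the four harvested landmarks, the 3D-IMA is the angle between the two metatarsal axes, and the 2D-IMA is the same angle after dropping the vertical coordinate (projection onto the weightbearing plane). A sketch with made-up coordinates:

```python
import numpy as np

def angle_deg(u, v):
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# (x, y, z) landmarks: metatarsal head minus proximal metaphysis midpoint
m1_axis = np.array([120.0, 30.0, 42.0]) - np.array([80.0, 25.0, 50.0])
m2_axis = np.array([118.0, 52.0, 40.0]) - np.array([82.0, 40.0, 48.0])

ima_3d = angle_deg(m1_axis, m2_axis)          # 3D-IMA
ima_2d = angle_deg(m1_axis[:2], m2_axis[:2])  # projected onto the z = 0 plane
print(round(ima_3d, 1), round(ima_2d, 1))
```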
27

Petre, Raluca-Diana, and Titus Zaharia. "3D Model-Based Semantic Categorization of Still Image 2D Objects". International Journal of Multimedia Data Engineering and Management 2, no. 4 (October 2011): 19–37. http://dx.doi.org/10.4018/jmdem.2011100102.

Abstract:
Automatic classification and interpretation of objects present in 2D images is a key issue for various computer vision applications. In particular, for image/video indexing and retrieval applications, automatically labeling huge multimedia databases in a semantically pertinent manner still remains a challenge. This paper examines the issue of still-image object categorization. The objective is to associate semantic labels with the 2D objects present in natural images. The principle of the proposed approach is to exploit categorized 3D model repositories to identify unknown 2D objects, based on 2D/3D matching techniques. The authors use 2D/3D shape-indexing methods, where 3D models are described through a set of 2D views. Experimental results, carried out on both the MPEG-7 and Princeton 3D model databases, show recognition rates of up to 89.2%.
28

Zamora, Natalia, Jose M. Llamas, Rosa Cibrián, Jose L. Gandia and Vanessa Paredes. "Cephalometric measurements from 3D reconstructed images compared with conventional 2D images". Angle Orthodontist 81, no. 5 (7 April 2011): 856–64. http://dx.doi.org/10.2319/121210-717.1.

Abstract:
Objective: To assess whether the values of different measurements taken on three-dimensional (3D) reconstructions from cone-beam computed tomography (CBCT) are comparable with those taken on two-dimensional (2D) images from conventional lateral cephalometric radiographs (LCRs), and to examine whether there are differences between different types of CBCT software when taking those measurements. Material and Methods: Eight patients were selected who had both an LCR and a CBCT. The 3D reconstructions from each patient's CBCT were evaluated using two different software packages, NemoCeph 3D and InVivo5. An observer took 10 angular and 3 linear measurements on each of the three types of record on two different occasions. Results: Intraobserver reliability was high except for the mandibular plane and facial cone (from the LCR), the Na-Ans distance (using NemoCeph 3D), and the facial cone and Ans-Me distance (using InVivo5). No statistically significant differences were found between the LCRs and the CBCTs for any angular or linear measurement, and the correlation levels were high for all measurements. Conclusion: No statistically significant differences were found between the angular and linear measurements taken with the LCR and those taken with the CBCT, nor between the measurements obtained with the two CBCT software packages.
29

Li, Y., T. Sawada, M. Yi, L. J. Latecki and Z. Pizlo. "3D symmetry correspondence from 2D images of objects". Journal of Vision 11, no. 11 (23 September 2011): 73. http://dx.doi.org/10.1167/11.11.73.
30

Kanazawa, Angjoo, Shahar Kovalsky, Ronen Basri and David Jacobs. "Learning 3D Deformation of Animals from 2D Images". Computer Graphics Forum 35, no. 2 (May 2016): 365–74. http://dx.doi.org/10.1111/cgf.12838.
31

Dhawan, A. P., and L. Arata. "Knowledge-based 3D analysis from 2D medical images". IEEE Engineering in Medicine and Biology Magazine 10, no. 4 (December 1991): 30–37. http://dx.doi.org/10.1109/51.107166.
32

Iyoho, Anthony E., Jonathan M. Young, Vladislav Volman, David A. Shelley, Laurel J. Ng and Henry Wang. "3D Tibia Reconstruction Using 2D Computed Tomography Images". Military Medicine 184, Supplement_1 (1 March 2019): 621–26. http://dx.doi.org/10.1093/milmed/usy379.

Abstract:
Objective: Skeletal stress fracture of the lower limbs remains a significant problem for the military. The objective of this study was to develop a subject-specific 3D reconstruction of the tibia, using only a few CT images, for the prediction of peak stresses and their locations. Methods: Full bilateral tibial CT scans were recorded for 63 healthy college-age male participants. A 3D finite element (FE) model of the tibia for each subject was generated from standard CT cross-section data (i.e., 4%, 14%, 38%, and 66% of the tibial length) via a transformation matrix. The final reconstructed FE models were used to calculate the peak stress and its location on the tibia under a simulated walking load (3,700 N) and were compared with the raw models. Results: The density-weighted, spatially normalized errors between the raw and reconstructed CT models were small. The mean percent differences between the raw and reconstructed models in peak stress (0.62%) and location (-0.88%) were negligible. Conclusions: Subject-specific tibia models can provide even greater insight into the mechanisms of stress fracture injury, which is common in military and athletic settings. Rapid development of 3D tibia models enables the future work of determining peak-stress-related injury correlates of stress fracture outcomes.
33

Woo, Yan San, Zhuguang Li, Shun Tamura, Prawit Buayai, Hiromitsu Nishizaki, Koji Makino, Latifah Munirah Kamarudin and Xiaoyang Mao. "3D grape bunch model reconstruction from 2D images". Computers and Electronics in Agriculture 215 (December 2023): 108328. http://dx.doi.org/10.1016/j.compag.2023.108328.
34

Cao, Ping, Jie Gao and Zuping Zhang. "Multi-View Based Multi-Model Learning for MCI Diagnosis". Brain Sciences 10, no. 3 (20 March 2020): 181. http://dx.doi.org/10.3390/brainsci10030181.

Abstract:
Mild cognitive impairment (MCI) is the early stage of Alzheimer's disease (AD). Automatic diagnosis of MCI from magnetic resonance imaging (MRI) images has been the focus of research in recent years, and deep learning models based on 2D and 3D views have been widely used in MCI diagnosis. Deep learning architectures can capture anatomical changes in the brain from MRI scans to extract the underlying features of brain disease. In this paper, we propose a multi-view based multi-model (MVMM) learning framework, which effectively combines the local information of 2D images with the global information of 3D images. First, we select 2D slices from the MRI images and extract features representing the 2D local information. Then, we combine them with features representing the 3D global information learned from the 3D images to train the MVMM learning framework. We evaluate our model on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our proposed model can effectively recognize MCI from MRI images (accuracy of 87.50% for MCI/HC and 83.18% for MCI/AD).
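
A schematic sketch of the fusion idea in PyTorch: a 2D branch encodes selected slices (local information), a 3D branch encodes the whole volume (global information), and the concatenated features feed one classifier. Channel counts and layers are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MVMM(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.branch2d = nn.Sequential(  # local information from 2D slices
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.branch3d = nn.Sequential(  # global information from the 3D volume
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, slices_2d, volume_3d):
        f2 = self.branch2d(slices_2d).flatten(1)  # (B, 8) local features
        f3 = self.branch3d(volume_3d).flatten(1)  # (B, 8) global features
        return self.head(torch.cat([f2, f3], dim=1))

model = MVMM()
logits = model(torch.randn(2, 1, 96, 96), torch.randn(2, 1, 32, 96, 96))
print(logits.shape)  # torch.Size([2, 2])
```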
35

Wang, Feng, Weichuan Ni, Shaojiang Liu, Zhiming Xu, Zemin Qiu and Zhiping Wan. "A 2D image 3D reconstruction function adaptive denoising algorithm". PeerJ Computer Science 9 (3 October 2023): e1604. http://dx.doi.org/10.7717/peerj-cs.1604.

Abstract:
To address the issue of image denoising algorithms blurring image details during the denoising process, we propose an adaptive denoising algorithm for the 3D reconstruction of 2D images. This algorithm takes into account the inherent visual characteristics of human eyes and divides the image into regions based on the entropy value of each region. The background region is subject to threshold denoising, while the target region undergoes processing using an adversarial generative network. This network effectively handles 2D target images with noise and generates a 3D model of the target. The proposed algorithm aims to enhance the noise immunity of 2D images during the 3D reconstruction process and to ensure that the constructed 3D target model better preserves the original image's detailed information. Through experimental testing on 2D images and real pedestrian videos contaminated with noise, our algorithm demonstrates stable preservation of image details. The reconstruction effect is evaluated in terms of noise reduction and the fidelity of the 3D model to the original target. The results show an average noise reduction exceeding 95% while effectively retaining most of the target's feature information in the original image. In summary, our proposed adaptive denoising algorithm improves the 3D reconstruction process by preserving image details that are often compromised by conventional denoising techniques. This has significant implications for enhancing image quality and maintaining target-information fidelity in 3D models, providing a promising approach for addressing the challenges associated with noise reduction in 2D images during 3D reconstruction.
36

Vajda, Peter, Ivan Ivanov, Lutz Goldmann, Jong-Seok Lee and Touradj Ebrahimi. "Robust Duplicate Detection of 2D and 3D Objects". International Journal of Multimedia Data Engineering and Management 1, no. 3 (July 2010): 19–40. http://dx.doi.org/10.4018/jmdem.2010070102.

Abstract:
In this paper, the authors analyze their graph-based approach for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on the features extracted from training images, avoiding explicit and complex 3D object modeling. Improved performance can therefore be achieved in comparison to existing methods in terms of both robustness and computational complexity. Different limitations of this approach are analyzed by evaluating performance with respect to the number of training images and the calculation of optimal parameters in a number of applications. Furthermore, the effectiveness of the object duplicate detection algorithm is measured over different object classes. The authors' method is shown to be robust in detecting the same objects even when the images containing them are taken from different viewpoints or distances.
37

Sheu, Jia Shing, Ho Nien Shou, Li Peng Wang and Tsong Liang Huang. "Implementation of Face Recognition Based on 3D Image". Applied Mechanics and Materials 311 (February 2013): 173–78. http://dx.doi.org/10.4028/www.scientific.net/amm.311.173.

Abstract:
Biometrics are used to confirm the uniqueness of identity, and the face is in general the most distinctive characteristic for recognizing a person. This paper emphasizes and compares the quality of 2D and 3D face recognition. There are three parts. The first is skin-color detection using the RGB color space; because the red and green components are sensitive to the illuminant, the Normalized Color Coordinate (NCC) method is chosen to pick up the skin-color range directly. Second, to improve the selection of important characteristics by Principal Component Analysis (PCA), a wavelength-discrimination technique is used to produce 3D images. The third part is identification: an improved PCA uses a transfer matrix to obtain the optimal total scatter matrix from the within-class scatter matrix, and this optimal total scatter matrix represents the eigenvalues of the face characteristics. Finally, the recognition rate and processing performance of 2D and 3D images are compared via the Euclidean distance. The efficiency and recognition rate of 3D images are superior to those of 2D images: the recognition rate for 3D images reaches 92% at a cost of 0.39 second per image, an improvement of 28% over the 2D recognition rate.
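
The NCC step described above discounts illumination by normalizing each pixel's chromaticity: r = R/(R+G+B), g = G/(R+G+B). A minimal sketch; the skin thresholds here are assumed, illustrative values, not the paper's.

```python
import numpy as np

def skin_mask(rgb, r_range=(0.36, 0.465), g_range=(0.28, 0.363)):
    """Flag pixels whose normalized chromaticity falls in an assumed skin range."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2) + 1e-9
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    return ((r_range[0] < r) & (r < r_range[1]) &
            (g_range[0] < g) & (g < g_range[1]))

img = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
print(skin_mask(img).mean())  # fraction of pixels flagged as skin
```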
38

Wang, Yingjie, and Chin-Seng Chua. "Face recognition from 2D and 3D images using 3D Gabor filters". Image and Vision Computing 23, no. 11 (October 2005): 1018–28. http://dx.doi.org/10.1016/j.imavis.2005.07.005.
39

JIANG, C. F. "3D IMAGE RECONSTRUCTION OF OVARIAN TUMOR IN THE ULTRASONIC IMAGES". Biomedical Engineering: Applications, Basis and Communications 13, no. 02 (25 April 2001): 93–98. http://dx.doi.org/10.4015/s1016237201000121.

Abstract:
The prevalence of ovarian tumor malignancy can be monitored by the degree of irregularity of the ovarian contour and by the septal structure inside the tumor observed in ultrasonic images. However, 2D ultrasonic images cannot integrate 3D information about the ovarian tumor. In this paper, we present an algorithm that renders a 3D image of an ovarian tumor by reconstructing the 2D ultrasonic images into a 3D data set. It is based on sequential boundary detection in a series of 2D images to form a 3D tumor contour. This contour is then used as a barrier to remove the data belonging to other tissue adhering to the tumor surface. The final 3D image rendered from the isolated data provides a clear view of both the surface and the inner structure of the ovarian tumor.
40

Choi, Chang-Hyuk, Hee-Chan Kim, Daewon Kang and Jun-Young Kim. "Comparative study of glenoid version and inclination using two-dimensional images from computed tomography and three-dimensional reconstructed bone models". Clinics in Shoulder and Elbow 23, no. 3 (1 September 2020): 119–24. http://dx.doi.org/10.5397/cise.2020.00220.

Abstract:
Background: This study was performed to compare glenoid version and inclination measured using two-dimensional (2D) images from computed tomography (CT) scans or three-dimensional (3D) reconstructed bone models. Methods: Thirty patients who had undergone conventional CT scans were included. Two orthopedic surgeons measured glenoid version and inclination three times on 2D images from CT scans (2D measurement), and two other orthopedic surgeons performed the same measurements using 3D reconstructed bone models (3D measurement). The 3D reconstructed bone models were acquired and measured with Mimics and 3-Matics (Materialise). Results: Mean glenoid version and inclination in the 2D measurements were −1.705° and 9.08°, respectively, versus 2.635° and 7.23° in the 3D measurements. The intra-observer reliability of the 2D measurements was 0.605 and 0.698, respectively, versus 0.883 and 0.892 for the 3D measurements. The inter-observer reliability of the 2D measurements was 0.456 and 0.374, respectively, versus 0.853 and 0.845 for the 3D measurements. Conclusions: The difference between the 2D and 3D measurements is due not to differences in the image data but to the use of different tools. More consistent results were obtained with the 3D measurements, so 3D measurement can be a good alternative for measuring glenoid version and inclination.
41

Suryakanth, B., e S. A. Hari Prasad. "3D CNN-Residual Neural Network Based Multimodal Medical Image Classification". WSEAS TRANSACTIONS ON BIOLOGY AND BIOMEDICINE 19 (31 de outubro de 2022): 204–14. http://dx.doi.org/10.37394/23208.2022.19.22.

Texto completo da fonte
Resumo:
Multimodal medical imaging has become incredibly common in the area of biomedical imaging. Medical image classification has been used to extract useful data from multimodality medical image data. Magnetic resonance imaging (MRI) and Computed tomography (CT) are some of the imaging methods. Different imaging technologies provide different imaging information for the same part. Traditional ways of illness classification are effective, but in today's environment, 3D images are used to identify diseases. In comparison to 1D and 2D images, 3D images have a very clear vision. The proposed method uses 3D Residual Convolutional Neural Network (CNN ResNet) for the 3D image classification. Various methods are available for classifying the disease, like cluster, KNN, and ANN. Traditional techniques are not trained to classify 3D images, so an advanced approach is introduced in the proposed method to predict the 3D images. Initially, the multimodal 2D medical image data is taken. This 2D input image is turned into 3D image data because 3D images give more information than the 2D image data. Then the 3D CT and MRI images are fused and using the Guided filtering, and the combined image is filtered for the further process. The fused image is then augmented. Finally, this fused image is fed to 3DCNN ResNet for classification purposes. The 3DCNN ResNet classifies the image data and produces the output as five different stages of the disease. The proposed method achieves 98% of accuracy. Thus the designed modal has predicted the stage of the disease in an effective manner.
42

He, Zehao, Xiaomeng Sui, and Liangcai Cao. "Holographic 3D Display Using Depth Maps Generated by 2D-to-3D Rendering Approach". Applied Sciences 11, no. 21 (October 22, 2021): 9889. http://dx.doi.org/10.3390/app11219889.

Full text of the source
Abstract:
Holographic display has the potential to be used in many 3D application scenarios because it provides all the depth cues the human eye can perceive. However, a shortage of 3D content has limited the application of holographic 3D displays. To enrich 3D content for holographic display, a 2D-to-3D rendering approach is presented. In this method, 2D images are first classified into three categories: distant-view images, perspective-view images, and close-up images. For each category, a computer-generated depth map (CGDM) is calculated using a corresponding gradient model. The resulting CGDMs are applied in a layer-based holographic algorithm to obtain computer-generated holograms (CGHs). The correctly reconstructed region of the image changes with the reconstruction distance, providing a natural 3D display effect. This realistic 3D effect makes the proposed approach applicable to fields such as education, navigation, and the health sciences.
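
A minimal sketch of the kind of gradient model the abstract describes, for the distant-view category only: depth is assumed to increase smoothly from the bottom of the frame to the top. The linear ramp and the depth-layer binning are illustrative assumptions, not the paper's exact models.

```python
import numpy as np

def cgdm_distant_view(image, near=0.1, far=1.0):
    """Computer-generated depth map (CGDM) for a 'distant view' image.

    Assumes the bottom rows are nearest and the top rows farthest, so depth
    follows a vertical linear gradient (one plausible gradient model)."""
    h, w = image.shape[:2]
    ramp = np.linspace(far, near, h)[:, None]   # far at top, near at bottom
    return np.repeat(ramp, w, axis=1)

img = np.zeros((480, 640))
depth = cgdm_distant_view(img)
# A layer-based CGH algorithm would then propagate one image layer per depth bin.
layers = np.digitize(depth, bins=np.linspace(0.1, 1.0, 8))
print(depth.shape, layers.max())   # (480, 640), 8 depth layers
```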
43

Bentley, Laurence R., and Mehran Gharibi. "Two‐ and three‐dimensional electrical resistivity imaging at a heterogeneous remediation site". GEOPHYSICS 69, no. 3 (May 2004): 674–80. http://dx.doi.org/10.1190/1.1759453.

Full text of the source
Abstract:
Geometrically complex heterogeneities at a decommissioned sour gas plant could not be adequately characterized with drilling and 2D electrical resistivity surveys alone. In addition, 2D electrical resistivity imaging profiles produced misleading images as a result of out‐of‐plane resistivity anomalies and violation of the 2D assumption. Accurate amplitude and positioning of electrical conductivity anomalies associated with the subsurface geochemical distribution were required to effectively analyze remediation alternatives. Forward and inverse modeling and field examples demonstrated that 3D resistivity images were needed to properly reconstruct the amplitude and geometry of the complex resistivity anomalies. Problematic 3D artifacts in 2D images led to poor inversion fits and spurious conductivity values in the images at depths close to the horizontal offset of the off‐line anomaly. Three‐dimensional surveys were conducted with orthogonal sets of Wenner and dipole–dipole 2D resistivity survey lines. The 3D inversions were used to locate source zones and zones of elevated ammonium. Thus, conducting 3D electrical resistivity imaging (ERI) surveys early in the site characterization process will improve cost effectiveness at many remediation sites.
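
For context, a minimal sketch of the apparent-resistivity computation that underlies the survey measurements described above, assuming a Wenner array with electrode spacing a (the geometric factor 2πa is the standard one for the Wenner alpha configuration); the reading values are hypothetical.

```python
import math

def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a_m.

    The Wenner alpha configuration has geometric factor k = 2*pi*a, so
    rho_a = k * (delta V / I)."""
    return 2.0 * math.pi * a_m * voltage_v / current_a

# Hypothetical reading: 5 m electrode spacing, 0.25 V measured at 0.1 A injected.
print(wenner_apparent_resistivity(5.0, 0.25, 0.1))  # ~78.5 ohm-m
```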
44

Yang, Guangjie, Aidi Gong, Pei Nie, Lei Yan, Wenjie Miao, Yujun Zhao, Jie Wu, Jingjing Cui, Yan Jia, and Zhenguang Wang. "Contrast-Enhanced CT Texture Analysis for Distinguishing Fat-Poor Renal Angiomyolipoma From Chromophobe Renal Cell Carcinoma". Molecular Imaging 18 (January 1, 2019): 153601211988316. http://dx.doi.org/10.1177/1536012119883161.

Full text of the source
Abstract:
Objective: To evaluate the value of 2-dimensional (2D) and 3-dimensional (3D) computed tomography texture analysis (CTTA) models in distinguishing fat-poor angiomyolipoma (fpAML) from chromophobe renal cell carcinoma (chRCC). Methods: We retrospectively enrolled 32 fpAMLs and 24 chRCCs. Texture features were extracted from 2D and 3D regions of interest in triphasic CT images. The 2D and 3D CTTA models were constructed with the least absolute shrinkage and selection operator (LASSO) algorithm, and texture scores were calculated. The diagnostic performance of the 2D and 3D CTTA models was evaluated with respect to calibration, discrimination, and clinical usefulness. Results: Of the 177 and 183 texture features extracted from the 2D and 3D regions of interest, respectively, five 2D features and eight 3D features were selected to build the 2D and 3D CTTA models. The 2D CTTA model (area under the curve [AUC], 0.811; 95% confidence interval [CI], 0.695-0.927) and the 3D CTTA model (AUC, 0.915; 95% CI, 0.838-0.993) showed good discrimination and calibration (P > .05), with no significant difference in AUC between the two models (P = .093). Decision curve analysis showed that the 3D model outperformed the 2D model in terms of clinical usefulness. Conclusions: CTTA models based on contrast-enhanced CT images had high value in differentiating fpAML from chRCC.
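
A minimal sketch of the LASSO-style feature selection and AUC evaluation the abstract describes, using scikit-learn; an L1-penalized logistic model stands in for the LASSO step, and the feature matrix, labels, and penalty strength are placeholders rather than the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 183))    # 56 lesions x 183 3D texture features (placeholder)
y = rng.integers(0, 2, size=56)   # 0 = fpAML, 1 = chRCC (placeholder labels)

# L1 regularization drives most coefficients to zero; the surviving features
# define the texture score used for discrimination.
Xs = StandardScaler().fit_transform(X)
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(Xs, y)
selected = np.flatnonzero(model.coef_)
print(f"{selected.size} features selected")
print("AUC on training data:", roc_auc_score(y, model.decision_function(Xs)))
```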
45

Gunasekaran, Ganesan, and Meenakshisundaram Venkatesan. "An Efficient Technique for Three-Dimensional Image Visualization Through Two-Dimensional Images for Medical Data". Journal of Intelligent Systems 29, no. 1 (December 18, 2017): 100–109. http://dx.doi.org/10.1515/jisys-2017-0315.

Full text of the source
Abstract:
The main idea behind this work is to present three-dimensional (3D) image visualization built from two-dimensional (2D) images, i.e., from a set of slices. 3D image visualization is an essential method for extracting information from such data. The goal of this work is to determine the outlines of the given 3D geometric primitives in each slice and then integrate these outlines, or frames, to reconstruct the 3D primitives. The proposed technique is broadly applicable to many kinds of images, and the experimental results showed that the reconstruction process from 2D images performs very well.
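
A minimal sketch of the outline-and-integrate idea, assuming the input is a stack of binary 2D slices: OpenCV extracts each slice's outlines, and the contours are lifted to 3D by tagging them with the slice index. The slice spacing and the synthetic cylinder data are placeholders.

```python
import numpy as np
import cv2

def outlines_to_3d(slices, z_spacing=1.0):
    """Extract 2D outlines per slice and stack them into a 3D point cloud."""
    points = []
    for z, sl in enumerate(slices):
        binary = (sl > 0).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            for (x, y) in c.reshape(-1, 2):
                points.append((float(x), float(y), z * z_spacing))
    return np.array(points)

# Placeholder volume: a cylinder sampled as 10 slices of a filled circle.
yy, xx = np.mgrid[0:64, 0:64]
disk = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(np.uint8) * 255
slices = [disk.copy() for _ in range(10)]
cloud = outlines_to_3d(slices)
print(cloud.shape)   # (N, 3) outline points tracing the cylinder's surface
```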
46

D’Attilio, Michele, Antonino Peluso, Giulia Falone, Rossana Pipitone, Francesco Moscagiuri, and Francesco Caroccia. "“3D Counterpart Analysis”: A Novel Method for Enlow’s Counterpart Analysis on CBCT". Diagnostics 12, no. 10 (October 17, 2022): 2513. http://dx.doi.org/10.3390/diagnostics12102513.

Full text of the source
Abstract:
The aim of this study was to propose a novel 3D Enlow’s counterpart analysis traced on cone-beam computed tomography (CBCT) images. Eighteen CBCT images of skeletal Class I (ANB = 2° ± 2°) subjects (12 males and 6 females, aged 9 to 19 years) with no history of previous orthodontic treatment were selected. For each subject, a 2D Enlow’s counterpart analysis was performed on lateral cephalograms extracted from the CBCT images. The following structures were identified: mandibular ramus, middle cranial floor, maxillary skeletal arch, mandibular skeletal arch, maxillary dento-alveolar arch, and mandibular dento-alveolar arch. The differences between each part and its counterpart obtained from the 2D analysis were then compared with those obtained from a 3D analysis traced on the CBCT images. A Student’s t-test did not show any statistically significant difference between the 2D and 3D measurements. The landmarks proposed by this study identify the craniofacial structures on 3D images in a way that can be superimposed on those described by Enlow in his analysis of 2D lateral cephalograms.
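
A minimal sketch of the statistical comparison reported above, assuming paired per-subject 2D and 3D measurements (the values are randomly generated placeholders); scipy's paired t-test mirrors the Student's t-test the study applied.

```python
import numpy as np
from scipy import stats

# Placeholder: part-counterpart differences (mm) for 18 subjects, measured on
# 2D lateral cephalograms and on the matching 3D CBCT tracings.
rng = np.random.default_rng(1)
diff_2d = rng.normal(2.0, 1.0, size=18)
diff_3d = diff_2d + rng.normal(0.0, 0.5, size=18)   # closely agreeing 3D values

t_stat, p_value = stats.ttest_rel(diff_2d, diff_3d)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")       # p > 0.05 -> no significant difference
```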
47

Dai, Xiaowei, Shuiwang Li, Qijun Zhao, and Hongyu Yang. "Animal Pose Estimation Based on 3D Priors". Applied Sciences 13, no. 3 (January 22, 2023): 1466. http://dx.doi.org/10.3390/app13031466.

Full text of the source
Abstract:
Animal pose estimation is very useful for analyzing animal behavior and for monitoring animal health and movement trajectories. However, occlusions, complex backgrounds, and unconstrained illumination in wild-animal images often lead to large pose estimation errors, i.e., the detected keypoints deviate substantially from their true positions in the 2D images. In this paper, we propose a method to improve animal pose estimation accuracy by exploiting 3D prior constraints. First, we learn a 3D animal pose dictionary, in which each atom provides prior knowledge about 3D animal poses. Second, given the initially estimated 2D animal pose in the image, we represent its latent 3D pose with the learned dictionary. Finally, the representation coefficients are optimized to minimize the difference between the initially estimated 2D pose and the 2D projection of the latent 3D pose. Furthermore, we construct 2D and 3D animal pose datasets, used to evaluate the algorithm’s performance and to learn the 3D pose dictionary, respectively. Our experimental results demonstrate that the proposed method makes good use of 3D pose knowledge and can effectively improve 2D animal pose estimation.
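
A minimal sketch of the lifting step described above, assuming an orthographic camera so that minimizing the reprojection difference over the dictionary coefficients reduces to linear least squares; the dictionary, joint count, and 2D pose are random placeholders.

```python
import numpy as np

def lift_pose(W_2d, dictionary):
    """Represent a latent 3D pose as a linear combination of dictionary atoms.

    W_2d:       (2, J) initially estimated 2D joint positions.
    dictionary: (K, 3, J) learned 3D pose atoms.
    Assumes orthographic projection (keep x, y; drop z), so the coefficients
    minimizing ||W_2d - proj(sum_k c_k B_k)||^2 solve a linear least-squares problem."""
    K, _, J = dictionary.shape
    A = dictionary[:, :2, :].reshape(K, 2 * J).T     # (2J, K): projected atoms
    c, *_ = np.linalg.lstsq(A, W_2d.reshape(-1), rcond=None)
    pose_3d = np.tensordot(c, dictionary, axes=1)    # (3, J) latent 3D pose
    return c, pose_3d

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 3, 16))   # 8 atoms, 16 joints (placeholder dictionary)
W = rng.normal(size=(2, 16))      # placeholder initially estimated 2D pose
c, P = lift_pose(W, B)
print(np.linalg.norm(W - P[:2]))  # reprojection residual after optimization
```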
48

Xiong, Zi Ming, Gang Wan, and Xue Feng Cao. "Automatic Alignment of 3D Point Clouds to Orthographic Images". Advanced Materials Research 591-593 (November 2012): 1265–68. http://dx.doi.org/10.4028/www.scientific.net/amr.591-593.1265.

Full text of the source
Abstract:
Recent progress in structure-from-motion (SfM) has led to robust techniques that operate under extremely general conditions. A limitation of SfM, however, is that the scene can only be recovered up to a similarity transformation. We address the problem of automatically aligning 3D point clouds from SfM reconstructions to orthographic images. We extract feature lines from the 3D point cloud and project them onto the ground plane to create 2D feature lines, reducing the alignment problem to 2D line-to-line matching, and we present a novel technique for matching the feature lines automatically.
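
A minimal sketch of the final alignment step, assuming line correspondences have already been found and are summarized by matched 2D points (e.g., line midpoints); a 2D similarity transform (scale, rotation, translation) is then recovered in closed form, Umeyama-style. All data here are placeholders, and the matching stage itself is not shown.

```python
import numpy as np

def fit_similarity_2d(src, dst):
    """Closed-form 2D similarity transform mapping src -> dst (Umeyama).

    src, dst: (N, 2) matched points, e.g., midpoints of matched feature lines."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    cov = D.T @ S / len(src)                   # 2x2 cross-covariance
    U, sig, Vt = np.linalg.svd(cov)
    corr = np.array([1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
    R = U @ np.diag(corr) @ Vt
    scale = (sig * corr).sum() / ((S ** 2).sum() / len(src))
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Placeholder check: recover a known transform from matched line midpoints.
rng = np.random.default_rng(0)
src = rng.normal(size=(12, 2))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = 2.0 * src @ R_true.T + np.array([5.0, -3.0])
s, R, t = fit_similarity_2d(src, dst)
print(round(s, 3))   # ~2.0, the true scale
```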
49

Shen, Xiaoke, and Ioannis Stamos. "3D Object Detection and Instance Segmentation from 3D Range and 2D Color Images". Sensors 21, no. 4 (February 9, 2021): 1213. http://dx.doi.org/10.3390/s21041213.

Full text of the source
Abstract:
Instance segmentation and object detection are significant problems in computer vision and robotics. We address them with a novel object segmentation and detection system. First, we detect 2D objects based on RGB, depth-only, or RGB-D images. We then propose a 3D convolutional system, named Frustum VoxNet, which generates frustums from the 2D detection results, proposes voxelized 3D candidates for each frustum, and applies a 3D convolutional neural network (CNN) to these candidate voxelized images to perform 3D instance segmentation and object detection. Results on the SUN RGB-D dataset show that our RGB-D-based system’s 3D inference is much faster than state-of-the-art methods without a significant loss of accuracy. At the same time, we can provide segmentation and detection results using depth-only images, with accuracy comparable to RGB-D-based systems; this is important because our methods also work well in low-lighting conditions or with sensors that do not acquire RGB images. Finally, using segmentation as part of the pipeline increases detection accuracy while simultaneously providing 3D instance segmentation.
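
A minimal sketch of the frustum step, assuming a pinhole camera with known intrinsics: points whose projections fall inside a 2D detection box form the frustum, which is then voxelized into a fixed occupancy grid for a 3D CNN. The camera matrix, box, grid size, and points are placeholders, not the paper's configuration.

```python
import numpy as np

def frustum_voxelize(points, K, box, grid=(32, 32, 32)):
    """Select 3D points whose pinhole projection lands in a 2D box, then voxelize.

    points: (N, 3) in camera coordinates (z forward); K: 3x3 intrinsics;
    box: (xmin, ymin, xmax, ymax) 2D detection in pixels."""
    z = points[:, 2]
    uv = points @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective divide
    xmin, ymin, xmax, ymax = box
    inside = (z > 0) & (uv[:, 0] >= xmin) & (uv[:, 0] < xmax) \
                     & (uv[:, 1] >= ymin) & (uv[:, 1] < ymax)
    frustum = points[inside]
    if len(frustum) == 0:
        return np.zeros(grid, dtype=np.float32)
    # Normalize the frustum points into the occupancy grid.
    lo, hi = frustum.min(0), frustum.max(0)
    idx = ((frustum - lo) / np.maximum(hi - lo, 1e-6) * (np.array(grid) - 1)).astype(int)
    vox = np.zeros(grid, dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts = np.random.default_rng(0).uniform([-2, -2, 1], [2, 2, 6], size=(5000, 3))
vox = frustum_voxelize(pts, K, box=(280, 200, 360, 280))
print(vox.sum(), "occupied voxels")
```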
50

Ban, Yuxi, Yang Wang, Shan Liu, Bo Yang, Mingzhe Liu, Lirong Yin, and Wenfeng Zheng. "2D/3D Multimode Medical Image Alignment Based on Spatial Histograms". Applied Sciences 12, no. 16 (August 18, 2022): 8261. http://dx.doi.org/10.3390/app12168261.

Full text of the source
Abstract:
The key to image-guided surgery (IGS) is finding the transformation between preoperative 3D images and intraoperative 2D images, i.e., 2D/3D image registration. This study investigates a feature-based 2D/3D medical image registration algorithm. We use a two-dimensional weighted spatial histogram of gradient directions to extract statistical features, overcoming the limitations of existing algorithms and broadening the applicable scenarios while preserving accuracy. The proposed algorithm was tested on CT and synthetic X-ray images and compared with existing algorithms. The results show that it improves accuracy and efficiency and reduces sensitivity to the initial value.
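
A minimal sketch of the feature the abstract names: a spatial grid of gradient-direction histograms, each vote weighted by gradient magnitude (HOG-like). The grid size and bin count are assumptions, and the matching criterion noted in the comment is one plausible choice rather than the paper's exact similarity measure.

```python
import numpy as np

def weighted_spatial_gradient_histogram(img, grid=(4, 4), n_bins=9):
    """2D weighted spatial histogram of gradient directions.

    Divides the image into a grid of cells; inside each cell, gradient
    orientations vote into n_bins, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    H, W = img.shape
    hist = np.zeros(grid + (n_bins,))
    cell_h, cell_w = H // grid[0], W // grid[1]
    for i in range(grid[0]):
        for j in range(grid[1]):
            m = mag[i*cell_h:(i+1)*cell_h, j*cell_w:(j+1)*cell_w].ravel()
            a = ang[i*cell_h:(i+1)*cell_h, j*cell_w:(j+1)*cell_w].ravel()
            bins = np.minimum((a / np.pi * n_bins).astype(int), n_bins - 1)
            np.add.at(hist[i, j], bins, m)
    return hist / (hist.sum() + 1e-9)         # normalized descriptor

# Registration can then search for the 3D pose whose simulated X-ray (DRR)
# descriptor best matches the intraoperative image's descriptor, e.g., by
# minimizing the L2 distance between the two histograms.
x_ray = np.random.default_rng(0).normal(size=(240, 320))
print(weighted_spatial_gradient_histogram(x_ray).shape)   # (4, 4, 9)
```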