Journal articles on the topic "2D Images - 3D Models"

Click this link to see other types of publications on this topic: 2D Images - 3D Models.

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.

Check out the top 50 journal articles on the topic "2D Images - 3D Models."

The "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in ".pdf" format and read its abstract online, if the relevant parameters are available in the work's metadata.

Browse journal articles on a wide variety of disciplines and compile your bibliography correctly.

1

Yang, Guangjie, Aidi Gong, Pei Nie, Lei Yan, Wenjie Miao, Yujun Zhao, Jie Wu, Jingjing Cui, Yan Jia, and Zhenguang Wang. "Contrast-Enhanced CT Texture Analysis for Distinguishing Fat-Poor Renal Angiomyolipoma From Chromophobe Renal Cell Carcinoma". Molecular Imaging 18 (January 1, 2019): 153601211988316. http://dx.doi.org/10.1177/1536012119883161.

Full text of the source
Abstract:
Objective: To evaluate the value of 2-dimensional (2D) and 3-dimensional (3D) computed tomography texture analysis (CTTA) models in distinguishing fat-poor angiomyolipoma (fpAML) from chromophobe renal cell carcinoma (chRCC). Methods: We retrospectively enrolled 32 fpAMLs and 24 chRCCs. Texture features were extracted from 2D and 3D regions of interest in triphasic CT images. The 2D and 3D CTTA models were constructed with the least absolute shrinkage and selection operator algorithm and texture scores were calculated. The diagnostic performance of the 2D and 3D CTTA models was evaluated with respect to calibration, discrimination, and clinical usefulness. Results: Of the 177 and 183 texture features extracted from 2D and 3D regions of interest, respectively, 5 2D features and 8 3D features were selected to build 2D and 3D CTTA models. The 2D CTTA model (area under the curve [AUC], 0.811; 95% confidence interval [CI], 0.695-0.927) and the 3D CTTA model (AUC, 0.915; 95% CI, 0.838-0.993) showed good discrimination and calibration ( P > .05). There was no significant difference in AUC between the 2 models ( P = .093). Decision curve analysis showed the 3D model outperformed the 2D model in terms of clinical usefulness. Conclusions: The CTTA models based on contrast-enhanced CT images had a high value in differentiating fpAML from chRCC.
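The model-building step this abstract describes (LASSO over a texture-feature matrix, then a per-lesion "texture score" from the selected coefficients) can be sketched in a few lines. The data below are synthetic stand-ins, and the use of scikit-learn's `LassoCV` is our assumption, not the authors' exact pipeline; only the feature counts mirror the abstract.

```python
# Sketch of LASSO-based texture-feature selection with synthetic data.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(56, 177))     # 56 lesions x 177 texture features (2D case)
y = rng.integers(0, 2, size=56)    # 0 = chRCC, 1 = fpAML (illustrative labels)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)   # features with non-zero coefficients
print(f"{selected.size} features selected out of {X.shape[1]}")

# The per-lesion texture score is the linear combination of the selected
# features weighted by their LASSO coefficients.
texture_score = X_std @ lasso.coef_ + lasso.intercept_
```

On real data, the selected feature subset (5 of 177 for 2D, 8 of 183 for 3D in the study) then feeds the diagnostic model whose AUC is reported.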
APA, Harvard, Vancouver, ISO, and other styles
2

Iyoho, Anthony E., Jonathan M. Young, Vladislav Volman, David A. Shelley, Laurel J. Ng, and Henry Wang. "3D Tibia Reconstruction Using 2D Computed Tomography Images". Military Medicine 184, Supplement_1 (March 1, 2019): 621–26. http://dx.doi.org/10.1093/milmed/usy379.

Full text of the source
Abstract:
OBJECTIVE: Skeletal stress fracture of the lower limbs remains a significant problem for the military. The objective of this study was to develop a subject-specific 3D reconstruction of the tibia using only a few CT images for the prediction of peak stresses and locations. METHODS: Full bilateral tibial CT scans were recorded for 63 healthy college male participants. A 3D finite element (FE) model of the tibia for each subject was generated from standard CT cross-section data (i.e., 4%, 14%, 38%, and 66% of the tibial length) via a transformation matrix. The final reconstructed FE models were used to calculate peak stress and location on the tibia due to a simulated walking load (3,700 N), and compared to the raw models. RESULTS: The density-weighted, spatially-normalized errors between the raw and reconstructed CT models were small. The mean percent difference between the raw and reconstructed models for peak stress (0.62%) and location (−0.88%) was negligible. CONCLUSIONS: Subject-specific tibia models can provide even greater insights into the mechanisms of stress fracture injury, which are common in military and athletic settings. Rapid development of 3D tibia models allows for the future work of determining peak stress-related injury correlates to stress fracture outcomes.
APA, Harvard, Vancouver, ISO, and other styles
3

Osadchy, Margarita, David Jacobs, Ravi Ramamoorthi, and David Tucker. "Using specularities in comparing 3D models and 2D images". Computer Vision and Image Understanding 111, no. 3 (September 2008): 275–94. http://dx.doi.org/10.1016/j.cviu.2007.12.004.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Avesta, Arman, Sajid Hossain, MingDe Lin, Mariam Aboian, Harlan M. Krumholz, and Sanjay Aneja. "Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation". Bioengineering 10, no. 2 (February 1, 2023): 181. http://dx.doi.org/10.3390/bioengineering10020181.

Full text of the source
Abstract:
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
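The three input styles compared in this study differ only in how the MRI volume is sliced before being fed to a network. A shape-level sketch (the volume is synthetic and its dimensions are illustrative, not taken from the paper; the models themselves are not reproduced):

```python
# 2D (one slice), 2.5D (five consecutive slices), 3D (the whole volume).
import numpy as np

volume = np.zeros((155, 240, 240), dtype=np.float32)  # synthetic MRI (D, H, W)

def as_2d(vol, z):
    return vol[z]                      # (H, W): a single axial slice

def as_25d(vol, z, context=2):
    lo, hi = z - context, z + context + 1
    return vol[lo:hi]                  # (5, H, W): slice plus 2 neighbours per side

def as_3d(vol):
    return vol[None]                   # (1, D, H, W): the entire volume, 1 channel

print(as_2d(volume, 77).shape, as_25d(volume, 77).shape, as_3d(volume).shape)
```

The memory trade-off reported in the abstract follows directly from these shapes: the 3D input is two orders of magnitude larger per training example than a single slice.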
APA, Harvard, Vancouver, ISO, and other styles
5

Petre, Raluca-Diana, and Titus Zaharia. "3D Model-Based Semantic Categorization of Still Image 2D Objects". International Journal of Multimedia Data Engineering and Management 2, no. 4 (October 2011): 19–37. http://dx.doi.org/10.4018/jmdem.2011100102.

Full text of the source
Abstract:
Automatic classification and interpretation of objects present in 2D images is a key issue for various computer vision applications. In particular, when considering image/video indexing and retrieval applications, automatically labeling huge multimedia databases in a semantically pertinent manner still remains a challenge. This paper examines the issue of still image object categorization. The objective is to associate semantic labels to the 2D objects present in natural images. The principle of the proposed approach consists of exploiting categorized 3D model repositories to identify unknown 2D objects, based on 2D/3D matching techniques. The authors use 2D/3D shape indexing methods, where 3D models are described through a set of 2D views. Experimental results, carried out on both the MPEG-7 and Princeton 3D model databases, show recognition rates of up to 89.2%.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Yu, Shaohua Li, and Bo Zhang. "Constructing of 3D Fluvial Reservoir Model Based on 2D Training Images". Applied Sciences 13, no. 13 (June 25, 2023): 7497. http://dx.doi.org/10.3390/app13137497.

Full text of the source
Abstract:
Training images are important input parameters for multipoint geostatistical modeling, and training images that can portray 3D spatial correlations are required to construct 3D models. The 3D training images are usually obtained by unconditional simulation using algorithms such as object-based algorithms, and in some cases, it is difficult to obtain the 3D training images directly, so a series of modeling methods based on 2D training images for constructing 3D models has been formed. In this paper, a new modeling method is proposed by synthesizing the advantages of the previous methods. Taking the fluvial reservoir modeling of the P oilfield in the Bohai area as an example, a comparative study based on 2D and 3D training images was carried out. By comparing the variance function, horizontal and vertical connectivity in x-, y-, and z-directions, and style similarity, the study shows that based on several mutually perpendicular 2D training images, the modeling method proposed in this paper can achieve an effect similar to that based on 3D training images directly. In the case that it is difficult to obtain 3D training images, the modeling method proposed in this paper has suitable application prospects.
APA, Harvard, Vancouver, ISO, and other styles
7

Zhong, Chunyan, Yanli Guo, Haiyun Huang, Liwen Tan, Yi Wu, and Wenting Wang. "Three-Dimensional Reconstruction of Coronary Arteries and Its Application in Localization of Coronary Artery Segments Corresponding to Myocardial Segments Identified by Transthoracic Echocardiography". Computational and Mathematical Methods in Medicine 2013 (2013): 1–8. http://dx.doi.org/10.1155/2013/783939.

Full text of the source
Abstract:
Objectives. To establish 3D models of coronary arteries (CA) and study their application in localization of CA segments identified by transthoracic echocardiography (TTE). Methods. Sectional images of the heart collected from the first CVH dataset and contrast CT data were used to establish 3D models of the CA. Virtual dissection was performed on the 3D models to simulate the conventional sections of TTE. Then, we used 2D ultrasound, speckle tracking imaging (STI), and 2D ultrasound plus 3D CA models to diagnose 170 patients and compare the results to coronary angiography (CAG). Results. 3D models of CA distinctly displayed both 3D structure and 2D sections of CA. This simulated TTE imaging in any plane and showed the CA segments that corresponded to 17 myocardial segments identified by TTE. The localization accuracy showed a significant difference between 2D ultrasound and 2D ultrasound plus 3D CA model in the severe stenosis group (P < 0.05) and in the mild-to-moderate stenosis group (P < 0.05). Conclusions. These innovative modeling techniques help clinicians identify the CA segments that correspond to myocardial segments typically shown in TTE sectional images, thereby increasing the accuracy of the TTE-based diagnosis of CHD.
APA, Harvard, Vancouver, ISO, and other styles
8

Choi, Chang-Hyuk, Hee-Chan Kim, Daewon Kang, and Jun-Young Kim. "Comparative study of glenoid version and inclination using two-dimensional images from computed tomography and three-dimensional reconstructed bone models". Clinics in Shoulder and Elbow 23, no. 3 (September 1, 2020): 119–24. http://dx.doi.org/10.5397/cise.2020.00220.

Full text of the source
Abstract:
Background: This study was performed to compare glenoid version and inclination measured using two-dimensional (2D) images from computed tomography (CT) scans or three-dimensional (3D) reconstructed bone models. Methods: Thirty patients who had undergone conventional CT scans were included. Two orthopedic surgeons measured glenoid version and inclination three times on 2D images from CT scans (2D measurement), and two other orthopedic surgeons performed the same measurements using 3D reconstructed bone models (3D measurement). The 3D-reconstructed bone models were acquired and measured with Mimics and 3-Matic (Materialise). Results: Mean glenoid version and inclination in 2D measurements were –1.705º and 9.08º, respectively, while those in 3D measurements were 2.635º and 7.23º. The intra-observer reliability in 2D measurements was 0.605 and 0.698, respectively, while that in 3D measurements was 0.883 and 0.892. The inter-observer reliability in 2D measurements was 0.456 and 0.374, respectively, while that in 3D measurements was 0.853 and 0.845. Conclusions: The difference between 2D and 3D measurements is not due to differences in image data but to the use of different tools. However, more consistent results were obtained in 3D measurement. Therefore, 3D measurement can be a good alternative for measuring glenoid version and inclination.
APA, Harvard, Vancouver, ISO, and other styles
9

Falah K., Rasha, and Rafeef Mohammed H. "Convert 2D shapes in to 3D images". Journal of Al-Qadisiyah for computer science and mathematics 9, no. 2 (August 20, 2017): 19–23. http://dx.doi.org/10.29304/jqcm.2017.9.2.146.

Full text of the source
Abstract:
There are several complex programs used to convert 2D images to 3D models with difficult techniques. In this paper, a useful technique is introduced that relies on simple capabilities and a simple language for converting 2D images to 3D. The technique builds a three-dimensional projection from three images of the same shape and displays the three-dimensional image from different sides. To implement this work, visual programming with the 3Dtruevision engine is used, which gives acceptable results in a short time. The technique could be used in the field of engineering drawing.
APA, Harvard, Vancouver, ISO, and other styles
10

Sezer, Sümeyye, Vitoria Piai, Roy P. C. Kessels, and Mark ter Laan. "Information Recall in Pre-Operative Consultation for Glioma Surgery Using Actual Size Three-Dimensional Models". Journal of Clinical Medicine 9, no. 11 (November 13, 2020): 3660. http://dx.doi.org/10.3390/jcm9113660.

Full text of the source
Abstract:
Three-dimensional (3D) technologies are being used for patient education. For glioma, a personalized 3D model can show the patient specific tumor and eloquent areas. We aim to compare the amount of information that is understood and can be recalled after a pre-operative consult using a 3D model (physically printed or in Augmented Reality (AR)) versus two-dimensional (2D) MR images. In this explorative study, healthy individuals were eligible to participate. Sixty-one participants were enrolled and assigned to either the 2D (MRI/fMRI), 3D (physical 3D model) or AR groups. After undergoing a mock pre-operative consultation for low-grade glioma surgery, participants completed two assessments (one week apart) testing information recall using a standardized questionnaire. The 3D group obtained the highest recall scores on both assessments (Cohen’s d = 1.76 and Cohen’s d = 0.94, respectively, compared to 2D), followed by AR and 2D, respectively. Thus, real-size 3D models appear to improve information recall as compared to MR images in a pre-operative consultation for glioma cases. Future clinical studies should measure the efficacy of using real-size 3D models in actual neurosurgery patients.
APA, Harvard, Vancouver, ISO, and other styles
11

Bich Nhuong, Quach Thi, Pham Dinh Sac, Nguyen Minh Nhut, and Hien Thanh Le. "3D Model Reconstruction Using Gan and 2.5D Sketches from 2D Image". Jurnal Teknologi Informasi dan Pendidikan 15, no. 2 (September 29, 2022): 1–11. http://dx.doi.org/10.24036/jtip.v15i2.613.

Full text of the source
Abstract:
In the current 4.0 era, many fields such as medicine, cinema, and architecture often use 3D models to visualize objects. However, there is not always enough information or equipment to build a 3D model. Another approach is to take multiple 2D images and convert them to 3D shapes; this method requires images of the objects taken at different angles. To get around this, we use a 2.5D sketch as an intermediary when going from 2D to 3D, since a 2.5D sketch is easier to create from a 2D photo than a direct conversion to a 3D shape. In this paper, we propose a model consisting of three modules: the first converts a 2D image to a 2.5D sketch; the second goes from the 2.5D sketch to a 3D shape; the last refines the newly created 3D shape. Experiments on the ShapeNet Core55 dataset show that our model gives better results than traditional models.
APA, Harvard, Vancouver, ISO, and other styles
12

Filali Ansary, Tarik, Jean-Philippe Vandeborre, and Mohamed Daoudi. "A framework for 3D CAD models retrieval from 2D images". Annales des Télécommunications 60, no. 11-12 (December 2005): 1337–59. http://dx.doi.org/10.1007/bf03219852.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Franken, Thomas, Matteo Dellepiane, Fabio Ganovelli, Paolo Cignoni, Claudio Montani, and Roberto Scopigno. "Minimizing user intervention in registering 2D images to 3D models". Visual Computer 21, no. 8-10 (August 31, 2005): 619–28. http://dx.doi.org/10.1007/s00371-005-0309-z.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
14

Greuter, Ladina, Adriana De Rosa, Philippe Cattin, Davide Marco Croci, Jehuda Soleman, and Raphael Guzman. "Randomized study comparing 3D virtual reality and conventional 2D on-screen teaching of cerebrovascular anatomy". Neurosurgical Focus 51, no. 2 (August 2021): E18. http://dx.doi.org/10.3171/2021.5.focus21212.

Full text of the source
Abstract:
OBJECTIVE Performing aneurysmal clipping requires years of training to successfully understand the 3D neurovascular anatomy. This training has traditionally been obtained by learning through observation. Currently, with fewer operative aneurysm clippings, stricter work-hour regulations, and increased patient safety concerns, novel teaching methods are required for young neurosurgeons. Virtual-reality (VR) models offer the opportunity to either train a specific surgical skill or prepare for an individual surgery. With this study, the authors aimed to compare the spatial orientation between traditional 2D images and 3D VR models in neurosurgical residents or medical students. METHODS Residents and students were each randomly assigned to describe 4 aneurysm cases, which could be either 2D images or 3D VR models. The time to aneurysm detection as well as a spatial anatomical description was assessed via an online questionnaire and compared between the groups. The aneurysm cases were 10 selected patient cases treated at the authors’ institution. RESULTS Overall, the time to aneurysm detection was shorter in the 3D VR model compared to 2D images, with a trend toward statistical significance (25.77 ± 37.26 vs 45.70 ± 51.94 seconds, p = 0.052). No significant difference was observed for residents (3D VR 24.47 ± 40.16 vs 2D 33.52 ± 56.06 seconds, p = 0.564), while in students a significantly shorter time to aneurysm detection was measured using 3D VR models (26.95 ± 35.39 vs 59.16 ± 44.60 seconds, p = 0.015). No significant differences between the modalities for anatomical and descriptive spatial mistakes were observed. Most participants (90%) preferred the 3D VR models for aneurysm detection and description, and only 1 participant (5%) described VR-related side effects such as dizziness or nausea. 
CONCLUSIONS VR platforms facilitate aneurysm recognition and understanding of its spatial anatomy, which could make them the preferred method compared to 2D images in the years to come.
APA, Harvard, Vancouver, ISO, and other styles
15

Sun, Lei, Xuesong Suo, Yifan Liu, Meng Zhang, and Lijuan Han. "3D Modeling of Transformer Substation Based on Mapping and 2D Images". Mathematical Problems in Engineering 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/9320502.

Full text of the source
Abstract:
A new method for building 3D models of a transformer substation based on mapping and 2D images is proposed in this paper. This method segments objects of equipment in 2D images by using a k-means algorithm, determining the cluster centers dynamically to segment different shapes; it then extracts feature parameters from the segmented objects by using the FFT, retrieves similar objects from 3D databases, and builds 3D models by computing the mapping data. The method proposed in this paper can avoid the complex data collection and large workload of using a 3D laser scanner. The example analysis shows the method can build coarse 3D models efficiently, which can meet the requirements for hazardous area classification and construction representations of transformer substations.
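The two front-end steps this abstract names, k-means segmentation of a 2D image followed by FFT-based feature extraction, can be sketched as below. The image is synthetic, and the paper's dynamic cluster-centre selection and 3D-database retrieval are omitted; function and parameter names are ours.

```python
# k-means intensity segmentation, then low-frequency FFT magnitudes
# as a per-object descriptor for matching against a 3D model database.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
img = rng.random((64, 64))           # synthetic grayscale image

# 1) Segment pixels into k intensity clusters.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

# 2) For each segmented object mask, keep the first n x n FFT magnitude
#    coefficients as a compact shape descriptor.
def fft_features(mask, n=8):
    spectrum = np.abs(np.fft.fft2(mask.astype(float)))
    return spectrum[:n, :n].ravel()

features = [fft_features(labels == c) for c in range(k)]
print(len(features), features[0].shape)
```

In the paper, descriptors like these are compared against descriptors of stored 3D equipment models to pick the closest match.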
APA, Harvard, Vancouver, ISO, and other styles
16

Marcillat, Marin, Loic Van Audenhaege, Catherine Borremans, Aurélien Arnaubec, and Lenaick Menot. "The best of two worlds: reprojecting 2D image annotations onto 3D models". PeerJ 12 (June 28, 2024): e17557. http://dx.doi.org/10.7717/peerj.17557.

Full text of the source
Abstract:
Imagery has become one of the main data sources for investigating seascape spatial patterns. This is particularly true in deep-sea environments, which are only accessible with underwater vehicles. On the one hand, using collaborative web-based tools and machine learning algorithms, biological and geological features can now be massively annotated on 2D images with the support of experts. On the other hand, geomorphometrics such as slope or rugosity derived from 3D models built with structure from motion (sfm) methodology can then be used to answer spatial distribution questions. However, precise georeferencing of 2D annotations on 3D models has proven challenging for deep-sea images, due to a large mismatch between navigation obtained from underwater vehicles and the reprojected navigation computed in the process of building 3D models. In addition, although 3D models can be directly annotated, the process becomes challenging due to the low resolution of textures and the large size of the models. In this article, we propose a streamlined, open-access processing pipeline to reproject 2D image annotations onto 3D models using ray tracing. Using four underwater image datasets, we assessed the accuracy of annotation reprojection on 3D models and achieved successful georeferencing to centimetric accuracy. The combination of photogrammetric 3D models and accurate 2D annotations would allow the construction of a 3D representation of the landscape and could provide new insights into understanding species microdistribution and biotic interactions.
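The core of the reprojection idea above, casting a ray from the camera through an annotated pixel until it hits the 3D model, reduces to a ray-geometry intersection. A toy version with assumed pinhole intrinsics, where a flat seafloor plane stands in for the photogrammetric mesh and the camera pose is a simple straight-down view:

```python
# Cast a ray through an annotated pixel and intersect it with a plane.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
cam_pos = np.array([0.0, 0.0, 10.0])   # camera 10 m above the plane, looking down

def pixel_ray(u, v):
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # direction in camera frame
    d = np.array([d[0], d[1], -1.0])              # camera looks along -Z
    return d / np.linalg.norm(d)

def annotate_on_plane(u, v, plane_z=0.0):
    d = pixel_ray(u, v)
    t = (plane_z - cam_pos[2]) / d[2]             # ray-plane intersection
    return cam_pos + t * d                        # 3D point of the 2D annotation

p = annotate_on_plane(320, 240)        # principal point maps straight down
print(p)
```

The pipeline in the paper replaces the plane with the triangle mesh of the 3D model and uses the (refined) camera poses from the structure-from-motion reconstruction.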
APA, Harvard, Vancouver, ISO, and other styles
17

Guček Puhar, Enej, Lidija Korat, Miran Erič, Aleš Jaklič, and Franc Solina. "Microtomographic Analysis of a Palaeolithic Wooden Point from the Ljubljanica River". Sensors 22, no. 6 (March 18, 2022): 2369. http://dx.doi.org/10.3390/s22062369.

Full text of the source
Abstract:
A rare and valuable Palaeolithic wooden point, presumably belonging to a hunting weapon, was found in the Ljubljanica River in Slovenia in 2008. In order to prevent complete decay, the waterlogged wooden artefact had to undergo conservation treatment, which usually involves some expected deformations of structure and shape. To investigate these changes, a series of surface-based 3D models of the artefact were created before, during and after the conservation process. Unfortunately, the surface-based 3D models were not sufficient to understand the internal processes inside the wooden artefact (cracks, cavities, fractures). Since some of the surface-based 3D models were taken with a microtomographic scanner, we decided to create a volumetric 3D model from the available 2D tomographic images. In order to have complete control and greater flexibility in creating the volumetric 3D model than is the case with commercial software, we decided to implement our own algorithm. In fact, two algorithms were implemented for the construction of surface-based 3D models and for the construction of volumetric 3D models, using (1) unsegmented 2D CT images and (2) segmented 2D CT images. The results were positive in comparison with commercial software, and new information was obtained about the actual state and causes of the deformation of the artefact. Such models could be a valuable aid in the selection of appropriate conservation and restoration methods and techniques in cultural heritage research.
APA, Harvard, Vancouver, ISO, and other styles
18

Anderson, Timothy I., Kelly M. Guan, Bolivia Vega, Saman A. Aryana, and Anthony R. Kovscek. "RockFlow: Fast Generation of Synthetic Source Rock Images Using Generative Flow Models". Energies 13, no. 24 (December 13, 2020): 6571. http://dx.doi.org/10.3390/en13246571.

Full text of the source
Abstract:
Image-based evaluation methods are a valuable tool for source rock characterization. The time and resources needed to obtain images has spurred development of machine-learning generative models to create synthetic images of pore structure and rock fabric from limited image data. While generative models have shown success, existing methods for generating 3D volumes from 2D training images are restricted to binary images and grayscale volume generation requires 3D training data. Shale characterization relies on 2D imaging techniques such as scanning electron microscopy (SEM), and grayscale values carry important information about porosity, kerogen content, and mineral composition of the shale. Here, we introduce RockFlow, a method based on generative flow models that creates grayscale volumes from 2D training data. We apply RockFlow to baseline binary micro-CT image volumes and compare performance to a previously proposed model. We also show the extension of our model to 2D grayscale data by generating grayscale image volumes from 2D SEM and dual modality nanoscale shale images. The results show that our method underestimates the porosity and surface area on the binary baseline datasets but is able to generate realistic grayscale image volumes for shales. With improved binary data preprocessing, we believe that our model is capable of generating synthetic porous media volumes for a very broad class of rocks from shale to carbonates to sandstone.
APA, Harvard, Vancouver, ISO, and other styles
19

Hayase, Mitsuhiro, and Susumu Shimada. "Posture Estimation of Human Body Based on Connection Relations of 3D Ellipsoidal Models". Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 6 (September 20, 2010): 638–44. http://dx.doi.org/10.20965/jaciii.2010.p0638.

Full text of the source
Abstract:
We propose a new method of estimating human body posture from the connection relations of three-dimensional (3D) ellipsoidal models. First, 3D ellipsoidal models with enlargement and reduction transformations are constructed. Next, two-dimensional (2D) appearance models are constructed from 2D projected images of the 3D model. The appearance models are related to each other by employing a network data structure. They are then matched with an image of the body made from actual thermal images. By using the connection relations between the head and body, the head and the body can be recognized. Differences between individuals can be treated simply by using parts of different sizes. Moreover, this method can also be applied to the recognition of arms and legs.
APA, Harvard, Vancouver, ISO, and other styles
20

Buccino, Federica, Chiara Colombo, Daniel Hernando Lozano Duarte, Luca Rinaudo, Fabio Massimo Ulivieri, and Laura Maria Vergani. "2D and 3D numerical models to evaluate trabecular bone damage". Medical & Biological Engineering & Computing 59, no. 10 (September 1, 2021): 2139–52. http://dx.doi.org/10.1007/s11517-021-02422-x.

Full text of the source
Abstract:
The comprehension of trabecular bone damage processes could be a crucial hint for understanding how bone damage starts and propagates. Currently, different approaches to bone damage identification could be followed. Clinical approaches start from the dual X-ray absorptiometry (DXA) technique, which can evaluate bone mineral density (BMD), an indirect indicator of fracture risk. DXA is, in fact, a two-dimensional technology, and BMD alone is not able to predict the effective risk of fractures. First attempts at overcoming this issue have been performed with finite element (FE) methods, combined with the use of three-dimensional high-resolution micro-computed tomographic images. The purpose of this work is to evaluate damage initiation and propagation in trabecular vertebral porcine samples using 2D linear-elastic FE models from DXA images and 3D linear FE models from micro-CT images. Results show that computed values of strains with 2D and 3D approaches (e.g., the minimum principal strain) are of the same order of magnitude. 2D DXA-based models still remain a powerful tool for a preliminary screening of trabecular regions that are prone to fracture, while from 3D micro-CT-based models it is possible to reach details that permit the localization of the most strained trabecula.
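The "minimum principal strain" compared between the 2D and 3D FE models above is simply the smallest eigenvalue of the symmetric strain tensor. A minimal illustration with an arbitrary 2D small-strain tensor (the values are not from the study):

```python
# Principal strains as eigenvalues of a symmetric 2D strain tensor.
import numpy as np

strain = np.array([[ 1.0e-3, 2.0e-4],
                   [ 2.0e-4, -3.0e-3]])     # illustrative small-strain tensor
principal = np.linalg.eigvalsh(strain)       # eigenvalues, sorted ascending
print("minimum principal strain:", principal[0])
```

In the FE post-processing, this quantity is evaluated element by element to flag the most compressed (most strained) trabecular regions.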
APA, Harvard, Vancouver, ISO, and other styles
21

JIA, JIN, and KEIICHI ABE. "RECOGNIZING 3D OBJECTS BY USING MODELS LEARNED AUTOMATICALLY FROM 2D TRAINING IMAGES". International Journal of Pattern Recognition and Artificial Intelligence 14, no. 03 (May 2000): 315–38. http://dx.doi.org/10.1142/s0218001400000210.

Full text of the source
Abstract:
A scheme for learning and recognizing 3D objects from their 2D views is presented. The scheme proceeds in two stages. In the first stage, we try to learn a prototype automatically from 2D training images of different objects which belong to the same object class and consequently have similar shapes of parts and similar adjacency relations between the parts. In the second stage, the generated prototype is used to recognize learned objects or the objects similar to the learned ones from images of complex real scenes. We tested the approach on recognizing some simple objects from images of indoor scenes and got satisfactory results.
APA, Harvard, Vancouver, ISO, and other styles
22

Upadhye, Gopal D., Anant Kaulage, Ranjeetsingh S. Suryawanshi, Roopal Tatiwar, Rishikesh Unawane, Aditya Taware, and Aryaman Todkar. "A Survey of Algorithms Involved in the Conversion of 2-D Images to 3-D Model". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 6s (June 12, 2023): 358–70. http://dx.doi.org/10.17762/ijritcc.v11i6s.6942.

Full text of the source
Abstract:
Since the advent of machine learning, deep neural networks, and computer graphics, the field of 2D image to 3D model conversion has made tremendous strides. As a result, many algorithms and methods for converting 2D to 3D images have been developed, including SFM, SFS, MVS, and PIFu. Several strategies have been compared, and it was found that each has pros and cons that make it appropriate for particular applications. For instance, SFM is useful for creating realistic 3D models from a collection of pictures, whereas SFS is best for doing so from a single image. While PIFu can create extremely detailed 3D models of human figures from a single image, MVS can manage complicated situations with varied lighting and texture. The method chosen to convert 2D images to 3D ultimately depends on the demands of the application.
APA, Harvard, Vancouver, ISO, and other styles
23

Cao, Ping, Jie Gao, and Zuping Zhang. "Multi-View Based Multi-Model Learning for MCI Diagnosis". Brain Sciences 10, no. 3 (March 20, 2020): 181. http://dx.doi.org/10.3390/brainsci10030181.

Full text of the source
Abstract:
Mild cognitive impairment (MCI) is the early stage of Alzheimer’s disease (AD). Automatic diagnosis of MCI by magnetic resonance imaging (MRI) images has been the focus of research in recent years. Furthermore, deep learning models based on 2D view and 3D view have been widely used in the diagnosis of MCI. The deep learning architecture can capture anatomical changes in the brain from MRI scans to extract the underlying features of brain disease. In this paper, we propose a multi-view based multi-model (MVMM) learning framework, which effectively combines the local information of 2D images with the global information of 3D images. First, we select some 2D slices from MRI images and extract the features representing 2D local information. Then, we combine them with the features representing 3D global information learned from 3D images to train the MVMM learning framework. We evaluate our model on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our proposed model can effectively recognize MCI through MRI images (accuracy of 87.50% for MCI/HC and accuracy of 83.18% for MCI/AD).
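The local-plus-global fusion described above can be illustrated without any particular network: compute features from a few selected 2D slices, compute features from the whole 3D volume, and concatenate. The slice statistics below are stand-ins for learned features, not the authors' MVMM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((8, 32, 32))       # toy 3D "MRI" volume: 8 axial slices

# 2D local information: simple statistics from a few selected slices.
slice_ids = [2, 4, 6]
local = np.concatenate([[volume[i].mean(), volume[i].std()]
                        for i in slice_ids])

# 3D global information: statistics over the whole volume.
global_feat = np.array([volume.mean(), volume.std(), volume.max()])

# Fused representation that a downstream classifier would consume.
fused = np.concatenate([local, global_feat])
```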
24

ALEKSANDROVA, O. "3D FACE MODEL RECONSTRUCTING FROM ITS 2D IMAGES USING NEURAL NETWORKS". Scientific papers of Donetsk National Technical University. Series: Informatics, Cybernetics and Computer Science 2 - №1, nr 33-34 (2022): 57–64. http://dx.doi.org/10.31474/1996-1588-2021-2-33-57-64.

Full text source
Abstract:
The most common methods for reconstructing 3D models of the face are reviewed, their quantitative performance is analyzed, and the most promising approach, the 3D Morphable Model, is highlighted. The need to modify it is substantiated: reconstruction results can be improved through principal component analysis and the use of a generative adversarial network (GAN). One advantage of using the 3D Morphable Model with principal component analysis is that it restricts the solution space to plausible solutions only, which simplifies the problem to be solved, whereas the original approach requires manual initialization. The GAN is to be applied to high-resolution UV maps serving as a statistical representation of facial texture; in this way, textures with high-frequency details can be reconstructed. The main result is an approach to creating three-dimensional face models from their two-dimensional images that requires the least time while achieving a satisfactory standard error. Directions for further research are identified.
25

Iwashita, Yumi, Ryo Kurazume, Kozo Konishi, Masahiko Nakamoto, Makoto Hashizume i Tsutomu Hasegawa. "Fast alignment of 3D geometrical models and 2D grayscale images using 2D distance maps". Systems and Computers in Japan 38, nr 14 (2007): 52–62. http://dx.doi.org/10.1002/scj.20634.

Full text source
26

Chrysochoou, Dimosthenis, Ariana Familiar, Deep Gandhi, Phillip B. Storm, Arastoo Vossough, Adam C. Resnick, Christos Davatzikos, Ali Nabavizadeh i Anahita Fathi Kazerooni. "IMG-08. SYNTHESIZING MISSING MRI SEQUENCES IN PEDIATRIC BRAIN TUMORS USING GENERATIVE ADVERSARIAL NETWORKS; TOWARDS IMPROVED VOLUMETRIC TUMOR ASSESSMENT". Neuro-Oncology 26, Supplement_4 (18.06.2024): 0. http://dx.doi.org/10.1093/neuonc/noae064.345.

Full text source
Abstract:
Abstract BACKGROUND Standard MRI sequences, such as pre- and post-contrast T1w, T2w, and FLAIR images, are essential for optimizing segmentation of tumor subregions and evaluating treatment responses in pediatric brain tumors (PBTs). However, MRI sets are often incomplete due to imaging artifacts or inconsistent acquisition protocols across centers. Generative Adversarial Networks (GANs) have been effectively utilized to generate missing MRI sequences for adult brain tumors. This study applies image-to-image translation models using GANs to synthesize missing FLAIR images from T2w images in PBTs. METHODS This retrospective study developed two GAN models, pix2pix in both 2D and 3D, trained and validated on T2w and FLAIR image pairs from 79 and 19 patients, respectively, with pediatric-type diffuse high-grade gliomas (diffuse midline glioma (DMG), including diffuse intrinsic pontine glioma (DIPG)), collected from the Children’s Brain Tumor Network (CBTN). The 2D pix2pix model processes each T2w MRI volume slice by slice, generating corresponding FLAIR slices to reconstruct the complete image volume. The 3D pix2pix model translates 3D FLAIR volumes directly from T2w volumes without individual slice processing and reconstruction. The quality of the generated FLAIR volumes was evaluated on the validation set using the Structural Similarity Index (SSI; best closer to 1) and Mean Squared Error (MSE; best near 0). RESULTS The 2D and 3D models achieved robust performance, with median SSI and MSE of 0.88 and 0.004 (2D) and 0.92 and 0.002 (3D), respectively. CONCLUSIONS The GAN models developed in this study effectively generate missing FLAIR MRI volumes from corresponding T2w images in PBTs. The 3D model, which outperformed the 2D model, also speeds up the overall processing pipeline by eliminating volume slicing and reconstruction. Future work includes assessing the impact of these synthesized images on the accuracy of our pretrained autosegmentation models in differentiating tumor subregions and measuring tumor volumes in longitudinal MRIs.
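The two evaluation metrics used above are standard and easy to reproduce. A minimal sketch of MSE and a single-window (global) SSIM; note the common SSIM implementation averages the same formula over local sliding windows:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images (best near 0)."""
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, data_range=1.0):
    """SSIM computed over the whole image as a single window,
    with the standard constants C1, C2 (best closer to 1)."""
    C1 = (0.01 * data_range) ** 2
    C2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + C1) * (2 * cov + C2)
    den = (mu_a ** 2 + mu_b ** 2 + C1) * (a.var() + b.var() + C2)
    return float(num / den)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
noisy = np.clip(img + rng.normal(0, 0.05, img.shape), 0, 1)
```

An identical image pair gives SSIM of exactly 1 and MSE of 0; the noisy pair scores strictly worse on both.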
27

Bobulski, J. "Multimodal face recognition method with two-dimensional hidden Markov model". Bulletin of the Polish Academy of Sciences Technical Sciences 65, nr 1 (1.02.2017): 121–28. http://dx.doi.org/10.1515/bpasts-2017-0015.

Full text source
Abstract:
The paper presents a new solution for face recognition based on two-dimensional hidden Markov models. The traditional HMM uses one-dimensional data vectors, which is a drawback for 2D and 3D image processing because part of the information is lost during the conversion to a one-dimensional feature vector. The paper presents the concept of a fully ergodic 2D HMM, which can be used in 2D and 3D face recognition. The experimental results demonstrate that the system based on two-dimensional hidden Markov models achieves a good recognition rate for 2D, 3D, and multimodal (2D+3D) face image recognition, and is faster than the ICP method.
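For context, the 1D HMM evaluation that the paper generalizes scores an observation sequence with the forward algorithm. A minimal sketch with toy probabilities (not the paper's face model):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: likelihood of an observation sequence under an HMM.
    pi: initial state probs (N,); A: transition matrix (N, N);
    B: emission matrix (N, M); obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by emission
    return float(alpha.sum())

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
p = forward(pi, A, B, [0, 1, 0])
```

A 2D HMM replaces the chain of states with a grid, so the single matrix product above becomes a considerably harder lattice computation.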
28

Wang, Feng, Weichuan Ni, Shaojiang Liu, Zhiming Xu, Zemin Qiu i Zhiping Wan. "A 2D image 3D reconstruction function adaptive denoising algorithm". PeerJ Computer Science 9 (3.10.2023): e1604. http://dx.doi.org/10.7717/peerj-cs.1604.

Full text source
Abstract:
To address the issue of image denoising algorithms blurring image details during the denoising process, we propose an adaptive denoising algorithm for the 3D reconstruction of 2D images. This algorithm takes into account the inherent visual characteristics of human eyes and divides the image into regions based on the entropy value of each region. The background region is subject to threshold denoising, while the target region undergoes processing using an adversarial generative network. This network effectively handles 2D target images with noise and generates a 3D model of the target. The proposed algorithm aims to enhance the noise immunity of 2D images during the 3D reconstruction process and ensure that the constructed 3D target model better preserves the original image’s detailed information. Through experimental testing on 2D images and real pedestrian videos contaminated with noise, our algorithm demonstrates stable preservation of image details. The reconstruction effect is evaluated in terms of noise reduction and the fidelity of the 3D model to the original target. The results show an average noise reduction exceeding 95% while effectively retaining most of the target’s feature information in the original image. In summary, our proposed adaptive denoising algorithm improves the 3D reconstruction process by preserving image details that are often compromised by conventional denoising techniques. This has significant implications for enhancing image quality and maintaining target information fidelity in 3D models, providing a promising approach for addressing the challenges associated with noise reduction in 2D images during 3D reconstruction.
29

Park, Sungsoo, i Hyeoncheol Kim. "3DPlanNet: Generating 3D Models from 2D Floor Plan Images Using Ensemble Methods". Electronics 10, nr 22 (9.11.2021): 2729. http://dx.doi.org/10.3390/electronics10222729.

Full text source
Abstract:
Research on converting 2D raster drawings into 3D vector data has a long history in the field of pattern recognition. Prior to the achievement of machine learning, existing studies were based on heuristics and rules. In recent years, there have been several studies employing deep learning, but a great effort was required to secure a large amount of data for learning. In this study, to overcome these limitations, we used 3DPlanNet Ensemble methods incorporating rule-based heuristic methods to learn with only a small amount of data (30 floor plan images). Experimentally, this method produced a wall accuracy of more than 95% and an object accuracy similar to that of a previous study using a large amount of learning data. In addition, 2D drawings without dimension information were converted into ground truth sizes with an accuracy of 97% or more, and structural data in the form of 3D models in which layers were divided for each object, such as walls, doors, windows, and rooms, were created. Using the 3DPlanNet Ensemble proposed in this study, we generated 110,000 3D vector data with a wall accuracy of 95% or more from 2D raster drawings end to end.
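The core geometric step in going from a 2D floor plan to a 3D model is extruding detected wall segments to a given height. A minimal sketch (the endpoints and height are made up; 3DPlanNet itself of course does far more, including learning the wall layout):

```python
def extrude_wall(p0, p1, height):
    """Extrude a 2D wall segment (floor-plan coordinates) into a vertical
    quad: two bottom vertices followed by the two top vertices."""
    (x0, y0), (x1, y1) = p0, p1
    return [(x0, y0, 0.0), (x1, y1, 0.0),
            (x1, y1, height), (x0, y0, height)]

# A 3 m wall segment extruded to a 2.5 m ceiling height.
quad = extrude_wall((0.0, 0.0), (3.0, 0.0), 2.5)
```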
30

Cashman, Thomas J., i Andrew W. Fitzgibbon. "What Shape Are Dolphins? Building 3D Morphable Models from 2D Images". IEEE Transactions on Pattern Analysis and Machine Intelligence 35, nr 1 (styczeń 2013): 232–44. http://dx.doi.org/10.1109/tpami.2012.68.

Full text source
31

Clemens, D. T., i D. W. Jacobs. "Space and time bounds on indexing 3D models from 2D images". IEEE Transactions on Pattern Analysis and Machine Intelligence 13, nr 10 (1991): 1007–17. http://dx.doi.org/10.1109/34.99235.

Full text source
32

Nicholson, Kristen F., R. Tyler Richardson, Freeman Miller i James G. Richards. "Determining 3D scapular orientation with scapula models and biplane 2D images". Medical Engineering & Physics 41 (marzec 2017): 103–8. http://dx.doi.org/10.1016/j.medengphy.2017.01.012.

Full text source
33

Wang, W. X., G. X. Zhong, J. J. Huang, X. M. Li i L. F. Xie. "INSTANCE SEGMENTATION OF 3D MESH MODEL BY INTEGRATING 2D AND 3D DATA". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-1/W2-2023 (14.12.2023): 1677–84. http://dx.doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1677-2023.

Full text source
Abstract:
Abstract. Buildings are an important part of the urban scene. In this paper, a novel instance segmentation framework for 3D mesh models of urban scenes is proposed. Unlike existing works focusing on semantic segmentation of urban scenes, this work focuses on detecting and segmenting 3D building instances even when they are attached and occluded in a large and imprecise 3D surface model. Multi-view images are first enhanced to RGBH images by adding a height map and are segmented with Mask R-CNN to obtain all roof instances. The 2D roof instances are then back-projected onto the 3D scene, and accurate 3D roof instances are obtained using a novel 3D clustering method and two post-processing steps that preserve the largest connected region and remove model ambiguity. Finally, the 2D convex hull of each 3D roof instance is calculated and the model is divided within that range into building instances. The performance of the proposed method is evaluated qualitatively and quantitatively using real UAV images and the corresponding 3D mesh models. Results reveal that the proposed method can effectively segment models of urban scenes into building instances: over-segmented masks are correctly clustered into roof instances, and under-segmented masks caused by image segmentation errors are eliminated.
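The RGBH enhancement described above amounts to appending a normalized height map as a fourth channel before running Mask R-CNN. A minimal sketch with toy arrays:

```python
import numpy as np

rgb = np.zeros((4, 4, 3), dtype=np.float32)              # toy RGB image
height = np.arange(16, dtype=np.float32).reshape(4, 4)   # per-pixel height map

# Normalize the height map to [0, 1] and stack it as a fourth channel.
h = (height - height.min()) / (height.max() - height.min())
rgbh = np.concatenate([rgb, h[..., None]], axis=-1)
```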
34

Al Khalil, O., i P. Grussenmeyer. "2D & 3D RECONSTRUCTION WORKFLOWS FROM ARCHIVE IMAGES, CASE STUDY OF DAMAGED MONUMENTS IN BOSRA AL-SHAM CITY (SYRIA)". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W15 (19.08.2019): 55–62. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w15-55-2019.

Full text source
Abstract:
The paper explores the possibilities of using old images for 2D and 3D documentation of archaeological monuments using open-source, free, and commercial photogrammetric software. The available images represent the external façade of the Western gate and the Al Omari Mosque in the city of Bosra al-Sham in Syria, which were severely damaged during the recent war. The images were captured with a consumer camera and were originally used for 2D documentation of each part of the gate separately. 2D control points were used to scale the digital photomosaic, and reference distances were applied for the scaling of the 3D models. Archive images were used to produce a 2D digital photomosaic of the monument by image rectification, and 3D dense point clouds by applying Structure from Motion (SfM) techniques. The geometric accuracy of the results has been assessed.
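Image rectification for a 2D photomosaic comes down to mapping pixels through a plane-induced 3x3 homography. A minimal point-mapping sketch; the matrix below is an arbitrary translation chosen as a sanity check, not one estimated from the archive images:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography (projective rectification)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

# A pure translation (+2 in x, -1 in y) expressed as a homography.
H = np.array([[1.0, 0.0,  2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  1.0]])
out = apply_homography(H, np.array([[0.0, 0.0], [1.0, 3.0]]))
```

In practice H is estimated from at least four 2D control-point correspondences between the image and the rectified plane.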
35

Barazzetti, L., M. Previtali i W. Rose. "FLATTENING COMPLEX ARCHITECTURAL SURFACES: PHOTOGRAMMETRIC 3D MODELS CONVERTED INTO 2D MAPS". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W4-2024 (14.02.2024): 33–40. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w4-2024-33-2024.

Full text source
Abstract:
Abstract. The paper describes a workflow to flatten 3D photogrammetric models of undevelopable surfaces into unfragmented 2D texture maps. The aim is to create a texture map with reduced fragmentation compared to typical photogrammetric texture files associated with 3D models. Geometric reformatting of the mesh is required to achieve an unfragmented final texture image with enough visual quality to allow for its use in 2D editing software (e.g., Photoshop, Illustrator, etc.). With a lighter model and defragmented texture, graphic documentation of conditions, treatments, or other relevant information can be performed directly. The approach considers both simple and complex architectural surfaces, with particular attention to elements that cannot be developed without introducing distortions. In the case of historic buildings, such surfaces constitute the majority, especially in the case of decorative elements and irregular vaulted systems. The approach extends to full 3D models, particularly those where orthomosaics would result in stretched details. We will discuss two methods: the first is particularly suitable when the photogrammetric project (oriented images) is still available, and the second applies to generic 3D models without the availability of the original images.
36

Li, Yinhai, Fei Wang i Xinhua Hu. "Deep-Learning-Based 3D Reconstruction: A Review and Applications". Applied Bionics and Biomechanics 2022 (15.09.2022): 1–6. http://dx.doi.org/10.1155/2022/3458717.

Full text source
Abstract:
In recent years, deep learning models have been widely used in 3D reconstruction and retrieval and have made remarkable progress. How to effectively manage the explosive growth of 3D models has become a research hotspot. This work surveys the mainstream deep-learning-based 3D model retrieval algorithms and weighs their advantages and disadvantages according to their evaluated performance. Depending on the query modality, the main 3D model retrieval algorithms can be divided into two categories: (1) model-based 3D retrieval methods, in which both the query and the retrieved objects are 3D models; these can be further divided into voxel-based, point-cloud-based, and view-based methods; and (2) cross-domain 3D model retrieval methods based on 2D images, in which the query is a 2D image and the retrieved result is a 3D model, including retrieval from 2D views, retrieval from 2D sketches, and image-based 3D model retrieval. Finally, the work analyzes emerging deep-learning-based 3D retrieval algorithms and outlines directions for future development.
37

Sinchuk, Yuriy, Stefan Dietrich, Matthias Merzkirch, Kay André Weidenmann i Romana Piat. "Micro-Computed Tomography Image Based Numerical Elastic Homogenization of MMCs". Key Engineering Materials 627 (wrzesień 2014): 437–40. http://dx.doi.org/10.4028/www.scientific.net/kem.627.437.

Full text source
Abstract:
Properties of an interpenetrating metal–ceramic composite with freeze-cast preforms are investigated. For the estimation of elastic properties of the composite numerical homogenization approaches for 2D and 3D finite element models are implemented. The FE models are created based on micro-computed tomography (μCT) images. The results of the numerical 2D and 3D modeling coincide and are in good agreement with available experimental measurements of elastic properties.
38

Artamonova, N. B., S. V. Sheshenin, E. A. Orlov, Zhou Bichen, J. V. Frolova i I. R. Khamidullin. "Calculation of effective properties of geocomposites based on computed tomography images". PNRPU Mechanics Bulletin, nr 3 (15.12.2022): 83–94. http://dx.doi.org/10.15593/perm.mech/2022.3.09.

Full text source
Abstract:
Biot’s parameter is included in the formula for calculating effective stresses and should be taken into account when assessing the stress-strain state of a water-saturated rock mass. A method for calculating Biot’s tensor parameter based on asymptotic averaging of the equilibrium equation for a fluid-saturated porous medium is proposed. Calculations of elastic properties and Biot’s coefficient were carried out on various types of rocks: limestone, dolomite, hyaloclastite, and basalt. The calculations used 3D models of geocomposites built from X-ray computed tomography images. The results of 3D calculations of Young's modulus and Biot’s coefficient coincided with experimental determinations of these properties by the ultrasonic method. This shows the expediency of a computational approach that implements asymptotic averaging to estimate effective properties using 3D models of the real rock structure. The results of 3D and 2D simulation of the effective properties of geocomposites are compared. The 2D models were built from photographs of rock sections. It was found that the values of Young's modulus and Biot’s parameter for 2D models differ from the corresponding experimental and 3D calculation results by 20-30%. Therefore, 2D modeling is not suitable for evaluating the effective properties of porous geomaterials. Based on the results of calculations and experiments, the dependences of Young's modulus and Biot’s coefficient on porosity were obtained and studied. These dependencies are used in non-linear numerical simulation of the consolidation of water-saturated soils. The results of the study showed that Biot’s coefficient does not depend on Young's modulus of the rock skeleton material. The influence of pore shape on Young's modulus and Biot’s coefficient was studied using 2D calculations. A method for predicting pore shape from porosity and Young's modulus using neural networks is proposed; using this method, a specific algorithm for predicting the pore shape of hyaloclastites has been implemented and studied.
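For orientation, the classical isotropic form of Biot's coefficient relates the drained skeleton and solid-grain bulk moduli; the paper computes a tensor version by asymptotic homogenization, but the scalar sketch below shows the quantity being generalized (the moduli values are purely illustrative):

```python
def biot_coefficient(K_dry, K_solid):
    """Classical isotropic Biot coefficient b = 1 - K_dry / K_solid,
    with K_dry the drained bulk modulus of the skeleton and K_solid
    the bulk modulus of the solid grains."""
    return 1.0 - K_dry / K_solid

b = biot_coefficient(K_dry=20.0, K_solid=36.0)   # illustrative moduli, GPa
```

A stiffer skeleton (K_dry approaching K_solid) drives b toward 0; a highly compliant skeleton drives it toward 1.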
39

Tang, Yanlong, Yun Zhang, Xiaoguang Han, Fang-Lue Zhang, Yu-Kun Lai i Ruofeng Tong. "3D corrective nose reconstruction from a single image". Computational Visual Media 8, nr 2 (6.12.2021): 225–37. http://dx.doi.org/10.1007/s41095-021-0237-5.

Full text source
Abstract:
There is a steadily growing range of applications that can benefit from facial reconstruction techniques, leading to an increasing demand for reconstruction of high-quality 3D face models. While it is an important expressive part of the human face, the nose has received less attention than other expressive regions in the face reconstruction literature. When applying existing reconstruction methods to facial images, the reconstructed nose models are often inconsistent with the desired shape and expression. In this paper, we propose a coarse-to-fine 3D nose reconstruction and correction pipeline to build a nose model from a single image, where 3D and 2D nose curve correspondences are adaptively updated and refined. We first correct the reconstruction result coarsely using constraints of 3D-2D sparse landmark correspondences, and then heuristically update a dense 3D-2D curve correspondence based on the coarsely corrected result. A final refinement step is performed to correct the shape based on the updated 3D-2D dense curve constraints. Experimental results show the advantages of our method for 3D nose reconstruction over existing methods.
40

Badr, Ahmed Mangoud, Wael M. Mubarak Refai, Mohamed Gaber El-Shal i Ahmed Nasef Abdelhameed. "Accuracy and Reliability of Kinect Motion Sensing Input Device’s 3D Models: A Comparison to Direct Anthropometry and 2D Photogrammetry". Open Access Macedonian Journal of Medical Sciences 9, nr D (14.05.2021): 54–60. http://dx.doi.org/10.3889/oamjms.2021.6006.

Full text source
Abstract:
AIM: This study aims to evaluate the accuracy and reliability of the Kinect motion sensing input device’s three-dimensional (3D) models by comparing them with direct anthropometry and digital two-dimensional (2D) photogrammetry. MATERIALS AND METHODS: Six profile and four frontal parameters were directly measured on the faces of 80 participants. The same measurements were repeated using 2D photogrammetry and 3D images obtained from the Kinect device. Another observer made the same measurements for 30% of the images obtained with the 3D technique, and interobserver reproducibility was evaluated for 3D images. Intraobserver reproducibility was also evaluated. Statistical analysis was conducted using the paired samples t-test, the interclass correlation coefficient, and Bland-Altman limits of agreement. RESULTS: The highest mean difference was 0.0084 mm between direct measurement and photogrammetry, 0.027 mm between direct measurement and the 3D Kinect models, and 0.018 mm between photogrammetry and the 3D Kinect models. The lowest agreement value among all parameters was 0.016, between the photogrammetry and 3D Kinect methods. Agreement between the two observers varied from 0.999 (Sn-Me) to 1 for the rest of the linear measurements. CONCLUSION: Measurements made on 3D images obtained from the Kinect device indicate that it may be an accurate and reliable imaging method for use in orthodontics. It also provides an easy, low-cost 3D imaging technique that has become increasingly popular in clinical settings, offering advantages for surgical planning and outcome evaluation.
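The Bland-Altman limits of agreement used in the statistical analysis above are straightforward to compute: the bias (mean difference) plus or minus 1.96 times the standard deviation of the paired differences. A minimal sketch on made-up paired measurements:

```python
import numpy as np

def bland_altman_limits(m1, m2):
    """Bland-Altman 95% limits of agreement between two measurement
    methods: mean difference +/- 1.96 * SD of the paired differences."""
    d = np.asarray(m1, dtype=float) - np.asarray(m2, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired measurements (mm) from two methods.
lo, hi = bland_altman_limits([10.0, 10.2, 9.9, 10.1],
                             [10.1, 10.0, 10.0, 10.0])
```

If roughly 95% of the differences fall inside (lo, hi) and that interval is clinically acceptable, the two methods are considered interchangeable.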
41

Mehranfar, M., H. Arefi i F. Alidoost. "A PROJECTION-BASED RECONSTRUCTION ALGORITHM FOR 3D MODELING OF BRIDGE STRUCTURES FROM DRONE-BASED POINT CLOUD". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVI-4/W1-2021 (3.09.2021): 77–83. http://dx.doi.org/10.5194/isprs-archives-xlvi-4-w1-2021-77-2021.

Full text source
Abstract:
Abstract. This paper presents a projection-based method for 3D bridge modeling using dense point clouds generated from drone-based images. The proposed workflow consists of hierarchical steps including point cloud segmentation, modeling of individual elements, and merging of individual models to generate the final 3D model. First, a fuzzy clustering algorithm including the height values and geometrical-spectral features is employed to segment the input point cloud into the main bridge elements. In the next step, a 2D projection-based reconstruction technique is developed to generate a 2D model for each element. Next, the 3D models are reconstructed by extruding the 2D models orthogonally to the projection plane. Finally, the reconstruction process is completed by merging individual 3D models and forming an integrated 3D model of the bridge structure in a CAD format. The results demonstrate the effectiveness of the proposed method to generate 3D models automatically with a median error of about 0.025 m between the elements’ dimensions in the reference and reconstructed models for two different bridge datasets.
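The "extrude the 2D model orthogonally to the projection plane" step above has a simple geometric reading: the resulting solid's volume is the cross-section's shoelace area times the extrusion depth. A minimal sketch with a toy rectangular cross-section (not a real bridge element):

```python
def polygon_area(pts):
    """Shoelace area of a simple 2D polygon (vertices in order)."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def extruded_volume(pts, depth):
    """Volume of the prism obtained by extruding the 2D cross-section
    orthogonally to the projection plane by `depth`."""
    return polygon_area(pts) * depth

# A 4 x 2 cross-section extruded 10 units along the projection axis.
v = extruded_volume([(0, 0), (4, 0), (4, 2), (0, 2)], depth=10.0)
```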
42

Liu, Xiaoyang. "Research on 3D Object Reconstruction Method based on Deep Learning". Highlights in Science, Engineering and Technology 39 (1.04.2023): 1221–27. http://dx.doi.org/10.54097/hset.v39i.6732.

Full text source
Abstract:
3D reconstruction is a classic task in the field of computer graphics. More and more researchers are trying to replicate the success of deep learning in 2D image processing tasks in 3D reconstruction tasks, so research on deep-learning-based 3D reconstruction has gradually become a hotspot. Compared with traditional 3D reconstruction methods, which require precision acquisition equipment and strictly calibrated image information, deep-learning-based 3D reconstruction matches 2D images to 3D models through deep neural networks and can quickly reconstruct 3D models of many categories of objects from RGB images obtained with ordinary acquisition equipment. This paper introduces the state of the art of 3D voxel reconstruction, 3D point cloud reconstruction, and 3D mesh reconstruction. According to the different representation methods of 3D objects, deep-learning-based 3D object reconstruction methods are classified and reviewed, the characteristics and shortcomings of existing methods are analyzed, and three important research trends are summarized.
43

Nwaizu, Charles Chioma, Charles Chioma Nwaizu, Qiang Zhang, Christiana Iluno, Qiang Zhang i Christiana Iluno. "3D Pore Structure Characterization of Stored Grain Bed". Applied Engineering in Agriculture 38, nr 6 (2022): 941–50. http://dx.doi.org/10.13031/aea.15133.

Full text source
Abstract:
Highlights: An image analysis technique for reconstructing the 3D pore structure within bulk grain is presented; mathematical models for porosity and tortuosity were developed from the 3D reconstructed images; these models can be incorporated into computational models of flow through bulk grains. Abstract. An image analysis technique for reconstructing the complex 3D pore structure within bulk grain from 2D section images is presented. The technique relies on aligning successive 2D images of cut sections obtained from colored-wax-solidified soybean grain beds, which were then processed using the ImageJ software developed by the National Institutes of Health (NIH, Bethesda, Md.) to reconstruct and visualize different airflow paths within the bulk grain. Porosity and tortuosity values were quantified from the 3D image volume and the 3D reconstructed interconnected airflow paths to develop empirical mathematical models for predicting porosity and tortuosity as a function of compaction due to the pressure exerted by the grain depth. Results indicated that the rate of decrease in porosity was higher at lower compaction grain depths and gradually approached a minimum value as the compaction grain depth increased. At the top of the compacted grain, the porosity of the tested soybean bed was 0.42; it decreased to 0.34 at a compaction pressure of 14.2 kPa (equivalent to a compaction grain depth of 25 m). Tortuosity increased with compaction pressure from 1.15 to 1.58 at 14.2 kPa (equivalent to 25 m of grain depth), or by 37.4%. Keywords: Grain bed, Image analysis, Pore structure, Porosity, Tortuosity.
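Both quantities modeled above have direct voxel-level definitions on a segmented volume: porosity is the pore-voxel fraction, and tortuosity is the ratio of an airflow path's length to the straight-line distance it spans. A minimal sketch on a toy binary volume (not the soybean data):

```python
import numpy as np

def porosity(binary_volume):
    """Fraction of pore voxels (value 1) in a segmented 3D image."""
    return float(np.count_nonzero(binary_volume)) / binary_volume.size

def tortuosity(path_length, straight_distance):
    """Ratio of an airflow path's length to the straight-line distance."""
    return path_length / straight_distance

vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[:2, :, :] = 1                  # half the voxels are pore space
phi = porosity(vol)
tau = tortuosity(path_length=1.58, straight_distance=1.0)
```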
44

Tomono, Masahiro. "3D Object Modeling and Segmentation Using Image Edge Points in Cluttered Environments". Journal of Robotics and Mechatronics 21, nr 6 (20.12.2009): 672–79. http://dx.doi.org/10.20965/jrm.2009.p0672.

Full text source
Abstract:
Object models are indispensable for robots to recognize objects when conducting tasks. This paper proposes a method of creating object models from images captured in real environments using a monocular camera. In our framework, an object model consists of a 3D model composed of 3D points reconstructed from image edge points, and 2D models composed of image edge points, each having a SIFT descriptor for object recognition. To address the difficulty of separating objects from background clutter when creating object models, we separate the object of interest by finding edge points that co-occur in images with different backgrounds. We employ supervised and unsupervised schemes to provide training images for segmentation. Experimental results demonstrate that detailed 3D object models are successfully separated and created.
45

Soh, Jung, Mei Xiao, Thao Do, Oscar Meruvia-Pastor i Christoph W. Sensen. "Integrative Visualization of Temporally Varying Medical Image Patterns". Journal of Integrative Bioinformatics 8, nr 2 (1.06.2011): 75–84. http://dx.doi.org/10.1515/jib-2011-161.

Full text source
Abstract:
Summary We have developed a tool for the visualization of temporal changes of disease patterns, using stacks of medical images collected in time-series experiments. With this tool, users can generate 3D surface models representing disease patterns and observe changes over time in size, shape, and location of clinically significant image patterns. Statistical measurements of the volume of the observed disease patterns can be performed simultaneously. Spatial data integration occurs through the combination of 2D slices of an image stack into a 3D surface model. Temporal integration occurs through the sequential visualization of the 3D models from different time points. Visual integration enables the tool to show 2D images, 3D models and statistical data simultaneously. As an example, the tool has been used to visualize brain MRI scans of several multiple sclerosis patients. It has been developed in Java™, to ensure portability and platform independence, with a user-friendly interface and can be downloaded free of charge for academic users.
46

Adam, A., E. Chatzilari, S. Nikolopoulos i I. Kompatsiaris. "H-RANSAC: A HYBRID POINT CLOUD SEGMENTATION COMBINING 2D AND 3D DATA". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2 (28.05.2018): 1–8. http://dx.doi.org/10.5194/isprs-annals-iv-2-1-2018.

Full text source
Abstract:
In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provides more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
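For reference, the plain RANSAC plane fitting that H-RANSAC extends can be sketched in a few lines; the 2D consistency criterion from the paper is omitted here, and the point cloud below is synthetic:

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane n . p + d = 0 to a 3D point cloud with RANSAC.
    Returns the best (n, d) and a boolean inlier mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

# Synthetic cloud: noisy points on the plane z = 1 plus three gross outliers.
rng = np.random.default_rng(1)
plane_pts = np.column_stack([
    rng.uniform(-1, 1, 100),
    rng.uniform(-1, 1, 100),
    1.0 + rng.normal(0, 0.01, 100),
])
pts = np.vstack([plane_pts, [[5, 5, 9], [-4, 2, 7], [3, -6, 8]]])
(n, d), inliers = ransac_plane(pts)
```

H-RANSAC additionally rejects candidate planes whose inliers straddle different 2D segmentation regions, which is what separates adjacent co-planar surfaces.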
47

Jin, Craig, Amelia Gully, Michael I. Proctor, Kirrie Ballard, Sheryl Foster, Tharinda Piyadasa i Yaoyao Yue. "Exploring the relationship between real-time midsagittal images of the vocal tract and volumetric data". Journal of the Acoustical Society of America 154, nr 4_supplement (1.10.2023): A244. http://dx.doi.org/10.1121/10.0023428.

Full text of the source
Abstract:
3D analysis of the vocal tract using dynamic MRI remains a technically difficult challenge. Various approaches have been explored, such as using parametric models of the vocal tract (Yehia et al., 1997); integrating data across parallel slices of 2D dynamic data (Zhu et al., 2012); applying stack-of-spiral MRI sampling with 3D constrained reconstruction (Zhao et al., 2020); and combining static 3D and dynamic 2D data (Douros et al., 2019). In this work, we follow a similar approach to Douros et al. and explore the relationship between 2D real-time midsagittal images and 3D volumetric scans of the vocal tract. The real-time MRI midsagittal images are recorded during vowel-consonant-vowel vocal tasks, while the 3D volumetric scans are recorded during sustained vowels. We use large deformation diffeomorphic metric mapping as the foundation for this modeling work. We explore techniques to use constraints provided by the real-time MRI midsagittal images to enable smooth deformations of the 3D volumetric data. We focus on the feasibility of the methods and report on the types of constraints explored and the resulting deformations of the 3D volumetric data.
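The core constraint idea — force the 3D volume's midsagittal slice to match a 2D real-time frame, and propagate the correction smoothly into neighbouring slices — can be illustrated with a toy sketch. This is a deliberate simplification: the paper uses large deformation diffeomorphic metric mapping, not the Gaussian blending below, and all names are illustrative.

```python
import numpy as np

def constrain_midsagittal(volume, target_slice, sigma=1.0):
    """Blend the (target - midsagittal) correction into nearby slices
    with Gaussian weights so the deformation stays smooth."""
    out = volume.astype(float)
    mid = volume.shape[0] // 2
    correction = target_slice - volume[mid]
    for i in range(volume.shape[0]):
        w = np.exp(-((i - mid) ** 2) / (2 * sigma ** 2))
        out[i] += w * correction
    return out

vol = np.zeros((5, 4, 4))          # toy 3D volumetric scan
target = np.ones((4, 4))           # toy real-time midsagittal frame
warped = constrain_midsagittal(vol, target)
print(np.allclose(warped[2], target))  # True
```

The midsagittal slice matches the 2D constraint exactly, while off-midline slices receive progressively smaller corrections.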
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Nguyen, Duy M. H., Hoang Nguyen, Truong T. N. Mai, Tri Cao, Binh T. Nguyen, Nhat Ho, Paul Swoboda, Shadi Albarqouni, Pengtao Xie and Daniel Sonntag. "Joint Self-Supervised Image-Volume Representation Learning with Intra-inter Contrastive Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (26.06.2023): 14426–35. http://dx.doi.org/10.1609/aaai.v37i12.26687.

Full text of the source
Abstract:
Collecting large-scale medical datasets with fully annotated samples for training of deep networks is prohibitively expensive, especially for 3D volume data. Recent breakthroughs in self-supervised learning (SSL) offer the ability to overcome the lack of labeled training samples by learning feature representations from unlabeled data. However, most current SSL techniques in the medical field have been designed for either 2D images or 3D volumes. In practice, this restricts the capability to fully leverage unlabeled data from numerous sources, which may include both 2D and 3D data. Additionally, the use of these pre-trained networks is constrained to downstream tasks with compatible data dimensions. In this paper, we propose a novel framework for unsupervised joint learning on 2D and 3D data modalities. Given a set of 2D images or 2D slices extracted from 3D volumes, we construct an SSL task based on a 2D contrastive clustering problem for distinct classes. The 3D volumes are exploited by computing a vectored embedding at each slice and then assembling a holistic feature through deformable self-attention mechanisms in a Transformer, allowing the model to incorporate long-range dependencies between slices inside 3D volumes. These holistic features are further utilized to define a novel 3D clustering agreement-based SSL task and a masked embedding prediction task inspired by pre-trained language models. Experiments on downstream tasks, such as 3D brain segmentation, lung nodule detection, 3D heart structures segmentation, and abnormal chest X-ray detection, demonstrate the effectiveness of our joint 2D and 3D SSL approach. We improve plain 2D Deep-ClusterV2 and SwAV by a significant margin and also surpass various modern 2D and 3D SSL approaches.
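The slice-to-volume assembly step in this abstract — pooling per-slice embeddings into one holistic volume feature — can be sketched with simple attention pooling. This is a stand-in for the paper's deformable self-attention: the pooling rule, shapes, and weights below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def holistic_volume_feature(slice_embeddings, w_query):
    """Attention-pool per-slice embeddings into one volume-level feature,
    letting every slice contribute according to a learned score."""
    scores = slice_embeddings @ w_query   # (n_slices,)
    weights = softmax(scores)             # sums to 1 across slices
    return weights @ slice_embeddings     # (dim,)

rng = np.random.default_rng(1)
slices = rng.normal(size=(16, 8))   # 16 slices, each with an 8-D embedding
w = rng.normal(size=8)              # stand-in for a learned query vector
feat = holistic_volume_feature(slices, w)
print(feat.shape)  # (8,)
```

The resulting volume-level feature is what the 3D clustering-agreement and masked-embedding-prediction SSL tasks would operate on.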
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Conchello, José-Angel, Joanne Markham and James G. McNally. "All Models are Wrong: An Overview of 3D Deconvolution Methods". Microscopy and Microanalysis 3, S2 (August 1997): 375–76. http://dx.doi.org/10.1017/s143192760000876x.

Full text of the source
Abstract:
Three-dimensional (3D) microscopy is a powerful tool for the visualization of biological specimens and processes. In 3D microscopy, a 3D image is collected by recording a series of two-dimensional (2D) images while focusing the microscope at different planes through the specimen. Each 2D optical slice in this through-focus series contains the in-focus information plus contributions from out-of-focus structures that obscure the image and reduce its contrast. There are two complementary approaches to reduce or ameliorate the effects of the out-of-focus contributions: optical and computational. In the optical approach, a microscope is used that avoids collecting out-of-focus light, such as a confocal microscope (see the references therein), a two-photon or three-photon fluorescence excitation microscope, or a two-sided microscope. In the computational approach, the through-focus series is processed in a computer using any of a number of deblurring algorithms to reduce or ameliorate the out-of-focus contributions. In the past two decades, several methods for deblurring microscopic images have been developed whose common aim is to undo the degradations introduced by the process of image formation and recording.
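One of the classic iterative deblurring algorithms that overviews like this survey is Richardson-Lucy deconvolution. A minimal 1D sketch follows; a real microscope deconvolution would work on 3D stacks with a measured point spread function (PSF), and the PSF and signal here are synthetic.

```python
import numpy as np

def richardson_lucy(blurred, psf, iterations=30):
    """Richardson-Lucy restoration: iteratively rescale the estimate by
    the blurred/reblurred ratio, back-projected through the flipped PSF."""
    estimate = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# A sharp spike blurred by a 3-tap PSF, then restored.
psf = np.array([0.25, 0.5, 0.25])
sharp = np.zeros(15)
sharp[7] = 1.0
blurred = np.convolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=200)
print(int(restored.argmax()))  # 7
```

The restored signal re-concentrates the spread-out intensity back at the original spike, which is exactly the "undo the degradation" aim the abstract describes.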
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Tahir, Rohan, Allah Bux Sargano and Zulfiqar Habib. "Voxel-Based 3D Object Reconstruction from Single 2D Image Using Variational Autoencoders". Mathematics 9, no. 18 (17.09.2021): 2288. http://dx.doi.org/10.3390/math9182288.

Full text of the source
Abstract:
In recent years, learning-based approaches for 3D reconstruction have gained much popularity due to their encouraging results. However, unlike 2D images, 3D data has no canonical representation that makes it computationally lean and memory-efficient. Moreover, generating a 3D model directly from a single 2D image is even more challenging because of the limited detail the image provides for 3D reconstruction. Existing learning-based techniques still lack the resolution, efficiency, and smoothness of the 3D models required for many practical applications. In this paper, we propose voxel-based 3D object reconstruction (V3DOR) from a single 2D image for better accuracy, with one model using autoencoders (AE) and another using variational autoencoders (VAE). The encoder part of both models learns a suitable compressed latent representation from a single 2D image, and a decoder generates a corresponding 3D model. Our contribution is twofold. First, to the best of the authors' knowledge, this is the first time that variational autoencoders (VAE) have been employed for the 3D reconstruction problem. Second, the proposed models extract a discriminative set of features and generate smoother, higher-resolution 3D models. To evaluate the efficacy of the proposed method, experiments were conducted on the benchmark ShapeNet data set. The results confirm that the proposed method outperforms state-of-the-art methods.
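The image-to-latent-to-voxel pipeline described here, including the VAE reparameterization trick, can be sketched as a forward pass. All shapes and weights below are illustrative stand-ins (a 16x16 image, an 8-D latent, a 4x4x4 occupancy grid); a real model would learn its weights on ShapeNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-in weights: encoder outputs mean and log-variance (8 each),
# decoder maps an 8-D latent to 64 voxel occupancy logits.
W_enc = rng.normal(scale=0.1, size=(16 * 16, 16))
W_dec = rng.normal(scale=0.1, size=(8, 4 * 4 * 4))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(image):
    h = image.reshape(-1) @ W_enc
    return h[:8], h[8:]                        # mu, log_var

def reparameterize(mu, log_var):
    # The VAE trick: sample z differentiably as mu + sigma * eps.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    return sigmoid(z @ W_dec).reshape(4, 4, 4)  # voxel occupancy probs

image = rng.uniform(size=(16, 16))
mu, log_var = encode(image)
voxels = decode(reparameterize(mu, log_var))
print(voxels.shape)  # (4, 4, 4)
```

The plain AE variant skips `reparameterize` and decodes the encoder output directly; the sampled latent is what gives the VAE its smoother generative behavior.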
Styles: APA, Harvard, Vancouver, ISO, etc.