Journal articles on the topic "Images 2D"

Follow this link to see other types of publications on the topic: Images 2D.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the 50 best journal articles for studies on the topic "Images 2D".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.

Browse journal articles from a wide variety of scientific fields and compile your bibliography correctly.

1

Jung, Sukwoo, Seunghyun Song, Minho Chang, and Sangchul Park. "Range image registration based on 2D synthetic images". Computer-Aided Design 94 (January 2018): 16–27. http://dx.doi.org/10.1016/j.cad.2017.08.001.

2

Tsalafoutas, Ioannis A., Angeliki C. Epistatou, and Konstantinos K. Delibasis. "Image Quality Comparison between Digital Breast Tomosynthesis Images and 2D Mammographic Images Using the CDMAM Test Object". Journal of Imaging 8, no. 8 (August 21, 2022): 223. http://dx.doi.org/10.3390/jimaging8080223.

Abstract:
To evaluate the image quality (IQ) of synthesized two-dimensional (s2D) and tomographic layer (TL) mammographic images in comparison to the 2D digital mammographic images produced with a new digital breast tomosynthesis (DBT) system. Methods: The CDMAM test object was used for IQ evaluation of actual 2D images, s2D and TL images, acquired using all available acquisition modes. Evaluation was performed automatically using the commercial software that accompanied CDMAM. Results: The IQ scores of the TLs with the in-focus CDMAM were comparable, although usually inferior to those of 2D images acquired with the same acquisition mode, and better than the respective s2D images. The IQ results of TLs satisfied the EUREF limits applicable to 2D images, whereas for s2D images this was not the case. The use of high-dose mode (H-mode), instead of normal-dose mode (N-mode), increased the image quality of both TL and s2D images, especially when the standard mode (ST) was used. Although the high-resolution (HR) mode produced TL images of similar or better image quality compared to ST mode, HR s2D images were clearly inferior to ST s2D images. Conclusions: s2D images present inferior image quality compared to 2D and TL images. The HR mode produces TL images and s2D images with half the pixel size and requires a 25% increase in average glandular dose (AGD). Despite that, IQ evaluation results with CDMAM are in favor of HR resolution mode only for TL images and mainly for smaller-sized details.
3

Plattard, Delphine, Marine Soret, Jocelyne Troccaz, Patrick Vassal, Jean-Yves Giraud, Guillaume Champleboux, Xavier Artignan, and Michel Bolla. "Patient Set-Up Using Portal Images: 2D/2D Image Registration Using Mutual Information". Computer Aided Surgery 5, no. 4 (January 2000): 246–62. http://dx.doi.org/10.3109/10929080009148893.

4

Kim, Jin-Mo, Jong-Yoon Kim, and Hyung-Je Cho. "Warping of 2D Facial Images Using Image Interpolation by Triangle Subdivision". Journal of Korea Game Society 14, no. 2 (April 20, 2014): 55–66. http://dx.doi.org/10.7583/jkgs.2014.14.2.55.

5

JIANG, C. F. "3D IMAGE RECONSTRUCTION OF OVARIAN TUMOR IN THE ULTRASONIC IMAGES". Biomedical Engineering: Applications, Basis and Communications 13, no. 02 (April 25, 2001): 93–98. http://dx.doi.org/10.4015/s1016237201000121.

Abstract:
The prevalence of ovarian tumor malignancy can be monitored by the degree of irregularity in the ovarian contour and by the septal structure inside the tumor observed in ultrasonic images. However, the 2D ultrasonic images cannot integrate 3D information from the ovarian tumor. In this paper, we present an algorithm that can render the 3D image of an ovarian tumor by reconstructing the 2D ultrasonic images into a 3D data set. This is based on sequential boundary detection in a series of 2D images to form a 3D tumor contour. This contour is then used as a barrier to remove the data containing the other tissue adhering to the tumor surface. The final 3D image rendered by the isolated data provides a clear view of both the surface and inner structure of the ovarian tumor.
6

KOVALEVSKY, VLADIMIR. "CURVATURE IN DIGITAL 2D IMAGES". International Journal of Pattern Recognition and Artificial Intelligence 15, no. 07 (November 2001): 1183–200. http://dx.doi.org/10.1142/s0218001401001283.

Abstract:
The paper presents an analysis of sources of errors when estimating derivatives of numerical or noisy functions. A method of minimizing the errors is suggested. When being applied to the estimation of the curvature of digital curves, the analysis shows that under the conditions typical for digital image processing the curvature can rarely be estimated with a precision higher than 50%. Ways of overcoming the difficulties are discussed and a new method for estimating the curvature is suggested and investigated as to its precision. The method is based on specifying boundaries of regions in gray value images with subpixel precision. The method has an essentially higher precision than the known methods.
7

Kohnen, James B. "Images of Organization. 2d ed". Quality Management Journal 5, no. 2 (January 1998): 117. http://dx.doi.org/10.1080/10686967.1998.11918859.

8

Kaczmarek, K., B. Walczak, S. de Jong, and B. G. M. Vandeginste. "Matching 2D Gel Electrophoresis Images". Journal of Chemical Information and Computer Sciences 43, no. 3 (May 2003): 978–86. http://dx.doi.org/10.1021/ci0256337.

9

Bae, Kitae, and Hyoungjin Kim. "Optimal Point Correspondence for Image Registration in 2D Images". International Journal of Multimedia and Ubiquitous Engineering 8, no. 6 (November 30, 2013): 127–40. http://dx.doi.org/10.14257/ijmue.2013.8.6.13.

10

Wang, Yong Sheng. "Fast 3D Human Face Modeling Method Based on Multiple View 2D Images". Applied Mechanics and Materials 273 (January 2013): 796–99. http://dx.doi.org/10.4028/www.scientific.net/amm.273.796.

Abstract:
This paper presents a novel approach to model a 3D human face from multiple-view 2D images in a fast mode. Our proposed method mainly includes three steps: 1) face recognition from 2D images, 2) converting 2D images to 3D images, 3) modeling the 3D human face. To extract visual features of both 2D and 3D images, visual features adopted in 3D are described by Point Signature, and visual features utilized in 2D are represented by Gabor filter responses. Afterwards, the 3D model is obtained by combining multiple-view 2D images through calculating the projection vector and translation vector. Experimental results show that our method can model a 3D human face with high accuracy and efficiency.
11

Holzleitner, Iris J., Alex L. Jones, Kieran J. O’Shea, Rachel Cassar, Vanessa Fasolt, Victor Shiramizu, Benedict C. Jones, and Lisa M. DeBruine. "Do 3D Face Images Capture Cues of Strength, Weight, and Height Better than 2D Face Images do?" Adaptive Human Behavior and Physiology 7, no. 3 (August 26, 2021): 209–19. http://dx.doi.org/10.1007/s40750-021-00170-8.

Abstract:
Objectives: A large literature exists investigating the extent to which physical characteristics (e.g., strength, weight, and height) can be accurately assessed from face images. While most of these studies have employed two-dimensional (2D) face images as stimuli, some recent studies have used three-dimensional (3D) face images because they may contain cues not visible in 2D face images. As equipment required for 3D face images is considerably more expensive than that required for 2D face images, we here investigated how perceptual ratings of physical characteristics from 2D and 3D face images compare. Methods: We tested whether 3D face images capture cues of strength, weight, and height better than 2D face images do by directly comparing the accuracy of strength, weight, and height ratings of 182 2D and 3D face images taken simultaneously. Strength, height and weight were rated by 66, 59 and 52 raters respectively, who viewed both 2D and 3D images. Results: In line with previous studies, we found that weight and height can be judged somewhat accurately from faces; contrary to previous research, we found that people were relatively inaccurate at assessing strength. We found no evidence that physical characteristics could be judged more accurately from 3D than 2D images. Conclusion: Our results suggest physical characteristics are perceived with similar accuracy from 2D and 3D face images. They also suggest that the substantial costs associated with collecting 3D face scans may not be justified for research on the accuracy of facial judgments of physical characteristics.
12

Li, Mingchao, Yerui Chen, Zexuan Ji, Keren Xie, Songtao Yuan, Qiang Chen, and Shuo Li. "Image Projection Network: 3D to 2D Image Segmentation in OCTA Images". IEEE Transactions on Medical Imaging 39, no. 11 (November 2020): 3343–54. http://dx.doi.org/10.1109/tmi.2020.2992244.

13

Shah, Jalpa, and JS Dhobi. "REVIEW OF IMAGE ENCRYPTION AND DECRYPTION TECHNIQUES FOR 2D IMAGES". International Journal of Engineering Technologies and Management Research 5, no. 1 (February 7, 2020): 81–84. http://dx.doi.org/10.29121/ijetmr.v5.i1.2018.49.

Abstract:
In the emerging era of the Internet and multimedia applications, the security of images is a major concern. Encryption is the way to offer robust security to these images. With image encryption it becomes difficult to analyze an image communicated over an untrusted network; it also protects against unauthorized access. The paper provides an introduction to cryptography, and various image encryption techniques for 2D images are reviewed.
14

Yahanda, Alexander T., Timothy J. Goble, Peter T. Sylvester, Gretchen Lessman, Stanley Goddard, Bridget McCollough, Amar Shah, Trevor Andrews, Tammie L. S. Benzinger, and Michael R. Chicoine. "Impact of 3-Dimensional Versus 2-Dimensional Image Distortion Correction on Stereotactic Neurosurgical Navigation Image Fusion Reliability for Images Acquired With Intraoperative Magnetic Resonance Imaging". Operative Neurosurgery 19, no. 5 (June 10, 2020): 599–607. http://dx.doi.org/10.1093/ons/opaa152.

Abstract:
BACKGROUND: Fusion of preoperative and intraoperative magnetic resonance imaging (iMRI) studies during stereotactic navigation may be very useful for procedures such as tumor resections but can be subject to error because of image distortion. OBJECTIVE: To assess the impact of 3-dimensional (3D) vs 2-dimensional (2D) image distortion correction on the accuracy of auto-merge image fusion for stereotactic neurosurgical images acquired with iMRI using a head phantom in different surgical positions. METHODS: T1-weighted intraoperative images of the head phantom were obtained using 1.5T iMRI. Images were postprocessed with 2D and 3D image distortion correction. These studies were fused to T1-weighted preoperative MRI studies performed on a 1.5T diagnostic MRI. The reliability of the auto-merge fusion of these images for 2D and 3D correction techniques was assessed both manually using the stereotactic navigation system and via image analysis software. RESULTS: Eight surgical positions of the head phantom were imaged with iMRI. Greater image distortion occurred with increased distance from isocenter in all 3 axes, reducing accuracy of image fusion to preoperative images. Visually reliable image fusions were accomplished in 2/8 surgical positions using 2D distortion correction and 5/8 using 3D correction. Three-dimensional correction yielded superior image registration quality as defined by higher maximum mutual information values, with improvements ranging between 2.3% and 14.3% over 2D correction. CONCLUSION: Using 3D distortion correction enhanced the reliability of surgical navigation auto-merge fusion of phantom images acquired with iMRI across a wider range of head positions and may improve the accuracy of stereotactic navigation using iMRI images.
15

Logadottir, A., S. Korreman, and P. M. Petersen. "COMPARISON OF PROSTATE LOCALIZATION WITH 2D-2D AND 3D IMAGES". Radiotherapy and Oncology 92 (August 2009): S179–S180. http://dx.doi.org/10.1016/s0167-8140(12)73061-x.

16

Brownhill, Daniel, Yachin Chen, Barbara A. K. Kreilkamp, Christophe de Bezenac, Christine Denby, Martyn Bracewell, Shubhabrata Biswas, Kumar Das, Anthony G. Marson, and Simon S. Keller. "Automated subcortical volume estimation from 2D MRI in epilepsy and implications for clinical trials". Neuroradiology 64, no. 5 (October 18, 2021): 935–47. http://dx.doi.org/10.1007/s00234-021-02811-x.

Abstract:
Purpose: Most techniques used for automatic segmentation of subcortical brain regions are developed for three-dimensional (3D) MR images. MRIs obtained in non-specialist hospitals may be non-isotropic and two-dimensional (2D). Automatic segmentation of 2D images may be challenging and represents a lost opportunity to perform quantitative image analysis. We determine the performance of a modified subcortical segmentation technique applied to 2D images in patients with idiopathic generalised epilepsy (IGE). Methods: Volume estimates were derived from 2D (0.4 × 0.4 × 3 mm) and 3D (1 × 1 × 1 mm) T1-weighted acquisitions in 31 patients with IGE and 39 healthy controls. 2D image segmentation was performed using a modified FSL FIRST (FMRIB Integrated Registration and Segmentation Tool) pipeline requiring additional image reorientation, cropping, interpolation and brain extraction prior to conventional FIRST segmentation. Consistency between segmentations was assessed using Dice coefficients, and volumes across both approaches were compared between patients and controls. The influence of slice thickness on consistency was further assessed using 2D images with slice thickness increased to 6 mm. Results: All average Dice coefficients showed excellent agreement between 2D and 3D images across subcortical structures (0.86–0.96). Most 2D volumes were consistently slightly lower compared to 3D volumes. 2D images with increased slice thickness showed lower agreement with 3D images, with lower Dice coefficients (0.55–0.83). Significant volume reduction of the left and right thalamus and putamen was observed in patients relative to controls across 2D and 3D images. Conclusion: Automated subcortical volume estimation of 2D images with a resolution of 0.4 × 0.4 × 3 mm using a modified FIRST pipeline is consistent with volumes derived from 3D images, although this consistency decreases with increased slice thickness. Thalamic and putamen atrophy has previously been reported in patients with IGE. Automated subcortical volume estimation from 2D images is feasible, most reliable using in-plane acquisitions greater than 1 mm × 1 mm, and provides an opportunity to perform quantitative image analysis studies in clinical trials.
17

Sunil, Mrs Megha. "Comparison of Different Image Fusion Techniques for 2D MRI Images". International Journal on Recent and Innovation Trends in Computing and Communication 3, no. 2 (2015): 500–503. http://dx.doi.org/10.17762/ijritcc2321-8169.150215.

18

Daszykowski, M., E. Mosleth Færgestad, H. Grove, H. Martens, and B. Walczak. "Matching 2D gel electrophoresis images with Matlab ‘Image Processing Toolbox’". Chemometrics and Intelligent Laboratory Systems 96, no. 2 (April 2009): 188–95. http://dx.doi.org/10.1016/j.chemolab.2009.01.011.

19

Jiao, Yuzhong, Kayton Wai Keung Cheung, Mark Ping Chan Mok, and Yiu Kei Li. "Spatial Distance-based Interpolation Algorithm for Computer Generated 2D+Z Images". Electronic Imaging 2020, no. 2 (January 26, 2020): 140–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.2.sda-140.

Abstract:
Computer generated 2D plus Depth (2D+Z) images are common input data for 3D display with depth image-based rendering (DIBR) technique. Due to their simplicity, linear interpolation methods are usually used to convert low-resolution images into high-resolution images for not only depth maps but also 2D RGB images. However linear methods suffer from zigzag artifacts in both depth map and RGB images, which severely affects the 3D visual experience. In this paper, spatial distance-based interpolation algorithm for computer generated 2D+Z images is proposed. The method interpolates RGB images with the help of depth and edge information from depth maps. Spatial distance from interpolated pixel to surrounding available pixels is utilized to obtain the weight factors of surrounding pixels. Experiment results show that such spatial distance-based interpolation can achieve sharp edges and less artifacts for 2D RGB images. Naturally, it can improve the performance of 3D display. Since bilinear interpolation is used in homogenous areas, the proposed algorithm keeps low computational complexity.
20

Kovalev, S. V., S. M. Nesterov, and I. A. Skorodumov. "2D radar images of template objects". Journal of Communications Technology and Electronics 56, no. 2 (February 2011): 183–87. http://dx.doi.org/10.1134/s1064226911020082.

21

Amir, Amihood, Gad M. Landau, and Dina Sokol. "Inplace 2D matching in compressed images". Journal of Algorithms 49, no. 2 (November 2003): 240–61. http://dx.doi.org/10.1016/s0196-6774(03)00088-9.

22

Hirano, Daisuke, Yusuke Funayama, and Takashi Maekawa. "3D Shape Reconstruction from 2D Images". Computer-Aided Design and Applications 6, no. 5 (January 2009): 701–10. http://dx.doi.org/10.3722/cadaps.2009.701-710.

23

Lu, Shao-Ping, Sibo Feng, Beerend Ceulemans, Miao Wang, Rui Zhong, and Adrian Munteanu. "Multiview conversion of 2D cartoon images". Communications in Information and Systems 16, no. 4 (2016): 229–54. http://dx.doi.org/10.4310/cis.2016.v16.n4.a2.

24

Szymczyk, Piotr. "Obtaining 3D information from 2D images". ELEKTRONIKA - KONSTRUKCJE, TECHNOLOGIE, ZASTOSOWANIA 1, no. 6 (June 5, 2014): 49–52. http://dx.doi.org/10.15199/ele-2014-041.

25

Yang, Chuan-Kai, and Chia-Ning Kuo. "Automatic hair extraction from 2D images". Multimedia Tools and Applications 75, no. 8 (February 13, 2015): 4441–65. http://dx.doi.org/10.1007/s11042-015-2483-y.

26

Mario, Julia, Shambhavi Venkataraman, Valerie Fein-Zachary, Mark Knox, Alexander Brook, and Priscilla Slanetz. "Lumpectomy Specimen Radiography: Does Orientation or 3-Dimensional Tomosynthesis Improve Margin Assessment?" Canadian Association of Radiologists Journal 70, no. 3 (August 2019): 282–91. http://dx.doi.org/10.1016/j.carj.2019.03.005.

Abstract:
Purpose: Our purpose was twofold. First, we sought to determine whether 2 orthogonal oriented views of excised breast cancer specimens could improve surgical margin assessment compared to a single unoriented view. Second, we sought to determine whether 3D tomosynthesis could improve surgical margin assessment compared to 2D mammography alone. Materials and Methods: Forty-one consecutive specimens were prospectively imaged using 4 protocols: single view unoriented 2D image acquired on a specimen unit (1VSU), 2 orthogonal oriented 2D images acquired on the specimen unit (2VSU), 2 orthogonal oriented 2D images acquired on a mammogram unit (2V2DMU), and 2 orthogonal oriented 3D images acquired on the mammogram unit (2V3DMU). Three breast imagers randomly assessed surgical margin of the 41 specimens with each protocol. Surgical margin per histopathology was considered the gold standard. Results: The average area under the curve (AUC) was 0.60 for 1VSU, 0.66 for 2VSU, 0.68 for 2V2DMU, and 0.60 for 2V3DMU. Comparing AUCs for 2VSU vs 1VSU by reader showed improved diagnostic accuracy using 2VSU; however, this difference was only statistically significant for reader 3 (0.73 vs 0.63, P = .0455). Comparing AUCs for 2V3DMU vs 2V2DMU by reader showed mixed results, with reader 1 demonstrating increased accuracy (0.72 vs 0.68, P = .5984), while readers 2 and 3 demonstrated decreased accuracy (0.50 vs 0.62, P = .1089 and 0.58 vs 0.75, P = .0269). Conclusions: 2VSU showed improved accuracy in surgical margin prediction compared to 1VSU, although this was not statistically significant for all readers. 3D tomosynthesis did not improve surgical margin assessment.
27

Kiyosue, Hiro, Mika Okahara, Shuichi Tanoue, Takaharu Nakamura, Hirofumi Nagatomi, and Hiromu Mori. "Detection of the Residual Lumen of Intracranial Aneurysms Immediately after Coil Embolization by Three-dimensional Digital Subtraction Angiographic Virtual Endoscopic Imaging". Neurosurgery 50, no. 3 (March 1, 2002): 476–85. http://dx.doi.org/10.1097/00006123-200203000-00008.

Abstract:
OBJECTIVE: Detection of a small residual lumen after coil embolization is often difficult because of the coil mass and the overlap of the cerebral arteries. The purpose of this study was to assess the usefulness of virtual endoscopic (VE) analysis of three-dimensional digital subtraction angiographic (DSA) images for evaluation of aneurysmal occlusion immediately after the procedure. METHODS: Twenty-seven intracranial aneurysms were treated with coil embolization using a three-dimensional DSA system. Biplane and rotational DSA scanning was performed before and immediately after the procedures. VE images were obtained at a separate workstation, after transfer of the rotational images. Two-dimensional (2D) DSA images and VE images obtained after the procedure were assessed with respect to aneurysmal occlusion. Morphological outcomes and other factors, including location, size, volumetric ratio (coil volume/aneurysm volume), and residual sites, were also evaluated. RESULTS: Seven aneurysms were evaluated as complete occlusion (CO) on both 2D DSA images and VE images. Twelve aneurysms exhibited residual lumina on both 2D DSA images and VE images. Five aneurysms were evaluated as CO on 2D DSA images and as incomplete occlusion on VE images. There were no recurrences among the aneurysms that were evaluated as CO on VE images. Two of five aneurysms that were evaluated as CO on 2D DSA images and as incomplete occlusion on VE images demonstrated regrowth in follow-up examinations. Residual sites and volumetric ratios were correlated with aneurysmal regrowth. CONCLUSION: VE imaging can demonstrate a residual lumen more frequently than can 2D DSA imaging and is useful for evaluating aneurysmal occlusion after coil embolization.
28

Nomura, Kosuke, Mitsuru Kaise, Daisuke Kikuchi, Toshiro Iizuka, Yumiko Fukuma, Yasutaka Kuribayashi, Masami Tanaka et al. "Recognition Accuracy Using 3D Endoscopic Images for Superficial Gastrointestinal Cancer: A Crossover Study". Gastroenterology Research and Practice 2016 (2016): 1–6. http://dx.doi.org/10.1155/2016/4561468.

Abstract:
Aim. To determine whether 3D endoscopic images improved recognition accuracy for superficial gastrointestinal cancer compared with 2D images. Methods. We created an image catalog using 2D and 3D images of 20 specimens resected by endoscopic submucosal dissection. The twelve participants were allocated into two groups. Group 1 evaluated only 2D images at first, group 2 evaluated 3D images, and, after an interval of 2 weeks, group 1 next evaluated 3D and group 2 evaluated 2D images. The evaluation items were as follows: (1) diagnostic accuracy of the tumor extent and (2) confidence levels in assessing (a) tumor extent, (b) morphology, (c) microsurface structure, and (d) comprehensive recognition. Results. The use of 3D images resulted in an improvement in diagnostic accuracy in both group 1 (2D: 76.9%, 3D: 78.6%) and group 2 (2D: 79.9%, 3D: 83.6%), with no statistically significant difference. The confidence levels were higher for all items ((a) to (d)) when 3D images were used. With respect to experience, the degree of the improvement showed the following trend: novices > trainees > experts. Conclusions. By conversion into 3D images, there was a significant improvement in the diagnostic confidence level for superficial tumors, and the improvement was greater in individuals with lower endoscopic expertise.
29

Chen, Zhengrui, Liying Lu, Ziyang Yuan, Yiming Zhu, Yu Li, Chun Yuan, and Weihong Deng. "Blind Face Restoration under Extreme Conditions: Leveraging 3D-2D Prior Fusion for Superior Structural and Texture Recovery". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 2 (March 24, 2024): 1263–71. http://dx.doi.org/10.1609/aaai.v38i2.27889.

Abstract:
Blind face restoration under extreme conditions involves reconstructing high-quality face images from severely degraded inputs. These input images are often of poor quality and have extreme facial poses, leading to errors in facial structure and unnatural artifacts within the restored images. In this paper, we show that utilizing 3D priors effectively compensates for structure knowledge deficiencies in 2D priors while preserving the texture details. Based on this, we introduce FREx (Face Restoration under Extreme conditions) that combines structure-accurate 3D priors and texture-rich 2D priors in pretrained generative networks for blind face restoration under extreme conditions. To fuse the different information in 3D and 2D priors, we introduce an adaptive weight module that adjusts the importance of features based on the input image's condition. With this approach, our model can restore structure-accurate and natural-looking faces even when the images have lost a lot of information due to degradation and extreme pose. Extensive experimental results on synthetic and real-world datasets validate the effectiveness of our methods.
30

Zhang, Hanqing, Yun Lin, Fei Teng, Shanshan Feng, Bing Yang, and Wen Hong. "Circular SAR Incoherent 3D Imaging with a NeRF-Inspired Method". Remote Sensing 15, no. 13 (June 29, 2023): 3322. http://dx.doi.org/10.3390/rs15133322.

Abstract:
Circular synthetic aperture radar (CSAR) has the potential to form 3D images with single-pass single-channel radar data, which is very time-efficient. This article proposes a volumetric neural renderer that utilizes CSAR 2D amplitude images to reconstruct the 3D power distribution of the imaged scene. The innovations are two-fold: Firstly, we propose a new SAR amplitude image formation model that establishes a linear mapping relationship between multi-look amplitude-squared SAR images and a real-valued 4D (spatial location (x, y, z) and azimuth angle θ) radar scattered field. Secondly, incorporating the proposed image formation model and SAR imaging geometry, we extend the neural radiance field (NeRF) methods to reconstruct the 4D radar scattered field using a set of 2D multi-aspect SAR images. Using real-world drone SAR data, we demonstrate our method for (1) creating realistic SAR imagery from arbitrary new viewpoints and (2) reconstructing high-precision 3D structures of the imaged scene.
31

Sakashita, Reiko, Kaoru Takizawa, Fuminori Matsuura, Nobuyuki Fujisawa, and Fujie Kondo. "Differences between Impressions of Anaglyph Stereo Images and 2D Images". Journal of the Visualization Society of Japan 25, Supplement 2 (2005): 349–52. http://dx.doi.org/10.3154/jvs.25.supplement2_349.

32

Abousalem, Zib ziab. "3D from 2D for Nano images using images processing methods". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 2 (December 11, 2014): 5437–47. http://dx.doi.org/10.24297/ijct.v14i2.2064.

Abstract:
The scanning electron microscope (SEM) remains a main tool for semiconductor and polymer physics, but TEM and AFM are increasingly used for minimum-size features, which are called nanomaterials. In addition, some physical properties such as microhardness, grain boundaries and domain structure are observed with optical and polarizing microscopes, which give poor information, and consequently the probability of error in the discussion will be high. Thus it is natural to squeeze out every possible bit of resolution in the SEM, optical and polarizing microscopes for the materials under test. In our paper we tackle this problem using different image processing techniques to obtain clearer and more sufficient information. In the suggested paper we obtain a set of images of prepared samples under different conditions and with different physical properties. These images are analyzed using the above-mentioned techniques, starting by converting the prepared samples' images (grayscale or colored) to a two-dimensional data file (*.dat) using programming. The 2D data are converted to a 3D data file using FORTRAN programming. All images are subjected to the generated filter algorithm for the 3D data file. After filtering the 3D data file we can establish histograms, contours and 3D surfaces to analyze the image. Another technique is prepared using Visual FORTRAN for the steepest descent algorithm (SDA), which gives the vector map for the obtained data. Finally, the depth from one single still image is created and determined using the OpenGL library under Visual C++, and texture mapping is performed. The quality of filtering depends on the way the data is incorporated into the model. Data should be treated carefully. With our method we can analyze any part of any image, at any image size, without reanalyzing the image; in this paper we take three samples with different sizes (256 × 256), (400 × 400), (510 × 510). This method decreases the cost of hardware and samples.
33

Abousalem, Zib ziab. "3D from 2D for Nano images using images processing methods". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 14, no. 2 (December 11, 2014): 5437–47. http://dx.doi.org/10.24297/ijct.v14i2.2065.

Abstract:
The scanning electron microscope (SEM) remains a main tool for semiconductor and polymer physics, but TEM and AFM are increasingly used for minimum-size features, which are called nanomaterials. In addition, some physical properties such as microhardness, grain boundaries and domain structure are observed with optical and polarizing microscopes, which give poor information, and consequently the probability of error in the discussion will be high. Thus it is natural to squeeze out every possible bit of resolution in the SEM, optical and polarizing microscopes for the materials under test. In our paper we tackle this problem using different image processing techniques to obtain clearer and more sufficient information. In the suggested paper we obtain a set of images of prepared samples under different conditions and with different physical properties. These images are analyzed using the above-mentioned techniques, starting by converting the prepared samples' images (grayscale or colored) to a two-dimensional data file (*.dat) using programming. The 2D data are converted to a 3D data file using FORTRAN programming. All images are subjected to the generated filter algorithm for the 3D data file. After filtering the 3D data file we can establish histograms, contours and 3D surfaces to analyze the image. Another technique is prepared using Visual FORTRAN for the steepest descent algorithm (SDA), which gives the vector map for the obtained data. Finally, the depth from one single still image is created and determined using the OpenGL library under Visual C++, and texture mapping is performed. The quality of filtering depends on the way the data is incorporated into the model. Data should be treated carefully. With our method we can analyze any part of any image, at any image size, without reanalyzing the image; in this paper we take three samples with different sizes (256 × 256), (400 × 400), (510 × 510). This method decreases the cost of hardware and samples.
34

Ding, Y., S. H. Patel, J. Holmes, H. Feng, L. A. McGee, J. C. Rwigema, S. A. Vora et al. "Patient-specific 3D CT Images Reconstruction from 2D KV Images". International Journal of Radiation Oncology*Biology*Physics 118, no. 5 (April 2024): e68–e69. http://dx.doi.org/10.1016/j.ijrobp.2024.01.153.

35

Chen, Lijiang, Changkun Qiao, Meijing Wu, Linghan Cai, Cong Yin, Mukun Yang, Xiubo Sang and Wenpei Bai. "Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting". Bioengineering 10, no. 2 (February 1, 2023): 184. http://dx.doi.org/10.3390/bioengineering10020184.

Abstract:
Diagnostic results can be radically influenced by the quality of 2D ovarian-tumor ultrasound images. However, clinically processed 2D ovarian-tumor ultrasound images contain many artificially recognized symbols, such as fingers, crosses, dashed lines, and letters which assist artificial intelligence (AI) in image recognition. These symbols are widely distributed within the lesion’s boundary, which can also affect useful feature extraction by the networks and thus decrease the accuracy of lesion classification and segmentation. Image inpainting techniques are used for noise and object elimination from images. To solve this problem, we observed the MMOTU dataset and built a 2D ovarian-tumor ultrasound image inpainting dataset by finely annotating the various symbols in the images. A novel framework called mask-guided generative adversarial network (MGGAN) is presented in this paper for 2D ovarian-tumor ultrasound images to remove various symbols from the images. The MGGAN performs to a high standard in corrupted regions by using an attention mechanism in the generator to pay more attention to valid information and ignore symbol information, making lesion boundaries more realistic. Moreover, fast Fourier convolutions (FFCs) and residual networks are used to increase the global field of perception; thus, our model can be applied to high-resolution ultrasound images. The greatest benefit of this algorithm is that it achieves pixel-level inpainting of distorted regions without clean images. Compared with other models, our model achieved better results with only one stage in terms of objective and subjective evaluations. Our model obtained the best results for 256 × 256 and 512 × 512 resolutions. At a resolution of 256 × 256, our model achieved 0.9246 for SSIM, 22.66 for FID, and 0.07806 for LPIPS. At a resolution of 512 × 512, our model achieved 0.9208 for SSIM, 25.52 for FID, and 0.08300 for LPIPS.
Our method can considerably improve the accuracy of computerized ovarian tumor diagnosis. The segmentation accuracy was improved from 71.51% to 76.06% for the Unet model and from 61.13% to 66.65% for the PSPnet model in clean images.
36

Sun, Haoran. "A Review of 3D-2D Registration Methods and Applications based on Medical Images". Highlights in Science, Engineering and Technology 35 (April 11, 2023): 200–224. http://dx.doi.org/10.54097/hset.v35i.7055.

Abstract:
The registration of preoperative three-dimensional (3D) medical images with intraoperative two-dimensional (2D) data is a key technology for image-guided radiotherapy, minimally invasive surgery, and interventional procedures. In this paper, we review 3D-2D registration methods using computed tomography (CT) and magnetic resonance imaging (MRI) as preoperative 3D images and ultrasound, X-ray, and visible light images as intraoperative 2D images. The 3D-2D registration techniques are classified into intensity-based, structure-based, and gradient-based according to the different registration features. In addition, we investigated the different application scenarios of this registration technology in medical clinical treatment, which can be divided into disease diagnosis, surgical guidance and postoperative evaluation, and also investigated the evaluation method of 3D-2D registration effect.
37

Gircys, Michael, and Brian J. Ross. "Image Evolution Using 2D Power Spectra". Complexity 2019 (January 2, 2019): 1–21. http://dx.doi.org/10.1155/2019/7293193.

Abstract:
Procedurally generated images and textures have been widely explored in evolutionary art. One active research direction in the field is the discovery of suitable heuristics for measuring perceived characteristics of evolved images. This is important in order to help influence the nature of evolved images and thereby evolve more meaningful and pleasing art. In this regard, particular challenges exist for quantifying aspects of style and shape. In an attempt to bridge the divide between computer vision and cognitive perception, we propose the use of measures related to image spatial frequencies. Based on existing research that uses power spectral density of spatial frequencies as an effective metric for image classification and retrieval, we posit that Fourier decomposition can be effective for guiding image evolution. We refine fitness measures based on Fourier analysis and spatial frequency and apply them within a genetic programming environment for image synthesis. We implement fitness strategies using 2D Fourier power spectra and phase, with the goal of evolving images that share spectral properties of supplied target images. Adaptations and extensions of the fitness strategies are considered for their utility in art systems. Experiments were conducted using a variety of greyscale and colour target images, spatial fitness criteria, and procedural texture languages. Results were promising, in that some target images were trivially evolved, while others were more challenging to characterize. We also observed that some evolved images which we found discordant and “uncomfortable” show a previously identified spectral phenomenon. Future research should further investigate this result, as it could extend the use of 2D power spectra in fitness evaluations to promote new aesthetic properties.
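The spectral fitness idea above can be made concrete. The following is an illustrative Python sketch, not the authors' implementation: it radially averages the 2D Fourier power spectrum of a greyscale image and scores a candidate by its log-spectrum distance to a target image (the function names and binning scheme are assumptions):

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Radially average the 2D Fourier power spectrum of a greyscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)          # radius of each frequency bin
    bins = np.linspace(0.0, r.max() + 1e-9, n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    spectrum = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return spectrum / np.maximum(counts, 1)      # mean power per radial bin

def spectral_fitness(candidate, target):
    """Lower is better: mean squared distance between log power spectra."""
    a = np.log1p(radial_power_spectrum(candidate))
    b = np.log1p(radial_power_spectrum(target))
    return float(np.mean((a - b) ** 2))
```

In a genetic-programming loop, `spectral_fitness` would be evaluated for each evolved image against the supplied target; phase-based criteria would be an analogous comparison of `np.angle(f)`.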
38

Gunasekaran, Ganesan, and Meenakshisundaram Venkatesan. "An Efficient Technique for Three-Dimensional Image Visualization Through Two-Dimensional Images for Medical Data". Journal of Intelligent Systems 29, no. 1 (December 18, 2017): 100–109. http://dx.doi.org/10.1515/jisys-2017-0315.

Abstract:
The main idea behind this work is to present three-dimensional (3D) image visualization through two-dimensional (2D) images that comprise various images. 3D image visualization is one of the essential methods for extracting data from given pieces. The main goal of this work is to figure out the outlines of the given 3D geometric primitives in each part, and then integrate these outlines or frames to reconstruct the 3D geometric primitives. The proposed technique is very useful and can be applied to many kinds of images. The experimental results showed a very good determination of the reconstruction process of 2D images.
39

Sakboonyarat, Boonnatee, and Pinyo Taeprasartsit. "Discriminative Image Enhancement for Robust Cascaded Segmentation of CT Images". ECTI Transactions on Computer and Information Technology (ECTI-CIT) 15, no. 2 (April 19, 2021): 150–65. http://dx.doi.org/10.37936/ecti-cit.2021152.240112.

Abstract:
Objective: Cascaded/attention-based neural network has become common in image segmentation. This work proposes to improve its robustness by adding discriminative image enhancement to its attention mechanism. Unlike prior work, this image enhancement can also be applied as data augmentation and easily adapted for existing models. Its generalization can improve accuracy across multiple segmentation tasks and datasets. Methods: The method first localizes a target organ in a 2D fashion to obtain a tight neighborhood of the organ in each slice. Next, the method computes an HU histogram of a region combined from multiple 2D neighborhoods. This allows the method to adaptively handle HU-range difference among images. Then, HUs are nonlinearly stretched through a parameterized mapping function providing discriminative features for neural network. Varying the function parameters creates different intensity distribution of the target region. This effectively enhances and augments image data at the same time. The HU-reassigned region is then fed to a segmentation model for training. Results: Our experiments on liver and kidney segmentation showed that even a simple cascaded 2D U-Net model could deliver competitive performance in a variety of datasets. In addition, cross-validation and ablation analysis indicated robustness of the method even when the number of original training samples was limited. Conclusion: With the proposed technique, a simple model with limited training data can deliver competitive performance. Significance: The method significantly improves robustness of a trained model and is ready for generalization to other segmentation tasks and attention-based models. Accurate models can be simpler to save computing resources.
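The HU-stretching step can be illustrated with a minimal sketch; the sigmoid window and the parameter names (`center`, `width`, `gamma`) here are hypothetical stand-ins for the paper's parameterized mapping function, and varying them yields the augmented copies the abstract describes:

```python
import numpy as np

def stretch_hu(volume, center=60.0, width=200.0, gamma=1.0):
    """Nonlinearly map HU values into [0, 1], emphasising a target HU range.

    center/width select the HU window; gamma varies the contrast curve.
    Sampling different (center, width, gamma) triples produces augmented
    training copies of the same region.
    """
    x = (np.asarray(volume, dtype=np.float64) - center) / width
    y = 1.0 / (1.0 + np.exp(-4.0 * x))   # smooth sigmoid window
    return y ** gamma

# air .. soft tissue .. bone, mapped monotonically into [0, 1]
hu = np.array([-1000.0, 0.0, 60.0, 200.0, 3000.0])
mapped = stretch_hu(hu)
```

In the cascaded setting, the histogram of the localized organ neighborhood would drive the choice of `center` and `width` per image before the region is fed to the segmentation model.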
40

Takahata, Tomoyuki. "Coaxiality Evaluation of Coaxial Imaging System with Concentric Silicon–Glass Hybrid Lens for Thermal and Color Imaging". Sensors 20, no. 20 (October 10, 2020): 5753. http://dx.doi.org/10.3390/s20205753.

Abstract:
Thermal imaging is useful for tasks such as detecting the presence of humans and recognizing surrounding objects in the operation of several types of robots, including service robots and personal mobility robots, which assist humans. Because the number of pixels on a thermal imager is generally smaller than that on a color imager, thermal images are more useful when combined with color images, assuming that the correspondence between points in the images captured by the two sensors is known. In the literature, several types of coaxial imaging systems have been reported that can capture thermal and color images, simultaneously, from the same point of view with the same optical axis. Among them, a coaxial imaging system using a concentric silicon–glass hybrid lens was devised. Long-wavelength infrared and visible light was focused using the hybrid lens. The focused light was subsequently split using a silicon plate. Separate thermal and color images were then captured using thermal and color imagers, respectively. However, a coaxiality evaluation of the hybrid lens has not been shown. This report proposes an implementation and coaxiality evaluation for a compact coaxial imaging system incorporating the hybrid lens. The coaxiality of the system was experimentally demonstrated by estimating the intrinsic and extrinsic parameters of the thermal and color imagers and performing 2D mapping between the thermal images and color images.
41

Garcia, R. L., P. N. Happ and R. Q. Feitosa. "LARGE SCALE SEMANTIC SEGMENTATION OF VIRTUAL ENVIRONMENTS TO FACILITATE CORROSION MANAGEMENT". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B2-2021 (June 28, 2021): 465–70. http://dx.doi.org/10.5194/isprs-archives-xliii-b2-2021-465-2021.

Abstract:
This paper reports the results of a study that aims to develop semi-automatic methods for assessing the degree of corrosion in industrial plants. We evaluated two fully convolutional networks (U-Net and DeepLab v3+) to segment corroded areas in panoramic images of offshore platforms. The experimental analysis was based on two datasets built for this study, comprising 9,112 2D images and 3,732 panoramic images. Both FCNs trained on 2D images were tested on 2D images and cubic projections of panoramic images. In addition to pointing out encouraging results, the experiments indicated that most prediction errors were concentrated in corrosion defects with a small pixel area.
42

Park, Minsoo, Hang-Nga Mai, Mai Yen Mai, Thaw Thaw Win, Du-Hyeong Lee and Cheong-Hee Lee. "Intra- and Interrater Agreement of Face Esthetic Analysis in 3D Face Images". BioMed Research International 2023 (April 10, 2023): 1–7. http://dx.doi.org/10.1155/2023/3717442.

Abstract:
The use of three-dimensional (3D) facial scans for facial analysis is increasing in maxillofacial treatment. The aim of this study was to investigate the consistency of two-dimensional (2D) and 3D facial analyses performed by multiple raters. Six men and four women (25–36 years old) participated in this study. The 2D images of the smiling and resting faces in the frontal and sagittal planes were obtained. The 3D facial and intraoral scans were merged to generate virtual 3D faces. Ten clinicians performed facial analyses by investigating 14 indices of 2D and 3D faces. Intra- and interrater agreements of the results of 2D and 3D facial analyses within and among the participants were evaluated. The intrarater agreement between the 2D and 3D facial analyses varied according to the indices. The highest and lowest agreements were found for the dental crowding index (0.94) and smile line curvature index (0.56) in the frontal plane, and Angle’s classification (canine) index (0.98) and occlusal plane angle index (0.55) in the profile plane. In the frontal plane, the interrater agreements were generally higher for the 3D images than for the 2D images, while in the profile plane, the interrater agreements were high for the Angle’s classification (canine) index but low for the other indices. Several occlusion-related indices were missing in the 2D images because the posterior teeth were not observed. Esthetic analysis results between 2D and 3D face images can differ according to the evaluation indices. The use of 3D faces is recommended over 2D images to increase the reliability of facial analyses, as it can fully assess both esthetic and occlusion-related indices.
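The abstract does not state which agreement statistic produced the index values above; Cohen's kappa is a common choice for two raters on categorical indices, sketched here purely for illustration:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same set of cases."""
    assert len(ratings_a) == len(ratings_b) and len(ratings_a) > 0
    n = len(ratings_a)
    # observed proportion of cases where the two raters agree
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    pa, pb = Counter(ratings_a), Counter(ratings_b)
    expected = sum(pa[c] * pb[c] for c in pa) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Kappa is 1.0 for perfect agreement, 0.0 for chance-level agreement, and negative when raters agree less often than chance.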
43

Morrison, H. Boyd. "Depth and Image Quality of Three-Dimensional, Lenticular-Sheet Images". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 41, no. 2 (October 1997): 1338–42. http://dx.doi.org/10.1177/1071181397041002135.

Abstract:
This study investigated the inherent tradeoff between depth and image quality in lenticular-sheet (LS) imaging. Four different scenes were generated as experimental stimuli to represent a range of typical LS images. The overall amount of depth in each image, as well as the degree of foreground and background disparity, were varied, and the images were rated by subjects using the free-modulus magnitude estimation procedure. Generally, subjects preferred images which had smaller amounts of overall depth and tended to dislike excessive amounts of foreground or background disparity. The most preferred image was also determined for each scene by selecting the image with the highest mean rating. In a second experiment, these most preferred LS images for each scene were shown to subjects along with the analogous two-dimensional (2D) photographic versions. Results indicate that observers from the general population looked at the LS images longer than they did at the 2D versions and rated them higher on the attributes of quality of depth and attention-getting ability, although the LS images were rated lower on sharpness. No difference was found in overall quality or likeability.
44

Chieco, Pasquale, Ard Jonker, Bouke A. De Boer, Jan M. Ruijter and Cornelis J. F. Van Noorden. "Image Cytometry: Protocols for 2D and 3D Quantification in Microscopic Images". Progress in Histochemistry and Cytochemistry 47, no. 4 (January 2013): 211–333. http://dx.doi.org/10.1016/j.proghi.2012.09.001.

45

Srisutthiyakorn, Nattavadee, Sander Hunter, Rituparna Sarker, Ronny Hofmann and Irene Espejo. "Predicting elastic properties and permeability of rocks from 2D thin sections". Leading Edge 37, no. 6 (June 2018): 421–27. http://dx.doi.org/10.1190/tle37060421.1.

Abstract:
Predicting rock elastic properties and permeability from high-resolution 2D thin sections has been a challenging problem in rock physics because the 2D thin sections reveal very little about how the microstructure connects in the third dimension. However, 2D thin sections are widely available and inexpensive because they are often produced as a part of the reservoir-quality workflow. Furthermore, they have much higher resolution and greater field of view than micro X-ray computed tomography images, which are commonly used for rock properties estimation. The 2D thin sections we studied are from various hydrocarbon-bearing clastic formations with a variety of provenances, depositional environments, and burial histories. The high-resolution 2D images were scanned from these physical 2D thin sections. K-means segmentation was then employed to identify different minerals and pores for creating 2D binary images. The focus of this study is to simulate 2D elastic properties and permeability from 2D thin sections and then to employ various empirical relations to transform these 2D simulation results to 3D intrinsic rock properties. We compared the rock properties from this process to those from core measurements and measured wireline logs and found that these 2D to 3D rock property transformations yield promising results, especially for elastic properties. The results show that 2D thin section images have high enough resolution to resolve grain contacts very well. Predicting the permeability from 2D thin sections is still challenging since the process requires fitting the physical equation in order to retrieve the fitting coefficient for prediction due to our lack of understanding of the difference between 2D and 3D pore size distribution.
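The K-means segmentation step used above to separate minerals and pores can be sketched with a minimal two-cluster Lloyd's algorithm on grey values (an illustration under simplifying assumptions, not the authors' pipeline, which segments multiple mineral phases):

```python
import numpy as np

def kmeans_binary(image, n_iter=20):
    """Two-cluster k-means on grey values; returns a boolean mask that is
    True where a pixel joins the darker cluster (pores in thin sections)."""
    vals = np.asarray(image, dtype=np.float64).ravel()
    c = np.array([vals.min(), vals.max()])   # initial centroids at extremes
    for _ in range(n_iter):
        # assign each pixel to its nearest centroid, then recompute centroids
        labels = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                c[k] = vals[labels == k].mean()
    labels = np.abs(vals[:, None] - c[None, :]).argmin(axis=1)
    dark = c.argmin()
    return (labels == dark).reshape(np.asarray(image).shape)
```

The resulting binary image is the input to the 2D property simulations; extending to more clusters (one per mineral phase) follows the same assign-and-update loop.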
46

Kim, Hyungsuk, Chang Hyun Yoo, Soo Bin Park and Hyun Seok Song. "Difference in glenoid retroversion between two-dimensional axial computed tomography and three-dimensional reconstructed images". Clinics in Shoulder and Elbow 23, no. 2 (June 1, 2020): 71–79. http://dx.doi.org/10.5397/cise.2020.00122.

Abstract:
Background: The glenoid version of the shoulder joint correlates with the stability of the glenohumeral joint and the clinical results of total shoulder arthroplasty. We sought to analyze and compare the glenoid version measured by traditional axial two-dimensional (2D) computed tomography (CT) and three-dimensional (3D) reconstructed images at different levels.Methods: A total of 30 cases, including 15 male and 15 female patients, who underwent 3D shoulder CT imaging was randomly selected and matched by sex consecutively at one hospital. The angular difference between the scapular body axis and 2D CT slice axis was measured. The glenoid version was assessed at three levels (midpoint, upper one-third, and center of the lower circle of the glenoid) using Friedman’s method in the axial plane with 2D CT images and at the same level of three different transverse planes using a 3D reconstructed image. Results: The mean difference between the scapular body axis on the 3D reconstructed image and the 2D CT slice axis was 38.4°. At the level of the midpoint of the glenoid, the measurements were 1.7° ± 4.9° on the 2D CT images and −1.8° ± 4.1° in the 3D reconstructed image. At the level of the center of the lower circle, the measurements were 2.7° ± 5.2° on the 2D CT images and −0.5° ± 4.8° in the 3D reconstructed image. A statistically significant difference was found between the 2D CT and 3D reconstructed images at all three levels. Conclusions: The glenoid version is measured differently between axial 2D CT and 3D reconstructed images at three levels. Use of 3D reconstructed imaging can provide a more accurate glenoid version profile relative to 2D CT. The glenoid version is measured differently at different levels.
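Friedman's method, used above, measures the version as the angle between the glenoid line (anterior-to-posterior rim) and the perpendicular to the scapular axis in the axial plane. A hedged 2D sketch follows; the direction vectors and sign convention are assumptions for illustration:

```python
import math

def glenoid_version(scapular_axis, glenoid_line):
    """Signed angle in degrees between the glenoid line and the line
    perpendicular to the scapular axis (Friedman's method, sketched).
    Each argument is a 2D direction vector (dx, dy) in the axial plane."""
    ang_axis = math.atan2(scapular_axis[1], scapular_axis[0])
    ang_glen = math.atan2(glenoid_line[1], glenoid_line[0])
    # version = deviation of the glenoid line from the axis perpendicular
    version = math.degrees(ang_glen - (ang_axis + math.pi / 2))
    # wrap to (-180, 180]
    return (version + 180.0) % 360.0 - 180.0
```

The systematic 2D-vs-3D differences reported above correspond to the 2D CT slice axis being tilted (a mean of 38.4° here) relative to the true scapular body axis, which changes `scapular_axis` and hence the measured angle.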
47

Winz, M. L., K. Rohr and S. Wörz. "Geometric Alignment of 2D Gel Electrophoresis Images". Methods of Information in Medicine 48, no. 04 (2009): 320–23. http://dx.doi.org/10.3414/me9229.

Abstract:
Objectives: 2D gel electrophoresis (2-DE) is the method of choice for analyzing protein expression in the field of proteomics, for example, comparing a reference with a test population. However, due to complex physical and chemical processes the locations of proteins generally vary in different 2-DE images. To cope with these variations, accurate geometric alignment of 2-DE images is important. Methods: We introduce a new elastic registration approach for 2-DE images, which is based on an analytic solution of the Navier equation using Gaussian elastic body splines (GEBS). With this approach cross-effects in elastic deformations can be handled, which is important for the registration of 2-DE images. In addition, landmark correspondences can be included to aid the registration in regions which are difficult to register using intensity information alone. Results: We have successfully applied our approach to register 2-DE gel images of different levels of complexity. In each case, gel images from a reference group are compared with a test group. To analyze the performance of our approach, we have carried out a quantitative evaluation of the registration results. Moreover, we have performed an experimental comparison with a previous elastic registration scheme. Conclusions: From the results we found that our approach is well-suited for the registration of 2-DE gel images of different levels of complexity and it turned out that the approach is superior to a previous hybrid scheme. Moreover, our approach is well-suited in a fully automatic setting and the performance can further be improved when landmark correspondences are available.
48

Tian, Chunwei, Qi Zhang, Jian Zhang, Guanglu Sun and Yuan Sun. "2D-PCA Representation and Sparse Representation for Image Recognition". Journal of Computational and Theoretical Nanoscience 14, no. 1 (January 1, 2017): 829–34. http://dx.doi.org/10.1166/jctn.2017.6281.

Abstract:
The two-dimensional principal component analysis (2D-PCA) method has been widely applied in fields of image classification, computer vision, signal processing and pattern recognition. The 2D-PCA algorithm also has a satisfactory performance in both theoretical research and real-world applications. It not only retains the main information of the original face images, but also decreases the dimension of the original face images. In this paper, we integrate the 2D-PCA and sparse representation classification (SRC) methods to distinguish face images, which gives great performance in face recognition. The novel representation of the original face image obtained using 2D-PCA is complementary with the original face image, so that their fusion can obviously improve the accuracy of face recognition. This is also attributed to the fact that the features obtained using 2D-PCA are usually more robust than the original face image matrices. The face recognition experiments demonstrate that the combination of the original face images and the new representations of the original face images is more effective than using only the original images. In particular, the simultaneous use of the 2D-PCA method and sparse representation can greatly improve accuracy in image classification. The adaptive weighted fusion scheme used in this paper automatically obtains optimal weights and has no parameters. The proposed method is not only simple and easy to implement, but also obtains high accuracy in face recognition.
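The 2D-PCA step described above can be sketched in a few lines of numpy (an illustrative implementation; the sparse-representation and weighted-fusion stages are omitted):

```python
import numpy as np

def two_d_pca(images, n_components=2):
    """2D-PCA: build the image covariance matrix G from training images
    (each an h x w matrix) and return the w x d projection basis."""
    mean = np.mean(images, axis=0)
    g = np.zeros((mean.shape[1], mean.shape[1]))
    for img in images:
        d = img - mean
        g += d.T @ d          # accumulate image covariance
    g /= len(images)
    eigvals, eigvecs = np.linalg.eigh(g)        # ascending eigenvalues
    return eigvecs[:, ::-1][:, :n_components]   # leading eigenvectors

def project(image, basis):
    """Feature matrix Y = A X: the new, lower-dimensional representation."""
    return image @ basis
```

Unlike classical PCA, the image is never flattened into a vector: each h × w face is projected column-wise onto the leading eigenvectors of G, giving an h × d feature matrix.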
49

Falah .K, Rasha, and Rafeef Mohammed .H. "Convert 2D shapes in to 3D images". Journal of Al-Qadisiyah for computer science and mathematics 9, no. 2 (August 20, 2017): 19–23. http://dx.doi.org/10.29304/jqcm.2017.9.2.146.

Abstract:
Several complex programs use difficult techniques to convert 2D images into 3D models. This paper introduces a useful technique that relies on simple capabilities and a simple language for converting 2D to 3D images: a three-dimensional projection that uses three images of the same shape and displays the three-dimensional image from different sides. To implement this work, visual programming with the 3Dtruevision engine is used, which gives acceptable results in a short time. The technique can also be applied in the field of engineering drawing.
50

Winnemöller, H., A. Orzan, L. Boissieux and J. Thollot. "Texture Design and Draping in 2D Images". Computer Graphics Forum 28, no. 4 (June 2009): 1091–99. http://dx.doi.org/10.1111/j.1467-8659.2009.01486.x.
