Contents
Selection of scholarly literature on the topic "Visual image reconstruction"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Visual image reconstruction".
Next to every work in the list of references there is an "Add to bibliography" option. Use it, and your bibliographic reference to the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are included in the work's metadata.
Journal articles on the topic "Visual image reconstruction"
Nestor, Adrian, David C. Plaut, and Marlene Behrmann. "Feature-based face representations and image reconstruction from behavioral and neural data." Proceedings of the National Academy of Sciences 113, no. 2 (December 28, 2015): 416–21. http://dx.doi.org/10.1073/pnas.1514551112.
Bae, Joungeun, and Hoon Yoo. "Image Enhancement for Computational Integral Imaging Reconstruction via Four-Dimensional Image Structure." Sensors 20, no. 17 (August 25, 2020): 4795. http://dx.doi.org/10.3390/s20174795.
Wang, Xia, and Qianqian Hu. "Visual Truth and Image Manipulation: Visual Ethical Anomie and Reconstruction of Digital Photography." SHS Web of Conferences 155 (2023): 03018. http://dx.doi.org/10.1051/shsconf/202315503018.
Meng, Lu, and Chuanhao Yang. "Dual-Guided Brain Diffusion Model: Natural Image Reconstruction from Human Visual Stimulus fMRI." Bioengineering 10, no. 10 (September 24, 2023): 1117. http://dx.doi.org/10.3390/bioengineering10101117.
Kumar, L. Ravi, K. G. S. Venkatesan, and S. Ravichandran. "Cloud-enabled Internet of Things Medical Image Processing Compressed Sensing Reconstruction." International Journal of Scientific Methods in Intelligence Engineering Networks 01, no. 04 (2023): 11–21. http://dx.doi.org/10.58599/ijsmien.2023.1402.
Yang, Qi, and Jong Hoon Yang. "Virtual Reconstruction of Visually Conveyed Images under Multimedia Intelligent Sensor Network Node Layout." Journal of Sensors 2022 (February 2, 2022): 1–12. http://dx.doi.org/10.1155/2022/8367387.
Yin, Jing, and Jong Hoon Yang. "Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect." Complexity 2021 (June 11, 2021): 1–12. http://dx.doi.org/10.1155/2021/5616826.
Njølstad, Tormund, Anselm Schulz, Johannes C. Godt, Helga M. Brøgger, Cathrine K. Johansen, Hilde K. Andersen, and Anne Catrine T. Martinsen. "Improved image quality in abdominal computed tomography reconstructed with a novel Deep Learning Image Reconstruction technique – initial clinical experience." Acta Radiologica Open 10, no. 4 (April 2021): 205846012110083. http://dx.doi.org/10.1177/20584601211008391.
Xu, Li, Ling Bai, and Lei Li. "The Effect of 3D Image Virtual Reconstruction Based on Visual Communication." Wireless Communications and Mobile Computing 2022 (January 5, 2022): 1–8. http://dx.doi.org/10.1155/2022/6404493.
Li, Yuting. "Design of 3D Image Visual Communication System for Automatic Reconstruction of Digital Images." Advances in Multimedia 2022 (July 30, 2022): 1–10. http://dx.doi.org/10.1155/2022/3369386.
Der volle Inhalt der QuelleDissertationen zum Thema "Visual image reconstruction"
Duraisamy, Prakash. „3D Reconstruction Using Lidar and Visual Images“. Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc177193/.
Der volle Inhalt der QuelleHe, Peng. „Image-based reconstruction and visual hull from imprecise input“. Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10005.
Der volle Inhalt der QuelleGrauman, Kristen Lorraine 1979. „A statistical image-based shape model for visual hull reconstruction and 3D structure inference“. Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87347.
Der volle Inhalt der QuelleIncludes bibliographical references (p. 69-72).
by Kristen Lorraine Grauman.
S.M.
Ozcelik, Furkan. "Déchiffrer le langage visuel du cerveau : reconstruction d'images naturelles à l'aide de modèles génératifs profonds à partir de signaux IRMf." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES073.
The great minds of humanity have always been curious about the nature of mind, brain, and consciousness. Through physical and thought experiments, they tried to tackle challenging questions about visual perception. As neuroimaging techniques were developed, neural encoding and decoding techniques provided profound understanding of how we process visual information. Advances in Artificial Intelligence and Deep Learning have also influenced neuroscientific research. With the emergence of deep generative models such as Variational Autoencoders (VAE), Generative Adversarial Networks (GAN), and Latent Diffusion Models (LDM), researchers have used these models in neural decoding tasks such as visual reconstruction of perceived stimuli from neuroimaging data. The current thesis provides two frameworks in this area of reconstructing perceived stimuli from neuroimaging data, particularly fMRI data, using deep generative models. These frameworks focus on different aspects of the visual reconstruction task than their predecessors, and hence they may bring valuable outcomes for the studies that will follow. The first study of the thesis (described in Chapter 2) uses a particular generative model called IC-GAN to capture both the semantic and the realistic aspects of visual reconstruction. The second study (described in Chapter 3) brings a new perspective on visual reconstruction by fusing decoded information from different modalities (e.g., text and image) using recent latent diffusion models. These studies achieve state-of-the-art results on their benchmarks by exhibiting high-fidelity reconstructions of different attributes of the stimuli. In both studies, we propose region-of-interest (ROI) analyses to understand the functional properties of specific visual regions using our neural decoding models. Statistical relations between ROIs and decoded latent features show that while early visual areas carry more information about low-level features (which focus on the layout and orientation of objects), higher visual areas are more informative about high-level semantic features. We also observed that ROI-optimal images generated with these visual reconstruction frameworks capture the functional selectivity properties of the ROIs examined in many prior neuroscientific studies. By providing the results of two visual reconstruction frameworks and ROI analyses, this thesis attempts to offer valuable insights for future studies in neural decoding, visual reconstruction, and neuroscientific exploration using deep learning models. The findings and contributions of the thesis may help researchers working in cognitive neuroscience and have implications for brain-computer interface applications.
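As a generic illustration of the kind of decoding step such fMRI-to-image frameworks rely on (this is not the pipeline of the thesis above), a regularized linear regression can map voxel responses to the latent features of a generative model, and the fit can be compared across regions of interest. The data, shapes, and ROI masks below are placeholders.

```python
# Generic sketch: ridge-regression decoding of generative-model latents from fMRI,
# evaluated per region of interest (ROI). Illustrative data only, not thesis code.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, n_latents = 800, 100, 5000, 512

# Placeholder data: rows are stimuli, columns are voxel responses / latent features.
X_train = rng.standard_normal((n_train, n_voxels))   # fMRI responses (training)
Z_train = rng.standard_normal((n_train, n_latents))  # latents of the seen stimuli
X_test = rng.standard_normal((n_test, n_voxels))
Z_test = rng.standard_normal((n_test, n_latents))

# Hypothetical ROI masks: boolean selectors over voxels (e.g. early vs. higher areas).
rois = {"early_visual": rng.random(n_voxels) < 0.2,
        "higher_visual": rng.random(n_voxels) < 0.2}

for name, mask in rois.items():
    decoder = Ridge(alpha=1000.0)              # one decoder per ROI
    decoder.fit(X_train[:, mask], Z_train)
    Z_pred = decoder.predict(X_test[:, mask])
    # Feature-wise correlation as a simple decoding score for this ROI.
    score = np.mean([np.corrcoef(Z_pred[:, j], Z_test[:, j])[0, 1]
                     for j in range(n_latents)])
    print(f"{name}: mean latent correlation = {score:.3f}")
```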
Anliot, Manne. "Volume Estimation of Airbags: A Visual Hull Approach." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-421.
This thesis presents a complete and fully automatic method for estimating the volume of an airbag, through all stages of its inflation, with multiple synchronized high-speed cameras.
Using recorded contours of the inflating airbag, its visual hull is reconstructed with a novel method: The intersections of all back-projected contours are first identified with an accelerated epipolar algorithm. These intersections, together with additional points sampled from concave surface regions of the visual hull, are then Delaunay triangulated to a connected set of tetrahedra. Finally, the visual hull is extracted by carving away the tetrahedra that are classified as inconsistent with the contours, according to a voting procedure.
The volume of an airbag's visual hull is always larger than the airbag's real volume. By projecting a known synthetic model of the airbag into the cameras, this volume offset is computed, and an accurate estimate of the real airbag volume is extracted.
Even though volume estimates can be computed for all camera setups, the cameras should be specially posed to achieve optimal results. Such poses are uniquely found for different airbag models with a separate, fully automatic, simulated annealing algorithm.
Satisfying results are presented for both synthetic and real-world data.
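To make the carving step described above concrete, here is a minimal sketch, not the thesis implementation: it assumes calibrated 3x4 projection matrices, binary silhouette masks, and a precomputed set of candidate hull points, and it tests only tetrahedron centroids rather than voting on full contour consistency. All function names are illustrative.

```python
# Minimal sketch: carve a Delaunay tetrahedralization against multi-view silhouettes
# and sum the surviving tetrahedron volumes (simplified visual-hull volume estimate).
import numpy as np
from scipy.spatial import Delaunay

def project(P, X):
    """Project 3D point X with a 3x4 camera matrix P; returns (col, row)."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def inside_silhouette(mask, uv):
    """True if pixel (col, row) falls on the binary silhouette mask."""
    c, r = int(round(uv[0])), int(round(uv[1]))
    h, w = mask.shape
    return 0 <= r < h and 0 <= c < w and mask[r, c]

def carve_volume(points, projections, silhouettes, min_votes=None):
    """Keep tetrahedra whose centroid is silhouette-consistent in enough views,
    then return the total volume of the kept tetrahedra."""
    tets = Delaunay(points).simplices          # (M, 4) vertex indices
    if min_votes is None:
        min_votes = len(projections)           # strict: consistent in every view
    volume = 0.0
    for tet in tets:
        a, b, c, d = points[tet]
        centroid = (a + b + c + d) / 4.0
        votes = sum(
            inside_silhouette(mask, project(P, centroid))
            for P, mask in zip(projections, silhouettes)
        )
        if votes >= min_votes:                 # voting-style consistency test
            volume += abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0
    return volume
```

As noted above, the visual-hull volume overestimates the true airbag volume, so an offset correction (e.g. via the projected synthetic model) would still be required on top of such a sketch.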
Naouai, Mohamed. "Localisation et reconstruction du réseau routier par vectorisation d'image THR et approximation des contraintes de type 'NURBS'." PhD thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-00994333.
Féraud, Thomas. "Rejeu de chemin et localisation monoculaire : application du Visual SLAM sur carte peu dense en environnement extérieur contraint." PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2011. http://tel.archives-ouvertes.fr/tel-00697028.
North, Peter R. J. "The reconstruction of visual appearance by combining stereo surfaces." Thesis, University of Sussex, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362837.
Ebrahimi, Shahin. "Contribution to automatic adjustments of vertebrae landmarks on x-ray images for 3D reconstruction and quantification of clinical indices." Thesis, Paris, ENSAM, 2017. http://www.theses.fr/2017ENAM0050/document.
Exploitation of spine radiographs, in particular for 3D spine shape reconstruction in scoliotic patients, is a prerequisite for personalized modelling. Current methods, even though robust enough to be used in clinical routine, still rely on tedious manual adjustments. In this context, this PhD thesis aims at the automated detection of specific vertebral landmarks in spine radiographs, enabling automated adjustments. In the first part, we developed an original Random Forest based framework for vertebra corner localization, applied to sagittal radiographs of both the cervical and lumbar spine regions. A rigorous evaluation confirms the robustness and high accuracy of the proposed method. In the second part, we developed an algorithm for the clinically important task of pedicle localization in the thoracolumbar region on frontal radiographs. The proposed algorithm compares favourably to similar methods from the literature while relying on less manual supervision. The last part of this PhD tackles the scarcely studied task of joint detection, identification, and segmentation of the spinous processes of cervical vertebrae in sagittal radiographs, again with high-precision performance. All three algorithmic solutions were designed around a generic framework exploiting dedicated visual feature descriptors and multi-class Random Forest classifiers, proposing a novel solution whose computational and manual-supervision burdens are low enough to aim for translation into clinical use. Overall, the presented frameworks show great potential for integration into the 3D spine reconstruction frameworks used in daily clinical routine.
Haouchine, Nazim. "Image-guided simulation for augmented reality during hepatic surgery." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10009/document.
The main objective of this thesis is to provide surgeons with tools for pre- and intra-operative decision support during minimally invasive hepatic surgery. These interventions are usually based on laparoscopic techniques or, more recently, flexible endoscopy. During such operations, the surgeon tries to remove a significant number of liver tumors while preserving the functional role of the liver. This involves defining an optimal hepatectomy, i.e. ensuring that the volume of the post-operative liver is at least 55% of the original liver and that the hepatic vasculature is preserved. Although intervention planning can now be based on preoperative patient-specific data, significant movements and deformations of the liver during surgery make this planning very difficult to use in practice. The work proposed in this thesis aims to provide augmented reality tools for intra-operative use, so that the position of tumors and hepatic vascular networks can be visualized at any time.
Books on the topic "Visual image reconstruction"
Blake, Andrew. Visual reconstruction. Cambridge, Mass.: MIT, 1987.
Huck, Friedrich O. Visual communication: An information theory approach. Boston, Mass.: Kluwer Academic, 1997.
Huck, Friedrich O. Visual communication: An information theory approach. Boston: Kluwer Academic Publishers, 1997.
Wiber, Melanie. Erect men/undulating women: The visual imagery of gender, race, and progress in reconstructive illustrations of human evolution. Waterloo, Ont.: Wilfrid Laurier University Press, 1997.
Huck, Friedrich O., Zia-ur Rahman, and Carl L. Fales. Visual Communication: An Information Theory Approach. Springer London, Limited, 2013.
McDaniel, Justin Thomas. Conclusions and Comparisons. University of Hawai'i Press, 2017. http://dx.doi.org/10.21313/hawaii/9780824865986.003.0005.
Krass, Urte. The Portuguese Restoration of 1640 and Its Global Visualization. Amsterdam University Press, 2023. http://dx.doi.org/10.5117/9789463725637.
Leskinen, Maria V., and Eugeny A. Yablokov, eds. All men and beasts, lions, eagles, quails… Anthropomorphic and Zoomorphic Representations of Nations and States in Slavic Cultural Discourse. Institute of Slavic Studies, Russian Academy of Sciences, 2020. http://dx.doi.org/10.31168/0441-1.
Athanassaki, Lucia, and Frances Titchener, eds. Plutarch's Cities. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780192859914.001.0001.
Der volle Inhalt der QuelleBuchteile zum Thema "Visual image reconstruction"
Huck, Friedrich O., Carl L. Fales und Zia-ur Rahman. „Image Gathering and Reconstruction“. In Visual Communication, 13–35. Boston, MA: Springer US, 1997. http://dx.doi.org/10.1007/978-1-4757-2568-1_2.
Der volle Inhalt der QuelleZachariah, Anagha, Sandeep Kumar Satapathy und Shruti Mishra. „Visual Image Reconstruction Using fMRI Analysis“. In Reconnoitering the Landscape of Edge Intelligence in Healthcare, 191–216. New York: Apple Academic Press, 2024. http://dx.doi.org/10.1201/9781003401841-14.
Der volle Inhalt der QuelleBruzzone, E., G. Garibotto und F. Mangili. „Three-Dimensional Surface Reconstruction Using Delaunay Triangulation in the Image Plane“. In Visual Form, 99–108. Boston, MA: Springer US, 1992. http://dx.doi.org/10.1007/978-1-4899-0715-8_11.
Der volle Inhalt der QuellePrades, Albert, und Jorge Núñez. „Improving Astrometric Measurements Using Image Reconstruction“. In Visual Double Stars: Formation, Dynamics and Evolutionary Tracks, 15–25. Dordrecht: Springer Netherlands, 1997. http://dx.doi.org/10.1007/978-94-009-1477-3_3.
Der volle Inhalt der QuelleNakai, Hiroyuki, Shuhei Yamamoto, Yasuhiro Ueda und Yoshihide Shigeyama. „High Resolution and High Dynamic Range Image Reconstruction from Differently Exposed Images“. In Advances in Visual Computing, 713–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89646-3_70.
Der volle Inhalt der QuelleLi, Hongsong, Ting Song, Zehuan Wu, Jiandong Ma und Gangyi Ding. „Reconstruction of a Complex Mirror Surface from a Single Image“. In Advances in Visual Computing, 402–12. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-14249-4_38.
Der volle Inhalt der QuelleJin, Ge, Sang-Joon Lee, James K. Hahn, Steven Bielamowicz, Rajat Mittal und Raymond Walsh. „3D Surface Reconstruction and Registration for Image Guided Medialization Laryngoplasty“. In Advances in Visual Computing, 761–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11919476_76.
Der volle Inhalt der QuelleHou, Meng. „Visual Reconstruction Design Based on Image Technology Emotion“. In Innovative Computing Vol 1 - Emerging Topics in Artificial Intelligence, 184–90. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2092-1_23.
Der volle Inhalt der QuelleYuan, Xin’an, Wei Li, Jianming Zhao, Xiaokang Yin, Xiao Li und Jianchao Zhao. „Visual Reconstruction of Irregular Crack in Austenitic Stainless Steel Based on ACFM Technique“. In Recent Development of Alternating Current Field Measurement Combine with New Technology, 99–114. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-4224-0_6.
Der volle Inhalt der QuelleDamiand, Guillaume, und David Coeurjolly. „A Generic and Parallel Algorithm for 2D Image Discrete Contour Reconstruction“. In Advances in Visual Computing, 792–801. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89646-3_78.
Der volle Inhalt der QuelleKonferenzberichte zum Thema "Visual image reconstruction"
Vaezi, Matt M., Behnam Bavarian und Glenn Healey. „Image reconstruction of IDS filter response“. In Visual Communications, '91, Boston, MA, herausgegeben von Kou-Hu Tzou und Toshio Koga. SPIE, 1991. http://dx.doi.org/10.1117/12.50350.
Der volle Inhalt der QuelleTian, Qi, Like Zhang und Jingsheng Ma. „Voting based object boundary reconstruction“. In Visual Communications and Image Processing 2005. SPIE, 2005. http://dx.doi.org/10.1117/12.633444.
Der volle Inhalt der QuelleGuo, Weihong, und Wotao Yin. „EdgeCS: edge guided compressive sensing reconstruction“. In Visual Communications and Image Processing 2010, herausgegeben von Pascal Frossard, Houqiang Li, Feng Wu, Bernd Girod, Shipeng Li und Guo Wei. SPIE, 2010. http://dx.doi.org/10.1117/12.863354.
Der volle Inhalt der QuelleLe Mestre, Gwenaelle, und Danielle Pele. „Trinocular image analysis for virtual frame reconstruction“. In Visual Communications and Image Processing '96, herausgegeben von Rashid Ansari und Mark J. T. Smith. SPIE, 1996. http://dx.doi.org/10.1117/12.233198.
Der volle Inhalt der QuelleKim, Chul-Woo, HyoJoon Kim und ChoongWoong Lee. „Image reconstruction through projection of wavelet coefficients“. In Visual Communications and Image Processing '96, herausgegeben von Rashid Ansari und Mark J. T. Smith. SPIE, 1996. http://dx.doi.org/10.1117/12.233289.
Der volle Inhalt der QuelleSun, Xi, Ying Zheng und Zengfu Wang. „Model-assisted face reconstruction based on binocular stereo“. In Visual Communications and Image Processing 2010, herausgegeben von Pascal Frossard, Houqiang Li, Feng Wu, Bernd Girod, Shipeng Li und Guo Wei. SPIE, 2010. http://dx.doi.org/10.1117/12.863269.
Der volle Inhalt der QuelleLin, Wen-Huei, Chin-Hsing Chen und Jiann-Shu Lee. „Interpolation for 3D object reconstruction using wavelet transforms“. In Visual Communications and Image Processing '95, herausgegeben von Lance T. Wu. SPIE, 1995. http://dx.doi.org/10.1117/12.206645.
Der volle Inhalt der QuelleSdigui, A., G. Barta und M. Benjelloun. „Three-dimensional object reconstruction from a monocular image“. In Visual Communications and Image Processing '94, herausgegeben von Aggelos K. Katsaggelos. SPIE, 1994. http://dx.doi.org/10.1117/12.186026.
Der volle Inhalt der QuelleBrites, Catarina, Vitor Gomes, Joao Ascenso und Fernando Pereira. „Statistical reconstruction for predictive video coding“. In 2014 Visual Communications and Image Processing (VCIP). IEEE, 2014. http://dx.doi.org/10.1109/vcip.2014.7051624.
Der volle Inhalt der QuelleTom, Brian C., und Aggelos K. Katsaggelos. „Reconstruction of a high-resolution image from multiple-degraded misregistered low-resolution images“. In Visual Communications and Image Processing '94, herausgegeben von Aggelos K. Katsaggelos. SPIE, 1994. http://dx.doi.org/10.1117/12.186041.
Der volle Inhalt der QuelleBerichte der Organisationen zum Thema "Visual image reconstruction"
Makhachashvili, Rusudan K., Svetlana I. Kovpik, Anna O. Bakhtina und Ekaterina O. Shmeltser. Technology of presentation of literature on the Emoji Maker platform: pedagogical function of graphic mimesis. [б. в.], Juli 2020. http://dx.doi.org/10.31812/123456789/3864.