Academic literature on the topic 'Imagerie augmentée'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Imagerie augmentée.'
Journal articles on the topic "Imagerie augmentée"
Meier, Walter N., Michael L. Van Woert, and Cheryl Bertoia. "Evaluation of operational SSM/I ice-concentration algorithms." Annals of Glaciology 33 (2001): 102–8. http://dx.doi.org/10.3189/172756401781818509.
Lin, Xin, and Arthur Y. Hou. "Evaluation of Coincident Passive Microwave Rainfall Estimates Using TRMM PR and Ground Measurements as References." Journal of Applied Meteorology and Climatology 47, no. 12 (December 1, 2008): 3170–87. http://dx.doi.org/10.1175/2008jamc1893.1.
Chancia, Robert, Jan van Aardt, Sarah Pethybridge, Daniel Cross, and John Henderson. "Predicting Table Beet Root Yield with Multispectral UAS Imagery." Remote Sensing 13, no. 11 (June 2, 2021): 2180. http://dx.doi.org/10.3390/rs13112180.
Logaldo, Mara. "Augmented Bodies: Functional and Rhetorical Uses of Augmented Reality in Fashion." Pólemos 10, no. 1 (April 1, 2016): 125–41. http://dx.doi.org/10.1515/pol-2016-0007.
Mihara, Masahito, Hiroaki Fujimoto, Noriaki Hattori, Hironori Otomune, Yuta Kajiyama, Kuni Konaka, Yoshiyuki Watanabe, et al. "Effect of Neurofeedback Facilitation on Poststroke Gait and Balance Recovery." Neurology 96, no. 21 (April 20, 2021): e2587–e2598. http://dx.doi.org/10.1212/wnl.0000000000011989.
Neumann, Ulrich, Suya You, Jinhui Hu, Bolan Jiang, and Ismail Oner Sebe. "Visualizing Reality in an Augmented Virtual Environment." Presence: Teleoperators and Virtual Environments 13, no. 2 (April 2004): 222–33. http://dx.doi.org/10.1162/1054746041382366.
Gomes, José Duarte Cardoso, Mauro Jorge Guerreiro Figueiredo, Lúcia da Graça Cruz Domingues Amante, and Cristina Maria Cardoso Gomes. "Augmented Reality in Informal Learning Environments." International Journal of Creative Interfaces and Computer Graphics 7, no. 2 (July 2016): 39–55. http://dx.doi.org/10.4018/ijcicg.2016070104.
Gawehn, Matthijs, Rafael Almar, Erwin W. J. Bergsma, Sierd de Vries, and Stefan Aarninkhof. "Depth Inversion from Wave Frequencies in Temporally Augmented Satellite Video." Remote Sensing 14, no. 8 (April 12, 2022): 1847. http://dx.doi.org/10.3390/rs14081847.
Kuny, S., H. Hammer, and A. Thiele. "CNN Based Vehicle Track Detection in Coherent SAR Imagery: An Analysis of Data Augmentation." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B1-2022 (May 30, 2022): 93–98. http://dx.doi.org/10.5194/isprs-archives-xliii-b1-2022-93-2022.
Bernardes, Sergio, Margueritte Madden, Ashurst Walker, Andrew Knight, Nicholas Neel, Akshay Mendki, Dhaval Bhanderi, Andrew Guest, Shannon Healy, and Thomas Jordan. "Emerging Geospatial Technologies in Environmental Research, Education, and Outreach." Geosfera Indonesia 5, no. 3 (December 30, 2020): 352. http://dx.doi.org/10.19184/geosi.v5i3.20719.
Full textDissertations / Theses on the topic "Imagerie augmentée"
Poirier, Stéphane. "Estimation de pose omnidirectionnelle dans un contexte de réalité augmentée." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28703/28703.pdf.
Camera pose estimation is a fundamental problem of augmented reality: it enables the registration of a model to reality. An accurate estimate of the pose is often critical in infrastructure engineering. Omnidirectional images cover a larger field of view than the planar images commonly used in AR. This property can be beneficial to pose estimation; however, no existing work presents results clearly showing accuracy gains. Our objective is therefore to quantify the accuracy of omnidirectional pose estimation and to test it in practice. We propose a pose estimation method for omnidirectional images and measure its accuracy using automated simulations. Our results show that the large field of view of omnidirectional images increases pose accuracy compared to poses from planar images. We also tested our method on data from real environments and discuss the challenges and limitations of its practical use.
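The accuracy-versus-field-of-view simulation described in this abstract can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual code: it assumes a unit-sphere bearing camera model, Gaussian noise on the measured bearings, and a generic nonlinear least-squares pose solver, and it reports how the mean translation error behaves as the simulated field of view grows.

```python
# Minimal sketch: pose accuracy vs. field of view, by simulation (assumed
# camera model and noise; not the method implemented in the thesis).
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def project(points_w, rvec, t):
    """World points -> unit bearing vectors in the camera frame."""
    pts_c = R.from_rotvec(rvec).apply(points_w) + t
    return pts_c / np.linalg.norm(pts_c, axis=1, keepdims=True)

def residuals(pose, points_w, bearings):
    return (project(points_w, pose[:3], pose[3:]) - bearings).ravel()

def simulate(fov_deg, n_pts=50, noise=0.002, trials=100):
    errs = []
    half = np.radians(fov_deg / 2)
    for _ in range(trials):
        # Scatter landmarks inside a cone of the given field of view.
        theta = rng.uniform(0, half, n_pts)       # angle from optical axis
        phi = rng.uniform(0, 2 * np.pi, n_pts)
        depth = rng.uniform(2, 10, n_pts)
        pts_c = np.stack([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)], axis=1) * depth[:, None]
        t_true = np.array([0.1, -0.2, 0.3])
        pts_w = pts_c - t_true                    # ground truth: R = I, t = t_true
        bearings = project(pts_w, np.zeros(3), t_true)
        bearings += noise * rng.standard_normal(bearings.shape)
        sol = least_squares(residuals, np.zeros(6), args=(pts_w, bearings))
        errs.append(np.linalg.norm(sol.x[3:] - t_true))
    return np.mean(errs)

for fov in (60, 120, 240, 360):
    print(f"FOV {fov:3d} deg -> mean translation error {simulate(fov):.4f}")
```

Under these assumptions, the wider fields of view constrain the pose better and the mean error drops, which is the qualitative effect the thesis quantifies.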
Maman, Didier. "Recalage de modèles tridimensionnels sur des images réelles : application à la modélisation interactive d'environnement par des techniques de réalité augmentée." Paris, ENMP, 1998. http://www.theses.fr/1998ENMP0820.
Mouktadiri, Ghizlane. "Angiovision - Pose d'endoprothèse aortique par angionavigation augmentée." PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00943465.
Crespel, Thomas. "Optical and software tools for the design of a new transparent 3D display." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0366.
We live in exciting times where new types of displays are made possible, and current challenges focus on enhancing the user experience. As examples, we witness the emergence of curved, volumetric, head-mounted, autostereoscopic, and transparent displays, among others, with ever more complex sensors and algorithms that enable sophisticated interactions. This thesis aims at contributing to the creation of such novel displays. In three concrete projects, we combine optical and software tools to address specific applications, with the ultimate goal of designing a three-dimensional display. Each of these projects led to a working prototype based on picoprojectors, cameras, optical elements, and custom software. In a first project, we investigated spherical displays: they are more suitable for visualizing spherical data than regular flat 2D displays; however, existing solutions are costly and difficult to build because they require tailored optics. We propose a low-cost multitouch spherical display that uses only off-the-shelf and 3D-printed elements to make it more accessible and reproducible. Our solution uses a focus-free projector and an optical system to cover a sphere from the inside, infrared finger tracking for multitouch interaction, and custom software to link both. We compensate for the low-cost hardware with software calibrations and corrections. We then extensively studied wedge-shaped light guides, in which we see great potential and which became the central component of the rest of our work. Such light guides were initially devised for flat and compact projection-based displays, but in this project we exploit them in an acquisition context. We seek to image constrained locations that are not easily accessible with regular cameras due to the lack of space in front of the object of interest. Our idea is to fold the imaging distance into a wedge guide using prismatic elements. With our prototype, we validated various applications in the archaeological field. The skills and expertise we acquired during both projects allowed us to design a new transparent autostereoscopic display. Our solution overcomes some limitations of augmented-reality displays by allowing a user to see both a direct view of the real world and a stereoscopic, view-dependent augmentation without any wearable device or tracking. The principal idea is to use a wedge light guide, a holographic optical element, and several projectors, each generating a different viewpoint. Our current prototype has five viewpoints, and more can be added. This new display has a wide range of potential applications in the augmented reality field.
Meshkat, Alsadat Shabnam. "Analysis of camera pose estimation using 2D scene features for augmented reality applications." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30281.
Augmented reality (AR) has recently made a huge impact on field engineers and workers in the construction industry, as well as on the way they interact with architectural plans. AR superimposes the 3D model of a building onto the 2D image, not only as the big picture but also as an intricate representation of what is going to be built. In order to insert a 3D model, the camera has to be localized with respect to its surroundings. Camera localization consists of finding the exterior parameters of the camera (i.e., its position and orientation) with respect to the viewed scene and its characteristics. In this thesis, camera pose estimation methods using circle-ellipse and straight-line correspondences are investigated. Circles and lines are two of the geometric features most commonly present in structures and buildings. Based on the relationship between the 3D features and their corresponding 2D data detected in the image, the position and orientation of the camera are estimated.
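The underlying principle, estimating pose from matched 3D features and their 2D detections, can be illustrated with a standard PnP solver. The sketch below is a simplification of the circle/line formulations studied in the thesis: it reduces the features to points (e.g., centres of ellipses fitted with cv2.fitEllipse, standing in for projected circles), and all coordinates, intrinsics, and poses are made-up illustrative values.

```python
# Minimal sketch: camera pose from 2D-3D point correspondences (a
# simplification of the circle-ellipse / line formulations in the thesis).
import numpy as np
import cv2

# 3D positions of circular fixtures in the building frame (metres) - assumed.
object_points = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [1.2, 0.8, 0.0],
                          [0.0, 0.8, 0.0], [0.6, 0.4, 0.3], [0.3, 0.1, 0.5]])

K = np.array([[800.0, 0.0, 320.0],   # camera intrinsics (fx, fy, cx, cy)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                   # assume an undistorted image

# Synthesize the 2D detections from a known ground-truth pose so the example
# is self-consistent; in practice these come from ellipse/line detection.
rvec_true = np.array([0.05, -0.1, 0.02])
tvec_true = np.array([-0.5, -0.3, 3.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    Rmat, _ = cv2.Rodrigues(rvec)          # rotation, world -> camera
    cam_pos = (-Rmat.T @ tvec).ravel()     # camera position in the world frame
    print("recovered camera position:", np.round(cam_pos, 3))
```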
Ferretti, Gilbert. "Endoscopie virtuelle des bronches : études pré-cliniques et cliniques." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE19001.
Fabre, Diandra. "Retour articulatoire visuel par échographie linguale augmentée : développements et application clinique." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT076/document.
In the framework of speech therapy for articulatory disorders associated with tongue misplacement, providing visual feedback can be very useful for both the therapist and the patient, as the tongue is not a naturally visible articulator. In recent years, ultrasound imaging has been successfully applied to speech therapy in English-speaking countries, as reported in several case studies. The assumption that visual articulatory biofeedback may facilitate the rehabilitation of the patient is supported by studies on the links between speech production and perception. During speech therapy sessions, the patient seems to better understand his or her tongue movements, despite the poor quality of the image due to inherent noise and the lack of information about other speech articulators. In this thesis, we develop the concept of augmented lingual ultrasound: we propose two approaches to improve the raw ultrasound image and describe a first clinical application of this device. The first approach focuses on tongue tracking in ultrasound images. We propose a method based on supervised machine learning, in which we model the relationship between the intensities of all the pixels of the image and the contour coordinates. The dimensionality of the images and contours is reduced using principal component analysis, and a neural network models the relationship between the two reduced representations. We developed speaker-dependent and speaker-independent implementations and evaluated their performance as a function of the number of manually annotated contours used as training data. We obtained an error of 1.29 mm for the speaker-dependent model with only 80 annotated images, which is better than the performance of the EdgeTrak reference method based on active contours. The second approach aims to automatically animate an articulatory talking head from the ultrasound images. This talking head is the avatar of a reference speaker and reveals the external and internal structures of the vocal tract (palate, pharynx, teeth, etc.). First, we build a mapping model between ultrasound images and tongue control parameters acquired on the reference speaker. We then adapt this model to new speakers, referred to as source speakers. This adaptation is performed with the Cascaded Gaussian Mixture Regression (C-GMR) technique, based on a joint model of the ultrasound data of the reference speaker, the control parameters of the talking head, and adaptation ultrasound data of the source speaker. This approach is compared to a direct GMR regression between the source speaker data and the control parameters of the talking head. We show that the C-GMR approach achieves the best compromise between the amount of adaptation data and prediction quality. We also evaluate the generalization capability of the C-GMR approach and show that prior information from the reference speaker helps the model generalize to articulatory configurations of the source speaker unseen during the adaptation phase. Finally, we present preliminary results of a clinical application of augmented ultrasound imaging to a population of patients after partial glossectomy. We evaluate the use of real-time visual feedback of the patient's tongue and the use of sequences recorded with a speech therapist to illustrate the targeted articulation. Standard speech therapy assessments are conducted after each series of sessions. The first results show an improvement in the patients' performance, especially for tongue placement.
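The first approach lends itself to a compact illustration. The sketch below is not the thesis's implementation: it assumes made-up image and contour dimensions and generic scikit-learn components, but it follows the described pipeline of PCA on the pixel intensities, PCA on the contour coordinates, and a small neural network mapping one reduced representation to the other.

```python
# Minimal sketch of the described contour-tracking pipeline (assumed shapes
# and hyperparameters; random stand-in data instead of annotated ultrasound).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N, H, W, n_contour_pts = 80, 64, 128, 30      # 80 annotated frames, as in the abstract
X = rng.random((N, H * W))                    # flattened pixel intensities
Y = rng.random((N, 2 * n_contour_pts))        # flattened (x, y) contour coordinates

pca_img = PCA(n_components=30).fit(X)         # reduce the images
pca_ctr = PCA(n_components=8).fit(Y)          # reduce the contours

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
net.fit(pca_img.transform(X), pca_ctr.transform(Y))

# Inference on a new frame: image -> image latent -> contour latent -> contour.
new_frame = rng.random((1, H * W))
contour = pca_ctr.inverse_transform(net.predict(pca_img.transform(new_frame)))
print(contour.reshape(-1, 2)[:3])             # first three predicted (x, y) points
```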
Agustinos, Anthony. "Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS033/document.
Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical aggression. While this surgery is very beneficial for the patient, it is difficult to perform: the complexity of the surgical act is increased compared with conventional surgery. This complexity is partly due to the manipulation of the surgical instruments and the viewing of the surgical scene (including the restricted field of view of a conventional endoscope). The decisions of the surgeon could be improved by identifying critical or non-visible areas of interest in the surgical scene. My research aimed to combine robotics, computer vision, and fluorescence to address these problems: fluorescence imaging provides additional visual information to assist the surgeon in determining areas to operate on or to avoid (for example, visualization of the cystic duct during cholecystectomy), while robotics provides accuracy and efficiency of the surgeon's gesture, as well as a "more intuitive" visualization and tracking of the surgical scene. The combination of these two technologies will help guide and secure the surgical gesture. A first part of this work consisted in extracting visual information from both imaging modalities (laparoscopy/fluorescence). Methods for real-time 2D/3D localization of laparoscopic surgical instruments in the laparoscopic image and of anatomical targets in the fluorescence image were designed and developed. A second part consisted in exploiting the bimodal visual information to develop control laws for a robotic endoscope holder and a robotic instrument holder. Visual servoing controls of a robotic endoscope holder to track one or more instruments in the laparoscopic image, or a target of interest in the fluorescence image, were implemented. In order to control a robotic instrument holder with the visual information provided by the imaging system, a calibration method based on the 3D localization of the surgical instruments was also developed. This multimodal environment was evaluated quantitatively on the test bench and on anatomical specimens. Ultimately, this work will be integrated within lightweight, not rigidly linked robotic architectures, using comanipulation robots with more sophisticated controls such as force feedback. Such an "augmentation" of the surgeon's viewing and action capabilities could help optimize the management of the patient.
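The visual-servoing idea, commanding the endoscope holder so that a tracked instrument stays at the image centre, can be illustrated with the classic image-based control law for a point feature. This is the textbook (Chaumette-style) formulation, not the controller developed in the thesis; the depth and gain values are assumed.

```python
# Minimal image-based visual servoing (IBVS) sketch for one point feature:
# compute the camera twist that drives the tracked instrument tip toward
# the image centre. Standard point-feature interaction matrix; assumed values.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving feature s toward s_star."""
    L = interaction_matrix(s[0], s[1], Z)
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# Tracked instrument tip in normalized image coordinates; target = centre.
s = np.array([0.15, -0.08])
v = ibvs_velocity(s, np.zeros(2), Z=0.1)   # ~10 cm instrument-to-camera depth
print("commanded camera twist:", np.round(v, 4))
```

Applied in closed loop, the law exponentially decreases the feature error; the robotic endoscope holder then keeps the instrument (or the fluorescent target) in view without manual repositioning.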
Thomas, Vincent. "Modélisation 3D pour la réalité augmentée : une première expérimentation avec un téléphone intelligent." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27904/27904.pdf.
Recently, a new genre of software applications has emerged that allows the general public to browse their immediate environment using their smartphone: Mobile Augmented Reality (MAR) applications. The growing popularity of this type of application is driven by the fast evolution of smartphones. These ergonomic mobile platforms embed several pieces of equipment useful for deploying MAR (i.e., digital camera, GPS receiver, accelerometers, digital compass, and now gyroscope). In order to achieve a strong augmentation of reality in terms of user immersion and interaction, a 3D model of the real environment is generally required. The 3D model can be used for three different purposes in these MAR applications: 1) to manage the occlusions between real and virtual objects; 2) to provide accurate camera pose (position/orientation) calculation; 3) to support the augmentation and interactions. However, the availability of such 3D models is limited, which prevents MAR applications from being used anywhere at any time. In order to overcome this constraint, this thesis aims at devising a new approach, adapted to the specific context of MAR applications, for the simple and fast production of 3D models. This approach was implemented on the iPhone 3G platform and evaluated according to precision, rapidity, simplicity, and efficiency criteria. Results of the evaluation underlined the capacity of the proposed approach to provide, in about 3 minutes, a simple 3D model of a building using a smartphone, while achieving an accuracy on the order of 5 meters.
Barberio, Manuel. "Real-time intraoperative quantitative assessment of gastrointestinal tract perfusion using hyperspectral imaging (HSI)." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAJ120.
Anastomotic leak (AL) is a severe complication in surgery. Adequate local perfusion is fundamental to promote anastomotic healing and reduce the risk of AL. However, clinical criteria are unreliable for evaluating bowel perfusion. Consequently, a tool allowing objective intraoperative assessment of intestinal viability is desirable. In this regard, fluorescence angiography (FA) has been explored. In spite of promising results in clinical trials, FA assessment is subjective; hence the efficacy of FA is unclear. Quantitative FA has previously been introduced; however, it is limited by the need to inject a fluorophore. Hyperspectral imaging (HSI) is a promising optical imaging technique coupling a spectroscope with a photo camera, allowing contrast-free, real-time, quantitative tissue analysis. The intraoperative usability of HSI has been limited by its static images. We developed hyperspectral-based enhanced reality (HYPER) to allow precise intraoperative perfusion assessment. This thesis describes the steps of the development and validation of HYPER.
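The kind of pixel-wise quantification that HSI enables can be illustrated with a simple ratiometric index computed from a reflectance cube. This is a didactic proxy, not the HYPER pipeline: the cube is synthetic, and the chosen wavelengths are merely plausible haemoglobin-sensitive and reference bands.

```python
# Minimal sketch: a pixel-wise oxygenation proxy from a hyperspectral
# reflectance cube (synthetic data; illustrative band choices only).
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(500, 1000, 5)                       # nm, one band per 5 nm
cube = rng.uniform(0.05, 0.9, (64, 64, len(wavelengths)))   # reflectance cube

def band(cube, wavelengths, nm):
    """Reflectance image of the band closest to the requested wavelength."""
    return cube[:, :, np.argmin(np.abs(wavelengths - nm))]

# Convert reflectance to absorbance and compare a deoxy-Hb-sensitive band
# with a near-isosbestic reference band (values chosen for illustration).
A_660 = -np.log10(band(cube, wavelengths, 660))   # deoxy-Hb absorbs strongly
A_800 = -np.log10(band(cube, wavelengths, 800))   # near-isosbestic reference
oxygenation_index = 1.0 - A_660 / (A_660 + A_800 + 1e-9)

print("mean index:", float(oxygenation_index.mean()))
# In an intraoperative setting, such a map would be colour-coded and
# registered onto the live laparoscopic video, as HYPER does.
```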
Books on the topic "Imagerie augmentée"
Briscoe, Robert Eamon. Superimposed Mental Imagery. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198717881.003.0008.
Book chapters on the topic "Imagerie augmentée"
Sales Barros, Ellton, and Nelson Neto. "Classification Procedure for Motor Imagery EEG Data." In Augmented Cognition: Intelligent Technologies, 201–11. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91470-1_17.
Santhaseelan, Varun, and Vijayan K. Asari. "Moving Object Detection and Tracking in Wide Area Motion Imagery." In Augmented Vision and Reality, 49–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/8612_2012_9.
Alam, Mohammad S., and Adel Sakla. "Automatic Target Recognition in Multispectral and Hyperspectral Imagery Via Joint Transform Correlation." In Augmented Vision and Reality, 179–206. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/8612_2012_5.
Li, Xiaofei, Lele Xu, Li Yao, and Xiaojie Zhao. "A Novel HCI System Based on Real-Time fMRI Using Motor Imagery Interaction." In Foundations of Augmented Cognition, 703–8. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39454-6_75.
Pérez-Zapata, A. F., A. F. Cardona-Escobar, J. A. Jaramillo-Garzón, and Gloria M. Díaz. "Deep Convolutional Neural Networks and Power Spectral Density Features for Motor Imagery Classification of EEG Signals." In Augmented Cognition: Intelligent Technologies, 158–69. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-91470-1_14.
Loureiro, Sandra Maria Correia, Carolina Correia, and João Guerreiro. "The Role of Mental Imagery as Driver to Purchase Intentions in a Virtual Supermarket." In Augmented Reality and Virtual Reality, 17–28. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68086-2_2.
Chen, Mei Lin, Lin Yao, and Ning Jiang. "Music Imagery for Brain-Computer Interface Control." In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 293–300. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_21.
Dhindsa, Kiret, Dean Carcone, and Suzanna Becker. "A Brain-Computer Interface Based on Abstract Visual and Auditory Imagery: Evidence for an Effect of Artistic Training." In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 313–32. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_23.
Qiu, Zhaoyang, Shugeng Chen, Brendan Z. Allison, Jie Jia, Xingyu Wang, and Jing Jin. "Differences in Motor Imagery Activity Between the Paretic and Non-paretic Hands in Stroke Patients Using an EEG BCI." In Augmented Cognition. Enhancing Cognition and Behavior in Complex Human Environments, 378–88. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-58625-0_28.
Hubschman, J. P. "Réalité augmentée pour le segment postérieur." In Imagerie en ophtalmologie, 477–85. Elsevier, 2014. http://dx.doi.org/10.1016/b978-2-294-73702-2.00026-5.
Conference papers on the topic "Imagerie augmentée"
Ventura, Jonathan, and Tobias Hollerer. "Outdoor mobile localization from panoramic imagery." In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6092399.
Gao, Zhenzhen, Luciano Nocera, and Ulrich Neumann. "Fusing oblique imagery with augmented aerial LiDAR." In Proceedings of the 20th International Conference. New York, New York, USA: ACM Press, 2012. http://dx.doi.org/10.1145/2424321.2424381.
Nguyen, Lam, Francois Koenig, and Kelly Sherbondy. "Augmented reality using ultra-wideband radar imagery." In SPIE Defense, Security, and Sensing, edited by Kenneth I. Ranney and Armin W. Doerry. SPIE, 2011. http://dx.doi.org/10.1117/12.883285.
Conover, Damon M., Brittany Beidleman, Ryan McAlinden, and Christoph C. Borel-Donohue. "Visualizing UAS-collected imagery using augmented reality." In SPIE Defense + Security, edited by Timothy P. Hanratty and James Llinas. SPIE, 2017. http://dx.doi.org/10.1117/12.2262864.
Meixner, Philipp, and Franz Leberl. "Augmented internet maps with property information from aerial imagery." In Proceedings of the 18th SIGSPATIAL International Conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1869790.1869848.
Korah, Thommen, and Yun-Ta Tsai. "Urban canvas: Unfreezing street-view imagery with semantically compressed LIDAR pointclouds." In 2011 IEEE International Symposium on Mixed and Augmented Reality. IEEE, 2011. http://dx.doi.org/10.1109/ismar.2011.6143897.
Full textNijholt, Anton. "Augmented Reality: Beyond Interaction." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1002058.
Chabot, Samuel, Jaimie Drozdal, Matthew Peveler, Yalun Zhou, Hui Su, and Jonas Braasch. "A Collaborative, Immersive Language Learning Environment Using Augmented Panoramic Imagery." In 2020 6th International Conference of the Immersive Learning Research Network (iLRN). IEEE, 2020. http://dx.doi.org/10.23919/ilrn47897.2020.9155140.
Reports on the topic "Imagerie augmentée"
Mapping the Spatial Distribution of Poverty Using Satellite Imagery in the Philippines. Asian Development Bank, March 2021. http://dx.doi.org/10.22617/spr210076-2.
A Guidebook on Mapping Poverty through Data Integration and Artificial Intelligence. Asian Development Bank, May 2021. http://dx.doi.org/10.22617/spr210131-2.