A collection of scientific literature on the topic "Localisation 3D en temps réel"
Below are lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Localisation 3D en temps réel".
Journal articles on the topic "Localisation 3D en temps réel":
Hugues, N. "Poursuite et localisation Ariane : une application temps réel sur un réseau IP classique." Revue de l'Electricité et de l'Electronique, no. 06 (1999): 75. http://dx.doi.org/10.3845/ree.1999.067.
Mourlan, Lou. "La forêt dans Éducation européenne, un espace humaniste : de l’esthétique à l’éthique." Voix Plurielles 16, no. 2 (November 29, 2019): 22–36. http://dx.doi.org/10.26522/vp.v16i2.2307.
Lecuyer, A. "La réalité virtuelle : un bond technologique." European Psychiatry 29, S3 (November 2014): 561. http://dx.doi.org/10.1016/j.eurpsy.2014.09.381.
Yanatchkov, Milovann. "Modélisation par le flux." SHS Web of Conferences 47 (2018): 01006. http://dx.doi.org/10.1051/shsconf/20184701006.
Daval, Vincent, Lâmân Lelégard, and Mathieu Bredif. "Correction du flou de mouvement sur des images prises de nuit depuis un véhicule de numérisation terrestre." Revue Française de Photogrammétrie et de Télédétection, no. 215 (August 16, 2017): 53–64. http://dx.doi.org/10.52638/rfpt.2017.354.
Lê, Than Vu, and Mauro Gaio. "Visualisation 3D de terrain texturé. Préservation au niveau du pixel des qualités géométriques et colorimétriques, une méthode temps réel, innovante et simple." Revue internationale de géomatique 22, no. 3 (September 30, 2012): 461–84. http://dx.doi.org/10.3166/rig.22.461-484.
Palombi, O., V. Favier, B. Farny, and A. H. Dicko. "Vers une description sémantique et fonctionnelle du corps humain pour la simulation 3D temps-réel. Cas d’étude : mouvements élémentaires du membre inférieur." Morphologie 96, no. 314-315 (October 2012): 76. http://dx.doi.org/10.1016/j.morpho.2012.08.025.
Sagna, T., H. G. Ouedraogo, A. A. Zouré, S. Zida, R. T. Compaore, D. Kambire, et al. "Le Laboratoire à l'épreuve de la pandémie de la COVID-19 au Burkina Faso : Quels défis pour la régularité de l'offre de diagnostic." Revue Malienne d'Infectiologie et de Microbiologie 16, no. 1 (January 31, 2021): 32–37. http://dx.doi.org/10.53597/remim.v16i1.1758.
Piel, S., D. Neyens, A. Penasso, J. Sainte-Marie, and F. Souille. "Modélisation des remontées de chlorures le long d’un fleuve pour une optimisation de la gestion de la ressource." Techniques Sciences Méthodes, no. 12 (January 20, 2021): 33–51. http://dx.doi.org/10.36904/tsm/202012033.
Monnier, Fabrice, Bruno Vallet, Nicolas Paparoditis, Jean-Pierre Papelard, and Nicolas David. "Mise en cohérence de données laser mobile sur un modèle cartographique par recalage non-rigide." Revue Française de Photogrammétrie et de Télédétection, no. 202 (April 16, 2014): 27–41. http://dx.doi.org/10.52638/rfpt.2013.49.
Dissertations on the topic "Localisation 3D en temps réel":
Decrouez, Marion. "Localisation et cartographie visuelles simultanées en milieu intérieur et en temps réel." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM010/document.
In this thesis, we explore the problem of modeling an unknown environment using monocular vision for localization applications. We focus on modeling dynamic indoor environments, where many objects are likely to be moved. These movements significantly affect the structure and appearance of the environment and disrupt existing methods of visual localization. We present a new approach for modeling the environment and its evolution over time. We explicitly define the scene as a static structure plus a set of dynamic objects, where an object is a rigid entity that a user can pick up and move, and that is visually detectable. First, we show how to automatically discover new objects in a dynamic environment. Existing visual localization methods simply ignore the inconsistencies due to changes in the scene; we instead analyze these changes to extract additional information. Without any prior knowledge, an object is a set of points with coherent motion relative to the static structure of the scene. We combine two visual localization methods to compare explorations of the same environment taken at different times. The comparison detects objects that have moved between the two acquisitions. For each object, a geometric model and an appearance model are learned. Moreover, we extend the scene model while updating the metric map and the topological map of the static structure of the environment. Object discovery from motion is based on a new algorithm for multiple-structure detection in an image pair. Given a set of correspondences between two views, the RANSAC-based method extracts the different structures, corresponding to different model parameterizations, seen in the data. The method is applied to homography estimation to detect planar structures, and to fundamental-matrix estimation to detect structures that have been shifted relative to one another.
Our approach for dynamic scene modeling is applied in a new formulation of place recognition that takes into account the presence of dynamic objects in the environment. The model of a place consists of an appearance model of the static structure observed there. An object database is learned from previous observations in the environment with the motion-based object discovery method. The proposed place recognition detects the dynamic objects seen in the place and rejects false detections caused by these objects. The different methods described in this dissertation are tested on synthetic and real data. Qualitative and quantitative results are presented throughout the dissertation.
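The multiple-structure detection step described in this abstract can be illustrated with a minimal sequential-RANSAC sketch. Here 2D line fitting stands in for the homography and fundamental-matrix models actually used in the thesis, and all thresholds and iteration counts are illustrative assumptions.

```python
import random

def fit_line(p, q):
    # Line through two points as (a, b, c) with a*x + b*y + c = 0, unit-normalized
    # so that |a*x + b*y + c| is the point-to-line distance.
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    n = (a * a + b * b) ** 0.5
    if n == 0:
        return None
    c = -(a * x1 + b * y1)
    return (a / n, b / n, c / n)

def ransac_one(points, iters=200, tol=0.1):
    # Standard RANSAC for a single model: sample a minimal set, count inliers,
    # keep the consensus set of the best hypothesis.
    best_inliers = []
    for _ in range(iters):
        line = fit_line(*random.sample(points, 2))
        if line is None:
            continue
        a, b, c = line
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

def detect_structures(points, min_inliers=4):
    # Sequential multi-structure detection: fit one model, remove its inliers,
    # and repeat until no sufficiently supported structure remains.
    structures, remaining = [], list(points)
    while len(remaining) >= min_inliers:
        inliers = ransac_one(remaining)
        if len(inliers) < min_inliers:
            break
        structures.append(inliers)
        remaining = [p for p in remaining if p not in inliers]
    return structures
```

Swapping `fit_line` for a homography or fundamental-matrix estimator (minimal samples of 4 or 7 correspondences, with a reprojection or epipolar-distance inlier test) yields the planar-structure and shifted-structure variants the abstract mentions.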
Abuhadrous, Iyad. "Système embarqué temps réel de localisation et de modélisation 3D par fusion multi-capteur." Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://pastel.archives-ouvertes.fr/pastel-00001118.
Mouragnon, Etienne. "Reconstruction 3D et localisation simultanée de caméras mobiles : une approche temps-réel par ajustement de faisceaux local." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2007. http://tel.archives-ouvertes.fr/tel-00925661.
Mouragnon, Etienne. "Reconstruction 3D et localisation simultanée de caméras mobiles : une approche temps-réel par ajustement de faisceaux local." Phd thesis, Clermont-Ferrand 2, 2007. http://www.theses.fr/2007CLF21799.
Picard, Quentin. "Proposition de mécanismes d'optimisation des données pour la perception temps-réel dans un système embarqué hétérogène." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG039.
The development of autonomous systems creates an increasing need for environment perception in embedded systems. Autonomous cars, drones, and mixed-reality devices have limited form factors and restricted power budgets for real-time performance; for instance, those use cases have budgets in the range of 300W-10W, 15W-10W and 10W-10mW respectively. This thesis focuses on autonomous and mobile systems with a budget of 10mW to 15W, using image sensors and an inertial measurement unit (IMU). Simultaneous Localization And Mapping (SLAM) provides accurate and robust perception of the environment in real time, without prior knowledge, for autonomous and mobile systems. The thesis aims at real-time execution of the whole SLAM system, composed of advanced perception functions from localization to 3D reconstruction, with restricted hardware resources. In this context, two main questions are raised to answer the challenges of the literature. How can the resource requirements of advanced perception functions be reduced? What is the SLAM pipeline partitioning for a heterogeneous system that integrates several computing units, from the chip embedded in the imager, to near-sensor processing (FPGA), to the embedded platform (ARM, embedded GPU)? The first issue addressed in the thesis is the need to reduce the hardware resources used by the SLAM pipeline, from the sensor output to the 3D reconstruction. In this regard, the work described in the manuscript provides two main contributions. The first introduces processing in the imager's embedded chip that affects the image characteristics by reducing the dynamic range. The second affects the management of the image flow injected into the SLAM pipeline through near-sensor processing.
The first contribution aims at reducing the memory footprint of SLAM algorithms, evaluating the impact of pixel dynamic reduction on the accuracy and robustness of real-time localization and 3D reconstruction. The experiments show that the input data can be reduced by up to 75%, corresponding to 2 bits per pixel, while maintaining accuracy similar to the 8-bits-per-pixel baseline. These results were obtained by evaluating the accuracy and robustness of four SLAM algorithms on two databases. The second contribution aims at reducing the amount of data injected into SLAM with a decimation strategy, called adaptive filtering, that controls the input frame rate. Data are initially injected at a constant rate (20 frames per second), which consumes energy, memory and bandwidth and increases computational complexity. Can this amount of data be reduced? In SLAM, the accuracy and the number of operations depend on the movement of the system. Using the linear and angular accelerations from the IMU, data are injected based on the movement of the system; these key images are injected with the adaptive filtering (AF) approach. Although the results depend on the difficulty of the chosen database, the experiments show that AF allows the decimation of up to 80% of the images while keeping localization and reconstruction errors low, similar to the baseline. This study shows that, in the embedded context, peak memory consumption is reduced by up to 92%.
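The adaptive filtering (AF) strategy summarized above can be sketched as a motion-gated frame selector: a frame is injected into SLAM only when the IMU reports significant motion, with a maximum-gap safeguard so that tracking is not starved during slow motion. The thresholds, the IMU representation, and the safeguard are illustrative assumptions, not the thesis's exact parameters.

```python
from dataclasses import dataclass

@dataclass
class ImuSample:
    lin_acc: float   # linear acceleration magnitude (m/s^2)
    ang_vel: float   # angular velocity magnitude (rad/s)

def adaptive_filter(frames, imu, lin_thresh=0.5, ang_thresh=0.2, max_gap=10):
    # Keep a frame when the IMU reports significant motion, or when too many
    # consecutive frames were skipped (to bound drift during slow motion).
    kept, gap = [], 0
    for frame, s in zip(frames, imu):
        moving = s.lin_acc > lin_thresh or s.ang_vel > ang_thresh
        if moving or gap >= max_gap:
            kept.append(frame)
            gap = 0
        else:
            gap += 1
    return kept
```

With these illustrative settings, a fully static sequence is decimated to roughly one frame in eleven, while a sequence with sustained motion passes through untouched, which matches the spirit of the up-to-80% decimation reported in the abstract.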
Loesch, Angélique. "Localisation d'objets 3D industriels à l'aide d'un algorithme de SLAM contraint au modèle." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC059/document.
In the industry domain, applications such as quality control, automation of complex tasks, or maintenance support with Augmented Reality (AR) could greatly benefit from visual tracking of 3D objects. However, this technology is under-exploited because it is difficult to provide ease of deployment, localization quality, and genericity simultaneously. Most existing solutions involve a complex or expensive deployment of motion-capture sensors, or require human supervision to simplify the 3D model; and most tracking solutions are restricted to textured or polyhedral objects to achieve an accurate camera pose estimation. Tracking arbitrary objects is a challenging task due to the large variety of object shapes and appearances. Industrial objects may have sharp edges, or occluding contours that correspond to non-static, viewpoint-dependent edges; they may be textured or textureless. Moreover, some applications require handling large-amplitude motions as well as object occlusions, tasks that common model-based tracking methods do not always address. These approaches exploit 3D features extracted from a model, which are matched with 2D features in the images of a video stream; however, the accuracy and robustness of the camera localization depend on the visibility of the object as well as on the motion of the camera. To better constrain the localization when the object is static, recent solutions rely on environment features reconstructed online in addition to the model ones. These approaches combine SLAM (Simultaneous Localization And Mapping) and model-based tracking by using constraints from the 3D model of the object of interest. Constraining SLAM algorithms with a 3D model yields a drift-free localization. However, such approaches are not generic, since they are only suited to textured or polyhedral objects.
Furthermore, using the 3D model to constrain the optimization process may generate high memory consumption, and limit the optimization to a temporal window of a few cameras. In this thesis, we propose a solution that fulfills the requirements of ease of deployment, localization quality, and genericity. This solution, based on a visual keyframe-based constrained SLAM, only exploits an RGB camera and a geometric CAD model of the static object of interest. An RGB camera is preferred over an RGBD sensor, since the latter imposes limits on the volume, reflectiveness, or absorptiveness of the object, and on the lighting conditions. A geometric CAD model is also preferred over a textured model, since textures can hardly be considered stable over time (deterioration, marks, ...) and may vary across instances of one manufactured object; furthermore, textured CAD models are currently not widely spread. Contrary to previous methods, the presented approach handles both polyhedral and curved objects by dynamically extracting 3D contour points from a model rendered on the GPU. This extraction is integrated as a structure constraint into the constrained bundle adjustment of a SLAM algorithm. Moreover, we propose different formalisms of this constraint to reduce the memory consumption of the optimization process. These formalisms correspond to hybrid structure/trajectory constraints that use the output camera poses of a model-based tracker: they take into account the structure information given by the 3D model while relying on the formalism of trajectory constraints. The proposed solution is real-time, accurate, and robust to occlusions or sudden motions. It has been evaluated on synthetic and real sequences of different kinds of objects. The results show that the accuracy achieved on the camera trajectory is sufficient to ensure a solution perfectly adapted for high-quality Augmented Reality experiences for industry.
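The model-constrained bundle adjustment described in this abstract minimizes a reprojection cost plus a structure term that pulls reconstructed points onto the CAD model. A minimal sketch of such a cost function follows; the `model_dist` callable (e.g. a signed-distance query on the mesh), the simplified pinhole projection, and the weighting `w` are assumptions for illustration, not the thesis's exact formalism.

```python
import numpy as np

def project(K, R, t, X):
    # Pinhole projection of a 3D point X into pixel coordinates.
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def constrained_cost(K, poses, points, obs, model_dist, w=1.0):
    """Sum of squared reprojection errors plus a model-structure term.

    poses:      list of (R, t) camera poses
    points:     list of reconstructed 3D points
    obs:        list of (cam_idx, pt_idx, uv) image observations
    model_dist: callable X -> distance of X to the CAD model surface
                (a hypothetical query, e.g. a signed-distance function)
    """
    c = 0.0
    for ci, pi, uv in obs:
        R, t = poses[ci]
        r = project(K, R, t, points[pi]) - np.asarray(uv)
        c += r @ r                      # reprojection term
    for X in points:
        c += w * model_dist(X) ** 2     # structure (model) constraint
    return c
```

A real implementation would minimize this cost with a sparse Levenberg-Marquardt solver over a keyframe window; the sketch only shows how the model constraint enters the objective alongside the classical reprojection residuals.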
Royer, Eric. "Cartographie 3D et localisation par vision monoculaire pour la navigation autonome d'un robot mobile." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2006. http://tel.archives-ouvertes.fr/tel-00698908.
Zarader, Pierre. "Transcranial ultrasound tracking of a neurosurgical microrobot." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS054.
With the aim of treating brain tumors that are difficult to access with current surgical tools, Robeauté is developing an innovative microrobot to navigate deep brain areas with minimal invasiveness. The aim of this thesis was to develop and validate a transcranial ultrasound-based tracking system for the microrobot, in order to implement robotic commands and thus guarantee both the safety and the effectiveness of the intervention. The proposed approach consists of positioning three ultrasound emitters on the patient's head and embedding an ultrasound receiver on the microrobot. Knowing the speed of sound in biological tissue and the skull thickness crossed, the distances from the emitters to the receiver can be estimated by time-of-flight measurements, and the receiver's 3D position deduced by trilateration. A proof of concept was first carried out using a skull phantom of constant thickness, demonstrating submillimeter localization accuracy. The system was then evaluated using a calvaria phantom whose thickness and speed of sound in front of each emitter were deduced from a CT scan. The system demonstrated a mean localization accuracy of 1.5 mm, i.e. a degradation of 1 mm compared with tracking through the constant-thickness skull phantom, explained by the uncertainty introduced by the heterogeneous shape of the calvaria.
Finally, three preclinical tests, without the possibility of assessing localization error, were carried out: (i) a post-mortem test on a human, (ii) a post-mortem test on a ewe, and (iii) an in vivo test on a ewe. Further improvements to the tracking system have been proposed, such as (i) the use of CT-scan-based transcranial ultrasound propagation simulation to account for skull heterogeneities, (ii) the miniaturization of the ultrasound sensor embedded in the microrobot, and (iii) the integration of ultrasound imaging to visualize the local vascularization around the microrobot, thereby reducing the risk of lesions and detecting possible pathological angiogenesis.
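The time-of-flight trilateration described in this abstract can be sketched in closed form: each time of flight gives an emitter-to-receiver distance, and three spheres intersect in two mirror points, one of which is selected. The speeds of sound, the skull-layer correction, and the choice of mirror solution (`z_sign`) below are illustrative assumptions, not the thesis's calibrated values.

```python
import numpy as np

def tof_to_distance(t, c=1540.0, skull=0.0, c_skull=2800.0):
    # Convert a time of flight to a distance, optionally correcting for a
    # skull layer of known thickness crossed at a faster speed of sound
    # (numeric values are typical textbook figures, used as assumptions).
    t_tissue = t - skull / c_skull
    return skull + c * t_tissue

def trilaterate(p1, p2, p3, r1, r2, r3, z_sign=-1.0):
    # Closed-form trilateration: build a local frame with p1 at the origin,
    # p2 on the x-axis, and p3 in the x-y plane, then solve for (x, y, z).
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = p3 - p1 - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2 - 2 * i * x) / (2 * j)
    z2 = r1**2 - x**2 - y**2
    z = z_sign * np.sqrt(max(z2, 0.0))  # two mirror solutions; pick one side
    return p1 + x * ex + y * ey + z * ez
```

With emitters fixed on the head, the mirror ambiguity is resolved by anatomy: only one of the two intersection points lies inside the skull, which is what the fixed `z_sign` stands in for here.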
Holländer, Matthias. "Synthèse géométrique temps réel." Thesis, Paris, ENST, 2013. http://www.theses.fr/2013ENST0009/document.
Real-time geometry synthesis is an emerging topic in computer graphics. Today's interactive 3D applications have to face a variety of challenges to fulfill the consumer's request for more realism and high-quality images. Often, visual effects and quality known from offline-rendered feature films or special effects in movie productions are the ultimate goal, but are hard to achieve in real time. This thesis offers real-time solutions by exploiting the Graphics Processing Unit (GPU) and efficient geometry processing. In particular, a variety of topics related to classical fields in computer graphics, such as subdivision surfaces, global illumination and anti-aliasing, are discussed, and new approaches and techniques are presented.
Baele, Xavier. "Génération et rendu 3D temps réel d'arbres botaniques." Doctoral thesis, Universite Libre de Bruxelles, 2003. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211314.
Books on the topic "Localisation 3D en temps réel":
Eberly, David H. 3D Game Engine Architecture. San Diego: Elsevier Science, 2009.
Eberly, David H. 3D game engine design: A practical approach to real-time computer graphics. 2nd ed. San Francisco: Morgan Kaufmann, 2007.
Eberly, David H. 3D Game Engine Architecture: Engineering Real-Time Applications with Wild Magic (The Morgan Kaufmann Series in Interactive 3D Technology). Morgan Kaufmann, 2004.
Eberly, David H. 3D Game Engine Design : A Practical Approach to Real-Time Computer Graphics (The Morgan Kaufmann Series in Interactive 3D Technology). Morgan Kaufmann, 2000.
Eberly, David H. 3D Game Engine Design, Second Edition: A Practical Approach to Real-Time Computer Graphics (The Morgan Kaufmann Series in Interactive 3D Technology). 2nd ed. Morgan Kaufmann, 2006.
Eberly, David H. 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics. Elsevier Science & Technology Books, 2000.
Eberly, David H. 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics. Taylor & Francis Group, 2000.
Eberly, David H. 3D Game Engine Design: A Practical Approach to Real-Time Computer Graphics, 3rd Edition. Taylor & Francis Group, 2018.