Dissertations on the topic "Imagerie augmentée"
Consult the top 34 dissertations for research on the topic "Imagerie augmentée".
Poirier, Stéphane. "Estimation de pose omnidirectionnelle dans un contexte de réalité augmentée." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28703/28703.pdf.
Camera pose estimation is a fundamental problem of augmented reality, and enables registration of a model to reality. An accurate estimate of the pose is often critical in infrastructure engineering. Omnidirectional images cover a larger field of view than the planar images commonly used in AR. This property can be beneficial to pose estimation. However, no existing work presents results clearly showing accuracy gains. Our objective is therefore to quantify the accuracy of omnidirectional pose estimation and to test it in practice. We propose a pose estimation method for omnidirectional images and measured its accuracy using automated simulations. Our results show that the large field of view of omnidirectional images increases pose accuracy compared to poses computed from planar images. We also tested our method on data from real environments and discuss the challenges and limitations of its practical use.
Maman, Didier. "Recalage de modèles tridimensionnels sur des images réelles : application à la modélisation interactive d'environnement par des techniques de réalité augmentée." Paris, ENMP, 1998. http://www.theses.fr/1998ENMP0820.
Mouktadiri, Ghizlane. "Angiovision - Pose d'endoprothèse aortique par angionavigation augmentée." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00943465.
Crespel, Thomas. "Optical and software tools for the design of a new transparent 3D display." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0366.
We live in exciting times where new types of displays are made possible, and current challenges focus on enhancing the user experience. As examples, we witness the emergence of curved, volumetric, head-mounted, autostereoscopic, or transparent displays, among others, with more complex sensors and algorithms that enable sophisticated interactions. This thesis aims at contributing to the creation of such novel displays. In three concrete projects, we combine optical and software tools to address specific applications, with the ultimate goal of designing a three-dimensional display. Each of these projects led to a working prototype based on picoprojectors, cameras, optical elements, and custom software. In a first project, we investigated spherical displays: they are more suitable for visualizing spherical data than regular flat 2D displays, yet existing solutions are costly and difficult to build because they require tailored optics. We propose a low-cost multitouch spherical display that uses only off-the-shelf and 3D-printed elements to make it more accessible and reproducible. Our solution uses a focus-free projector and an optical system to cover a sphere from the inside, infrared finger tracking for multitouch interaction, and custom software to link both; software calibrations and corrections compensate for the low-cost hardware. We then extensively studied wedge-shaped light guides, in which we see great potential and which became the central component of the rest of our work. Such light guides were initially devised for flat and compact projection-based displays, but in this project we exploit them in a context of acquisition. We seek to image constrained locations that are not easily accessible with regular cameras due to the lack of space in front of the object of interest. Our idea is to fold the imaging distance into a wedge guide thanks to prismatic elements. With our prototype, we validated various applications in the archaeological field. The skills and expertise acquired during both projects allowed us to design a new transparent autostereoscopic display. Our solution overcomes some limitations of augmented reality displays by allowing a user to see both a direct view of the real world and a stereoscopic, view-dependent augmentation without any wearable or tracking. The principal idea is to use a wedge light guide, a holographic optical element, and several projectors, each generating a different viewpoint. Our current prototype has five viewpoints, and more can be added. This new display has a wide range of potential applications in the augmented reality field.
Meshkat, Alsadat Shabnam. "Analysis of camera pose estimation using 2D scene features for augmented reality applications." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30281.
Augmented reality (AR) has recently made a huge impact on field engineers and workers in the construction industry, as well as on the way they interact with architectural plans. AR superimposes the 3D model of a building onto the 2D image, not only as the big picture but also as an intricate representation of what is going to be built. In order to insert a 3D model, the camera has to be localized with respect to its surroundings. Camera localization consists of finding the exterior parameters (i.e. position and orientation) of the camera with respect to the viewed scene and its characteristics. In this thesis, camera pose estimation methods using circle-ellipse and straight-line correspondences have been investigated. Circles and lines are two of the geometrical features most commonly present in structures and buildings. Based on the relationship between the 3D features and their corresponding 2D data detected in the image, the position and orientation of the camera are estimated.
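To illustrate the general principle of pose estimation from 2D-3D correspondences described in this abstract, the minimal sketch below recovers a camera pose from point correspondences with OpenCV's solvePnP. The thesis itself works with circle-ellipse and line correspondences; the model points, intrinsics and pose below are made-up placeholders.

```python
import numpy as np
import cv2

# Known 3D reference points on the structure (metres, in the model frame).
object_points = np.array([
    [0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [4.0, 3.0, 0.0], [0.0, 3.0, 0.0],
    [2.0, 1.5, 1.0], [1.0, 2.5, 0.5], [3.0, 0.5, 1.5],
], dtype=np.float64)

# Pinhole intrinsics, assumed pre-calibrated (placeholder values).
K = np.array([[800.0, 0.0, 480.0],
              [0.0, 800.0, 320.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # no lens distortion in this sketch

# Simulate the 2D detections by projecting the model with a known pose.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([-1.0, 0.5, 8.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the camera pose (rotation + translation) from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("recovered rotation:\n", np.round(R, 3))
print("recovered translation:", np.round(tvec.ravel(), 3))
```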
Ferretti, Gilbert. "Endoscopie virtuelle des bronches : études pré-cliniques et cliniques." Université Joseph Fourier (Grenoble), 1999. http://www.theses.fr/1999GRE19001.
Fabre, Diandra. "Retour articulatoire visuel par échographie linguale augmentée : développements et application clinique." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT076/document.
In the framework of speech therapy for articulatory disorders associated with tongue misplacement, visual feedback can be very useful for both the therapist and the patient, as the tongue is not a naturally visible articulator. In recent years, ultrasound imaging has been successfully applied to speech therapy in English-speaking countries, as reported in several case studies. The assumption that visual articulatory biofeedback may facilitate the rehabilitation of the patient is supported by studies on the links between speech production and perception. During speech therapy sessions, the patient seems to better understand his or her tongue movements, despite the poor quality of the image due to inherent noise and the lack of information about other speech articulators. In this thesis we develop the concept of augmented lingual ultrasound. We propose two approaches to improve the raw ultrasound image, and describe a first clinical application of this device. The first approach focuses on tongue tracking in ultrasound images. We propose a method based on supervised machine learning, in which we model the relationship between the intensity of all the pixels of the image and the contour coordinates. The dimensionality of the images and of the contours is reduced using principal component analysis, and a neural network models their relationship. We developed speaker-dependent and speaker-independent implementations and evaluated the performance as a function of the amount of manually annotated contours used as training data. We obtained an error of 1.29 mm for the speaker-dependent model with only 80 annotated images, which is better than the performance of the EdgeTrak reference method based on active contours. The second approach aims to automatically animate an articulatory talking head from the ultrasound images. This talking head is the avatar of a reference speaker that reveals the external and internal structures of the vocal tract (palate, pharynx, teeth, etc.). First, we build a mapping model between ultrasound images and tongue control parameters acquired on the reference speaker. We then adapt this model to new speakers, referred to as source speakers. This adaptation is performed with the Cascaded Gaussian Mixture Regression (C-GMR) technique, based on a joint model of the ultrasound data of the reference speaker, the control parameters of the talking head, and adaptation ultrasound data of the source speaker. This approach is compared to a direct GMR regression between the source speaker data and the control parameters of the talking head. We show that the C-GMR approach achieves the best compromise between the amount of adaptation data and prediction quality. We also evaluate the generalization capability of the C-GMR approach and show that prior information from the reference speaker helps the model generalize to articulatory configurations of the source speaker unseen during the adaptation phase. Finally, we present preliminary results of a clinical application of augmented ultrasound imaging to a population of patients after partial glossectomy. We evaluate the use of real-time visual feedback of the patient's tongue and the use of sequences recorded with a speech therapist to illustrate the targeted articulation. Standard speech therapy assessments are conducted after each series of sessions. The first results show an improvement of the patients' performance, especially for tongue placement.
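The tongue-tracking approach described above (pixel intensities reduced by PCA, then mapped to contour coordinates by a neural network) can be outlined as follows. This is only an illustrative sketch with synthetic data and arbitrary dimensions, not the thesis's actual pipeline or parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins: N ultrasound frames (flattened pixels) and their
# manually annotated tongue contours (K 2D points, flattened).
rng = np.random.default_rng(0)
N, H, W, K = 200, 64, 64, 30
images = rng.random((N, H * W))          # pixel intensities
contours = rng.random((N, 2 * K))        # annotated contour coordinates

# Reduce the dimensionality of both spaces with PCA.
pca_img = PCA(n_components=30).fit(images)
pca_ctr = PCA(n_components=8).fit(contours)
X = pca_img.transform(images)
Y = pca_ctr.transform(contours)

# A small neural network models the mapping between the two reduced spaces.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X, Y)

# Predict the contour of a new frame and map it back to image coordinates.
new_frame = rng.random((1, H * W))
predicted_contour = pca_ctr.inverse_transform(
    net.predict(pca_img.transform(new_frame))).reshape(K, 2)
print(predicted_contour[:3])
```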
Agustinos, Anthony. "Navigation augmentée d'informations de fluorescence pour la chirurgie laparoscopique robot-assistée." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS033/document.
Laparoscopic surgery faithfully reproduces the principles of conventional surgery with minimal physical aggression. While this surgery is very beneficial for the patient, it is a difficult surgery in which the complexity of the surgical act is increased compared with conventional surgery. This complexity is partly due to the manipulation of surgical instruments and to the visualization of the surgical scene (including the restricted field of view of a conventional endoscope). The decisions of the surgeon could be improved by identifying critical or non-visible areas of interest in the surgical scene. My research aimed to combine robotics, computer vision and fluorescence to provide an answer to these problems: fluorescence imaging provides additional visual information to assist the surgeon in determining areas to operate on or to be avoided (for example, visualization of the cystic duct during cholecystectomy), while robotics provides accuracy and efficiency of the surgeon's gesture as well as a more intuitive visualization and tracking of the surgical scene. The combination of these two technologies will help guide and secure the surgical gesture. A first part of this work consisted in extracting visual information from both imaging modalities (laparoscopy/fluorescence). Real-time 2D/3D localization methods for laparoscopic surgical instruments in the laparoscopic image and for anatomical targets in the fluorescence image were designed and developed. A second part consisted in exploiting the bimodal visual information to develop control laws for a robotic endoscope holder and a robotic instrument holder. Visual servoing controls of a robotic endoscope holder to track one or more instruments in the laparoscopic image or a target of interest in the fluorescence image were implemented. In order to control a robotic instrument holder with the visual information provided by the imaging system, a calibration method based on the 3D localization of surgical instruments was also developed. This multimodal environment was evaluated quantitatively on a test bench and on anatomical specimens. Ultimately, this work will be integrated within lightweight, not rigidly linked, robotic architectures using comanipulation robots with more sophisticated controls such as force feedback. Such an increase in the viewing capabilities and in the surgeon's scope of action could help optimize the management of the patient.
Thomas, Vincent. "Modélisation 3D pour la réalité augmentée : une première expérimentation avec un téléphone intelligent." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27904/27904.pdf.
Recently, a new genre of software applications has emerged allowing the general public to browse their immediate environment using their smartphone: Mobile Augmented Reality (MAR) applications. The growing popularity of this type of application is driven by the fast evolution of smartphones. These ergonomic mobile platforms embed several pieces of equipment useful for deploying MAR (digital camera, GPS receiver, accelerometers, digital compass and now gyroscope). In order to achieve a strong augmentation of reality in terms of user immersion and interactions, a 3D model of the real environment is generally required. The 3D model can be used for three different purposes in these MAR applications: 1) to manage the occlusions between real and virtual objects; 2) to provide accurate camera pose (position/orientation) calculation; 3) to support the augmentation and interactions. However, the availability of such 3D models is limited, which prevents MAR applications from being used anywhere at any time. In order to overcome this constraint, this research thesis is aimed at devising a new approach adapted to the specific context of MAR applications and dedicated to the simple and fast production of 3D models. This approach was implemented on the iPhone 3G platform and evaluated according to precision, rapidity, simplicity and efficiency criteria. Results of the evaluation underlined the capacity of the proposed approach to provide, in about 3 minutes, a simple 3D model of a building using a smartphone, while achieving an accuracy of 5 meters or better.
Barberio, Manuel. "Real-time intraoperative quantitative assessment of gastrointestinal tract perfusion using hyperspectral imaging (HSI)." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAJ120.
Anastomotic leak (AL) is a severe complication in surgery. Adequate local perfusion is fundamental to promote anastomotic healing and reduce the risk of AL. However, clinical criteria are unreliable for evaluating bowel perfusion. Consequently, a tool allowing intestinal viability to be objectively assessed intraoperatively is desirable. In this regard, fluorescence angiography (FA) has been explored. In spite of promising results in clinical trials, FA assessment is subjective, hence the efficacy of FA is unclear. Quantitative FA has previously been introduced; however, it is limited by the need to inject a fluorophore. Hyperspectral imaging (HSI) is a promising optical imaging technique coupling a spectroscope with a photo camera, allowing for contrast-free, real-time, and quantitative tissue analysis. The intraoperative usability of HSI is limited by the fact that it provides static images. We developed hyperspectral-based enhanced reality (HYPER) to allow for precise intraoperative perfusion assessment. This thesis describes the steps of the development and validation of HYPER.
Hammami, Houda. "Guidance of radioembolization procedures in the context of interventional oncology." Thesis, Rennes 1, 2021. http://www.theses.fr/2021REN1S121.
Radioembolization is a minimally invasive intervention performed to treat liver cancer by administering radioactive microspheres. In order to optimize radioembolization outcomes, the procedure is carried out in two sessions: a pretreatment assessment intervention, mainly performed to locate the injection site, assess the microsphere distribution and perform a dosimetry evaluation, and a treatment intervention performed to inject the estimated proper dose of radioactive microspheres at the located injection site. Due to the complexity of the hepatic vasculature, interventional radiologists carefully manipulate the catheter during both interventions under X-ray image guidance and resort to contrast media injection in order to highlight the vessels. In this thesis, we propose a novel guidance strategy that promises simpler and more accurate catheter navigation during both the pretreatment assessment and the treatment interventions. The proposed navigation system processes pre- and intraoperative images to achieve intraoperative image fusion through a rigid registration technique. This approach is designed to 1) assist access to the celiac trunk, 2) assist access to the injection site and 3) automatically retrieve the injection site during the treatment intervention. Since the liver undergoes breathing-induced motion, we also propose an approach that provides a dynamic overlay of the projected 3D vessels onto fluoroscopy.
Devaux, Jean-Clément. "Perception multicapteur : Etalonnage extrinsèque de caméra 3D, télémètre laser et caméra conventionnelle : Application au déplacement autonome et téléopéré d'un robot mobile." Thesis, Evry-Val d'Essonne, 2014. http://www.theses.fr/2014EVRY0043/document.
This work concerns multisensor perception for indoor mobile robotics and especially addresses the use of low-cost 3D cameras together with the robot's onboard perception system (usually a laser rangefinder and conventional cameras). Each individual sensor suffers from significant weaknesses, such as the limited field of view of low-cost 3D cameras or the planar-only detection of laser rangefinders. Using a 3D camera in addition to the laser rangefinder makes it possible to detect obstacles missed by the rangefinder while keeping the advantage of the wide field of view of common laser rangefinders. 3D cameras are also very useful for adding visual augmentations to the visual data in teleoperation control interfaces. Multisensor perception based on a laser rangefinder, a 3D camera and a conventional camera is only possible when the extrinsic parameters are known with an accuracy suited to the application. Beyond accuracy, the calibration process has to be simple enough to be repeatable, particularly when sensors are remounted; calibration has to be carried out regularly because of mechanical slack. The robotics community often relies on approximate calibration because the calibration process is tedious. In this work, we propose four new methods for extrinsic calibration. Two of these methods were designed to significantly improve the simplicity and usability of the calibration process.
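As a generic illustration of what an extrinsic calibration between two range sensors computes (not one of the four methods proposed in the thesis), the sketch below estimates the rigid transform between paired 3D points observed in two sensor frames using the Kabsch/Procrustes method; the point sets, pose and noise level are synthetic.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t such that R @ P_i + t ~ Q_i
    (Kabsch/Procrustes method on paired 3D points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic example: the same targets observed in the 3D-camera frame (P)
# and in the laser-rangefinder frame (Q).
rng = np.random.default_rng(1)
P = rng.random((20, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.5])
Q = P @ R_true.T + t_true + 0.001 * rng.standard_normal((20, 3))  # small noise

R_est, t_est = rigid_transform(P, Q)
print(np.round(R_est, 3))
print(np.round(t_est, 3))
```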
Nauroy, Julien. "Traitements interactifs d'images radiologiques et leurs applications cliniques." Phd thesis, Université Paris Sud - Paris XI, 2010. http://tel.archives-ouvertes.fr/tel-00596516.
Lefebvre, Michael. "Appariement automatique de modèles 3D à des images omnidirectionnelles pour des applications en réalité augmentée urbaine." Thesis, Université Laval, 2013. http://www.theses.ulaval.ca/2013/30251/30251.pdf.
One of the greatest challenges of augmented reality is to perfectly synchronize real and virtual information to give the illusion that the virtual information is an integral part of the real world. To do so, we have to precisely estimate the user's position and orientation and, even more difficult, this has to be done in real time. Augmentation of outdoor scenes is particularly problematic because no technology is accurate enough to obtain the user's position with the level of accuracy required for engineering applications. To avoid this problem, we focused on augmenting panoramic images taken at a fixed position. The goal of this project is to propose a robust and automatic initialization method to calculate the pose of urban omnidirectional panoramas, in order to obtain a perfect alignment between the panoramas and the virtual information.
Wang, Xiyao. "Augmented reality environments for the interactive exploration of 3D data." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG052.
Exploratory visualization of 3D data is fundamental in many scientific domains. Traditionally, experts use a PC workstation and rely on mouse and keyboard to interactively adjust the view and observe the data. This setup provides immersion through interaction: users can precisely control the view and the parameters, but it does not provide any depth cues, which can limit the comprehension of large and complex 3D data. Virtual or augmented reality (V/AR) setups, in contrast, provide visual immersion with stereoscopic views. Although their benefits have been proven, several limitations restrict their application in existing workflows, including high setup/maintenance needs, difficulties of precise control and, more importantly, the separation from traditional analysis tools. To benefit from both sides, we investigated a hybrid setting combining an AR environment with a traditional PC to provide both interactive and visual immersion for 3D data exploration. We closely collaborated with particle physicists to understand their general working process and visualization requirements, which motivated our design. First, building on our observations and discussions with physicists, we built a prototype that supports fundamental tasks for exploring their datasets. This prototype treated the AR space as an extension of the PC screen and allowed users to freely interact with each using the mouse, so that experts could benefit from the visual immersion while using analysis tools on the PC. An observational study with 7 physicists at CERN validated the feasibility of such a hybrid setting and confirmed its benefits. We also found that the large canvas of AR and walking around to observe the data in AR had great potential for data exploration. However, the design of mouse interaction in AR and the use of PC widgets in AR needed improvement. Second, based on the results of the first study, we decided against intensive use of flat widgets in AR. We then wondered whether using the mouse for navigating in AR is problematic compared to high degrees-of-freedom (DOF) input, and investigated whether the match or mismatch of dimensionality between input and output devices plays an important role in users' performance. Results of user studies (comparing the performance of a mouse, a space mouse, and a tangible tablet paired with the screen or the AR space) did not show that the (mis)match was important. We thus concluded that dimensionality is not a critical point to consider, which suggests that users are free to choose any input suitable for a specific task. Moreover, our results suggested that the mouse remains an efficient tool compared to high-DOF input. We therefore validated our design choice of keeping the mouse as the primary input for the hybrid setting, while other modalities only serve as an addition for specific use cases. Next, to support interaction and to keep the background information while users walk around to observe the data in AR, we proposed adding a mobile device. We introduced a novel approach that augments tactile interaction with pressure sensing for 3D object manipulation and view navigation. Results showed that this method could efficiently improve accuracy, with limited influence on completion time. We thus believe that it is useful for visualization purposes, where high accuracy is usually demanded.
Finally, we summarized all our findings and proposed an envisioned setup for a realistic data exploration scenario that makes use of a PC workstation, an AR headset, and a mobile device. The work presented in this thesis shows the potential of combining a PC workstation with AR environments to improve the process of 3D data exploration and confirms its feasibility, all of which will hopefully inspire future designs that seamlessly bring immersive visualization to existing scientific workflows.
Seeliger, Barbara. "Évaluation de la perfusion viscérale et anastomotique par réalité augmentée basée sur la fluorescence." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAJ048.
The fluorescence-based enhanced reality approach is used to quantify fluorescent signal dynamics and superimpose the perfusion cartography onto laparoscopic images in real time. A colonic ischemia model was chosen to differentiate between different types of ischemia and to determine the extent of an ischemic zone in the different layers of the colonic wall. The evaluation of fluorescence dynamics combined with a machine learning approach made it possible to distinguish between arterial and venous ischemia with a good prediction rate. In the second study, quantitative perfusion assessment showed that the extent of ischemia was significantly larger on the mucosal side and may be underestimated with an exclusive analysis of the serosal side. Two further studies revealed that fluorescence imaging can guide the surgeon in real time during minimally invasive adrenal surgery, and that quantitative software-based fluorescence analysis facilitates the distinction between vascularized and ischemic segments.
Bano, Jordan. "Modélisation et correction des déformations du foie dues à un pneumopéritoine : application au guidage par réalité augmentée en chirurgie laparoscopique." Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD010/document.
Augmented reality can provide surgeons with the positions of critical structures, such as vessels, during an intervention. However, the 3D models displayed during a laparoscopic surgery intervention do not match reality because of the deformations induced by the pneumoperitoneum. The aim of this thesis is to correct these deformations in order to provide a realistic liver model during the intervention. We propose to deform the preoperative liver model according to an intraoperative acquisition of the liver's anterior surface. A deformation field between the preoperative and intraoperative models is computed according to the geodesic distance to anatomical landmarks. Moreover, a biomechanical simulation is performed to predict the position of the abdomino-thoracic cavity, which is used as boundary conditions. The evaluation of this method shows that the position error of the liver and its vessels is reduced to 1 cm.
Bauer, Armelle. "Modélisation anatomique utilisateur-spécifique et animation temps-réel : Application à l'apprentissage de l'anatomie." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM071/document.
To ease the complex task of anatomy learning, there exist many ways to represent and structure anatomy: illustrations, books, cadaver dissections and 3D models. However, it is difficult to understand and analyse anatomy in motion, which is essential for medical students. We present the "Living Book of Anatomy" (LBA), an original and innovative tool for learning anatomy. For a specific user, we superimpose a 3D anatomical model (skin, skeleton, muscles and viscera) onto the user's color image and animate it following the user's movements. We present a real-time, mirror-like augmented reality (AR) system in which a Kinect is used to capture body motions. The first innovation of our work is the identification of the user's body measurements to register our 3D anatomical model. We propose two different registration methods. The first one runs in real time and uses affine transformations attached to rigid frames positioned on each joint given by the Kinect body-tracking skeleton, in order to deform the 3D anatomical model by skinning to fit the user's measurements. The second method needs a few minutes to register the anatomy and is divided into three parts: the skin is deformed using the Kinect body-tracking skeleton and the Kinect partial point cloud; from the deformed skin and strict anatomical rules, the skeleton is registered; lastly, the soft tissues are deformed to completely fill the space between the registered skeleton and the skin. Secondly, we want to capture the user's motion realistically and in real time. To do so, we need to reproduce the motion of anatomical structures, which is a complex task due to the noisy and often partial Kinect data. We propose the use of anatomical rules concerning body articulations (angular limits and degrees of freedom) to constrain the Kinect-captured motion in order to obtain plausible motions; a Kalman filter is used to smooth the resulting motion capture. Lastly, to embed visual style and interaction, we use a full-body reproduction to present general knowledge of human anatomy and its different joints, and a lower limb as the structure of interest to highlight specific anatomical phenomena such as muscular activity. All these tools have been integrated into a working system detailed in this thesis. We validated our system by presenting it as a live demo at different conferences and through user studies conducted with students and professionals from different backgrounds.
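To give an idea of the motion-smoothing step mentioned above, here is a minimal constant-velocity Kalman filter applied to one noisy joint coordinate; the frame rate, noise covariances and trajectory are illustrative assumptions, not the values used in the thesis.

```python
import numpy as np

# Constant-velocity Kalman filter for one joint coordinate; the same filter
# can be applied independently to y and z, or stacked into a larger state.
dt = 1.0 / 30.0                                   # assumed Kinect frame period
F = np.array([[1.0, dt], [0.0, 1.0]])             # state transition (pos, vel)
H = np.array([[1.0, 0.0]])                        # we only measure position
Q = 1e-3 * np.eye(2)                              # process noise
R = np.array([[5e-2]])                            # measurement noise (sensor jitter)

x = np.zeros(2)                                   # initial state
P = np.eye(2)                                     # initial covariance

def kalman_step(x, P, z):
    # Predict the next state.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy measurement z.
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 2 * np.pi, 120))    # smooth joint trajectory
for z in truth + 0.2 * rng.standard_normal(truth.shape):
    x, P = kalman_step(x, P, np.array([z]))
print("last smoothed position:", x[0])
```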
Berhouet, Julien. "Optimisation de l'implantation glénoïdienne d'une prothèse d'épaule : de la reconstitution 3D à la réalité augmentée." Thesis, Tours, 2016. http://www.theses.fr/2016TOUR4016/document.
In this thesis, two methods of operative assistance for the positioning of the glenoid component of a shoulder prosthesis are addressed. They have in common a preliminary 3D reconstruction of the pathological glenoid to be implanted. A primarily clinical approach, with practical studies, is proposed for the Patient-Specific Implant technology currently used in orthopaedics. A primarily prospective and technological approach is then proposed with augmented reality, so far untapped in the field of orthopaedic surgery. The feasibility of this technology, as well as the tools and the procedure for its use, were studied. Upstream, a new type of information to feed the augmented reality application is proposed, with a mathematical model of the normal glenoid obtained by multiple linear regression. The second goal is to build a database of generic normal glenoids, which can be used as a reference for the reconstruction of a pathological glenoid to be treated, after a morphing step.
Tykkälä, Tommi. "Suivi de caméra image en temps réel base et cartographie de l'environnement." Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00933813.
Hueber, Thomas. "Reconstitution de la parole par imagerie ultrasonore et vidéo de l'appareil vocal : vers une communication parlée silencieuse." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://pastel.archives-ouvertes.fr/pastel-00005707.
Molinier, Thierry. "Approche coopérative pour l'acquisition et l'observation de formes tridimensionnelles." Phd thesis, Université de Bourgogne, 2009. http://tel.archives-ouvertes.fr/tel-00465787.
Leebmann, Johannes. "Dreidimensionale Skizzen in Erweiterter Realität." Phd thesis, Université Louis Pasteur - Strasbourg I, 2005. http://tel.archives-ouvertes.fr/tel-00275264.
Braux-Zin, Jim. "Contributions aux problèmes de l'étalonnage extrinsèque d'affichages semi-transparents pour la réalité augmentée et de la mise en correspondance dense d'images." Thesis, Clermont-Ferrand 1, 2014. http://www.theses.fr/2014CLF1MM13/document.
Augmented reality is the process of inserting virtual elements into a real scene, observed through a screen. Augmented reality systems can take different forms to reach the desired balance between three criteria: accuracy, latency and robustness. Three main components can be identified: localization, reconstruction and display. The contributions of this thesis focus on display and reconstruction. Most augmented reality systems use non-transparent screens as they are widely available. However, for critical applications such as surgery or driving assistance, the user cannot ever be isolated from reality. We answer this problem by proposing a new "augmented tablet" system with a semi-transparent screen. Such a system needs a suitable calibration scheme: to correctly align the displayed augmentations and reality, one needs to know at every moment the poses of the user and of the observed scene with respect to the screen. Two tracking devices (user and scene) are thus necessary, and the system calibration aims to compute the poses of those devices with respect to the screen. The calibration process set up in this thesis is as follows: the user indicates the apparent on-screen projections of reference points from a known 3D object; the poses to estimate should then minimize the 2D on-screen distance between those projections and the ones computed by the system. This is a non-convex problem that is difficult to solve without a sound initialization. We develop a direct estimation method by computing the extrinsic parameters of virtual cameras, defined by their optical centers, which coincide with the user positions, and by their common focal plane, which is the screen plane. The user-entered projections are then the 2D observations of the reference points in those virtual cameras. A symmetrical reasoning allows one to define virtual cameras centered on the reference points and "looking at" the user positions. Those initial estimates can then be refined with a bundle adjustment. Meanwhile, 3D reconstruction is based on the triangulation of matches between images. Those matches can be sparse, when computed by detection and description of image features, or dense, when computed through the minimization of a cost function over the whole image. A dense correspondence field is preferable because it makes it possible to reconstruct a 3D surface, which is especially useful for realistic handling of occlusions in augmented reality. However, such a field is usually estimated with variational methods that minimize a convex cost function using local information. Those methods are accurate but subject to local minima, and thus limited to small deformations. In contrast, sparse matches can be made very robust by using adequately discriminative descriptors. We propose to combine the advantages of those two approaches by adding a feature-based term to a dense variational method. It helps prevent the optimization from falling into local minima without degrading the final accuracy. Our feature-based term is suited to features with non-integer coordinates and can handle point or line-segment matches while implicitly filtering out false matches. We also introduce comprehensive handling of occlusions to support large deformations; in particular, we have adapted and generalized a local method for detecting self-occlusions. Results on 2D optical flow and wide-baseline stereo disparity estimation are competitive with the state of the art, with a simpler and usually faster method.
This shows that our contributions enable new applications of variational methods without degrading their accuracy. Moreover, the weak coupling between the components allows great flexibility and genericity. This is why we were also able to transpose the proposed method to the problem of non-rigid surface registration, where it outperforms state-of-the-art methods.
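As a purely schematic illustration of the combined cost described above (illustrative notation and weights, not the exact functional from the thesis), a dense variational matching energy augmented with a sparse feature-based term can be written as

E(\mathbf{w}) = \int_{\Omega} \Psi\big(|I_2(\mathbf{x}+\mathbf{w}(\mathbf{x})) - I_1(\mathbf{x})|^2\big)\, d\mathbf{x} \;+\; \alpha \int_{\Omega} \Psi\big(\lVert \nabla \mathbf{w}(\mathbf{x}) \rVert^2\big)\, d\mathbf{x} \;+\; \beta \sum_i \rho\big(\lVert \mathbf{w}(\mathbf{x}_i) - \mathbf{d}_i \rVert^2\big)

where \mathbf{w} is the dense displacement field, the first two terms are the usual data and smoothness terms with a robust penalty \Psi, and the last term attracts the flow at feature locations \mathbf{x}_i toward the sparse feature displacements \mathbf{d}_i, with \rho a robust function that implicitly down-weights false matches; \alpha and \beta are illustrative weighting parameters.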
Tamaazousti, Mohamed. "L'ajustement de faisceaux contraint comme cadre d'unification des méthodes de localisation : application à la réalité augmentée sur des objets 3D." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2013. http://tel.archives-ouvertes.fr/tel-00881206.
Повний текст джерелаLacoche, Jérémy. "Plasticity for user interfaces in mixed reality." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S034/document.
This PhD thesis focuses on plasticity for Mixed Reality (MR) user interfaces, which include Virtual Reality (VR), Augmented Reality (AR) and Augmented Virtuality (AV) applications. Today, there is a growing interest in this kind of application thanks to the generalization of devices such as head-mounted displays, depth sensors and tracking systems. Mixed Reality applications can be used in a wide variety of domains such as entertainment, data visualization, education and training, and engineering. Plasticity refers to the capacity of an interactive system to withstand variations of both the system's physical characteristics and the environment while preserving its usability. Usability continuity of a plastic interface is ensured whatever the context of use. Therefore, we propose a set of software models, integrated into a software solution named 3DPlasticToolkit, which allow any developer to create plastic MR user interfaces. First, we propose three models for modeling adaptation sources: a model for the description of display and interaction devices, a model for the description of users and their preferences, and a model for the description of data structure and semantics. These adaptation sources are taken into account by an adaptation process that deploys application components adapted to the context of use thanks to a scoring system. The deployment of these application components lets the system adapt the application's interaction techniques and its content presentation. We also propose a redistribution process that allows the end user to change the distribution of application components across multiple dimensions: display, user and platform. Thus, it allows the end user to switch platforms dynamically or to combine multiple platforms. The implementation of these models in 3DPlasticToolkit provides developers with a ready-to-use solution for the development of plastic MR user interfaces. Indeed, the solution already integrates different display and interaction devices and includes multiple interaction techniques, visual effects and data visualization metaphors.
Loy, Rodas Nicolas. "Context-aware radiation protection for the hybrid operating room." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD001/document.
The use of X-ray imaging technologies during minimally invasive procedures exposes both patients and medical staff to ionizing radiation. Even if the dose absorbed during a single procedure can be low, long-term exposure can lead to noxious effects (e.g. cancer). In this thesis, we therefore propose methods to improve the overall radiation safety in the hybrid operating room by acting in two complementary directions. First, we propose approaches to make clinicians more aware of exposure by providing in-situ visual feedback of the ongoing radiation dose by means of augmented reality. Second, we propose to act on the positioning of the X-ray device with an optimization approach that recommends an angulation reducing the dose deposited to both patient and staff, while maintaining the clinical quality of the resulting image. Both applications rely on approaches proposed to perceive the room using RGB-D cameras and to simulate, in real time, the propagation of radiation and the deposited dose.
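To make the second idea concrete, here is a deliberately simplified sketch of choosing a C-arm angulation by grid search under an image-quality constraint; the dose and quality functions, thresholds and angle ranges are toy placeholders, not the simulation or optimization actually developed in the thesis.

```python
import numpy as np

# Hypothetical, simplified stand-ins for the dose and image-quality models.
def simulated_dose(primary_deg, secondary_deg):
    """Toy combined patient + staff dose for a given C-arm angulation."""
    return (1.0 + 0.5 * np.cos(np.radians(primary_deg))
                + 0.3 * np.abs(np.sin(np.radians(secondary_deg))))

def image_quality(primary_deg, secondary_deg):
    """Toy clinical image-quality score (higher is better)."""
    return np.exp(-((primary_deg - 20.0) ** 2 + secondary_deg ** 2) / 800.0)

best = None
for primary in range(-90, 91, 5):           # LAO/RAO angulation
    for secondary in range(-40, 41, 5):      # cranial/caudal angulation
        if image_quality(primary, secondary) < 0.6:
            continue                         # reject views below the quality threshold
        dose = simulated_dose(primary, secondary)
        if best is None or dose < best[0]:
            best = (dose, primary, secondary)

dose, primary, secondary = best
print(f"Recommended angulation: {primary} / {secondary} deg (simulated dose {dose:.2f})")
```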
Andrei, Cassiana. "Détection directe et sélective de bactéries par imagerie Raman exaltée de surface." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX065.
Rapid detection of bacterial pathogens is an important challenge nowadays in multiple fields such as the food industry, healthcare and military biodefense. Biosensors are promising candidates for replacing time-consuming and expensive classical tools. In this work, we developed biosensors based on a hydrogenated amorphous silicon layer for the covalent grafting of probes (antibodies or sugars) interacting specifically with bacteria, and on noble-metal nanoparticles for the spectroscopic identification of trapped bacteria by surface-enhanced Raman spectroscopy (SERS). In a first approach, the production of stable and cost-effective SERS-active substrates based on metallic thin films was proposed for the study of various bacteria. Distinct SERS fingerprints of three different strains of the same bacterium were obtained, allowing their discrimination, a result confirmed by principal component analysis (PCA). In a second approach, the SERS study of bacteria was performed using nanoparticle colloids, with positively charged gold nanorods showing the best reproducibility. In parallel, the grafting of probes onto the amorphous silicon surface and the blocking step minimizing non-specific adhesion of bacteria were optimized. Finally, tests with the entire architecture of the biosensor were performed and, using a fluidic cell, the attachment of bacteria was monitored in situ. After contact with gold nanorods, the specific identification of bacteria by SERS was possible. Using this strategy, detection limits down to 10 cfu/mL were achieved within a total detection time of 3 h.
Chemak, Chokri. "Elaboration de nouvelles approches basées sur les turbos codes pour augmenter la robustesse du tatouage des images médicales : application au projet PocketNeuro." Besançon, 2008. http://www.theses.fr/2008BESA2016.
In most communication systems, turbo codes are used for transmission over power-limited channels. This error-correcting code (ECC) is very effective against distortions introduced by network transmission. The main goal of this PhD work is to integrate channel coding, and more precisely turbo codes, into medical image watermarking. Image watermarking consists in embedding a message in an image and trying to extract it with the highest possible fidelity. The message is coded with the turbo code before being embedded in the image, with the aim of decreasing errors in the embedded message after image transmission over networks. First, we elaborate a new watermarking scheme, based on turbo codes, that is robust against the most common attacks on images: noise, JPEG compression, image filtering and geometric transformations such as cropping. In this case, the embedded message is a binary mark that identifies the image owner. Secondly, we apply these new schemes to telemedicine, and more precisely to the PocketNeuro project. In this case, the message embedded into the image is medical information about the patient, which is coded with the turbo code and then embedded into the patient's diagnostic image. Two important aspects of the PocketNeuro contribution are studied: the confidentiality of the medical information transmitted between the different participants, and the adaptation of images to the terminals.
Khoualed, Samir. "Descripteurs augmentés basés sur l'information sémantique contextuelle." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00853815.
Derfoul, Ratiba. "Intégration des données de sismique 4D dans les modèles de réservoir : recalage d'images fondé sur l'élasticité non linéraire." Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00924825.
Gillibert, Raymond. "Développement d'un substrat SPRi/SERS pour des applications en détection moléculaire." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD003/document.
In this thesis, we briefly describe the techniques used, which are surface plasmon resonance imaging (SPRi) and surface-enhanced Raman scattering (SERS). The main goal of the Piranex project, on which the thesis is based, is the development of a bimodal nanostructured biochip allowing the coupling of the two techniques SPRi and SERS. This biochip consists of a gold film over which we have deposited a square array of gold nanocylinders. A set of studies has been carried out to characterize the plasmonic properties of the biosensor in order to optimize the SERS signal. We have found that the emission of the signal is strongly anisotropic, due to the excitation of the Bragg mode, and that the near field is mainly enhanced at the edges of the nanostructure. The properties were also compared with those of identical gratings deposited directly on a dielectric substrate. Subsequently, a set of plasmonic and SERS studies was carried out for aluminum, another plasmonic material of interest. Finally, an aptamer-based protocol for the SERS detection of ochratoxin was developed and allowed its detection with a threshold of 10 pM, well below the limit allowed by food regulatory agencies.
Fournaise, Érik. "Développement d’une méthode de transfert de protéines présentes dans des sections tissulaires minces sur des cibles fonctionnalisées pour augmenter la spécificité de l’imagerie MS du protéome." Thèse, 2014. http://hdl.handle.net/1866/11463.
Imaging mass spectrometry (IMS) is a rapidly expanding technique used in a large range of studies, such as the correlation between molecular expression and the health status of a tissue, or developmental biology. A common limitation of the technology is that only the more abundant and/or more easily ionisable molecules are usually detected, in particular in protein analysis. One of the methods used to alleviate this limitation is the direct, specific transfer of proteins from a tissue section to a functionalized surface with high spatial fidelity. In this case, only proteins with an affinity for the surface are retained whereas others are removed; the chemical nature of the surface is therefore critical. The research work presented in this document proposes a high-spatial-fidelity method for transferring proteins from a tissue section onto a nitrocellulose surface. The method uses a home-built apparatus that allows the transfer to be done without any direct physical contact between the tissue section and the transfer surface, while still using physical pressure to help protein migration. In subsequent work, the developed method was used to transfer proteins from a mouse kidney section onto the nitrocellulose surface. Serial sections were also collected, either to be stained with hematoxylin and eosin (H&E) to assess the spatial fidelity of the transfer process, or to be directly analyzed as a control sample to assess the different signals detected after transfer. Results showed a high-spatial-fidelity transfer of a subset of proteins. Some of the detected transferred proteins were not observed after direct tissue analysis and/or showed an increase in sensitivity.
Van, Heerden Carel Jacobus. "Interactive digital media displacement : digital imagery contextualised within deep remixability and remediation." Diss., 2020. http://hdl.handle.net/10500/27131.
Link to the dataset (catalogue): https://doi.org/10.25399/UnisaData.14101913.v1
Digital image editing is rooted in the analog practices of photographic retouching from the late nineteenth century. This study interrogated how novel contributions of new media practice can inform understanding of the relationship between digital and analog media. The study also sought to explore new conceptual avenues in the creation of digital art that incorporates key aspects of both new and traditional media. This study employed a literature review of selected discourses related to new media studies. Specifically, the work of scholars Lev Manovich, Jay David Bolter, Richard Grusin, and Filipe Pais on the interplay between traditional and new media formed the cornerstone of the analysis. These discourses contextualise an analysis of several contemporary case studies of digital artists, with a particular focus on John Craig Freeman and the Oddviz collective. These works were selected for the way in which they destabilise conventional notions of digital photography in new media and the way digital content can be ‘displaced’ into a physical space. From this analysis several concepts arise that serve as distinguishing markers for media displacement. These themes include embodiment, memory, identity formation, autotopography, and intermediality. The dissertation concludes with an overview of my work that incorporates the concepts derived from my analysis of the case studies. It discusses how my exhibition Digital Tourist, a mixed media installation, makes use of photogrammetry and AR to displace the private connections of an individual life into the public space of the gallery.
Arts and Music
M.A. (Visual Arts)