Theses on the topic "Suivi par piège‐caméra"
Cite a source in APA, MLA, Chicago, Harvard and many other citation styles
See the top 15 dissertations (graduate and doctoral theses) for research on the topic "Suivi par piège‐caméra".
Next to every source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.
Browse theses from many scientific fields and compile an accurate bibliography.
Van den Berg, Reinier. "The breeding ecology of the northern lapwing (Vanellus vanellus) in France : investigating the decline of a widely-distributed wader". Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAJ017.
The Northern Lapwing (Vanellus vanellus), a wader breeding in open habitat across temperate Eurasia – including mainland France – is a species undergoing a decades‐long population decline. In this thesis, the primary objective was to quantify hatching success rates in two regions of France, where we found higher success in Hauts‐de‐France than in Alsace. In a species conservation context, we were interested in the impact that disturbances during our nest visits might have on lapwings' behaviour. We observed that lapwings returned to their nests more quickly when the clutch was closer to hatching and when temperatures were higher. Finally, in the context of climate change, which will lead to more frequent extreme climate events, we investigated which compensatory behaviours lapwings show in warm weather.
Calvet, Lilian. "Méthodes de reconstruction tridimensionnelle intégrant des points cycliques : application au suivi d'une caméra". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2014. http://tel.archives-ouvertes.fr/tel-00981191.
Boui, Marouane. "Détection et suivi de personnes par vision omnidirectionnelle : approche 2D et 3D". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLE009/document.
In this thesis we address the problem of 3D people detection and tracking in omnidirectional image sequences, in order to enable applications such as 3D pose estimation. This requires stable and accurate tracking of the person in a real environment. To achieve this, we use a catadioptric camera composed of a spherical mirror and a perspective camera. This type of sensor is commonly used in computer vision and robotics. Its main advantage is its wide field of view, which allows it to acquire a 360-degree view of the scene with a single sensor and in a single image. However, this kind of sensor generally introduces significant distortions in the images, preventing a direct application of the methods conventionally used in perspective vision. This thesis describes two tracking approaches that take these distortions into account. These methods reflect the progress of our work over these three years, moving from person detection to the 3D estimation of the person's pose. The first step of this work consisted in setting up a person detection algorithm for omnidirectional images. We propose to extend the conventional approach for human detection in perspective images, based on the Histogram of Oriented Gradients (HOG), in order to adapt it to spherical images. Our approach uses Riemannian manifolds to adapt the gradient computation to omnidirectional images, as well as the spherical gradient for spherical images, to generate our omnidirectional image descriptor.
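This abstract builds on the standard HOG person detector for perspective images before adapting it to spherical geometry. As a point of reference only, here is a minimal sketch of that perspective baseline using OpenCV's built-in HOG people detector; the Riemannian/spherical gradient adaptation described in the thesis is not reproduced, and the file names are placeholders.

```python
import cv2

# Minimal sketch of the perspective-image HOG baseline the thesis extends.
# The spherical/Riemannian gradient adaptation is not reproduced here; this
# only shows standard HOG person detection on a regular image.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

image = cv2.imread("frame.png")          # hypothetical input frame
boxes, weights = hog.detectMultiScale(
    image,
    winStride=(8, 8),   # sliding-window step
    scale=1.05,         # image pyramid scale factor
)
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", image)
```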
Zhou, Yifan. "Suivi de multi-objet non-rigide par filtrage à particules dans des systèmes multi-caméra : application à la vidéo surveillance". Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14096/document.
Video surveillance is believed to play such an important role in crime prevention that, in France alone, the number of cameras installed on public thoroughfares tripled in 2009, from 20,000 to 60,000. Even though its increasing use has triggered a broad debate about security versus privacy, no government seems willing to curb the spread of surveillance. Setting this social anxiety aside, from a scientific point of view millions of surveillance systems offer a rich database and an exciting motivation for multimedia research. In this dissertation we focus on multiple non-rigid object tracking based on the particle filter in multi-camera environments. The method of Multi-resolution Particle Filter Tracking with Consistency Check is first introduced as the basis of our tracking system; it is especially suited to single non-rigid object tracking in videos with low and variable frame rates. It is then extended to track multiple non-rigid objects, denoted Multi-object Particle Filter Tracking with Dual Consistency Check, and applied in particular to the TRECVID 2009 challenge. Automatic semantic event detection and identification is integrated last. Our tracking method is then extended from mono-camera to multi-camera environments, where it is used for single non-rigid object tracking with interaction between cameras. Finally, a system named Multi-object Particle Filter Tracking with Event Analysis is designed for tracking two non-rigid objects in two-camera environments. Our tracking system can easily be applied to various video surveillance systems since no prior knowledge of the scene is required.
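For readers unfamiliar with the underlying machinery, the sketch below shows one step of a generic bootstrap (SIR) particle filter for 2D position tracking. It is a simplified illustration only: the thesis's multi-resolution scheme and consistency checks are not reproduced, and `likelihood` stands for a hypothetical appearance model (e.g. colour-histogram similarity between the frame and each particle).

```python
import numpy as np

# Generic bootstrap (SIR) particle filter step for 2D position tracking.
def particle_filter_step(particles, weights, frame, likelihood, motion_std=5.0):
    n = len(particles)
    # 1. Predict: diffuse particles with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # 2. Update: weight each particle by how well the frame matches it.
    weights = np.array([likelihood(frame, p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # 3. Resample: draw particles proportionally to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    weights = np.full(n, 1.0 / n)
    # State estimate: mean of the resampled particle positions.
    estimate = particles.mean(axis=0)
    return particles, weights, estimate
```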
Dziri, Aziz. "Suivi visuel d'objets dans un réseau de caméras intelligentes embarquées". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22610/document.
Multi-object tracking constitutes a major step in several computer vision applications. The requirements of these applications in terms of performance, processing time, energy consumption and ease of deployment make the use of low-power embedded platforms essential. In this thesis, we designed a multi-object tracking system that achieves real-time processing on a low-cost, low-power embedded smart camera. The tracking pipeline was extended to work in a network of cameras with non-overlapping fields of view. The pipeline is composed of a detection module based on background subtraction and a tracker using the probabilistic Gaussian Mixture Probability Hypothesis Density (GMPHD) filter. The background subtraction we developed combines the segmentation produced by the Zipfian Sigma-Delta method with the gradient of the input image. This combination allows reliable detection with low computational complexity. The output of the background subtraction is processed with a connected-components analysis algorithm to extract the features of moving objects. These features are used as input to an improved version of the GMPHD filter; indeed, the original GMPHD filter does not handle occlusions. We integrated two new modules into the GMPHD filter to handle occlusions between objects. When there is no occlusion, the motion features of the objects are used for tracking. When an occlusion is detected, the appearance features of the objects are saved and used for re-identification at the end of the occlusion. The proposed tracking pipeline was optimized and implemented on an embedded smart camera composed of a Raspberry Pi version 1 board and the RaspiCam camera module. The results show that, besides the low complexity of the pipeline, the tracking quality of our method is close to that of state-of-the-art methods. A frame rate of 15-30 fps was achieved on the smart camera, depending on the image resolution. In the second part of the thesis, we designed a distributed approach for multi-object tracking in a network of non-overlapping cameras, based on the fact that each camera in the network runs a GMPHD filter as a tracker. Our approach relies on a probabilistic formulation that models the correspondences between objects as an appearance probability and a space-time probability. The appearance of an object is represented by a vector of dimension m, which can be considered as a histogram. The space-time features are represented by the transition time between two input-output regions in the network and the transition probability from one region to another. Transition time is modeled as a Gaussian distribution with known mean and covariance. The distributed nature of the proposed approach allows tracking over the network with few communications between the cameras. Several simulations were performed to validate the approach, and the results obtained are promising for its use in a real network of smart cameras.
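As background for the detection stage, the following sketch shows the classical per-pixel Sigma-Delta background estimation recursion. It is an illustration under that assumption only: the Zipfian update scheduling and the combination with the image gradient described in the abstract are not reproduced.

```python
import numpy as np

# Classical Sigma-Delta background subtraction, one recursion per frame.
# `background` and `variance` are int32 arrays, typically initialised from the
# first frame and to ones, respectively.
def sigma_delta_step(frame, background, variance, amplification=4):
    frame = frame.astype(np.int32)
    # Background estimate drifts one grey level per frame toward the image.
    background = background + np.sign(frame - background)
    # Per-pixel absolute difference between image and background.
    diff = np.abs(frame - background)
    # Variance estimate drifts toward `amplification` times the difference.
    variance = np.clip(variance + np.sign(amplification * diff - variance), 1, 255)
    # Foreground mask: pixels whose difference exceeds the variance estimate.
    foreground = diff > variance
    return background, variance, foreground
```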
Draréni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision : application au suivi, au calibrage et à la reconstruction". Grenoble, 2010. http://www.theses.fr/2010GRENM061.
The topic of this thesis revolves around three fundamental problems in computer vision: video tracking, camera calibration and shape recovery. The proposed methods are based solely on photometric and geometric constraints found in the images. Video tracking, usually performed on a video sequence, consists in tracking a region of interest selected manually by an operator. We extend a successful tracking method by adding the ability to estimate the orientation of the tracked object. Furthermore, we consider another fundamental problem in computer vision: calibration. Here we tackle the problem of calibrating linear (pushbroom) cameras and video projectors. For the former we propose a convenient plane-based calibration algorithm, and for the latter a calibration algorithm that does not require a physical grid as well as a planar auto-calibration algorithm. Finally, our third research direction concerns shape reconstruction using coplanar shadows. This technique is known to suffer from a bas-relief ambiguity if no extra information on the scene or the light source is provided. We propose a simple method to reduce this ambiguity from four parameters to a single one, by taking into account the visibility of the light spots in the camera.
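As a point of reference for the calibration part, the sketch below runs the standard plane-based (chessboard) calibration of a perspective camera with OpenCV. It does not reproduce the thesis's pushbroom-camera or projector calibration methods; the image file names and board size are assumptions.

```python
import glob
import cv2
import numpy as np

# Standard plane-based calibration: detect chessboard corners in several views
# of a planar target, then estimate the intrinsic matrix and distortion.
board = (9, 6)                                   # inner corners per row/column
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):            # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("Reprojection error:", ret)
print("Intrinsic matrix K:\n", K)
```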
Van Ngoc Ty, Claire. "Modélisation et analyse des étapes de simulation des émetteurs de positons générés lors des traitements en protonthérapie - du faisceau à la caméra TEP - pour le suivi des irradiations". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00778996.
Testo completoSalhi, Imane. "Intelligent embadded camera for robust object tracking on mobile platform". Thesis, Paris Est, 2021. http://www.theses.fr/2021PESC2001.
The aim of this study is first to analyze, compare and retain the most relevant tracking methods likely to respect the constraints of embedded systems, such as Micro Aerial Vehicles (MAVs), Unmanned Aerial Vehicles (UAVs), intelligent glasses, etc., in order to devise a new robust embedded tracking system. A typical visual-inertial navigation system (VINS) consists of a monocular camera that provides visual data (frames) and a low-cost Inertial Measurement Unit (IMU), a Micro-Electro-Mechanical System (MEMS) that measures inertial data. This combination is very successful in system navigation thanks to the advantages these sensors provide, mainly in terms of accuracy, cost and reactivity. Over the last decade, various sufficiently accurate tracking algorithms and VINS have been developed; however, they require considerable computational resources. In contrast, embedded systems are characterized by high integration constraints and limited resources. Thus, a solution for embedded architectures must rely on efficient algorithms with a low computational load. As part of this work, various tracking algorithms identified in the literature are discussed, focusing on their accuracy, robustness and computational complexity. In parallel with this algorithmic survey, numerous recent embedded computing architectures, especially those dedicated to visual and/or visual-inertial tracking, are also presented. In this work, we propose a robust visual-inertial tracking method called "Context Adaptive Visual Inertial SLAM". This approach adapts to different navigation contexts and is well suited to embedded systems such as MAVs. It focuses on analyzing the impact of navigation conditions on the tracking process in an embedded system. It also provides an execution-control module able to switch between the most relevant tracking approaches by monitoring its internal state and external input variables. The main objective is to ensure tracking continuity and robustness despite difficult conditions.
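To make the execution-control idea concrete, here is a hypothetical sketch of a rule that switches between tracking modes according to observed context variables. The state variables, thresholds and mode names are placeholders; the thesis's actual control logic is not described in the abstract.

```python
# Hypothetical execution-control rule switching between tracking modes, loosely
# inspired by the context-adaptive idea above. Thresholds are placeholders.
def select_tracking_mode(num_tracked_features, gyro_norm_rad_s, mean_brightness):
    if num_tracked_features < 30 or mean_brightness < 20:
        # Poor visual conditions: lean on inertial propagation.
        return "imu_dominant"
    if gyro_norm_rad_s > 2.0:
        # Fast rotation: use a tightly coupled visual-inertial tracker.
        return "visual_inertial"
    # Benign conditions: a lighter visual-only tracker saves computation.
    return "visual_only"
```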
Bouchard, Marie-Astrid. "Suivi non destructif de l’indice de nutrition azotée par proxi- et télédétection en vue d’un pilotage dynamique et spatialisé de la fertilisation azotée du blé tendre". Electronic Thesis or Diss., Université de Lille (2022-....), 2022. http://www.theses.fr/2022ULILR011.
Increasing nitrogen (N) use efficiency (NUE) to minimize N pollution while maintaining high crop yield and satisfactory quality at harvest is essential for the development of sustainable agriculture. Better accounting for the spatial and temporal variability of N requirements would make it possible to adapt N fertilizer rates in space and time to match crop demand and increase N recovery. Knowledge of the crop's N status during growth should therefore make it possible to improve fertilisation practices. In this context, the main purpose of this thesis project was to monitor the winter wheat (Triticum aestivum L.) nitrogen nutrition index (NNI) non-destructively during crop growth, with the aim of subsequently integrating this knowledge into a dynamic management approach to N fertilisation. For this purpose, three experimental fields showing various patterns of NNI dynamics were monitored in the north of France, both in the field with destructive measurements and with a leaf-clip sensor (Dualex, Force A, Orsay, France), as well as with multispectral cameras mounted on unmanned aerial vehicles (UAVs). Over the three growing seasons (2019-2021), the Dualex leaf-clip proved a stable and relevant tool for predicting NNI at the two-node stage (R² = 0.78). For the first year, commonly used vegetation indices (VIs) were calculated from images taken by a Sequoia camera (Parrot, Paris, France) and evaluated for monitoring NNI. The correlations obtained with these VIs remained moderate. Accordingly, it was difficult to select a discriminant VI among the commonly used ones, which confirms the interest of studying new wavelengths to increase VI sensitivity to changes in N status. From 2020, to deepen the investigation of the relationship between VIs and NNI, a six-lens modular multispectral camera (Kernel camera, Mapir, San Diego, CA, USA) was used, allowing images to be acquired in 15 wavelengths from 405 nm to 940 nm. Measurements taken at these 15 wavelengths made it possible to calculate eight VIs, with a total of 248 different wavelength combinations. Among these, the combination of green and near-infrared measurements at the beginning of elongation, and the association of the near-infrared with the orange to early-red portion of the spectrum at the end of elongation, were of particular interest. Four non-parametric prediction models were then constructed and evaluated in order to consider more explanatory variables than simple VIs, which combine only a few wavelength measurements. The partial least squares (PLS) regression model, which was of most interest in this study, was then combined with proximal sensing measurements to significantly improve the ability to predict N status. A prediction model of NNI combining remote and proximal sensing measurements was therefore built and should be tested in a farmer's plot. This study was completed by monitoring yield components, highlighting the number of spikes per m² as the yield component most influenced by fertilisation but also the most determinant for yield. Finally, a second part of this work aimed to compare the agronomic and environmental performance of four decision support tools (DSTs) used by farmers with the classical balance-sheet method (BSM) at the crop-succession scale. The fertilizer N doses advised by the DSTs were mostly higher than those calculated with the BSM, without any significant increase in either crop yield or grain quality. The excess fertilizer N was poorly recovered by the crop and led to over-fertilization, which was more pronounced in dry conditions.
In this context, a dynamic fertilisation method based on a diagnosis of nitrogen status or mineral N availability earlier in the season is relevant and could be based on an NNI monitoring model combining proximal and remote sensing measurements.
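To illustrate the kind of processing the abstract refers to, the sketch below computes a normalized-difference index from green and near-infrared reflectance (one of the combinations highlighted above) and fits a PLS regression to predict NNI from multispectral measurements. The arrays are synthetic placeholders, not data from the thesis, and the number of latent variables is arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Synthetic placeholder data: 200 canopy observations.
green = np.random.rand(200)              # reflectance, green band
nir = np.random.rand(200)                # reflectance, near-infrared band
green_ndvi = (nir - green) / (nir + green + 1e-9)   # green NDVI-style index

X = np.random.rand(200, 15)              # reflectance in 15 bands (405-940 nm)
y = 0.5 + X @ np.random.rand(15) * 0.05  # synthetic stand-in for measured NNI

# PLS regression: predict NNI from all 15 bands instead of a single index.
pls = PLSRegression(n_components=4)      # number of latent variables to tune
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("Cross-validated R²:", scores.mean())
```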
De Goussencourt, Timothée. "Système multimodal de prévisualisation “on set” pour le cinéma". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT106/document.
Previz on-set is a preview step that takes place directly during the shooting phase of a film with special effects. The aim of previz on-set is to show the film director an assembled view of the final shot in real time. The work presented in this thesis focuses on a specific step of the previz: compositing. This step consists in mixing multiple images to compose a single, coherent one. In our case, it means mixing computer graphics with an image from the main camera. The objective of this thesis is to propose a system for automatic adjustment of the compositing. The method requires measuring the geometry of the scene being filmed; for this reason, a depth sensor is added to the main camera. The data is sent to a computer that runs an algorithm to merge the data from the depth sensor and the main camera. Through a hardware demonstrator, we formalized an integrated solution in a video game engine. The experiments give encouraging results for real-time compositing. Improved results were observed with the introduction of a joint segmentation method using depth and color information. The main strength of this work lies in the development of a demonstrator that allowed us to obtain effective algorithms in the field of previz on-set.
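As a minimal illustration of depth-based compositing, the sketch below keeps, for each pixel, whichever source (live camera or CG render) is closer to the camera. It is a plain per-pixel z-test under that assumption; the calibration, synchronization and joint depth/colour segmentation developed in the thesis are not covered, and all arrays are placeholders.

```python
import numpy as np

# Depth-based compositing: per-pixel z-test between the live camera image and
# the CG render, both assumed registered and expressed in the same depth units.
def composite(camera_rgb, camera_depth, cg_rgb, cg_depth):
    # Boolean mask of pixels where the CG element is in front of the real scene.
    cg_in_front = cg_depth < camera_depth
    # Broadcast the (H, W) mask over the RGB channels.
    return np.where(cg_in_front[..., None], cg_rgb, camera_rgb)
```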
Sun, Haixin. "Moving Objects Detection and Tracking using Hybrid Event-based and Frame-based Vision for Autonomous Driving". Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0014.
The event-based camera is a bio-inspired sensor that differs from conventional frame cameras: instead of grabbing frame images at a fixed rate, it asynchronously monitors per-pixel brightness changes and outputs a stream of event data containing the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution, high dynamic range and low power consumption. They therefore have enormous potential for computer vision in scenarios that are challenging for traditional frame cameras, such as fast motion and high dynamic range. This thesis investigated model-based and deep-learning-based approaches for object detection and tracking with the event camera. A fusion strategy with the frame camera is proposed, since the frame camera is also needed to provide appearance information. The proposed perception algorithms include optical flow, object detection and motion segmentation. Tests and analyses have been conducted to prove the feasibility and reliability of the proposed perception algorithms.
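To give a concrete feel for the event data described above, here is a small sketch that accumulates a chunk of (t, x, y, polarity) events into a 2D frame, a common first step when fusing event streams with conventional frames. The tuple layout is an assumption, not the format used in the thesis.

```python
import numpy as np

# Accumulate events into a signed 2D histogram ("event frame").
def events_to_frame(events, height, width):
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, polarity in events:
        # +1 for a brightness increase, -1 for a decrease.
        frame[int(y), int(x)] += 1 if polarity > 0 else -1
    return frame
```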
Von Arnim, Axel. "Capteur visuel pour l'identification et la communication optique entre objets mobiles : des images aux événements". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4012.
In this doctoral thesis, we present the results of five years of research, design and production work on an active optical identification sensor comprising a near-infrared transmitter and a high-frequency receiver camera. This sensor locates and identifies a moving object in the visual scene by transmitting optical data. It is thus possible either to transmit data purely optically between moving objects in their respective fields of view, or to combine optical identification with conventional telecommunication means, with the aim of precisely locating the data transmitters without recourse to, or in the absence of, GPS or other localization techniques. This technique, which we first explored in 2005, is known as Optical Camera Communication (OCC). Initially, between 2005 and 2008, we implemented the receiver with a CCD camera clocked at 595 Hz, achieving a communication rate of 250 bits per second and an average identification time of 76 ms (for a 16-bit identifier) over a maximum range of 378 m. In a second study phase in 2022-2023, we used an event-driven camera, achieving a communication rate of 2,500 bits per second with a decoding rate of 94%, i.e. an average decoding time equal to the theoretical time of 6.4 ms for 16 bits. We have thus gained an order of magnitude. Our sensor differs from the state of the art in two ways. Its first version arrived very early and contributed to the emergence of the concept of optical camera communication; a French patent protected the invention for ten years. Its second version outperforms the state of the art in terms of throughput, while adding robustness for tracking moving objects. Our initial use case was the localization of road objects for inter-vehicular and vehicle-to-infrastructure communication. In our more recent work, we have chosen drone surveillance and object tracking. Our sensor has many applications, particularly where other means of communication or identification are either unavailable or undesirable. These include industrial and military sites, confidential visual communications, the precise tracking of emergency vehicles by drone, and so on. Applications of similar technologies in the field of sports prove the usefulness and economic viability of the sensor. This thesis also presents my entire research career, from research engineer to researcher, then research project manager, and finally research director in a research institute. The areas of research application have varied widely, from driver assistance to neuromorphic AI, but have always followed the common thread of robotics in its various implementations. We hope to convince the reader of the scientific innovation brought about by our work, and more generally of our contribution to research, its management and its direction.
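A quick sanity check of the timing figures quoted above: at a given bit rate, the time to transmit a 16-bit identifier is simply 16 divided by the rate. The remark about acquisition overhead is our inference, not a statement from the thesis.

```python
# Time to transmit a 16-bit identifier at the bit rates quoted in the abstract.
ID_BITS = 16

for bitrate_bps in (250, 2_500):
    decode_time_ms = ID_BITS / bitrate_bps * 1_000
    print(f"{bitrate_bps} bit/s -> {decode_time_ms:.1f} ms per {ID_BITS}-bit identifier")

# 250 bit/s  -> 64.0 ms (the reported 76 ms average presumably includes acquisition overhead)
# 2500 bit/s ->  6.4 ms, matching the theoretical decoding time given in the abstract
```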
Morat, Julien. "Vision stéréoscopique par ordinateur pour la détection et le suivi de cibles pour une application automobile". Phd thesis, 2008. http://tel.archives-ouvertes.fr/tel-00343675.
Among all the sensors capable of perceiving the complexity of an urban environment, stereo vision offers interesting performance, a very broad range of applications (pedestrian detection, vehicle tracking, lane-marking detection, etc.) and a competitive price. For these reasons, Renault is working to identify and solve the problems involved in integrating such a system into a production vehicle, in particular for a vehicle-tracking application.
The first issue to master concerns the calibration of the stereoscopic system. For the system to provide a measurement, its parameters must be correctly estimated, including under extreme conditions (high temperatures, shocks, vibrations, ...). We therefore present an evaluation methodology that answers questions about how the system's performance degrades as a function of calibration.
The second problem concerns obstacle detection. The method developed makes original use of the properties of rectification. The result is a segmentation of the road and the obstacles.
The last issue concerns computing the velocity of obstacles. The vast majority of approaches in the literature approximate an obstacle's velocity from its successive positions; in this computation, the accumulation of uncertainties makes the estimate extremely noisy. Our approach efficiently combines the strengths of stereo vision and optical flow to obtain a robust and accurate 3-D velocity measurement directly.
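To illustrate the idea of combining stereo and optical flow, the sketch below computes dense disparity and dense optical flow with OpenCV, back-projects pixels to 3-D at two instants and differences them to obtain a per-pixel 3-D displacement. This is a simplified illustration, not the estimator developed in the thesis; the reprojection matrix Q and the image files are placeholders.

```python
import cv2
import numpy as np

# Rectified stereo pairs at two instants (placeholder files), read as grayscale.
left_t0, right_t0 = cv2.imread("left_t0.png", 0), cv2.imread("right_t0.png", 0)
left_t1, right_t1 = cv2.imread("left_t1.png", 0), cv2.imread("right_t1.png", 0)
Q = np.load("Q.npy")                       # from cv2.stereoRectify, assumed known

# Dense disparity at t0 and t1 (SGBM returns fixed-point values, scaled by 16).
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disp_t0 = stereo.compute(left_t0, right_t0).astype(np.float32) / 16.0
disp_t1 = stereo.compute(left_t1, right_t1).astype(np.float32) / 16.0
xyz_t0 = cv2.reprojectImageTo3D(disp_t0, Q)   # per-pixel 3-D points at t0
xyz_t1 = cv2.reprojectImageTo3D(disp_t1, Q)   # per-pixel 3-D points at t1

# Dense optical flow between the two left images.
flow = cv2.calcOpticalFlowFarneback(left_t0, left_t1, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

def velocity_3d(x, y, dt):
    # Follow the pixel along its optical flow, then difference the 3-D points.
    x1 = int(round(x + flow[y, x, 0]))
    y1 = int(round(y + flow[y, x, 1]))
    return (xyz_t1[y1, x1] - xyz_t0[y, x]) / dt
```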
Alla, Jules-Ryane S. "Détection de chute à l'aide d'une caméra de profondeur". Thèse, 2013. http://hdl.handle.net/1866/9992.
Elderly falls are a major public health problem. Studies show that about 30% of people aged 65 and older fall each year in Canada, with negative consequences for individuals, their families and society. Faced with this situation, a video surveillance system is an effective way to ensure the safety of these people. To date, many systems provide support services to the elderly: they allow elderly people to live at home while ensuring their safety through a worn sensor. However, the sensor must be worn at all times by the subject, which is uncomfortable and restrictive. This is why research has recently turned to the use of cameras instead of wearable sensors. The goal of this project is to demonstrate that a video surveillance system can help reduce this problem. In this thesis we present an approach for automatic fall detection based on 3D tracking of the subject using a depth camera (Microsoft Kinect) positioned vertically above the ground. This tracking uses the silhouette extracted in real time with a robust 3D extraction approach based on the depth variation of the pixels in the scene, starting from an initial capture of the scene without anybody in it. Once extracted, the 10% of the silhouette corresponding to the uppermost region (nearest to the Kinect) is analyzed in real time based on the speed and position of its centre of gravity. These criteria are used to detect a fall, after which a signal (email or SMS) is transmitted to an individual or to the authority in charge of the elderly person. The method was validated using several videos of a stunt performer simulating falls. The camera position and the depth information thus considerably reduce the risk of false alarms: positioned vertically above the ground, the camera makes it possible to analyze the scene, and in particular to track the silhouette, without major occlusions, which in some cases would lead to false alarms. In addition, the various fall-detection criteria are reliable characteristics for distinguishing a person's fall from squatting or sitting down. Nevertheless, the camera's viewing angle remains a problem because it is not wide enough to cover a large area. A solution to this dilemma would be to mount a lens on the Kinect's objective to enlarge the field of view and the monitored area.
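As an illustration of the kind of decision rule described above, the sketch below flags a fall when the tracked centroid both drops close to the floor and moves downward fast. The thresholds and the exact criteria of the thesis are not known from the abstract; the values here are placeholders.

```python
import numpy as np

# Placeholder thresholds for an illustrative fall-detection rule on the
# tracked silhouette's top-region centroid.
HEIGHT_THRESHOLD_M = 0.4   # centroid unusually close to the floor
SPEED_THRESHOLD_MS = 1.0   # fast downward motion

def detect_fall(centroid_heights, timestamps):
    heights = np.asarray(centroid_heights, dtype=float)  # metres above the floor
    times = np.asarray(timestamps, dtype=float)          # seconds
    # Vertical speed between consecutive frames (negative = moving downward).
    speeds = np.diff(heights) / np.diff(times)
    low = heights[1:] < HEIGHT_THRESHOLD_M
    fast_down = speeds < -SPEED_THRESHOLD_MS
    return bool(np.any(low & fast_down))
```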
Drareni, Jamil. "Exploitation de contraintes photométriques et géométriques en vision. Application au suivi, au calibrage et à la reconstruction". Phd thesis, 2010. http://tel.archives-ouvertes.fr/tel-00593514.