Dissertations / Theses on the topic 'Caméras de profondeur'
Consult the top 22 dissertations / theses for your research on the topic 'Caméras de profondeur.'
Noury, Charles-Antoine. "Etalonnage de caméra plénoptique et estimation de profondeur à partir des données brutes." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC063.
Unlike a standard camera, which records two dimensions of a light field, a plenoptic camera is designed to locally capture four of its dimensions. The richness of this sensor's data can be used for many applications: new points of view can be synthesized in post-processing, different planes of the scene can be refocused, and depth maps can be computed from a single acquisition, yielding 3D reconstructions of the environment. This passive sensor captures depth with a compact optical system, which makes it attractive for robotics applications. However, depth estimation with such a sensor requires its precise calibration. The camera is composed of a substantial number of elements, including a micro-lens array placed in front of the sensor, and its raw data is complex. Most state-of-the-art calibration approaches therefore formulate simplified projection models and exploit interpreted data such as synthesized images and associated depth maps. Hence, in our first contribution, carried out in collaboration with the TUM laboratory, we proposed a calibration method from a 3D test pattern using interpreted data. We then proposed a new calibration approach based on raw data: we formalized a physics-based model of the camera and estimated its parameters through a minimization expressed directly in the sensor data space. Finally, we proposed a new metric-scale depth estimation method using the camera projection model. This direct approach minimizes the error between each micro-image's content and the texture reprojection of the surrounding micro-images. The performance of our algorithms was evaluated both on a simulator developed during this thesis and on real scenes. We showed that the calibration is robust to bad model initialization and that the depth estimation accuracy competes with the state of the art.
Belhedi, Amira. "Modélisation du bruit et étalonnage de la mesure de profondeur des caméras Temps-de-Vol." Thesis, Clermont-Ferrand 1, 2013. http://www.theses.fr/2013CLF1MM08/document.
3D cameras open new possibilities in fields such as 3D reconstruction, augmented reality and video-surveillance, since they provide depth information at high frame rates. However, they have limitations that affect the accuracy of their measurements. For TOF cameras in particular, two types of error can be distinguished: stochastic camera noise and depth distortion. In the state of the art on TOF cameras, the noise is not well studied, and the depth distortion models are difficult to use and do not guarantee the accuracy required for some applications. The objective of this thesis is to study, model and calibrate these two error sources of TOF cameras accurately and with an easy set-up. Both for the noise and for the depth distortion, two solutions are proposed, each addressing a different concern: the former aims for an accurate model, the latter favours simplicity of set-up. Thus, for the noise, while the majority of published models are based only on the amplitude information, we propose a first model that also integrates the pixel position in the image. For better accuracy, we propose a second model in which the amplitude is replaced by the depth and the integration time. Regarding the depth distortion, we first propose a solution based on a non-parametric model, which guarantees better accuracy. We then use prior knowledge of the planar geometry of the observed scene to provide a solution that is easier to use than the previous one and than those in the literature.
Benyoucef, Rayane. "Contribution à la cinématique à partir du mouvement : approches basées sur des observateurs non linéaires." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG091.
This manuscript outlines the scientific contributions of this thesis, whose objective is to solve the 3D reconstruction problem known as Structure from Motion (SfM) through the synthesis of dynamic models and observation algorithms for the reconstruction of unmeasured variables. Vision sensors are undoubtedly a major source of rich information about the environment, and scene structure estimation is now finding professional uses. There has been significant interest in recovering the 3D structure of a scene from 2D pixels over a sequence of images, and several solutions have been proposed in the literature. The objective of the classic SfM problem is to estimate the Euclidean coordinates of tracked feature points attached to an object (i.e., its 3D structure), provided the relative motion between the camera and the object is known. It is thoroughly documented in the literature that the camera's translational motion strongly affects the performance of 3D structure estimation. For eye-in-hand cameras carried by robot manipulators fixed to the ground, measuring linear velocity is straightforward; for cameras mounted on mobile robots, however, practical issues arise. In this context, particular attention has been devoted to estimating the camera's linear velocity. Hence, another objective of this thesis is to develop deterministic nonlinear observers that address the SfM problem and reliably estimate the unknown time-varying linear velocity used as input to the estimation scheme, an implicit requirement for most active vision. The techniques used are based on Takagi-Sugeno models and Lyapunov-based analysis.
In the first part, a solution to the SfM problem is presented whose main purpose is to identify the 3D structure of a tracked feature point observed by a moving calibrated camera, assuming precise knowledge of the camera's linear and angular velocities. This work is then extended to depth reconstruction with a partially calibrated camera. In the second part, the thesis focuses on jointly identifying the 3D structure of a tracked feature point and the camera's linear velocity. The first strategy recovers the 3D information together with a partial estimate of the camera's linear velocity. The thesis then introduces structure estimation with full estimation of the camera's linear velocity, where the proposed observer only requires the camera's angular velocity and the corresponding linear and angular accelerations. The performance of each observer is demonstrated through a series of simulations covering several scenarios, and through experimental tests.
Ali-Bey, Mohamed. "Contribution à la spécification et à la calibration des caméras relief." Thesis, Reims, 2011. http://www.theses.fr/2011REIMS022/document.
The work proposed in this thesis is part of the ANR-Cam-Relief and CPER-CREATIS projects supported by the National Agency of Research, the Champagne-Ardenne region and the FEDER. These studies were carried out in collaboration with the 3DTV-Solutions company and two groups of the CReSTIC (AUTO and SIC). The objective of this project is, among others, to design, by analogy with today's popular 2D systems, 3D shooting systems that can display quality 3D images on autostereoscopic 3D screens viewable without glasses. Our interest focused particularly on shooting systems with a parallel and decentred configuration. The research work of this thesis is motivated by the inability of static configurations of these shooting systems to correctly capture real dynamic scenes for correct autostereoscopic rendering. To overcome this drawback, an adaptation scheme for the geometrical configuration of the shooting system is proposed. To determine which parameters this adaptation should affect, the effect of holding each parameter constant on the rendering quality is studied. The repercussions of dynamic and mechanical constraints on the 3D rendering are then examined. Positioning accuracy of the structural parameters is approached through two proposed rendering-quality assessment methods, in order to determine the positioning-error thresholds of the structural parameters of the shooting system. Finally, the problem of calibration is discussed: we propose an approach based on the DLT method, and we outline perspectives for automatic control of these shooting systems by classical approaches or by visual servoing.
Auvinet, Edouard. "Analyse d’information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l’analyse de la marche." Thèse, Rennes 2, 2012. http://hdl.handle.net/1866/9770.
This thesis is concerned with defining new clinical investigation methods to assess the impact of ageing on motricity. In particular, it focuses on two major possible disturbances during ageing: falls and walk impairment. These two motricity disturbances remain poorly understood, and their clinical analysis presents real scientific and technological challenges. In this thesis, we propose novel measuring methods usable in everyday life or in the walking clinic, with a minimum of technical constraints. In the first part, we address the problem of fall detection at home, which has been widely discussed in recent years. In particular, we propose an approach that exploits the subject's volume, reconstructed from multiple calibrated cameras. Such methods are generally very sensitive to the occlusions that inevitably occur in the home, and we therefore propose an original approach that is much more robust to them. Its efficiency and real-time operation have been validated on more than two dozen videos of falls and decoys, with sensitivity and specificity approaching 100% when four or more cameras are used. In the second part, we go further in exploiting the reconstructed volumes of a person during a particular motor task, treadmill walking, in a clinical diagnostic setting, analysing more specifically the quality of the walk. For this we develop the use of a depth camera for quantifying the spatial and temporal asymmetry of lower-limb movement during walking. After detecting each step in time, the method compares the surface of each leg with that of the corresponding symmetric leg in the opposite step. Validation on a cohort of 20 subjects showed the viability of the approach.
Prepared under joint supervision (cotutelle) with the M2S laboratory of Rennes 2.
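The step-by-step surface comparison described in the abstract above can be illustrated with a small sketch: each left-leg depth map is compared with the horizontally mirrored right-leg map from the opposite step, and the mean absolute difference serves as an asymmetry index. The function name, the mirroring convention and the index itself are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def step_asymmetry(left_steps, right_steps):
    """Toy gait-asymmetry index (illustrative sketch only).

    left_steps, right_steps: lists of (H, W) depth maps, one per step,
    cropped to each leg region. A perfectly symmetric gait, where each
    left step mirrors the opposite right step, yields an index of zero.
    """
    diffs = [np.abs(left - np.fliplr(right)).mean()   # mirror right leg
             for left, right in zip(left_steps, right_steps)]
    return float(np.mean(diffs))                       # average over steps
```

A larger index would flag stronger left/right asymmetry of the lower limbs.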
Voisin, Yvon. "Détermination d'un critère pour la mise au point automatique des caméras pour des scènes à faible profondeur de champ : contribution à la mise au point des microscopes." Besançon, 1993. http://www.theses.fr/1993BESA2016.
Auvinet, Edouard. "Analyse d'information tridimensionnelle issue de systèmes multi-caméras pour la détection de la chute et l'analyse de la marche." Phd thesis, Université Rennes 2, 2012. http://tel.archives-ouvertes.fr/tel-00946188.
Argui, Imane. "A vision-based mixed-reality framework for testing autonomous driving systems." Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMIR37.
This thesis explores the development and validation of autonomous navigation systems within a mixed-reality (MR) framework, aiming to bridge the gap between virtual simulation and real-world testing. The research emphasizes the potential of MR environments for testing autonomous systems safely, efficiently and cost-effectively. The thesis is structured around several chapters, beginning with a review of state-of-the-art technologies in autonomous navigation and mixed-reality applications. Through both rule-based and learning-based models, the research investigates the performance of autonomous robots within simulated, real and MR environments. One of the core objectives is to reduce the "reality gap" (the discrepancy between behaviors observed in simulation and in real-world applications) by integrating real-world elements with virtual components in MR environments. This approach allows more accurate testing and validation of algorithms without the risks associated with physical trials. A significant part of the work is dedicated to implementing and testing an offline augmentation strategy aimed at enhancing the perception capabilities of autonomous systems using depth information. Furthermore, reinforcement learning (RL) is applied to evaluate its potential within MR environments. The thesis demonstrates that RL models can effectively learn to navigate and avoid obstacles in virtual simulations and perform similarly well when transferred to MR environments, highlighting the framework's flexibility for different autonomous system models. Through these experiments, the thesis establishes MR environments as a versatile and robust platform for advancing autonomous navigation technologies, offering a safer, more scalable approach to model validation before real-world deployment.
Sune, Jean-Luc. "Estimation de la profondeur d'une scène à partir d'une caméra en mouvement." Grenoble INPG, 1995. http://www.theses.fr/1995INPG0189.
Full textSoontranon, Narut. "Appariement entre images de point de vue éloignés par utilisation de carte de profondeur." Amiens, 2013. http://www.theses.fr/2013AMIE0114.
The work presented in this thesis focuses on the extraction of homologous points (point matches) between images acquired from different viewpoints, a first essential step in many applications, particularly 3D reconstruction. Existing algorithms are very efficient for images with close viewpoints, but their performance can drop drastically for wide-baseline images. We are interested in the case where, in addition to radiometry, depth information is associated with each pixel. This information can be obtained from different systems (LIDAR, RGB-D camera, light-field camera). Our approach uses it to solve the problem of distortion between viewpoints, which methods using radiometry alone handle poorly. The first approach deals with planar-surface scenes (urban environments, modern buildings): each planar region is rectified to an orthogonal view. The second approach applies to more complex scenes (museum collections, sculptures) and relies on a conformal-mapping projection. For both schemes, our contributions markedly improve the detection of point matches, in terms of both their quantity and their distribution, compared with the SIFT, MSER and ASIFT algorithms. Comparative results show that the homologous points obtained by our approach are numerous and well distributed over the images.
Guérin, Lucie. "Etude d'une nouvelle architecture de gamma caméra à base de semi-conducteurs CdZnTe /CdTe." Angers, 2007. https://tel.archives-ouvertes.fr/tel-01773265.
CdZnTe/CdTe semiconductor gamma-ray detectors are good candidates to replace NaI(Tl) scintillation detectors in medical applications, notably nuclear imaging. In addition to their compactness, they offer very good performance in terms of energy resolution, detection efficiency and intrinsic spatial resolution. These detectors also provide an important additional piece of information: the depth of interaction of the gamma ray within the detector. This context led LETI to develop a new gamma-camera architecture based on CdZnTe/CdTe semiconductors in order to benefit from these capabilities. In this work, we proposed a new architecture, named HiSens (High Sensitivity), that improves sensitivity (by a factor of about 5) while preserving spatial resolution. It associates CdZnTe detectors providing depth-of-interaction information with a new parallel square-hole collimator geometry, and uses an adapted image reconstruction method. We evaluated the performance of the HiSens architecture through simulation, after developing simulation software and an adapted iterative reconstruction method that uses the photon depth-of-interaction information. A preliminary experimental validation is under way at CEA-LETI to confirm the simulation results.
Pinard, Clément. "Robust Learning of a depth map for obstacle avoidance with a monocular stabilized flying camera." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY003/document.
Consumer unmanned aerial vehicles (UAVs) are mainly flying cameras. They democratized aerial footage, but with their success came safety concerns. This work aims at improving UAV safety with obstacle avoidance while keeping a smooth flight. In this context, we use only one stabilized camera, for weight and cost reasons. For their robustness in computer vision and their capacity to solve complex tasks, we chose convolutional neural networks (CNNs). Our strategy is based on incrementally learning tasks of increasing complexity, the first step of which is to construct a depth map from the stabilized camera; this thesis studies the ability of CNNs to be trained for this task. For stabilized footage, the depth map is closely linked to optical flow. We thus adapt FlowNet, a CNN known for optical flow, to output depth directly from two stabilized frames. This network is called DepthNet. The experiment succeeded on synthetic footage, but the network is not robust enough to be used directly on real videos. Consequently, we consider self-supervised training on real videos, based on differentiable image reprojection. As this training method for CNNs is rather novel in the literature, a thorough study is needed in order not to depend too much on heuristics. Finally, we developed a depth-fusion algorithm to use DepthNet efficiently on real videos: multiple frame pairs are fed to DepthNet to obtain a wide depth-sensing range.
Pinard, Clément. "Robust Learning of a depth map for obstacle avoidance with a monocular stabilized flying camera." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLY003.
Buat, Benjamin. "Caméra active 3D par Depth from Defocus pour l'inspection de surface : algorithmie, modèle de performance et réalisation expérimentale." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG058.
This thesis is dedicated to the design of a 3D camera capable of producing the complete depth map of a scene for surface inspection. This field of application generally involves objects with little texture and strict specifications on the compactness of the inspection system and the required precision. We propose to combine a camera with a projector that adds an artificial texture to the scene. 3D extraction is based on the principle of Depth from Defocus (DFD), which consists in estimating depth by exploiting defocus blur. We first developed a single-image local depth estimation algorithm based on learning the scene and the blur. This algorithm works for any type of DFD system, but it is particularly suitable for active DFD, where the scene content is a projected texture that we control. We then implemented an experimental active-DFD prototype for surface inspection. It is composed of a chromatic camera, whose lens has longitudinal chromatic aberrations that extend the estimable depth range and improve estimation accuracy, and a specialized projector whose pattern shape and scale were optimized by simulating the prototype. An experimental validation of the prototype achieved an accuracy of 0.45 mm over a working range of 310 to 340 mm. We finally developed a performance model that predicts the accuracy of any active-DFD system as a function of the optics, sensor, projector and processing parameters. This model paves the way for a joint optics/processing design study of an active 3D camera based on DFD.
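The DFD principle mentioned above rests on the thin-lens relation between object depth and blur-circle size: the further an object sits from the focus plane, the larger its blur on the sensor. A minimal sketch of that relation follows; the lens parameters `f`, `N` and `z_focus` are hypothetical values chosen near the prototype's 310-340 mm working range, not values from the thesis.

```python
def blur_diameter(z, f=0.025, N=2.8, z_focus=0.325):
    """Blur-circle diameter (m) of a thin lens for an object at depth z (m).

    f: focal length, N: f-number, z_focus: depth in perfect focus.
    All parameter values are illustrative assumptions.
    """
    v_focus = 1.0 / (1.0 / f - 1.0 / z_focus)  # sensor-to-lens distance
    v = 1.0 / (1.0 / f - 1.0 / z)              # image distance for depth z
    D = f / N                                  # aperture diameter
    return D * abs(v - v_focus) / v            # geometric blur on the sensor
```

Inverting this monotonic relation on each side of the focus plane is what lets a DFD system read depth from measured blur.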
De Goussencourt, Timothée. "Système multimodal de prévisualisation “on set” pour le cinéma." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT106/document.
Previz on-set is a preview step that takes place directly during the shooting phase of a film with special effects. Its aim is to show the film director an assembled view of the final shot in real time. The work presented in this thesis focuses on a specific step of the previz: compositing, which consists in mixing multiple images to compose a single coherent one. In our case, it means mixing computer graphics with the image from the main camera. The objective of this thesis is to propose a system for automatic adjustment of the compositing. The method requires measuring the geometry of the filmed scene, so a depth sensor is added to the main camera. The data is sent to a computer that runs an algorithm to merge the data from the depth sensor and the main camera. Through a hardware demonstrator, we formalized a solution integrated in a video game engine. The experiments give encouraging results for real-time compositing, and improved results were observed with the introduction of a joint segmentation method using depth and color information. The main strength of this work lies in the development of a demonstrator that allowed us to obtain effective algorithms for previz on-set.
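The core of depth-based compositing described above can be sketched as a per-pixel depth test: at each pixel, the colour of whichever source is nearer wins, so virtual objects are occluded by closer real ones and vice versa. This minimal version (names and conventions are our own, not the thesis's implementation) assumes aligned camera and CG depth maps.

```python
import numpy as np

def depth_composite(cam_rgb, cam_depth, cg_rgb, cg_depth):
    """Per-pixel depth test merging a CG render with the live camera image.

    cam_rgb, cg_rgb:     (H, W, 3) colour images.
    cam_depth, cg_depth: (H, W) depth in metres (np.inf where CG is empty).
    """
    cg_wins = (cg_depth < cam_depth)[..., None]  # broadcast over channels
    return np.where(cg_wins, cg_rgb, cam_rgb)    # nearer surface wins
```

In a real pipeline the two depth maps must first be registered to the same viewpoint, which is exactly what the sensor-fusion step of the demonstrator addresses.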
Dubois, Amandine. "Mesure de la fragilité et détection de chutes pour le maintien à domicile des personnes âgées." Phd thesis, Université de Lorraine, 2014. http://tel.archives-ouvertes.fr/tel-01070972.
Laviole, Jérémy. "Interaction en réalité augmentée spatiale pour le dessin physique." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00935602.
Full textPage, Solenne. "Commande d'un déambulateur robotisé par la caractérisation posturale." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS221.
Mobility is a key factor for maintaining the autonomy of the elderly. This thesis proposes new approaches and solutions for three kinds of assistance in the field of smart walkers: fall prevention, diagnosis and medical follow-up, and mobility assistance. Towards preventing falls, the thesis presents an algorithm for detecting when balance is being lost, whereas the available literature mainly focuses on detecting falls after they have occurred; experiments involving healthy subjects show that our algorithm detects losses of balance within 600 ms. Towards assisting diagnosis and medical follow-up, the thesis proposes a portable and affordable device (a Kinect-like sensor) that offers a relevant trade-off between portability, affordability and precision, achieving better precision than the existing literature on markerless diagnosis. Our algorithm enables real-time walking analysis, and the solution is validated through experiments involving healthy and pathological elderly participants. Towards robotic assistance for mobility, the thesis presents a new prototype, which we call RoAM (Robot for Assisting Mobility), together with approaches for controlling it. Experiments on an ecological path compared three control modes: one based on user position (developed in this thesis), another based on interaction forces, and a third combining the two. All participants completed the experiments with all three modes, and the most promising track appears to be a fusion of force-based and position-based control.
Dubois, Amandine. "Mesure de la fragilité et détection de chutes pour le maintien à domicile des personnes âgées." Electronic Thesis or Diss., Université de Lorraine, 2014. http://www.theses.fr/2014LORR0095.
Population ageing is a major issue for society in the coming years, especially because of the increase in the number of dependent people. The limited capacity of specialized institutions and the wish of the elderly to stay at home as long as possible explain a growing need for new dedicated home services. Technology can help secure the person at home by detecting falls, and can also help evaluate frailty to prevent future accidents. This work concerns the development of low-cost ambient systems to help the elderly stay at home. Depth cameras allow the displacement of the person to be analysed in real time. We show that it is possible to recognize the person's activity and to measure gait parameters from the analysis of simple features extracted from depth images. Activity recognition is based on Hidden Markov Models and allows at-risk behaviours and falls to be detected. When the person is walking, analysing the trajectory of their centre of mass yields gait parameters that can be used for frailty evaluation. This work is based on laboratory experiments for acquiring the data used for model training and for evaluating the results. We show that some of the developed Hidden Markov Models are robust enough to classify the activities. We also evaluate the precision of the gait-parameter measurements against the measures provided by an actimetric carpet. We believe that such a system could be installed in the home of the elderly because it relies on local processing of the depth images. It would provide daily information on the person's activity and on the evolution of their gait parameters, useful for securing them and evaluating their frailty.
Alla, Jules-Ryane S. "Détection de chute à l'aide d'une caméra de profondeur." Thèse, 2013. http://hdl.handle.net/1866/9992.
Elderly falls are a major public-health problem. Studies show that about 30% of people aged 65 and older fall each year in Canada, with negative consequences for individuals, their families and our society. Faced with such a situation, a video-surveillance system is an effective way to ensure the safety of these people. Many assistive devices for the elderly exist today; they allow the elderly to live at home while ensuring their safety through a worn sensor. However, the sensor must be worn at all times, which is uncomfortable and restrictive; this is why research has recently turned to cameras instead of wearable sensors. The goal of this project is to demonstrate that a video-surveillance system can help reduce this problem. In this thesis we present an approach for automatic fall detection based on 3D tracking of the subject using a depth camera (Microsoft Kinect) positioned vertically above the ground. The tracking uses the silhouette, extracted in real time with a robust 3D-extraction approach based on the depth variation of the pixels in the scene; the method relies on an initial capture of the scene with nobody present. Once the silhouette is extracted, the 10% of it corresponding to the uppermost region (nearest to the Kinect) is analysed in real time according to the speed and position of its centre of gravity. These criteria are analysed to detect the fall, after which a signal (email or SMS) is transmitted to an individual or to the authority in charge of the elderly person. The method was validated on several videos of a stuntman simulating falls. The camera position and the depth information considerably reduce the risk of false alarms: positioned vertically above the ground, the camera makes it possible to track the silhouette without the major occlusions that in some setups lead to false alarms.
In addition, the various fall-detection criteria are reliable characteristics for distinguishing a person's fall from squatting or sitting. Nevertheless, the camera's angle of view remains a problem because it is not wide enough to cover a large area. A solution to this dilemma would be to mount a lens on the Kinect's objective to enlarge the field of view and the monitored area.
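A toy version of the overhead pipeline described in this entry (background subtraction, top-10% silhouette region, a speed-and-height decision rule) might look as follows. All thresholds, the tracking of the mean head distance instead of the full centre of gravity, and the decision rule itself are illustrative assumptions, not the thesis's tuned method.

```python
import numpy as np

def detect_fall(depth_frames, background, fps=30.0,
                near_floor=2.5, min_speed=1.0):
    """Toy overhead fall detector (illustrative sketch only).

    depth_frames: (T, H, W) camera-to-scene distances in metres.
    background:   (H, W) initial capture of the empty scene.
    Flags a fall when the head region both drops close to the floor
    (distance from the ceiling camera exceeds `near_floor` metres)
    and moves there quickly (faster than `min_speed` m/s).
    Returns the frame index of the detected fall, or None.
    """
    prev_top = None
    for t, frame in enumerate(depth_frames):
        silhouette = (background - frame) > 0.1   # person is nearer than floor
        if not silhouette.any():
            prev_top = None
            continue
        depths = frame[silhouette]
        cutoff = np.quantile(depths, 0.10)        # top 10%: nearest the camera
        top = depths[depths <= cutoff].mean()     # mean camera-to-head distance
        if prev_top is not None:
            speed = (top - prev_top) * fps        # m/s, positive when dropping
            if top > near_floor and speed > min_speed:
                return t
        prev_top = top
    return None
```

Requiring both conditions at once is what separates a fall from a slow squat or sit, echoing the criteria discussed above.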
Brousseau, Pierre-André. "Calibrage de caméra fisheye et estimation de la profondeur pour la navigation autonome." Thèse, 2019. http://hdl.handle.net/1866/23782.
This thesis focuses on the problems of calibrating wide-angle cameras and estimating depth from a single camera, stationary or in motion. The work lies at the intersection of traditional 3D vision and new deep-learning methods in the field of autonomous navigation, and is designed to allow obstacle detection by a moving drone equipped with a single camera with a very wide field of view. First, a new calibration method is proposed for fisheye cameras with a very large field of view: planar calibration with dense correspondences obtained by structured light, modelled by a set of central virtual generic cameras. We demonstrate that this approach allows direct modelling of axial cameras, and validate it on synthetic and real data. Then, a method is proposed to estimate depth from a single image, using only strong depth cues: T-junctions. We demonstrate that deep-learning methods are likely to learn the biases of their data sets and show weaknesses with respect to invariance. Finally, we propose a method to estimate depth from a camera in free 6-DoF motion, which involves calibrating the fisheye camera on the drone, visual odometry and depth resolution. The proposed methods enable obstacle detection for a drone.
Ndayikengurukiye, Didier. "Estimation de cartes d'énergie de hautes fréquences ou d'irrégularité de périodicité de la marche humaine par caméra de profondeur pour la détection de pathologies." Thèse, 2016. http://hdl.handle.net/1866/16178.
This work presents two new and simple human-gait analysis systems based on a depth camera (Microsoft Kinect) placed in front of a subject walking on a conventional treadmill, capable of telling a healthy gait from an impaired one. The first system relies on the fact that a normal walk typically exhibits, at each pixel, a smooth motion (depth) signal with less high-frequency spectral energy than an abnormal walk. This makes it possible to estimate a map for the subject showing the location and amplitude of the high-frequency spectral energy (HFSE). The second system analyses the patient's body parts whose movement pattern is irregular, in terms of periodicity, during walking. Here we assume that the gait of a healthy subject exhibits, anywhere on the body and across walking cycles, a depth signal with a periodic pattern and no noise. From each subject's video sequence, we estimate a saliency colour map showing the areas of strong gait irregularity, also called aperiodic noise energy. Either the HFSE or the aperiodic noise energy shown in the map can serve as a good indicator of possible pathology in an early, fast and reliable diagnostic tool, or provide information about the presence and extent of disease or of the patient's orthopedic, muscular or neurological problems. Although the maps obtained are informative and discriminant enough for direct visual classification, even by a non-specialist, the proposed systems can automatically distinguish maps representing healthy individuals from those representing individuals with locomotor problems.
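The first system's HFSE map can be sketched as a per-pixel Fourier analysis of the depth video: smooth, healthy-like motion concentrates its energy at low frequencies, so summing the power spectrum above a cut-off highlights irregular body parts. The cut-off frequency and the absence of any normalisation here are illustrative assumptions, not the thesis's exact recipe.

```python
import numpy as np

def hfse_map(depth_seq, fps=30.0, cutoff_hz=3.0):
    """Per-pixel high-frequency spectral energy of a depth video (sketch).

    depth_seq: (T, H, W) depth frames of a subject walking on a treadmill.
    Returns an (H, W) map; large values flag pixels with irregular motion.
    """
    T = depth_seq.shape[0]
    signal = depth_seq - depth_seq.mean(axis=0)        # remove static depth
    spectrum = np.abs(np.fft.rfft(signal, axis=0))**2  # per-pixel power spectrum
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)            # frequency of each bin
    return spectrum[freqs > cutoff_hz].sum(axis=0)     # energy above cut-off
```

Displayed as a colour map, such an energy image is exactly the kind of saliency visualisation the abstract describes.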