Contents
Selected scientific literature on the topic "Reconnaissance d’activités humaines"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources on the topic "Reconnaissance d’activités humaines".
Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is available in the metadata.
Journal articles on the topic "Reconnaissance d’activités humaines"
Fall, Amar. "Justice organisationnelle, reconnaissance au travail et motivation intrinsèque : résultats d’une étude empirique". Articles 69, no. 4 (21 January 2015): 709–31. http://dx.doi.org/10.7202/1028109ar.
Vibert, Stéphane. "L’errance et la distance". Anthropologie et Sociétés 27, no. 3 (1 April 2004): 125–45. http://dx.doi.org/10.7202/007928ar.
Chikoc Barreda, Naivi. "De la COVID-19 à l’acte électronique à distance : réflexions sur les enjeux de l’authenticité dématérialisée". Revue générale de droit 51, no. 1 (21 September 2021): 97–133. http://dx.doi.org/10.7202/1081838ar.
Gagnon, Éric. "Âgisme". Anthropen, 2019. http://dx.doi.org/10.17184/eac.anthropen.089.
Theses on the topic "Reconnaissance d’activités humaines"
Sarray, Ines. "Conception de systèmes de reconnaissance d’activités humaines". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4016.
The research area of activity recognition aims at describing, analyzing, recognizing, understanding, and following the activities and movements of persons, animals, or animated objects. Numerous important and critical application domains, such as surveillance or health care, require some form of recognition of (human) activities. In these domains, activity recognition can be useful for the early detection of abnormal behavior of people, such as vandalism, troubles due to age, or illness. Recognition systems must be real-time, reactive, correct, complete, and reliable. These stringent requirements led us to use formal methods to describe, analyze, verify, and generate effective and correct recognition systems. This thesis aims at contributing to the definition of such a system while focusing on description and verification issues. Among many possible approaches, we propose to study how the synchronous paradigm can cope with the requirements of activity recognition. Indeed, this approach has several major assets, such as well-founded semantics, assurance of determinism, safe parallel composition, and the possibility of verification through model checking. Existing synchronous languages can be used to describe models of activities, but they are difficult to master for non-specialists (e.g., doctors). Therefore, we propose a new language to allow this kind of user to describe the activities that they wish to recognize. This language, named ADeL (Activity Description Language), offers two input formats, one textual, the other graphical. In order to make both verification and implementation possible, we equip this language with two synchronous and complementary semantics. First, a behavioral semantics gives a reference definition of program behavior using rewriting rules. Second, an operational semantics describes the behavior in a constructive way and can be directly implemented. The environment of recognition systems does not usually comply with the hypotheses of the synchronous paradigm. Hence, we propose an asynchronous/synchronous adapter. This adapter, which we call the "synchronizer", receives asynchronous events from the environment, filters them, decides which ones can be considered "simultaneous", groups them into logical instants according to predefined policies, and sends them to the activity recognition engine.
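As a rough illustration of what such a synchronizer does, the sketch below groups asynchronous sensor events into logical instants using a fixed time-window policy before handing them to a recognition callback. The class, event format, and window policy are hypothetical and are not taken from ADeL; they only mimic the grouping step described in the abstract.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:
    name: str         # e.g. "door_open", "sit_down" (hypothetical event names)
    timestamp: float  # seconds since start

@dataclass
class Synchronizer:
    """Toy asynchronous/synchronous adapter: buffers events and emits them
    as logical instants according to a fixed time-window policy."""
    window: float                                # events closer than this are "simultaneous"
    on_instant: Callable[[List[Event]], None]    # callback standing in for the recognition engine
    _buffer: List[Event] = field(default_factory=list)

    def push(self, event: Event) -> None:
        # Flush the current instant if the new event falls outside the window.
        if self._buffer and event.timestamp - self._buffer[0].timestamp > self.window:
            self.flush()
        self._buffer.append(event)

    def flush(self) -> None:
        if self._buffer:
            self.on_instant(list(self._buffer))  # hand one logical instant to the engine
            self._buffer.clear()

# Usage: print each logical instant instead of feeding a real engine.
sync = Synchronizer(window=0.1, on_instant=lambda inst: print([e.name for e in inst]))
for name, t in [("door_open", 0.00), ("light_on", 0.05), ("sit_down", 1.20)]:
    sync.push(Event(name, t))
sync.flush()
```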
Selmi, Mouna. "Reconnaissance d’activités humaines à partir de séquences vidéo". Thesis, Evry, Institut national des télécommunications, 2014. http://www.theses.fr/2014TELE0029/document.
Human activity recognition (HAR) from video sequences is one of the major active research areas of computer vision. HAR systems have numerous applications, including video surveillance, search and automatic indexing of videos, and assistance to the frail elderly. The task remains a challenge because of the large variations in the way activities are performed, in the appearance of the person, and in the acquisition conditions. The main objective of this thesis is to develop an efficient HAR method that is robust to these different sources of variability. Approaches based on interest points have shown excellent state-of-the-art performance over the past years. They are generally paired with global classification methods, as these primitives are temporally and spatially disordered. More recent studies have achieved high performance by modeling the spatial and temporal context of interest points, for instance by encoding their neighborhood over several scales. In this thesis, we propose an activity recognition method based on a hybrid Support Vector Machine - Hidden Conditional Random Field (SVM-HCRF) model that captures the sequential aspect of activities while exploiting the robustness of interest points in real conditions. We first extract the interest points and show their robustness with respect to the person's identity by a multilinear tensor analysis. These primitives are then represented as a sequence of local "Bags of Words" (BOW): the video is temporally fragmented using the sliding-window technique, and each of the segments thus obtained is represented by the BOW of the interest points belonging to it. The first layer of our hybrid sequential classification system is a Support Vector Machine that converts each local BOW extracted from the video sequence into a vector of activity class probabilities. The sequence of probability vectors thus obtained is used as input to the HCRF, which performs a discriminative classification of time series while modeling their internal structure via hidden states. We have evaluated our approach on various human activity datasets, and the results are competitive with the current state of the art. We have shown that the use of a low-level classifier (SVM) improves the performance of the recognition system, since the sequential HCRF classifier then exploits semantic information from the local BOWs, namely the probability of each activity for the current local segment, rather than mere raw information from interest points. Furthermore, the probability vectors have a low dimension, which significantly reduces the risk of overfitting that can occur when the feature vector dimension is high relative to the training data size; this is precisely the case with BOWs, which generally have a very high dimension. Estimating the HCRF parameters in a low dimension also significantly reduces the duration of the HCRF training phase.
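A minimal sketch of the first layer of such a pipeline, assuming precomputed bag-of-words histograms for each sliding-window segment: an SVM turns each local BOW into a low-dimensional vector of class probabilities that a sequence model (the HCRF in the thesis, not implemented here) could then consume. The data shapes, vocabulary size, and class count are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for local BOW histograms: n_segments x vocabulary_size,
# with one activity label per training segment (illustrative data only).
X_train = rng.random((200, 500))
y_train = rng.integers(0, 4, size=200)   # 4 activity classes
X_video = rng.random((30, 500))          # segments of one test video

# Layer 1: SVM with probability estimates (Platt scaling in scikit-learn).
svm = SVC(kernel="rbf", probability=True, random_state=0)
svm.fit(X_train, y_train)

# Each sliding-window segment becomes a probability vector; the resulting
# (30, 4) sequence would be fed to a sequential model such as an HCRF.
prob_sequence = svm.predict_proba(X_video)
print(prob_sequence.shape)  # (30, 4)
```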
Vaquette, Geoffrey. "Reconnaissance robuste d'activités humaines par vision". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS090.
This thesis focuses on supervised activity segmentation from video streams within the application context of smart homes. Three semantic levels are defined, namely gesture, action, and activity; this thesis focuses mainly on the latter. Based on the Deeply Optimized Hough Transform paradigm, three fusion levels are introduced in order to benefit from various modalities. A review of existing action-based datasets is presented, and the lack of databases oriented toward activity detection is noted. A new dataset is therefore introduced; it is composed of unsegmented, long-duration daily activities and has been recorded in a realistic environment. Finally, a hierarchical activity detection method is proposed, aiming to detect high-level activities from unsupervised action detection.
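The abstract does not detail the Deeply Optimized Hough Transform itself, so the sketch below only illustrates the general idea of a hierarchical step: per-frame action scores are pooled over a sliding window to score higher-level activities. The action-to-activity mapping, window length, and scores are all hypothetical.

```python
import numpy as np

# Hypothetical mapping from low-level actions to the activities they support.
ACTIVITY_ACTIONS = {
    "cooking": ["open_fridge", "cut", "stir"],
    "cleaning": ["wipe", "sweep"],
}
ACTIONS = sorted({a for acts in ACTIVITY_ACTIONS.values() for a in acts})

def activity_scores(action_scores: np.ndarray, window: int = 50) -> dict:
    """Pool per-frame action scores (n_frames x n_actions) over a sliding
    window and sum the scores of the actions belonging to each activity."""
    kernel = np.ones(window) / window
    smoothed = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, action_scores)
    return {
        activity: smoothed[:, [ACTIONS.index(a) for a in actions]].sum(axis=1)
        for activity, actions in ACTIVITY_ACTIONS.items()
    }

# Usage with random scores standing in for an action detector's output.
scores = np.random.default_rng(0).random((300, len(ACTIONS)))
per_activity = activity_scores(scores)
print({k: v.shape for k, v in per_activity.items()})  # each: (300,)
```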
Guermal, Mohammed. "Compréhension de l'activité humaine dans des vidéos". Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4015.
Understanding actions in videos is a pivotal aspect of computer vision, with profound implications across various domains. As our reliance on visual data continues to grow, the ability to comprehend and interpret human actions in videos is necessary for advancing technologies in surveillance, healthcare, autonomous systems, and human-computer interaction. Moreover, there is an unprecedented economic and societal demand for robots that can assist humans in their industrial work and daily-life activities; understanding human behavior and activities would greatly facilitate the development of such robots. The accurate interpretation of actions in videos serves as a cornerstone for the development of intelligent systems that can navigate and respond effectively to the complexities of the real world. In this context, advances in action understanding not only push the boundaries of computer vision but also play a crucial role in shaping the landscape of cutting-edge applications that impact our daily lives. Computer vision has made huge progress with the rise of deep learning methods such as convolutional neural networks (CNNs) and, more recently, transformers. Such methods have allowed the computer vision community to advance in many areas, such as image segmentation, object detection, and scene understanding. However, video processing remains limited compared to static images. In this thesis, we focus on action understanding and divide it into two main parts: action recognition and action detection. Action understanding algorithms mainly face the following challenges: 1) temporal and spatial analysis, 2) fine-grained actions, and 3) temporal modeling. We first introduce in more detail the different aspects and key challenges of action understanding, and then present our contributions and solutions for dealing with these challenges. We focus mainly on recognizing fine-grained actions using spatio-temporal object semantics and their dependencies in space and time. We also tackle real-time action detection and anticipation by introducing a new joint model of action anticipation and online action detection for real-life action detection scenarios. We further introduce a new method for efficiently training networks, specifically transformers, as well as a more efficient use of multiple modalities (RGB, optical flow, audio...). Finally, we discuss some ongoing and future works. All our contributions were extensively evaluated on challenging benchmarks and outperformed previous works.
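The abstract mentions combining RGB, optical flow, and audio without detailing the fusion mechanism, so the following is only a generic late-fusion sketch in PyTorch: per-modality class logits are combined with learnable weights. It is not the joint anticipation/detection model of the thesis, and the dimensions are made up.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Generic weighted late fusion of per-modality class logits."""
    def __init__(self, n_modalities: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(n_modalities))  # learned fusion weights

    def forward(self, logits_per_modality: torch.Tensor) -> torch.Tensor:
        # logits_per_modality: (n_modalities, batch, n_classes)
        w = torch.softmax(self.weights, dim=0).view(-1, 1, 1)
        return (w * logits_per_modality).sum(dim=0)  # (batch, n_classes)

# Usage with random logits standing in for RGB, optical-flow, and audio branches.
fusion = LateFusion(n_modalities=3)
logits = torch.randn(3, 8, 10)   # 3 modalities, batch of 8 clips, 10 action classes
print(fusion(logits).shape)      # torch.Size([8, 10])
```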
Radouane, Karim. "Mécanisme d’attention pour le sous-titrage du mouvement humain : Vers une segmentation sémantique et analyse du mouvement interprétables". Electronic Thesis or Diss., IMT Mines Alès, 2024. http://www.theses.fr/2024EMAL0002.
Captioning tasks mainly focus on images or videos, and seldom on human poses. Yet poses concisely describe human activities. Beyond text generation quality, we consider the motion captioning task as an intermediate step toward solving other derived tasks. In this holistic approach, our experiments are centered on the unsupervised learning of semantic motion segmentation and on interpretability. We first conduct an extensive literature review of recent methods for human pose estimation, a central prerequisite for pose-based captioning. Then, we take an interest in pose-representation learning, with an emphasis on spatiotemporal graph-based learning, which we apply and evaluate on a real-world application (protective behavior detection); as a result, we won the AffectMove challenge. Next, we delve into the core of our contributions in motion captioning, where: (i) we design local recurrent attention for text generation synchronous with motion, in which each motion and its caption are decomposed into primitives and corresponding sub-captions, and we also propose specific metrics to evaluate the synchronous mapping between motion and language segments; (ii) we initiate the construction of a motion-language dataset to enable supervised segmentation; (iii) we design an interpretable architecture with a transparent reasoning process through spatiotemporal attention, showing state-of-the-art results on the two reference datasets, KIT-ML and HumanML3D. Effective tools are proposed for interpretability evaluation and illustration. Finally, we conduct a thorough analysis of potential applications: unsupervised action segmentation, sign language translation, and impact in other scenarios.
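As an illustration of attention over temporal pose features, the sketch below uses PyTorch's built-in multi-head attention to let one caption-decoder state attend to a sequence of encoded poses; the attention weights are the kind of signal an interpretable captioning model can expose. This is a generic mechanism, not the local recurrent attention architecture of the thesis, and all dimensions are made up.

```python
import torch
import torch.nn as nn

# Toy pose-feature sequence: (sequence_length, batch, feature_dim).
pose_features = torch.randn(120, 1, 64)   # e.g. 120 frames of encoded joints
query = torch.randn(1, 1, 64)             # decoder state for the next caption token

attention = nn.MultiheadAttention(embed_dim=64, num_heads=4)
context, weights = attention(query, pose_features, pose_features)

# `weights` (1, 1, 120) shows which motion frames the token attends to.
print(context.shape, weights.shape)
```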
Zoetgnandé, Yannick. "Fall detection and activity recognition using stereo low-resolution thermal imaging". Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S073.
Nowadays, it is essential to find solutions to detect and prevent falls among seniors. We proposed a low-cost device based on a pair of thermal sensors. The drawback of these low-cost sensors is their low resolution (80x60 pixels), low refresh rate, noise, and halo effects. We proposed several approaches to bypass these drawbacks. First, we proposed a calibration method with a grid adapted to the thermal image and a framework ensuring robust parameter estimation despite the low resolution. Then, for 3D vision, we proposed a threefold sub-pixel stereo matching framework (called ST for Subpixel Thermal): 1) a robust feature extraction method based on phase congruency, 2) matching of these features at pixel precision, and 3) refined matching at sub-pixel accuracy based on local phase correlation. We also proposed a super-resolution method called Edge Focused Thermal Super-resolution (EFTS), which includes an edge extraction module forcing the neural networks to focus on edges in the images. Then, for fall detection, we proposed a new method (called TSFD for Thermal Stereo Fall Detection) based on uncalibrated stereo point matching and the classification of matches as on the ground or not on the ground. Finally, we explored several approaches to learning activities from a limited amount of data for monitoring seniors' activities.
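Step 3 of the framework refines matches with local phase correlation; the sketch below shows the generic idea on two small patches using scikit-image's phase cross-correlation with sub-pixel upsampling. The patch contents and parameters are illustrative, and this is not the ST framework itself.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)

# A hypothetical 32x32 thermal patch and a copy shifted by 0.4 pixels
# along the horizontal axis (simulating a residual stereo disparity).
left = rng.random((32, 32))
right = nd_shift(left, shift=(0.0, 0.4), order=3, mode="nearest")

# upsample_factor=50 estimates the displacement to 1/50 of a pixel.
estimated, error, _ = phase_cross_correlation(left, right, upsample_factor=50)
print(estimated)  # approximately [0.0, -0.4]
```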
Li, Haoyu. "Recent hidden Markov models for lower limb locomotion activity detection and recognition using IMU sensors". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEC041.
The context of this thesis is the quantified self, a movement born in California that consists in getting to know oneself better by measuring data relating to one's body and activities. The research work consisted in developing algorithms for analyzing signals from an IMU (Inertial Measurement Unit) sensor placed on the leg to recognize different movement activities such as walking, running, stair climbing... These activities are recognizable from the shape of the sensor's acceleration and angular velocity signals, both tri-axial, during leg movement and the gait cycle. To address the recognition problem, the thesis work resulted in the construction of a particular hidden Markov chain, called a semi-triplet Markov chain, which combines a semi-Markov model and a Gaussian mixture model within a triplet Markov model. This model is adapted both to the nature of the gait cycle and to the sequence of activities as it occurs in daily life. To adapt the model parameters to differences in human morphology and behavior, we developed algorithms for estimating the parameters both off-line and on-line. To establish the classification and learning performance of the algorithms, we conducted experiments on recordings collected during the thesis and on public datasets. The results are systematically compared with state-of-the-art algorithms.
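As a simplified point of comparison, the sketch below fits a plain Gaussian HMM (via the hmmlearn library) to tri-axial IMU-like features and decodes a hidden state per sample; the semi-triplet Markov chain of the thesis is a richer model, and the synthetic data here is purely illustrative.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Synthetic stand-in for 6-D IMU features (3-axis acceleration + 3-axis
# angular velocity), alternating between two regimes such as walking/resting.
walk = rng.normal(loc=1.0, scale=0.5, size=(300, 6))
rest = rng.normal(loc=0.0, scale=0.1, size=(300, 6))
X = np.vstack([walk, rest, walk])

# Plain Gaussian HMM: one hidden state per activity regime.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50, random_state=0)
model.fit(X)

states = model.predict(X)    # decoded activity label per sample
print(np.bincount(states))   # roughly 600 samples in one state, 300 in the other
```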