Academic literature on the topic 'Mécanismes d'attention'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Mécanismes d'attention.'
Journal articles on the topic "Mécanismes d'attention":
McGowan, Rosemary A., Kim Morouney, and Patricia Bradshaw. "Managers and Eldercare: Three Critical, Language-Based Approaches." Canadian Journal on Aging / La Revue canadienne du vieillissement 19, no. 2 (2000): 237–59. http://dx.doi.org/10.1017/s0714980800014033.
Cutler, Fred. "Whodunnit? Voters and Responsibility in Canadian Federalism." Canadian Journal of Political Science 41, no. 3 (September 2008): 627–54. http://dx.doi.org/10.1017/s0008423908080761.
Radmilovic, Vuk. "Governmental Interventions and Judicial Decision Making: The Supreme Court of Canada in the Age of the Charter." Canadian Journal of Political Science 46, no. 2 (June 2013): 323–44. http://dx.doi.org/10.1017/s0008423913000504.
Weiermair, K. "A Note on Manpower Forecasting." Commentaires 30, no. 2 (April 12, 2005): 228–40. http://dx.doi.org/10.7202/028608ar.
Aparicio-Valdez, Luis. "La gestion empresarial en latinoamérica y su impacto en las relaciones laborales." Articles 44, no. 1 (April 12, 2005): 124–48. http://dx.doi.org/10.7202/050476ar.
Dissertations / Theses on the topic "Mécanismes d'attention":
Das, Srijan. "Mécanismes d'attention spatio-temporels pour la reconnaissance d'activité." Thesis, Université Côte d'Azur, 2020. https://tel.archives-ouvertes.fr/tel-03177892.
This thesis targets the recognition of human actions in videos. Action recognition is a complicated task in computer vision because of its many complex challenges. With the emergence of deep learning and large-scale datasets from internet sources, substantial improvements have been made in video understanding. For instance, state-of-the-art 3D convolutional networks like I3D, pre-trained on huge datasets like Kinetics, have successfully boosted the recognition of actions in internet videos. However, these networks, with rigid kernels applied across the whole space-time volume, cannot address the challenges exhibited by Activities of Daily Living (ADL). We are particularly interested in discriminative video representation for ADL. Beyond the challenges of generic videos, ADL exhibit (i) fine-grained actions with short and subtle motion, like pouring grain versus pouring water; (ii) actions with similar visual patterns but different motion patterns, like rubbing hands versus clapping; and (iii) long, complex actions, like cooking. To address these challenges, we make three key contributions. The first is a multi-modal fusion strategy that takes the benefits of multiple modalities into account for classifying actions. The questions remain: how can multiple modalities be combined in an end-to-end manner, and how can 3D information guide current state-of-the-art RGB networks for action classification? To this end, we propose articulated-pose-driven attention mechanisms for action classification: three variants of spatio-temporal attention exploiting the RGB and 3D pose modalities to address the aforementioned challenges (i) and (ii) for short actions. Our third main contribution is a temporal model on top of our attention-based model.
The video representation retains dense temporal information, which enables the temporal model to capture long, complex actions, crucial for ADL. We evaluated our first contribution on three small-scale public datasets: CAD-60, CAD-120, and MSRDailyActivity3D. We evaluated our remaining two contributions on four public datasets: a large-scale human activity dataset, NTU-RGB+D 120; its subset NTU-RGB+D 60; a challenging real-world human activity dataset, Toyota Smarthome; and a small-scale human-object interaction dataset, Northwestern UCLA. Our experiments show that the methods proposed in this thesis outperform the state of the art.
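The pose-driven attention described in this abstract can be illustrated with a minimal sketch: spatial positions of an RGB feature map are weighted by their similarity to a pose embedding, and the weighted sum gives the attended representation. This is an illustrative reduction under assumed names and shapes, not the thesis's actual model.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pose_driven_spatial_attention(rgb_feats, pose_vec):
    """Weight spatial positions of an RGB feature map by their
    dot-product similarity to a pose embedding, then pool.

    rgb_feats: (num_positions, dim) flattened spatial feature map
    pose_vec:  (dim,) embedding of the 3D pose for the same frame
    Returns the attended feature vector and the attention weights.
    """
    scores = rgb_feats @ pose_vec / np.sqrt(rgb_feats.shape[1])
    weights = softmax(scores)        # one weight per spatial position
    attended = weights @ rgb_feats   # weighted sum over positions
    return attended, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 64))   # e.g. a 7x7 map with 64 channels
pose = rng.normal(size=64)
vec, w = pose_driven_spatial_attention(feats, pose)
```

In the thesis the pose embedding would come from a learned network over 3D skeleton joints; here it is a random placeholder.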
Thubert, Thibault. "Impact d'un détournement d'attention sur les mécanismes neuromusculaires impliqués dans la contraction des muscles du plancher pelvien." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066178/document.
Aims: Attention may be involved in pelvic floor muscle (PFM) contraction. Methods: The electromyographic (EMG) activity of the external anal sphincter (EAS) was recorded in healthy female volunteers during voluntary and involuntary (cough-induced) PFM contraction, elicited by local stimulation and combined (or not) with a mental distraction task (DT). Reaction time RT1, i.e. the latency between the stimulus and the onset of EAS EMG activity, and RT3, i.e. the latency between the onset of EAS EMG activity and the onset of external intercostal muscle (EIC) activity (cough), were measured. Following randomisation (2/1), 13 volunteers underwent a dual-task cognitive rehabilitation program (an attentional test plus PFM exercises) and 26 formed the control group (no specific instruction). RT1 and RT3 were recorded before and after the program in both groups. Results: The mental distraction task led to a 3.98 times greater reaction time between stimulus and EAS EMG activation (RT1) (p<0.001). The DT led to a 29% shorter anticipation of the involuntary PFM contraction: RT3 was -80.00 ms without a DT versus -56.67 ms with a DT (r=0.7, p=0.004). In the rehabilitation group, RT1 under DT conditions decreased from 461.1 ms to 290.7 ms (r=0.6, p=0.006) versus 370 ms to 343 ms in the control group (r=0.9, p=NS). In the study group, RT3 without a DT increased from -68.5 ms to -127.8 ms (r=1.89, p=0.03) and from 42.6 ms to -59.3 ms with a DT (r=1.4, p=0.04). Conclusions: A specific dual-task rehabilitation program can prevent the effect of a DT on PFM contraction characteristics.
Hoonakker, Marc. "Étude des mécanismes de contrôle cognitif sous-tendant les détériorations et fluctuations d'attention soutenue chez les patients souffrant de schizophrénie et les sujets sains." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAJ089/document.
The purpose of this project was to gain more knowledge about the cognitive control mechanisms underlying deteriorations and fluctuations of sustained attention in schizophrenia and in healthy participants. To that end, we combined behavioral, electrophysiological (event-related potentials and functional connectivity), and subjective measures. Our results revealed spared sustained attention in schizophrenia, together with a distinct pattern of sustained-attention changes: deteriorations are underpinned by a decrease of the reactive mode of cognitive control in patients and by a decrease of the proactive mode in controls. Our results also highlighted slightly distinct patterns of precursors of lapses in sustained attention in schizophrenia, depending on the attentional state. Sustained-attention changes are associated with resource depletion in patients, whereas in healthy participants, depending on the attentional state, they could also be caused by disengagement of cognitive control.
Pelloin, Valentin. "La compréhension de la parole dans les systèmes de dialogues humain-machine à l'heure des modèles pré-entraînés." Electronic Thesis or Diss., Le Mans, 2024. http://www.theses.fr/2024LEMA1002.
In this thesis, spoken language understanding (SLU) is studied in the application context of telephone dialogues with defined goals (hotel booking, for example). Historically, SLU was performed through a cascade of systems: a first system would transcribe the speech into words, and a natural language understanding system would link those words to a semantic annotation. The development of deep neural methods has led to the emergence of end-to-end architectures, where the understanding task is performed by a single system applied directly to the speech signal to extract the semantic annotation. Recently, so-called self-supervised learning (SSL) pre-trained models have brought new advances in natural language processing (NLP). Trained in a generic way on very large datasets, they can then be adapted to other applications. To date, the best SLU results have been obtained with pipeline systems incorporating SSL models. However, neither architecture, pipeline nor end-to-end, is perfect. In this thesis, we study these architectures and propose hybrid versions that attempt to combine the advantages of each. After developing a state-of-the-art end-to-end SLU model, we evaluated different hybrid strategies. The advances made by SSL models during the course of this thesis led us to integrate them into our hybrid architecture.
Elbayad, Maha. "Une alternative aux modèles neuronaux séquence-à-séquence pour la traduction automatique." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM012.
In recent years, deep learning has enabled impressive achievements in machine translation. Neural Machine Translation (NMT) relies on training deep neural networks with a large number of parameters on vast amounts of parallel data to learn how to translate from one language to another. One crucial factor in the success of NMT is the design of new powerful and efficient architectures. State-of-the-art systems are encoder-decoder models that first encode a source sequence into a set of feature vectors and then decode the target sequence conditioned on the source features. In this thesis we question the encoder-decoder paradigm and advocate an intertwined encoding of the source and target so that the two sequences interact at increasing levels of abstraction. For this purpose, we introduce Pervasive Attention, a model based on two-dimensional convolutions that jointly encodes the source and target sequences with interactions that are pervasive throughout the network. To improve the efficiency of NMT systems, we explore online machine translation, where the source is read incrementally and the decoder is fed partial contexts so that the model can alternate between reading and writing. We investigate deterministic agents that guide the read/write alternation through a rigid decoding path, and introduce new dynamic agents that estimate a decoding path for each sample. We also address the resource efficiency of encoder-decoder models and posit that going deeper in a neural network is not required for all instances. We design depth-adaptive Transformer decoders that allow anytime prediction, and sample-adaptive halting mechanisms that favor low-cost predictions for low-complexity instances and save deeper predictions for complex scenarios.
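The joint source-target encoding behind Pervasive Attention can be sketched as a 2D grid whose cell (t, s) pairs target token t with source token s; the full model refines this grid with a 2D convolutional network before aggregating over the source axis, which this sketch omits. A hedged NumPy illustration, with assumed names and shapes:

```python
import numpy as np

def pervasive_grid(src, tgt):
    """Build the joint target x source feature grid: cell (t, s)
    concatenates the embeddings of target token t and source token s.
    A 2D CNN (omitted here) would refine this grid; we simply
    max-pool over the source axis to obtain one vector per target
    position, mimicking the paper's aggregation step.

    src: (S, d) source embeddings; tgt: (T, d) target embeddings
    """
    S, d = src.shape
    T, _ = tgt.shape
    grid = np.concatenate(
        [np.broadcast_to(tgt[:, None, :], (T, S, d)),
         np.broadcast_to(src[None, :, :], (T, S, d))],
        axis=-1)                   # (T, S, 2d) joint grid
    per_target = grid.max(axis=1)  # pool over source positions
    return grid, per_target

rng = np.random.default_rng(1)
grid, per_tgt = pervasive_grid(rng.normal(size=(5, 8)),
                               rng.normal(size=(3, 8)))
```

Unlike encoder-decoder attention, every layer operating on this grid sees both sequences at once, which is what makes the interactions "pervasive".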
Deramgozin, Mohammadmahdi. "Développement de modèles de reconnaissance des expressions faciales à base d’apprentissage profond pour les applications embarquées." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0286.
The field of Facial Emotion Recognition (FER) is pivotal in advancing human-machine interaction and finds essential applications in healthcare for conditions such as depression and anxiety. Leveraging Convolutional Neural Networks (CNNs), this thesis presents a progression of models aimed at optimizing emotion detection and interpretation. The initial model is resource-frugal yet competes favorably with state-of-the-art solutions, making it a strong candidate for embedded systems constrained in computational and memory resources. To capture the complexity and ambiguity of human emotions, the research work presented in this thesis enhances this CNN-based foundational model by incorporating facial Action Units (AUs). This approach not only refines emotion detection but also provides interpretability by identifying the specific AUs tied to each emotion. Further sophistication is achieved by introducing neural attention mechanisms, both spatial and channel-based, improving the model's focus on salient facial features. This makes the CNN-based model well suited to real-world scenarios, such as partially obscured or subtle facial expressions. Building on these results, we finally propose an optimized, yet computationally efficient, CNN model that is ideal for resource-limited environments such as embedded systems. While it provides a robust solution for FER, this research also identifies perspectives for future work, such as real-time applications and advanced techniques for model interpretability.
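The spatial and channel attention mentioned in this abstract can be reduced to two gating operations on a feature map: one weight per channel and one weight per spatial location. Real modules (e.g. SE or CBAM blocks) learn small networks to produce these gates; the sketch below uses parameter-free gates for brevity, so it is an illustration of the mechanism, not the thesis's model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap):
    """Gate each channel of a (C, H, W) map by a squashed
    global-average statistic, emphasising informative channels."""
    pooled = fmap.mean(axis=(1, 2))       # (C,) one value per channel
    gate = sigmoid(pooled)                # per-channel weight in (0, 1)
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    """Gate each spatial location by its mean activation across
    channels, emphasising salient regions (e.g. eyes, mouth)."""
    pooled = fmap.mean(axis=0)            # (H, W) one value per location
    gate = sigmoid(pooled)
    return fmap * gate[None, :, :]

fmap = np.arange(24, dtype=float).reshape(2, 3, 4)
out_c = channel_attention(fmap)
out_s = spatial_attention(fmap)
```

In a learned module the pooled statistics would pass through a small MLP (channel) or convolution (spatial) before the sigmoid.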
Simonnet, Edwin. "Réseaux de neurones profonds appliqués à la compréhension de la parole." Thesis, Le Mans, 2019. http://www.theses.fr/2019LEMA1006/document.
This thesis is part of the emergence of deep learning and focuses on spoken language understanding, taken here as the automatic extraction and representation of the meaning carried by the words in a spoken utterance. We study a semantic concept tagging task used in a spoken dialogue system and evaluated on the French corpus MEDIA. Over the past decade, neural models have emerged in many natural language processing tasks, thanks to algorithmic advances and powerful computing tools such as graphics processors. Many obstacles make the understanding task complex, such as the difficult interpretation of automatic speech transcriptions: many errors are introduced by the automatic recognition process upstream of the comprehension module. We present a state of the art of spoken language understanding and of the supervised machine learning methods used to address it, starting with classical systems and finishing with deep learning techniques. Our contributions are then presented along three axes. First, we develop an efficient neural architecture consisting of a bidirectional recurrent encoder-decoder with an attention mechanism. Then we study the management of automatic recognition errors and solutions to limit their impact on our performance. Finally, we consider disambiguating the comprehension task to make the systems more efficient.
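The encoder-decoder attention mechanism that recurs throughout these abstracts has a compact core: score each encoder state against the current decoder state, normalise the scores with a softmax, and take the weighted sum as the context vector. A minimal dot-product sketch (the cited theses typically use learned additive or multi-head scoring; names here are illustrative):

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_context(decoder_state, encoder_states):
    """Score each encoder state against the current decoder state,
    normalise with softmax, and return the weighted context vector.

    decoder_state:  (d,) current decoder hidden state
    encoder_states: (T, d) one hidden state per input position
    """
    scores = encoder_states @ decoder_state   # (T,) similarity scores
    weights = softmax(scores)                 # attention distribution
    context = weights @ encoder_states        # (d,) weighted sum
    return context, weights

enc = np.eye(4)                    # four toy encoder states of dim 4
dec = np.array([1.0, 0.0, 0.0, 0.0])
context, weights = attention_context(dec, enc)
```

With these toy inputs, the first encoder state matches the decoder state best, so it receives the largest attention weight.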
Book chapters on the topic "Mécanismes d'attention":
DRIF, Ahlem, Saad Eddine SELMANI, and Hocine CHERIFI. "Réseau interactif et apprentissage automatique pour les recommandations." In Optimisation et apprentissage, 123–51. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9071.ch5.