Selected scientific literature on the topic "Auto-supervisé"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Auto-supervisé".
Journal articles on the topic "Auto-supervisé"
Desplanche, Elodie, Gilles Thöni, Peig Harnett, Alain Varray, Aline Herbinet, Raphaël Chiron and Brian Casserly. "Un programme franco-irlandais d’APA supervisé par visioconférence, chez des adultes ayant la mucoviscidose : effets sur le niveau d’AP auto-renseignée et la condition physique". Science & Sports 33 (May 2018): S27-S28. http://dx.doi.org/10.1016/j.scispo.2018.03.039.
Theses / dissertations on the topic "Auto-supervisé"
Decoux, Benoît. "Un modèle connexionniste de vision 3-D : imagettes rétiniennes, convergence stéréoscopique, et apprentissage auto-supervisé de la fusion". Rouen, 1995. http://www.theses.fr/1995ROUES056.
Lefort, Mathieu. "Apprentissage spatial de corrélations multimodales par des mécanismes d'inspiration corticale". PhD thesis, Université Nancy II, 2012. http://tel.archives-ouvertes.fr/tel-00756687.
Lefort, Mathieu. "Apprentissage spatial de corrélations multimodales par des mécanismes d'inspiration corticale". Electronic Thesis or Diss., Université de Lorraine, 2012. http://www.theses.fr/2012LORR0106.
This thesis focuses on unifying the multiple modal data flows that may be provided by the sensors of an agent. This unification, inspired by psychological experiments such as the ventriloquist effect, is based on detecting correlations, defined as temporally recurrent spatial patterns that appear in the input flows. Learning the correlation space of the input flows consists of sampling this space and generalizing the learned samples. This thesis proposes functional paradigms for multimodal data processing, leading to the connectionist, generic, modular and cortically inspired architecture SOMMA (Self-Organizing Maps for Multimodal Association). In this model, each modal stimulus is processed in a cortical map, and the interconnection of these maps provides a unifying multimodal data processing. Sampling and generalization of correlations are based on the constrained self-organization of each map. The model is characterised by a gradual emergence of these functional properties: monomodal properties lead to the emergence of multimodal ones, and the learning of correlations in each map precedes the self-organization of these maps. Furthermore, the use of a connectionist architecture with on-line, unsupervised learning gives the data processing in SOMMA plasticity and robustness properties that classical artificial intelligence models usually lack.
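The self-organizing maps at the core of SOMMA follow Kohonen's classical update rule. As a rough illustration only (not the SOMMA architecture itself, whose constrained self-organization and map interconnections are more involved), a single step of a minimal one-dimensional SOM might look like:

```python
import math

def best_matching_unit(weights, x):
    """Index of the map unit whose weight vector is closest to input x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def som_update(weights, x, bmu, lr, sigma):
    """One Kohonen step: pull every unit toward x, weighted by a Gaussian
    neighborhood centred on the best-matching unit (bmu)."""
    updated = []
    for i, w in enumerate(weights):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood weight
        updated.append([wj + lr * h * (xj - wj) for wj, xj in zip(w, x)])
    return updated
```

Repeating this step over an input stream self-organizes the map so that nearby units come to respond to similar inputs.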
Geiler, Louis. "Deep learning for churn prediction". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7333.
Churn prediction has traditionally been a field of study for marketing. However, in the wake of technological advancements, more and more data can be collected to analyze customer behaviour. This manuscript was built in this frame, with a particular focus on machine learning. We first looked at the supervised learning problem and demonstrated that logistic regression, random forest and XGBoost, taken as an ensemble, offer the best results in terms of Area Under the Curve (AUC) among a wide range of traditional machine learning approaches. We also showed that re-sampling approaches are only efficient in a local setting, not a global one. Subsequently, we aimed at refining our prediction by relying on customer segmentation. Indeed, some customers leave a service because of a cost they deem too high, others because of a problem with customer service. Our approach was enriched with a novel deep neural network architecture that combines auto-encoders with the k-means approach. Going further, we focused on self-supervised learning in the tabular domain. More precisely, the proposed architecture was inspired by the SimCLR approach, altered with the Mean-Teacher model from semi-supervised learning. We showed through the win matrix the superiority of our approach with respect to the state of the art. Ultimately, we applied what we built in this manuscript in an industrial setting, that of Brigad. We alleviated the company's churn problem with a random forest optimized through grid search and threshold optimization, and proposed to interpret the results with SHAP (SHapley Additive exPlanations).
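The AUC metric used above to compare approaches can be computed directly from rank statistics. A minimal plain-Python sketch (illustrative, not the author's evaluation code):

```python
def auc(labels, scores):
    """Area under the ROC curve via its rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive outranks a randomly
    chosen negative, counting score ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) -> 0.75
```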
Zaiem, Mohamed Salah. "Informed Speech Self-supervised Representation Learning". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAT009.
Feature learning has been driving machine learning advances, with recently proposed methods progressively getting rid of handcrafted parts within the transformations from inputs to desired labels. Self-supervised learning has emerged within this context, allowing the processing of unlabeled data towards better performance on low-labeled tasks. The first part of my doctoral work motivates the choices in the speech self-supervised pipelines that learn the unsupervised representations. I first show how conditional-independence-based scoring can be used to efficiently and optimally select pretraining tasks tailored for the best performance on a target task. The second part of my doctoral work studies the evaluation and usage of pretrained self-supervised representations. I explore, first, the robustness of current speech self-supervision benchmarks to changes in the downstream modeling choices, and propose, second, fine-tuning approaches for better efficiency and generalization.
Jouffroy, Emma. "Développement de modèles non supervisés pour l'obtention de représentations latentes interprétables d'images". Electronic Thesis or Diss., Bordeaux, 2024. http://www.theses.fr/2024BORD0050.
The Laser Megajoule (LMJ) is a large research device that simulates pressure and temperature conditions similar to those found in stars. During experiments, diagnostics are guided into an experimental chamber for precise positioning. To minimize the risks associated with human error in such an experimental context, the automation of an anti-collision system is envisaged. This involves the design of machine learning tools offering reliable decision levels based on the interpretation of images from cameras positioned in the chamber. Our research focuses on probabilistic generative neural methods, in particular variational auto-encoders (VAEs). This class of models was chosen because it potentially gives access to a latent space directly linked to the properties of the objects making up the observed scene. The major challenge is to design deep network models that effectively give access to such a fully informative and interpretable representation, with a view to system reliability. The probabilistic formalism intrinsic to VAEs allows us, if we can trace back to such a representation, to analyse the uncertainties of the encoded information.
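Two ingredients make a VAE's latent space probabilistic: the reparameterization trick, which keeps sampling differentiable, and the KL regularizer toward a standard normal prior. A minimal sketch for a diagonal-Gaussian posterior (generic VAE machinery, not the thesis's models):

```python
import math, random

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), keeping the sample
    differentiable with respect to mu and log_var."""
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over latent
    dimensions; this is the regularizer that shapes the VAE latent space."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))
```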
Roger, Vincent. "Modélisation de l'indice de sévérité du trouble de la parole à l'aide de méthodes d'apprentissage profond : d'une modélisation à partir de quelques exemples à un apprentissage auto-supervisé via une mesure entropique". Thesis, Toulouse 3, 2022. http://www.theses.fr/2022TOU30180.
People with head and neck cancers have speech difficulties after surgery or radiation therapy. It is important for health practitioners to have a measure that reflects the severity of the speech disorder. To produce this measure, a perceptual study is commonly performed with a group of five to six clinical experts, which limits the use of this assessment in practice. The creation of an automatic measure, similar to the severity index, would allow better follow-up of patients by making the index easier to obtain. To build such a measure, we relied on a classically performed reading task, using the recordings of the C2SI-RUGBI corpus, which includes more than 100 people and represents about one hour of recording for modelling the severity index. In this PhD work, a review of state-of-the-art methods on speech, emotion and speaker recognition using little data was undertaken. We then attempted to model severity using transfer learning and deep learning. Since the results were not usable, we turned to so-called "few-shot" techniques (learning from only a few examples). After promising first attempts at phoneme recognition, we obtained encouraging results for categorising the severity of patients. Nevertheless, exploiting these results for a medical application would require improvements. We therefore computed projections of the data from our corpus. As some score slices were separable using acoustic parameters, we proposed a new entropic measurement method based on self-supervised speech representations trained on the Librispeech corpus (the PASE+ model) and inspired by the Inception Score (generally used in image processing to evaluate the quality of images generated by models). Our method produces a score similar to the severity index, with a Spearman correlation of 0.87 on the reading task of the cancer corpus.
The advantage of our approach is that it does not require data from the C2SI-RUGBI corpus for training, so the whole corpus can be used to evaluate our system. The quality of our results has allowed us to consider use in a clinical environment through an application on a tablet: tests are underway at the Larrey Hospital in Toulouse.
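The Inception Score that inspired the proposed entropic measure exponentiates the average KL divergence between each sample's class posterior and the marginal class distribution. A minimal generic sketch (not the PASE+-based measure itself):

```python
import math

def inception_style_score(probs):
    """exp of the mean KL divergence between per-sample class posteriors and
    the marginal class distribution: high when individual predictions are
    confident yet the marginal distribution is spread out."""
    n, k = len(probs), len(probs[0])
    marginal = [sum(p[j] for p in probs) / n for j in range(k)]
    mean_kl = sum(
        sum(pj * math.log(pj / mj) for pj, mj in zip(p, marginal) if pj > 0)
        for p in probs
    ) / n
    return math.exp(mean_kl)
```

With two classes, perfectly confident and perfectly diverse posteriors reach the maximum score of 2 (the number of classes), while identical posteriors score 1.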
Sarazin, Tugdual. "Apprentissage massivement distribué dans un environnement Big Data". Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD050.
In recent years, the amount of data analysed by companies and research laboratories has increased sharply, opening the era of Big Data. However, these raw data are frequently uncategorized and hard to use. This thesis aims to improve and ease the pre-processing and comprehension of these large amounts of data by using unsupervised machine learning algorithms. The first part of this thesis is dedicated to a state of the art of clustering and biclustering algorithms and to an introduction to big data technologies. It then introduces the adaptation of the Self-Organizing Map clustering algorithm [Kohonen, 2001] to a big data environment. Our algorithm (SOM-MR) provides the same advantages as the original algorithm, namely the creation of a data visualisation map based on data clusters. Moreover, it runs on the Spark platform, which makes it able to process large amounts of data in a short time. Thanks to the popularity of this platform, it easily fits into many data mining environments. We demonstrated this in our project "Square Predict", carried out in partnership with the insurer Axa. The aim of this project was to provide a real-time data analysis platform for estimating the severity of natural disasters or improving knowledge of residential risks. Throughout this project, we proved the efficiency of our algorithm through its capacity to analyse and visualise a large volume of data coming from social networks and open data. The second part of this work is dedicated to a new bi-clustering algorithm. Bi-clustering consists of clustering observations and variables at the same time. In this contribution we put forward a new bi-clustering approach based on the self-organizing maps algorithm that scales to large amounts of data (BiTM-MR). To reach this goal, this algorithm is also based on the Spark platform.
It extracts more information than the SOM-MR algorithm because, besides producing groups of observations, it also associates variables to these groups, thus creating bi-clusters of variables and observations.
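A batch formulation of the SOM update is what lets algorithms of this family distribute naturally: assigning points to best-matching units is a map step, and the neighborhood-weighted averaging is a reduce step. A minimal single-machine sketch of one such batch step (illustrative only; the actual SOM-MR runs on Spark):

```python
import math

def batch_som_step(weights, data, sigma):
    """One batch SOM update. The 'map' phase assigns each data point to its
    best-matching unit; the 'reduce' phase recomputes every unit as a
    neighborhood-weighted mean of the data."""
    def bmu(x):
        return min(range(len(weights)),
                   key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))
    assignments = [bmu(x) for x in data]                # map phase
    updated = []
    for i in range(len(weights)):                       # reduce phase
        num, den = [0.0] * len(weights[0]), 0.0
        for x, b in zip(data, assignments):
            h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))
            den += h
            num = [nj + h * xj for nj, xj in zip(num, x)]
        updated.append([nj / den for nj in num])
    return updated
```

Because each phase is a pure per-point or per-unit aggregation, both parallelize over data partitions.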
Luce-Vayrac, Pierre. "Open-Ended Affordance Discovery in Robotics Using Pertinent Visual Features". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS670.
Scene understanding is a challenging problem in computer vision and robotics. It is traditionally addressed as an observation-only process, in which the robot acquires data on its environment through its exteroceptive sensors and processes it with specific algorithms (using, for example, deep neural networks in modern approaches) to produce an interpretation: 'This is a chair because this looks like a chair'. For a robot to operate properly in its environment it needs to understand it, to make sense of it in relation to its motivations and its action capacities. We believe that scene understanding requires interaction with the environment, wherein perception, action and proprioception are integrated. The work described in this thesis explores this avenue, inspired by work in psychology and neuroscience showing the strong link between action and perception. The concept of affordance was introduced by James J. Gibson in 1977. It states that animals tend to perceive their environment through what they can accomplish with it (what it affords them), rather than solely through its intrinsic properties: 'This is a chair because I can sit on it'. A variety of approaches study affordances in robotics, largely agreeing on representing an affordance as a triplet (effect, (action, entity)), such that the effect is generated when the action is exerted on the entity. However, most authors use predefined features to describe the environment. We argue that building affordances on predefined features actually defeats their purpose, by limiting them to the perceptual subspace generated by these features. Furthermore, we affirm the impracticability of predefining a set of features general enough to describe entities in open-ended environments. In this thesis, we propose and develop an approach that enables a robot to learn affordances while simultaneously building relevant features describing the environment.
To bootstrap affordance discovery we use a classical interaction loop. The robot executes a sequence of motor controls (action a) on a part of the environment ('object' o), described using a predefined set of initial features (colour and size), and observes the result (effect e). By repeating this process, a dataset of (e, (a, o)) instances is built, which is then used to train a predictive model of the affordance. To learn a new feature, the same loop is used, but instead of a predefined set of descriptors of o we use a deep convolutional neural network (CNN). The raw data (2D images) of o is used as input and the effect e as expected output; the action is implicit, as a different CNN is trained for each specific action. The training is self-supervised, since the interaction data is produced by the robot itself. In order to correctly predict the affordance, the network must extract features that are directly relevant to the environment and the motor capabilities of the robot. Any feature learned by the method can then be added to the initial descriptor set. To achieve open-ended learning, whenever the agent executes the same action on two apparently similar objects (with regard to the currently used set of features) but does not observe the same effect, it must assume that it does not possess the features needed to distinguish those objects with respect to this action; hence it needs to discover and learn new features to reduce the ambiguity. The robot uses the same approach to enrich its descriptor set. Several experiments on a real robotic setup showed that we can reach predictive performance similar to classical approaches that use predefined descriptors, while avoiding their limitations.
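The (effect, (action, object)) bookkeeping described above, including the ambiguity test that triggers new feature learning, can be sketched with a simple frequency-based predictor (hypothetical names; the thesis trains CNNs rather than a lookup table):

```python
from collections import Counter, defaultdict

class AffordanceMemory:
    """Toy store of (effect, (action, object-features)) interaction records."""

    def __init__(self):
        self.memory = defaultdict(Counter)

    def record(self, action, features, effect):
        """Log one interaction outcome."""
        self.memory[(action, tuple(features))][effect] += 1

    def predict(self, action, features):
        """Most frequently observed effect, or None if never tried."""
        counts = self.memory[(action, tuple(features))]
        return counts.most_common(1)[0][0] if counts else None

    def ambiguous(self, action, features):
        """True when identical descriptions led to different effects, i.e.
        the current feature set cannot separate these objects for this
        action -- the trigger for learning a new feature."""
        return len(self.memory[(action, tuple(features))]) > 1
```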
Chareyre, Maxime. "Apprentissage non-supervisé pour la découverte de propriétés d'objets par découplage entre interaction et interprétation". Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2023. http://www.theses.fr/2023UCFA0122.
Robots are increasingly used to perform tasks in controlled environments. However, their use in open environments is still fraught with difficulties. Robotic agents are likely to encounter objects whose behaviour and function they are unaware of. In some cases, they must interact with these elements to carry out their mission by collecting or moving them, but without knowledge of the objects' dynamic properties it is not possible to implement an effective strategy for resolving the mission. In this thesis, we present a method for teaching an autonomous robot a physical interaction strategy with unknown objects, without any a priori knowledge, the aim being to extract from the interactions observed by its sensors information about as many of the objects' physical properties as possible. Existing methods for characterising objects through physical interactions do not fully satisfy these criteria: the interactions established provide only an implicit representation of the object's dynamics, requiring supervision to identify its properties, and the proposed solutions are based on unrealistic scenarios without an agent. Our approach differs from the state of the art by proposing a generic method for learning interaction that is independent of the object and its properties, and can therefore be decoupled from the prediction phase. In particular, this leads to a completely unsupervised global pipeline. In the first phase, we propose to learn an interaction strategy with the object via an unsupervised reinforcement learning method, using an intrinsic motivation signal based on the idea of maximising the variations of a state vector of the object. The aim is to obtain a set of interactions containing information highly correlated with the object's physical properties.
This method was tested on a simulated robot interacting by pushing, and enabled properties such as the object's mass, shape and friction to be accurately identified. In a second phase, we make the assumption that the true physical properties define a latent space that explains the object's behaviours, and that this space can be identified from observations collected through the agent's interactions. We set up a self-supervised prediction task in which we adapt a state-of-the-art architecture to create this latent space. Our simulations confirm that combining the behavioural model with this architecture leads to the emergence of a representation of the object's properties whose principal components are strongly correlated with the object's physical properties. Once the properties of the objects have been extracted, the agent can use them to improve its efficiency in tasks involving these objects. We conclude this study by highlighting the performance gains achieved by the agent through reinforcement learning training on a simplified object-repositioning task where the properties are perfectly known. All the work carried out in simulation confirms the effectiveness of an innovative method for autonomously discovering the physical properties of an object through the physical interactions of a robot. Prospects for extending this work involve transferring it to a real robot in a cluttered environment.
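The intrinsic motivation signal of the first phase, maximising variations of the object's state vector, can be illustrated as a reward equal to the magnitude of the state change (a simplified sketch; the thesis's state vector and RL machinery are richer):

```python
def intrinsic_reward(prev_state, next_state):
    """Reward equal to the Euclidean magnitude of the change in the object's
    state vector, encouraging interactions that visibly affect the object."""
    return sum((a - b) ** 2 for a, b in zip(next_state, prev_state)) ** 0.5

# intrinsic_reward([0.0, 0.0], [3.0, 4.0]) -> 5.0
```

An agent maximising this signal is driven toward actions that change the object's state, and thus toward interactions informative about its dynamics.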