Ready-made bibliography on the topic "Représentations robustes"
Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Représentations robustes".
Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant parameters are present in the metadata.
Journal articles on the topic "Représentations robustes"
Laumond, Bénédicte. "La méthode expérimentale du jeu de cartes pour étudier les représentations pénales ordinaires en Allemagne et en France". Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique 147-148, no. 1-2 (August 2020): 169–99. http://dx.doi.org/10.1177/0759106320939892.
Mainieri, Robin, Christophe Corona, Nicolas Eckert, Jérôme Lopez-Saez and Franck Bourrier. "Apports de la dendrogéomorphologie pour la connaissance de l’évolution de l’aléa rocheux dans les Préalpes françaises calcaires". Revue Française de Géotechnique, no. 163 (2020): 5. http://dx.doi.org/10.1051/geotech/2020014.
PEYRAUD, J. L., and F. PHOCAS. "Dossier 'Phénotypage des animaux d'élevage'". INRAE Productions Animales 27, no. 3 (25 August 2014): 179–1890. http://dx.doi.org/10.20870/productions-animales.2014.27.3.3065.
Barney, Darin David. "Push-button Populism: The Reform Party and the Real World of Teledemocracy". Canadian Journal of Communication 21, no. 3 (1 March 1996). http://dx.doi.org/10.22230/cjc.1996v21n3a956.
Doctoral dissertations on the topic "Représentations robustes"
Morchid, Mohamed. "Représentations robustes de documents bruités dans des espaces homogènes". Thesis, Avignon, 2014. http://www.theses.fr/2014AVIG0202/document.
In the Information Retrieval field, documents are usually considered as a "bag-of-words". This model does not take into account the temporal structure of the document and is sensitive to noise that can alter its lexical form. Such noise can come from several sources: the uncontrolled form of documents on microblogging platforms, error-prone automatic transcription of speech documents, lexical and grammatical variability in Web forums, etc. The work presented in this thesis addresses issues related to document representations from noisy sources. The thesis consists of three parts, each studying a different representation of content. The first compares a classical term-frequency representation to a higher-level representation based on a topic space. Abstracting the document content limits the impact of noise by representing the content with a set of high-level features. Our experiments confirm that mapping a noisy document into a topic space improves the results obtained in several information retrieval tasks compared to a classical term-frequency approach. The major drawback of such a high-level representation is that it relies on a topic space whose parameters are chosen empirically.
The second part presents a novel representation based on multiple topic spaces that addresses three main problems: the closeness of the subjects discussed in the document, the tricky choice of the "right" topic-space parameter values, and the robustness of the topic-based representation. Based on the idea that a single representation of the content cannot capture all the relevant information, we propose to increase the number of views of a single document. This multiplication of views generates "artificial" observations that contain fragments of useful information. A first experiment validated this multi-view approach for representing noisy texts. However, the resulting representation is very large and redundant, and it contains additional variability associated with the diversity of views. In a second step, we propose a method based on factor analysis to compact the different views into a new robust low-dimensional representation that retains only the informative part of the document while compensating for the noisy variabilities. In a dialogue classification task, this compression confirmed that the compact representation improves the robustness of noisy document representations.
Nonetheless, during the learning of topic spaces the document is still treated as a "bag-of-words", whereas many studies have shown that the position of words in a document is useful. The third part therefore proposes a representation that takes the temporal structure of the document into account, based on the four-dimensional hyper-complex numbers known as quaternions. Our experiments on a classification task showed the effectiveness of the proposed approach compared to a conventional "bag-of-words" representation.
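The contrast between a raw term-frequency representation and a topic-space representation can be sketched as follows. This is a minimal illustration using scikit-learn's LDA; the toy corpus and the number of topics are assumptions for the example, not the thesis' experimental setup:

```python
# Sketch: mapping noisy text into a topic space instead of raw term
# frequencies. Corpus and parameters are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "speech transcription errors alter the lexical form",
    "topic models abstract the content of noisy documents",
    "web forum posts show lexical and grammatical variability",
]

# Classical representation: term-frequency "bag-of-words"
vectorizer = CountVectorizer()
tf = vectorizer.fit_transform(docs)

# Higher-level representation: each document becomes a distribution
# over a small number of latent topics, smoothing out surface noise.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_repr = lda.fit_transform(tf)
# topic_repr has shape (3, 2): one topic mixture per document
```

A noisy variant of a document (misspellings, transcription errors) perturbs many entries of `tf`, but tends to land on a similar topic mixture, which is the robustness argument made above.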
Paulin, Mattis. "De l'apprentissage de représentations visuelles robustes aux invariances pour la classification et la recherche d'images". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM007/document.
This dissertation focuses on designing image recognition systems that are robust to geometric variability. Image understanding is a difficult problem: images are two-dimensional projections of 3D objects, and representations that must fall into the same category (for instance, objects of the same class in classification) can display significant differences. Our goal is to make systems robust to the right amount of deformation, this amount being determined automatically from data. Our contributions are twofold. We show how to use virtual examples to enforce robustness in image classification systems, and we propose a framework to learn robust low-level descriptors for image retrieval. We first focus on virtual examples, obtained as transformations of real ones. One image generates a set of descriptors, one per transformation, and we show that data augmentation, i.e. considering them all as i.i.d. samples, is the best-performing way to use them, provided that a voting stage over the transformed descriptors is conducted at test time. Because transformations carry varying amounts of information, can be redundant, and can even harm performance, we propose a new algorithm that selects a set of transformations while maximizing classification accuracy. We show that a small number of transformations is enough to considerably improve performance on this task. We also show how virtual examples can replace real ones at a reduced annotation cost. We report good performance on standard fine-grained classification datasets. In a second part, we aim at improving the local region descriptors used in image retrieval, and in particular at proposing an alternative to the popular SIFT descriptor. We propose new convolutional descriptors, called patch-CKN, which are learned without supervision.
We introduce a linked patch- and image-retrieval dataset based on structure-from-motion of web-crawled images, and design a method to accurately test the performance of local descriptors at the patch and image levels. Our approach outperforms both SIFT and all tested approaches with convolutional architectures on our patch and image benchmarks, as well as on several state-of-the-art datasets.
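The test-time voting over transformed copies can be sketched as follows; the classifier and the set of "geometric" transformations below are toy stand-ins, not the descriptors or deformations used in the dissertation:

```python
# Sketch of the "virtual examples" idea: score several transformed
# copies of an input and vote (average) at test time. Classifier and
# transformations are illustrative stand-ins.
import numpy as np

def classify(x):
    """Toy linear scorer standing in for a real image classifier."""
    w = np.array([1.0, -1.0, 0.5])
    return float(x @ w)

def transforms(x):
    """Toy 'geometric' transformations: identity plus small shifts."""
    return [x, x + 0.1, x - 0.1]

x = np.array([0.2, 0.4, 0.6])

# Voting stage: average the scores of the transformed copies
# instead of scoring x only once.
scores = [classify(t) for t in transforms(x)]
voted = sum(scores) / len(scores)
```

When transformations are uninformative or harmful, averaging over all of them wastes computation or degrades accuracy, which motivates the transformation-selection algorithm described above.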
Barbano, Carlo Alberto Maria. "Collateral-Free Learning of Deep Representations : From Natural Images to Biomedical Applications". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT038.
Deep Learning (DL) has become one of the predominant tools for solving a variety of tasks, often with superior performance compared to previous state-of-the-art methods. DL models are often able to learn meaningful and abstract representations of the underlying data. However, it has been shown that they may also learn additional features that are not necessarily relevant or required for the desired task. This poses a number of issues, as this additional information can contain bias, noise, or sensitive attributes (e.g. gender, race, age) that should not be taken into account by the model. We refer to this information as collateral. The presence of collateral information translates into practical issues when deploying DL-based pipelines, especially those involving private users' data. Learning robust representations that are free of collateral information is highly relevant for a variety of fields, such as medical applications and decision support systems. In this thesis, we introduce the concept of Collateral Learning, which refers to all those instances in which a model learns more information than intended. The aim of Collateral Learning is to bridge the gap between different fields in DL, such as robustness, debiasing, generalization in medical imaging, and privacy preservation. We propose different methods for achieving robust representations free of collateral information. Some of our contributions are based on regularization techniques, while others are novel loss functions. In the first part of the thesis, we lay the foundations of our work by developing techniques for robust representation learning on natural images. We focus on one of the most important instances of Collateral Learning, namely biased data.
Specifically, we focus on Contrastive Learning (CL), and we propose a unified metric learning framework that allows us both to easily analyze existing loss functions and to derive novel ones. Here, we propose a novel supervised contrastive loss function, ε-SupInfoNCE, and two debiasing regularization techniques, EnD and FairKL, that achieve state-of-the-art performance on a number of standard vision classification and debiasing benchmarks. In the second part of the thesis, we focus on Collateral Learning in medical imaging, specifically on neuroimaging and chest X-ray images. For neuroimaging, we present a novel contrastive learning approach for brain age estimation. Our approach achieves state-of-the-art results on the OpenBHB dataset for age regression and shows increased robustness to the site effect. We also leverage this method to detect unhealthy brain aging patterns, showing promising results in the classification of brain conditions such as Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). For chest X-ray images (CXR), we target Covid-19 classification, showing how Collateral Learning can effectively hinder the reliability of such models. To tackle this issue, we propose a transfer learning approach that, combined with our regularization techniques, shows promising results on an original multi-site CXR dataset. Finally, we provide some insights into Collateral Learning and privacy preservation in DL models. We show that some of our proposed methods can be effective in preventing certain information from being learned by the model, thus avoiding potential data leakage.
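The supervised contrastive family of losses mentioned above can be sketched as follows. This is an illustrative reading only: the margin `eps` loosely mirrors the role of ε in ε-SupInfoNCE, but the exact formulation is the one in the thesis, and the embeddings, temperature, and margin values here are assumptions:

```python
# Illustrative supervised contrastive (InfoNCE-style) loss with a
# margin added to the negatives. Not the thesis' exact formulation.
import numpy as np

def sup_contrastive_loss(z, labels, eps=0.1, tau=0.1):
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize
    sim = z @ z.T / tau                               # scaled cosine sims
    n = len(labels)
    losses = []
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        neg = [j for j in range(n) if labels[j] != labels[i]]
        for p in pos:
            # each positive must dominate the negatives by a margin eps
            denom = np.exp(sim[i, p]) + np.sum(np.exp(sim[i, neg] + eps))
            losses.append(-np.log(np.exp(sim[i, p]) / denom))
    return float(np.mean(losses))

# Two well-separated classes in 2D
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
loss = sup_contrastive_loss(z, [0, 0, 1, 1])
```

With correctly clustered labels the loss is small; scrambling the labels (so that positives are dissimilar) makes it much larger, which is the behavior a contrastive objective is designed to produce.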
Hafidi, Hakim. "Robust machine learning for Graphs/Networks". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAT004.
This thesis addresses advances in graph representation learning, focusing on the challenges and opportunities presented by Graph Neural Networks (GNNs). It highlights the significance of graphs for representing complex systems and the necessity of learning node embeddings that capture both node features and graph structure. The study identifies key issues in GNNs, such as their dependence on high-quality labeled data, inconsistent performance across datasets, and susceptibility to adversarial attacks. To tackle these challenges, the thesis introduces several innovative approaches. First, it employs contrastive learning for node representation, enabling self-supervised learning that reduces reliance on labeled data. Second, a Bayesian classifier is proposed for node classification, which takes the graph's structure into account to enhance accuracy. Finally, the thesis addresses the vulnerability of GNNs to adversarial attacks by assessing the robustness of the proposed classifier and introducing effective defense mechanisms. These contributions aim to improve both the performance and the resilience of GNNs in graph representation learning.
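The core mechanism by which GNN node embeddings mix node features with graph structure is message passing. A minimal sketch of one mean-aggregation propagation step, on a toy graph that is not the thesis' architecture or data:

```python
# One GCN-style propagation step: each node's new embedding is the
# mean of its own and its neighbors' features, so structure and
# features are blended. Toy 3-node graph, illustrative only.
import numpy as np

A = np.array([[0, 1, 1],     # node 0 linked to nodes 1 and 2
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0],    # per-node input features
              [0.0, 1.0],
              [0.0, 1.0]])

A_hat = A + np.eye(3)                  # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
H = D_inv @ A_hat @ X                  # mean over each neighborhood
```

Node 0's embedding becomes a mix of its own feature and its neighbors' ([1/3, 2/3]), illustrating why embeddings capture "both node features and graph structure" as stated above.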
Roussillon, Tristan. "Algorithmes d'extraction de modèles géométriques discrets pour la représentation robuste des formes". Thesis, Lyon 2, 2009. http://www.theses.fr/2009LYO20103/document.
The work presented in this thesis concerns the fields of image analysis and discrete geometry. Image analysis aims at automatically describing the visual content of a digital image, and discrete geometry provides tools devoted to digital image processing. A two-dimensional analog signal is regularly sampled in order to be handled on computers. This acquisition process results in a digital image, made up of a finite set of discrete elements. Discrete geometry studies the geometric properties of such discrete spaces. In this work, we consider homogeneous regions of an image that are meaningful for a user. The objective is to represent their digital contour by means of geometric patterns and to compute measures. The scope of application in image analysis is wide: for instance, our results would be of great interest for segmentation or object recognition. We focus on three discrete geometric patterns defined by Gauss digitization: the convex or concave part, the digital straight segment, and the digital circular arc. We present several algorithms that detect or recognize these patterns on a digital contour. These algorithms are on-line, exact (integer-only computations without any approximation error) and fast (computations simplified thanks to arithmetic properties, with linear-time complexity). They provide a way to segment a digital contour or to represent it as a reversible polygon. Moreover, we define a measure of convexity, a measure of straightness and a measure of circularity. These measures fulfil the following important properties: they are robust to rigid transformations, they may be applied to any part of a digital contour, and they reach their maximal value for the template with which the data are compared. From these measures, we introduce new patterns with a parameter ranging from 0 to 1.
The parameter is set to 1 when the localisation of the digital contour is reliable, and to a lower value when the digital contour is expected to have been shifted by acquisition noise. This measure-based approach provides a way to robustly decompose a digital contour into convex, concave or straight parts.
El Jili, Fatimetou. "Représentation de signaux robuste aux bruits - Application à la détection et l'identification des signaux d'alarme". Thesis, Reims, 2018. http://www.theses.fr/2018REIMS040/document.
This work targets the detection and identification of audio signals, and in particular alarm signals from priority vehicles. First, we propose a method for detecting alarm signals in a noisy environment, based on time-frequency signal analysis. This method makes it possible to detect and identify alarm signals embedded in noise, even at negative signal-to-noise ratios. Then we propose a signal quantization that is robust against transmission noise. It consists in replacing each bit level of a vector of time or frequency samples with a binary word of the same length provided by an error-correcting encoder. In a first approach, each bit level is quantized independently of the others according to a Hamming-distance minimization criterion. In a second approach, to reduce the quantization error at equal robustness, the different bit levels are quantized successively by a matching pursuit algorithm. This quantization gives the signals a specific shape that allows them to be easily recognized among other signals. Finally, we propose two methods for detecting and identifying signals based on this robust quantization, operating in the time domain or in the frequency domain, by minimizing the distance between the received signals, restricted to their high-weight bits, and the reference signals. Thanks to the quantization, these methods make it possible to detect and identify signals in environments with very low signal-to-noise ratios. In addition, the first method, based on the time-frequency signature, is more efficient with quantized signals.
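The bit-level quantization step described above (snapping each binary word to the nearest codeword of an error-correcting code) can be sketched with a (7,4) Hamming code; the code choice and the sample word are illustrative assumptions, not necessarily the encoder used in the thesis:

```python
# Sketch: quantize a bit level (a 7-bit binary word) to the nearest
# codeword of a (7,4) Hamming code under Hamming distance.
import numpy as np
from itertools import product

# Generator matrix of a (7,4) Hamming code
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
# Enumerate all 16 codewords from the 4-bit messages
codewords = np.array([(np.array(m) @ G) % 2 for m in product([0, 1], repeat=4)])

def quantize_bit_level(bits):
    """Replace a 7-bit word with its nearest codeword (Hamming distance)."""
    d = np.sum(codewords != np.asarray(bits), axis=1)
    return codewords[np.argmin(d)]

word = [1, 0, 1, 1, 0, 1, 0]
q = quantize_bit_level(word)
```

Because the (7,4) Hamming code is perfect (covering radius 1), every 7-bit word is at most one bit flip away from a codeword, so this particular quantizer changes at most one bit per level.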
Boimond, Jean-Louis. "Commande à modèle interne en représentation d'état : Problèmes de synthèse d'algorithme de commande". Lyon, INSA, 1990. http://www.theses.fr/1990ISAL0102.
The work presented in this thesis concerns Internal Model Control (IMC). The first part presents the main properties of this structure, which combines the advantages of the open-loop scheme (the controller is an approximate inverse of the model) and of the closed-loop structure (the ability to cope with modelling errors and unmeasured disturbances). A comparison with the conventional closed loop is briefly presented. In the second part, an asymptotic precision criterion is introduced; the conditions that must be verified by the blocks of the IMC to zero the asymptotic error between the output and a polynomial input are established. The controller is interpreted as an approximate inverse of the model. In discrete time, the use of FIR (Finite Impulse Response) forms permits the synthesis of a stable and realisable controller. The third part deals with the problem of model inversion in discrete time and in state space. It allows us to consider varying linear or non-linear models that are linear in the control variable. The controller is decomposed into two parts: the first generates the control variable from the model state and the reference objective; the second generates the prediction of the reference signal. Asymptotic accuracy is guaranteed for reference inputs that are polynomial in time, of a given order. The last part presents the synthesis of an IMC based on the above controller. The robustness filter becomes a predictor of the error between the plant and model outputs, whose dynamics are tuned according to the knowledge of the plant-model mismatch. Two approaches are proposed to build this filter. The first uses the same technique as for the reference predictor; in the other, the usual notion of filtering is replaced by a measure of the prediction quality.
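The IMC structure described above can be sketched in discrete time. The first-order plant and model below, and all gains, are illustrative assumptions, not the thesis' systems; the point is only to show the controller-as-model-inverse and the plant-model error feedback:

```python
# Minimal discrete-time sketch of internal model control (IMC):
# the controller inverts the model one step ahead, and the feedback
# signal is the plant-model output error. Toy first-order dynamics.
def simulate_imc(a_plant=0.6, a_model=0.5, b=1.0, ref=1.0, steps=200):
    """Run the IMC loop and return the final plant output."""
    y_plant = y_model = 0.0
    for _ in range(steps):
        mismatch = y_plant - y_model           # plant-model error feedback
        target = ref - mismatch                # corrected setpoint
        u = (target - a_model * y_model) / b   # one-step model inverse
        y_model = a_model * y_model + b * u    # internal model update
        y_plant = a_plant * y_plant + b * u    # (mismatched) real plant
    return y_plant
```

Despite the plant-model mismatch (0.6 vs 0.5), the loop settles at the reference, illustrating the asymptotic-precision property discussed in the second part; with a perfect model the output reaches the setpoint immediately.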
Oneata, Dan. "Modèles robustes et efficaces pour la reconnaissance d'action et leur localisation". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM019/document.
Video interpretation and understanding is one of the long-term research goals in computer vision. Realistic videos such as movies present a variety of challenging machine learning problems, such as action classification and retrieval, human tracking, human/object interaction classification, etc. Robust visual descriptors for video classification have recently been developed and have shown that it is possible to learn visual classifiers in realistic, difficult settings. However, in order to deploy visual recognition systems at large scale in practice, it becomes important to address the scalability of the techniques. The main goal of this thesis is to develop scalable methods for video content analysis (e.g., for ranking or classification).
Tran, Thi Quynh Nhi. "Robust and comprehensive joint image-text representations". Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1096/document.
This thesis investigates the joint modeling of the visual and textual content of multimedia documents to address cross-modal problems. Such tasks require the ability to match information across modalities. A common representation space, obtained for example by Kernel Canonical Correlation Analysis, on which images and text can both be represented and directly compared, is the generally adopted solution. Nevertheless, such a joint space still suffers from several deficiencies that may hinder the performance of cross-modal tasks. An important contribution of this thesis is therefore to identify two major limitations of such a space. The first concerns information that is poorly represented on the common space yet very significant for a retrieval task. The second consists in a separation between modalities on the common space, which leads to coarse cross-modal matching. To deal with the first limitation, we put forward a model that first identifies poorly-represented information and then finds ways to combine it with data that is relatively well represented on the joint space. Evaluations on text illustration tasks show that by appropriately identifying and taking such information into account, the results of cross-modal retrieval can be strongly improved. The major work in this thesis aims to cope with the separation between modalities on the joint space, to enhance the performance of cross-modal tasks. We propose two representation methods for bi-modal or uni-modal documents that aggregate information from both the visual and textual modalities projected on the joint space. Specifically, for uni-modal documents we suggest a completion process relying on an auxiliary dataset to find the corresponding information in the absent modality, and then use this information to build a final bi-modal representation for the uni-modal document.
Evaluations show that our approaches achieve state-of-the-art results on several standard and challenging datasets for cross-modal retrieval and for bi-modal and cross-modal classification.
Siméoni, Oriane. "Robust image representation for classification, retrieval and object discovery". Thesis, Rennes 1, 2020. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/415eb65b-d5f7-4be7-85e6-c2ecb2aba4dc.
Neural network representations have proved relevant for many computer vision tasks such as image classification, object detection, segmentation and instance-level image retrieval. A network is trained for one particular task and requires a large amount of labeled data. In this thesis we propose solutions to extract the most information with the least supervision. Focusing first on the classification task, we examine the active learning process in the context of deep learning and show that combining it with semi-supervised and unsupervised techniques greatly boosts results. We then investigate the image retrieval task, and in particular exploit the spatial localization information available "for free" in CNN feature maps. We first propose to represent an image by a collection of affine local features detected within activation maps, which are memory-efficient and robust enough to perform spatial matching. Then, again extracting information from feature maps, we discover objects of interest in the images of a dataset and gather their representations in a nearest-neighbor graph. Using a centrality measure on the graph, we construct a saliency map per image that focuses on the repeating objects and allows us to compute a global representation excluding clutter and background.
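The centrality step of the object-discovery pipeline can be sketched as follows: build a nearest-neighbor graph over region representations and score nodes by eigenvector centrality, so regions that re-occur across images score high. The tiny graph below is a toy stand-in for real CNN region features:

```python
# Sketch: eigenvector centrality (power iteration) on a small k-NN
# graph. Nodes 0-2 are mutually similar regions (a repeating object);
# node 3 is clutter attached to the cluster. Toy adjacency only.
import numpy as np

A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

v = np.ones(4) / 4
for _ in range(100):            # power iteration on the adjacency
    v = A @ v
    v = v / np.linalg.norm(v)

centrality = v / v.sum()        # normalize to a score distribution
```

The clustered regions receive high centrality while the clutter node scores lowest, which is the signal used above to build per-image saliency maps that suppress background.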