Contents
Academic literature on the topic "Classification large-échelle"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of articles, books, theses, conference papers, and other academic sources on the topic "Classification large-échelle".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Classification large-échelle"
Postadjian, Tristan, Arnaud Le Bris, Hichem Sahbi, and Clément Mallet. "Classification à très large échelle d'images satellites à très haute résolution spatiale par réseaux de neurones convolutifs". Revue Française de Photogrammétrie et de Télédétection, no. 217-218 (September 21, 2018): 73–86. http://dx.doi.org/10.52638/rfpt.2018.418.
Milward, David. "Sweating it Out: Facilitating Corrections and Parole in Canada Through Aboriginal Spiritual Healing". Windsor Yearbook of Access to Justice 29 (February 1, 2011): 27. http://dx.doi.org/10.22329/wyaj.v29i0.4479.
Jassionnesse, Christophe. "Réflexions sur la stabilité en section courante des tunnels profonds". Revue Française de Géotechnique, no. 176 (2023): 4. http://dx.doi.org/10.1051/geotech/2024001.
Videau, Manon, Maxime Thibault, Denis Lebel, Suzanne Atkinson, and Jean-François Bussières. "Surveillance des substances contrôlées en établissements de santé : une contribution à la gestion de la crise des opioïdes au Canada". Canadian Journal of Hospital Pharmacy 73, no. 2 (April 28, 2020). http://dx.doi.org/10.4212/cjhp.v73i2.2977.
Lau, Louise, Harkaryn Bagri, Michael Legal, and Karen Dahri. "Comparison of Clinical Importance of Drug Interactions Identified by Hospital Pharmacists and a Local Clinical Decision Support System". Canadian Journal of Hospital Pharmacy 74, no. 3 (July 5, 2021). http://dx.doi.org/10.4212/cjhp.v74i3.3147.
Moussaoui, Abderrahmane. "Violence". Anthropen, 2019. http://dx.doi.org/10.17184/eac.anthropen.123.
Theses on the topic "Classification large-échelle"
Maggiori, Emmanuel. "Approches d'apprentissage pour la classification à large échelle d'images de télédétection". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4041/document.
The analysis of airborne and satellite images is one of the core subjects in remote sensing. In recent years, technological developments have facilitated the availability of large-scale sources of data, which cover significant extents of the earth's surface, often at impressive spatial resolutions. In addition to the evident computational complexity issues that arise, one of the current challenges is to handle the variability in the appearance of the objects across different geographic regions. For this, it is necessary to design classification methods that go beyond the analysis of individual pixel spectra, introducing higher-level contextual information in the process. In this thesis, we first propose a method to perform classification with shape priors, based on the optimization of a hierarchical subdivision data structure. We then delve into the use of the increasingly popular convolutional neural networks (CNNs) to learn deep hierarchical contextual features. We investigate CNNs from multiple angles, in order to address the different points required to adapt them to our problem. Among other subjects, we propose different solutions to output high-resolution classification maps and we study the acquisition of training data. We also created a dataset of aerial images over dissimilar locations, and assess the generalization capabilities of CNNs. Finally, we propose a technique to polygonize the output classification maps, so as to integrate them into operational geographic information systems, thus completing the typical processing pipeline observed in a wide number of applications. Throughout this thesis, we experiment on hyperspectral, satellite, and aerial images, with scalability, generalization, and applicability goals in mind.
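To make the dense-prediction setup described in this abstract concrete, here is a minimal sketch of a fully convolutional network that outputs a classification map at the input resolution. It assumes PyTorch, and the layer widths, band count, and class count are illustrative choices, not the architecture used in the thesis.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: downsample, classify, then
    upsample back to the input resolution so every pixel gets a class score."""
    def __init__(self, in_channels=4, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 1/4 resolution
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)   # per-pixel scores

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.features(x))
        # Upsample the coarse score map back to the original tile size.
        return nn.functional.interpolate(
            scores, size=(h, w), mode="bilinear", align_corners=False)

# Toy usage: a 4-band 256x256 tile yields a 6-class map of the same size.
tile = torch.randn(1, 4, 256, 256)
print(TinyFCN()(tile).shape)  # torch.Size([1, 6, 256, 256])
```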
Babbar, Rohit. "Machine Learning Strategies for Large-scale Taxonomies". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM064/document.
In the era of Big Data, we need efficient and scalable machine learning algorithms which can perform automatic classification of terabytes of data. In this thesis, we study the machine learning challenges for classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction and the performance on unseen data. In the first part of the thesis, we study the underlying power-law distribution in large-scale taxonomies. This analysis then motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting the study of this distribution further, we then design a classification scheme which leads to better accuracy on large-scale power-law distributed categories. We also propose an efficient method for model selection when training multi-class versions of classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model selection problem in large-scale classification, concerning the choice between flat and hierarchical classification from a learning-theoretic perspective. The presented generalization error analysis provides an explanation for empirical findings in many recent studies on large-scale hierarchical classification. We further exploit the developed bounds to propose two methods for adapting the given taxonomy of categories to output taxonomies which yield better test accuracy when used in a top-down setup.
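The power-law analysis mentioned in this abstract can be illustrated with a short diagnostic: relate category sizes to their rank on log-log axes and check for an approximately linear trend. The sketch below uses synthetic Zipf-distributed counts in place of a real taxonomy; the fitted slope is only a rough exponent estimate.

```python
import numpy as np

# Synthetic stand-in for per-category document counts in a large taxonomy.
rng = np.random.default_rng(0)
counts = np.sort(rng.zipf(a=2.0, size=10_000))[::-1]  # heavy-tailed sizes

# For power-law distributed data the rank-size curve is roughly linear on
# log-log axes; a least-squares fit of log(count) vs log(rank) gives the slope.
ranks = np.arange(1, counts.size + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), deg=1)
print(f"estimated slope of the rank-size curve: {slope:.2f}")
```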
Paulin, Mattis. "De l'apprentissage de représentations visuelles robustes aux invariances pour la classification et la recherche d'images". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM007/document.
This dissertation focuses on designing image recognition systems which are robust to geometric variability. Image understanding is a difficult problem, as images are two-dimensional projections of 3D objects, and representations that must fall into the same category, for instance objects of the same class in classification, can display significant differences. Our goal is to make systems robust to the right amount of deformations, this amount being automatically determined from data. Our contributions are twofold. We show how to use virtual examples to enforce robustness in image classification systems, and we propose a framework to learn robust low-level descriptors for image retrieval. We first focus on virtual examples, as transformations of real ones. One image generates a set of descriptors (one for each transformation), and we show that data augmentation, i.e. considering them all as i.i.d. samples, is the best-performing method to use them, provided a voting stage with the transformed descriptors is conducted at test time. Because transformations have various levels of information, can be redundant, and can even be harmful to performance, we propose a new algorithm able to select a set of transformations while maximizing classification accuracy. We show that a small number of transformations is enough to considerably improve performance for this task. We also show how virtual examples can replace real ones for a reduced annotation cost. We report good performance on standard fine-grained classification datasets. In the second part, we aim at improving the local region descriptors used in image retrieval, and in particular we propose an alternative to the popular SIFT descriptor. We propose new convolutional descriptors, called patch-CKN, which are learned without supervision. We introduce a linked patch- and image-retrieval dataset based on structure from motion of web-crawled images, and design a method to accurately test the performance of local descriptors at patch and image levels. Our approach outperforms both SIFT and all tested approaches with convolutional architectures on our patch and image benchmarks, as well as on several state-of-the-art datasets.
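Below is a toy sketch of the augmentation-plus-voting scheme this abstract describes: every transformed copy of a training image is treated as an independent sample, and at test time the class scores of the transformed copies of the query are averaged. The descriptor, the transformation set, and the logistic-regression classifier are illustrative stand-ins, not the ones used in the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def descriptor(image):
    # Stand-in for a real image descriptor (e.g., a CNN feature extractor).
    return image.mean(axis=(0, 1))

def transformed_copies(image):
    # A few simple geometric transformations of the input image.
    return [image, np.fliplr(image), np.rot90(image), np.rot90(image, 3)]

# Toy data: 40 random RGB "images" in two classes.
images = rng.random((40, 32, 32, 3))
labels = rng.integers(0, 2, size=40)

# Data augmentation: every transformed copy becomes an i.i.d. training sample.
X = np.array([descriptor(t) for img in images for t in transformed_copies(img)])
y = np.repeat(labels, 4)
clf = LogisticRegression().fit(X, y)

# Test-time voting: average the class scores of all transformed copies.
test = rng.random((32, 32, 3))
votes = np.mean([clf.predict_proba(descriptor(t)[None])
                 for t in transformed_copies(test)], axis=0)
print("predicted class:", votes.argmax())
```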
Gerald, Thomas. "Representation Learning for Large Scale Classification". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS316.
The past decades have seen the rise of new technologies that simplify information sharing. Today, a huge part of this data is accessible to most users. In this thesis, we propose to study the problem of document annotation to ease access to information through the retrieved annotations. We are interested in extreme-classification tasks, which characterize automatic annotation when the number of labels is very large. Many difficulties arise from the size and complexity of this data; prediction time, storage, and the relevance of the annotations are the most prominent. Recent research dealing with this issue relies on three classification schemes: "one against all" approaches, which learn as many classifiers as there are labels; "hierarchical" methods, which organize simple classifiers into a structure; and representation approaches, which embed documents into low-dimensional spaces. In this thesis, we study the representation classification scheme. Through our contributions, we study different approaches, either to speed up prediction or to better structure the representations. In the first part, we study discrete representations such as "ECOC" methods to speed up the annotation process. In the second part, we consider hyperbolic embeddings to take advantage of the qualities of this space for the representation of structured data.
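As a rough illustration of the discrete "ECOC" representations mentioned here, the sketch below applies scikit-learn's error-correcting output codes to a synthetic multi-class problem. This is a generic ECOC setup, not the method developed in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

# Toy multi-class problem standing in for a large label set.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           n_classes=10, n_clusters_per_class=1, random_state=0)

# Error-correcting output codes: each class receives a short binary code word,
# so only code_size * n_classes binary classifiers are trained instead of one
# per label, and prediction decodes to the nearest code word.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=0.5, random_state=0)
ecoc.fit(X, y)
print("training accuracy:", ecoc.score(X, y))
```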
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM003/document.
Building algorithms that classify images on a large scale is an essential task due to the difficulty of searching the massive amounts of unlabeled visual data available on the Internet. We aim at classifying images based on their content to simplify the manageability of such large-scale collections. Large-scale image classification is a difficult problem, as datasets are large with respect to both the number of images and the number of classes. Some of these classes are fine-grained and may not contain any labeled representatives. In this thesis, we use state-of-the-art image representations and focus on efficient learning methods. Our contributions are (1) a benchmark of learning algorithms for large-scale image classification, and (2) a novel learning algorithm based on label embedding for learning with scarce training data. Firstly, we propose a benchmark of learning algorithms for large-scale image classification in the fully supervised setting. It compares several objective functions for learning linear classifiers, such as one-vs-rest, multiclass, ranking, and weighted average ranking, using stochastic gradient descent optimization. The output of this benchmark is a set of recommendations for large-scale learning. We experimentally show that online learning is well suited for large-scale image classification. With simple data rebalancing, one-vs-rest performs better than all other methods. Moreover, in online learning, using a small enough step size with respect to the learning rate is sufficient for state-of-the-art performance. Finally, regularization through early stopping results in fast training and good generalization performance. Secondly, when dealing with thousands of classes, it is difficult to collect sufficient labeled training data for each class. For some classes we might not even have a single training example. We propose a novel algorithm for this zero-shot learning scenario. Our algorithm uses side information, such as attributes, to embed classes in a Euclidean space. We also introduce a function to measure the compatibility between an image and a label. The parameters of this function are learned using a ranking objective. Our algorithm outperforms the state of the art for zero-shot learning. It is flexible and can accommodate other sources of side information, such as hierarchies. It also allows for a smooth transition from zero-shot to few-shot learning.
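The compatibility-function idea in this abstract can be sketched with a toy bilinear model: an image x is scored against a class c as x^T W phi(c), where phi(c) is the class's attribute vector, and W is trained with a ranking hinge so the true class outscores the others. The data, dimensions, and update rule below are illustrative assumptions, not the thesis's exact objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_attr, n_classes, n = 20, 8, 5, 500

# Side information: one attribute vector phi(c) per class, plus synthetic
# images whose features correlate with their class attributes.
phi = rng.normal(size=(n_classes, d_attr))
true_map = rng.normal(size=(d_img, d_attr))
y = rng.integers(0, n_classes, size=n)
X = phi[y] @ true_map.T + 0.1 * rng.normal(size=(n, d_img))

# Bilinear compatibility F(x, c) = x^T W phi(c), trained with a ranking hinge
# so the true class outscores the best wrong class by a margin.
W = np.zeros((d_img, d_attr))
lr, margin = 0.01, 0.1
for _ in range(20):
    for i in rng.permutation(n):
        scores = X[i] @ W @ phi.T            # one score per class
        c = y[i]
        rivals = scores.copy()
        rivals[c] = -np.inf
        r = int(np.argmax(rivals))           # highest-scoring wrong class
        if scores[r] + margin > scores[c]:
            W += lr * np.outer(X[i], phi[c] - phi[r])

# Prediction only needs attribute vectors, so classes unseen during training
# could be scored the same way (the zero-shot setting).
pred = np.argmax(X @ W @ phi.T, axis=1)
print("training accuracy:", (pred == y).mean())
```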
Doan, Thanh-Nghi. "Large scale support vector machines algorithms for visual classification". Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S083/document.
We have proposed a novel method for combining multiple different features for image classification. For large-scale learning, we have developed parallel versions of both state-of-the-art linear and nonlinear SVMs. We have also proposed a novel algorithm extending stochastic gradient descent SVMs to large-scale learning. A class of large-scale incremental SVM classifiers has been developed in order to perform classification tasks on large datasets with a very large number of classes and training data that cannot fit into memory.
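To illustrate the stochastic-gradient-descent SVM training mentioned in this abstract, here is a minimal primal linear SVM trained with per-example subgradient steps on the hinge loss (a Pegasos-style step schedule) on synthetic data. It is a generic sketch under those assumptions, not the parallel or incremental algorithms developed in the thesis.

```python
import numpy as np

def sgd_linear_svm(X, y, lam=0.01, epochs=10, seed=0):
    """Primal linear SVM trained with stochastic subgradient descent on the
    hinge loss, one example at a time, with 1/(lam*t) step sizes."""
    rng = np.random.default_rng(seed)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:            # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                    # only regularization
                w = (1 - eta * lam) * w
    return w, b

# Toy binary problem with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
y = np.array([-1] * 200 + [1] * 200)
w, b = sgd_linear_svm(X, y)
print("training accuracy:", (np.sign(X @ w + b) == y).mean())
```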
Leveau, Valentin. "Représentations d'images basées sur un principe de voisins partagés pour la classification fine". Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT257/document.
This thesis focuses on the issue of fine-grained classification, a particular classification task where classes may be visually distinguishable only from subtle localized details and where the background often acts as a source of noise. This work is mainly motivated by the need to devise finer image representations to address such fine-grained classification tasks by encoding enough localized discriminant information, such as the spatial arrangement of local features. To this aim, the main research line we investigate in this work relies on spatially localized similarities between images, computed thanks to efficient approximate nearest neighbor search techniques and localized parametric geometry. The main originality of our approach is to embed such spatially consistent localized similarities into a high-dimensional global image representation that preserves the spatial arrangement of the fine-grained visual patterns (contrary to traditional encoding methods such as BoW, Fisher or VLAD vectors). In a nutshell, this is done by considering all raw patches of the training set as a large visual vocabulary and by explicitly encoding their similarity to the query image. In more detail, the first contribution proposed in this work is a classification scheme based on a spatially consistent k-NN classifier that relies on pooling similarity scores between local features of the query and those of the similar retrieved images in the vocabulary set. As this set can be composed of a lot of local descriptors, we propose to scale up our approach by using approximate k-nearest-neighbor search methods. The main contribution of this work is then a new aggregation-based explicit embedding derived from a newly introduced match kernel based on shared nearest neighbors of localized feature vectors combined with local geometric constraints. The originality of this new similarity-based representation space is that it directly integrates spatially localized geometric information in the aggregation process. Finally, as a third contribution, we propose a strategy to drastically reduce, by up to two orders of magnitude, the high dimensionality of the previously introduced over-complete image representation while still providing competitive image classification performance. We validated our approaches by conducting a series of experiments on several classification tasks involving rigid objects such as FlickrLogos-32 or Vehicles29, but also on tasks involving finer visual knowledge such as FGVC-Aircraft, Oxford-Flower102 or CUB-Birds200. We also demonstrated significant results on fine-grained audio classification tasks such as the LifeCLEF 2015 bird species identification challenge by proposing a temporal extension of our image representation. Finally, we notably showed that our dimensionality reduction technique used on top of our representation resulted in a highly interpretable visual vocabulary composed of the most representative image regions for different visual concepts of the training base.
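A much-simplified sketch of the pooled local-similarity k-NN idea from the first contribution: treat all local descriptors of the training images as one large vocabulary, retrieve the nearest vocabulary descriptors for each local descriptor of the query, and pool similarity votes per class. The toy bag-of-descriptors data and scikit-learn's exact nearest-neighbor search are assumptions for the example; the thesis relies on approximate search and additional geometric constraints.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Toy "images": each image is a bag of 50 local descriptors of dimension 16,
# drawn around a class-specific mean so classes are separable.
def make_image(label):
    return rng.normal(loc=label, scale=1.0, size=(50, 16))

train_labels = np.repeat([0, 1, 2], 10)
train_images = [make_image(l) for l in train_labels]

# Flatten the training descriptors into one big "visual vocabulary" and
# remember which image each descriptor came from.
vocab = np.vstack(train_images)
owner = np.repeat(np.arange(len(train_images)), 50)
index = NearestNeighbors(n_neighbors=5).fit(vocab)

def classify(query_image):
    """Pool similarity votes from the nearest local descriptors per class."""
    dist, idx = index.kneighbors(query_image)          # shape (50, 5)
    votes = np.zeros(3)
    for d, i in zip(dist.ravel(), idx.ravel()):
        votes[train_labels[owner[i]]] += 1.0 / (1.0 + d)
    return int(votes.argmax())

print("predicted class:", classify(make_image(2)))
```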
Mensink, Thomas. "Learning Image Classification and Retrieval Models". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM113/document.
We are currently experiencing an exceptional growth of visual data; for example, millions of photos are shared daily on social networks. Image understanding methods aim to facilitate access to this visual data in a semantically meaningful manner. In this dissertation, we define several detailed goals which are of interest for the image understanding tasks of image classification and retrieval, and which we address in three main chapters. First, we aim to exploit the multi-modal nature of many databases, wherein documents consist of images with a form of textual description. In order to do so, we define similarities between the visual content of one document and the textual description of another document. These similarities are computed in two steps: first we find the visually similar neighbors in the multi-modal database, and then use the textual descriptions of these neighbors to define a similarity to the textual description of any document. Second, we introduce a series of structured image classification models which explicitly encode pairwise label interactions. These models are more expressive than independent label predictors and lead to more accurate predictions, especially in an interactive prediction scenario where a user provides the value of some of the image labels. Such an interactive scenario offers an interesting trade-off between accuracy and manual labeling effort. We explore structured models for multi-label image classification, for attribute-based image classification, and for optimizing specific ranking measures. Finally, we explore k-nearest-neighbor and nearest-class-mean classifiers for large-scale image classification. We propose efficient metric learning methods to improve classification performance, and use these methods to learn on a data set of more than one million training images from one thousand classes. Since both classification methods allow for the incorporation of classes not seen during training at near-zero cost, we study their generalization performance. We show that the nearest-class-mean classification method can generalize from one thousand to ten thousand classes at negligible cost, and still perform competitively with the state of the art.
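The nearest-class-mean classifier discussed at the end of this abstract is simple enough to sketch directly: represent each class by the mean of its training features, classify a sample to the closest mean under a (possibly learned) linear projection, and add new classes by just computing their means. The identity projection and Gaussian toy data below are placeholders for the learned metric and real features.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def ncm_predict(x, means, W):
    """Nearest class mean in a linearly projected (learned-metric) space."""
    dists = {c: np.linalg.norm(W @ (x - m)) for c, m in means.items()}
    return min(dists, key=dists.get)

# Toy data: three Gaussian classes in 10 dimensions.
X = np.vstack([rng.normal(c, 1.0, (100, 10)) for c in range(3)])
y = np.repeat([0, 1, 2], 100)
W = np.eye(10)                      # stand-in for a learned projection
means = class_means(X, y)

# A previously unseen class is added at near-zero cost: just its mean.
X_new = rng.normal(5.0, 1.0, (100, 10))
means[3] = X_new.mean(axis=0)

preds = [ncm_predict(x, means, W) for x in X_new]
print("accuracy on the newly added class:", np.mean(np.array(preds) == 3))
```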
Mathieu, Jordane. "Modèles d'impact statistiques en agriculture : de la prévision saisonnière à la prévision à long terme, en passant par les estimations annuelles". Thesis, Paris Sciences et Lettres (ComUE), 2018. http://www.theses.fr/2018PSLEE006/document.
In agriculture, weather is the main factor of variability between two consecutive years. This thesis aims to build large-scale statistical models that estimate the impact of weather conditions on agricultural yields. The scarcity of available agricultural data makes it necessary to construct simple models with few predictors, and to adapt model selection methods to avoid overfitting. Careful validation of statistical models is a major concern of this thesis. Neural networks and mixed-effects models are compared, showing the importance of local specificities. Estimates of US corn yield at the end of the year show that temperature and precipitation information accounts for an average of 28% of yield variability. In several more weather-sensitive states, this score increases to nearly 70%. These results are consistent with recent studies on the subject. Mid-season maize yield forecasts are possible from July: as of July, the meteorological information available accounts for an average of 25% of the variability in final yield in the United States, and close to 60% in more weather-sensitive states like Virginia. The northern and southeastern regions of the United States are the least well predicted. Predicting years in which extremely low yields occur is an important task. We use a specific classification method, and show that with only 4 weather predictors, 71% of the very low yields are correctly detected on average. The impact of climate change on yields up to 2060 is also studied: the model we build provides information on the speed of evolution of yields in different counties of the United States. This highlights the areas that will be most affected. For the most affected states (south and east coast), and with constant agricultural practice, the model predicts yields nearly divided by two in 2060 under the IPCC RCP 4.5 scenario. The northern states would be less affected. The statistical models we build can help with short-term management (seasonal forecasts) or quantify the quality of the harvests before post-harvest surveys, as an aid to monitoring (end-of-year estimates). Estimates for the next 50 years help to anticipate the consequences of climate change on agricultural yields, and to define adaptation or mitigation strategies. The methodology used in this thesis is easily generalized to other crops and other regions of the world.
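As a small illustration of the modelling strategy described here (few weather predictors, a simple model, and out-of-sample validation to guard against overfitting), the sketch below fits a ridge regression on synthetic weather/yield data and reports cross-validated explained variance. The predictors, coefficients, and data are made up for the example and do not reproduce the thesis's models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for county-level data: 4 weather predictors
# (e.g., seasonal temperature/precipitation summaries) and a yield anomaly.
n = 300
weather = rng.normal(size=(n, 4))
yield_anomaly = weather @ np.array([-0.8, 0.5, -0.3, 0.2]) + rng.normal(0, 0.5, n)

# A deliberately simple model with few predictors, validated out-of-sample.
model = Ridge(alpha=1.0)
r2 = cross_val_score(model, weather, yield_anomaly, cv=5, scoring="r2")
print("share of yield variability explained (CV R^2): %.2f" % r2.mean())
```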