Academic literature on the topic 'Classification grande échelle'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Classification grande échelle.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.
Journal articles on the topic "Classification grande échelle"
Doan, Thanh-Nghi, Thanh-Nghi Do, and François Poulet. "Classification d’images à grande échelle avec des SVM." Traitement du signal 31, no. 1-2 (October 28, 2014): 39–56. http://dx.doi.org/10.3166/ts.31.39-56.
Chehata, Nesrine, Karim Ghariani, Arnaud Le Bris, and Philippe Lagacherie. "Apport des images pléiades pour la délimitation des parcelles agricoles à grande échelle." Revue Française de Photogrammétrie et de Télédétection, no. 209 (January 29, 2015): 165–71. http://dx.doi.org/10.52638/rfpt.2015.220.
Gourmelon, Françoise. "Classification d'ortho-photographies numérisées pour une cartographie à grande échelle de la végétation terrestre." Canadian Journal of Remote Sensing 28, no. 2 (January 2002): 168–74. http://dx.doi.org/10.5589/m02-010.
Daniel, Sylvie. "Revue des descripteurs tridimensionnels (3D) pour la catégorisation des nuages de points acquis avec un système LiDAR de télémétrie mobile." Geomatica 72, no. 1 (March 1, 2018): 1–15. http://dx.doi.org/10.1139/geomat-2018-0001.
Bargaoui, Zoubeïda, Hamouda Dakhlaoui, and Ahmed Houcine. "Modélisation pluie-débit et classification hydroclimatique." Revue des sciences de l'eau 21, no. 2 (July 22, 2008): 233–45. http://dx.doi.org/10.7202/018468ar.
Weill, Frédéric. "La prospective territoriale." Mondes en développement N° 206, no. 2 (July 15, 2024): 13–30. http://dx.doi.org/10.3917/med.206.0013.
Postadjian, Tristan, Arnaud Le Bris, Hichem Sahbi, and Clément Mallet. "Classification à très large échelle d'images satellites à très haute résolution spatiale par réseaux de neurones convolutifs." Revue Française de Photogrammétrie et de Télédétection, no. 217-218 (September 21, 2018): 73–86. http://dx.doi.org/10.52638/rfpt.2018.418.
Milward, David. "Sweating it Out: Facilitating Corrections and Parole in Canada Through Aboriginal Spiritual Healing." Windsor Yearbook of Access to Justice 29 (February 1, 2011): 27. http://dx.doi.org/10.22329/wyaj.v29i0.4479.
Vialou, Denis. "L’art des grottes en Ariège magdalénienne." Gallia préhistoire. Suppléments 22, no. 1 (1986): 5–28. http://dx.doi.org/10.3406/galip.1986.2542.
Stoczkowski, Wiktor. "Race." Anthropen, 2017. http://dx.doi.org/10.17184/eac.anthropen.042.
Full textDissertations / Theses on the topic "Classification grande échelle"
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM003/document.
Building algorithms that classify images at a large scale is an essential task, given the difficulty of searching the massive amounts of unlabeled visual data available on the Internet. We aim at classifying images based on their content to simplify the manageability of such large-scale collections. Large-scale image classification is a difficult problem, as datasets are large with respect to both the number of images and the number of classes. Some of these classes are fine-grained, and they may not contain any labeled representatives. In this thesis, we use state-of-the-art image representations and focus on efficient learning methods. Our contributions are (1) a benchmark of learning algorithms for large-scale image classification, and (2) a novel learning algorithm based on label embedding for learning with scarce training data. First, we propose a benchmark of learning algorithms for large-scale image classification in the fully supervised setting. It compares several objective functions for learning linear classifiers, such as one-vs-rest, multiclass, ranking, and weighted average ranking, using stochastic gradient descent optimization. The output of this benchmark is a set of recommendations for large-scale learning. We experimentally show that online learning is well suited for large-scale image classification. With simple data rebalancing, one-vs-rest performs better than all other methods. Moreover, in online learning, a sufficiently small step size is enough for state-of-the-art performance. Finally, regularization through early stopping results in fast training and good generalization performance. Second, when dealing with thousands of classes, it is difficult to collect sufficient labeled training data for each class. For some classes we might not even have a single training example. We propose a novel algorithm for this zero-shot learning scenario.
Our algorithm uses side information, such as attributes, to embed classes in a Euclidean space. We also introduce a function to measure the compatibility between an image and a label. The parameters of this function are learned using a ranking objective. Our algorithm outperforms the state of the art for zero-shot learning. It is flexible and can accommodate other sources of side information, such as hierarchies. It also allows for a smooth transition from zero-shot to few-shot learning.
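The label-embedding idea described in this abstract can be sketched in a few lines: each class is represented by an attribute vector, and a bilinear compatibility function scores image-label pairs. In the thesis the matrix W is learned with a ranking objective; here it is random, and all dimensions, attributes, and data are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D-dim image features, E-dim class attribute vectors.
D, E, n_classes = 8, 4, 3

# Side information: one attribute embedding per class (one row per class).
class_attributes = rng.normal(size=(n_classes, E))

# Bilinear compatibility F(x, y) = x^T W phi(y); W would be learned with a
# ranking objective in the thesis, here it is random for illustration.
W = rng.normal(size=(D, E))

def compatibility(x, W, class_attributes):
    """Score an image feature x against every class embedding."""
    return class_attributes @ (W.T @ x)

def predict(x, W, class_attributes):
    """Zero-shot prediction: the class whose embedding is most compatible."""
    return int(np.argmax(compatibility(x, W, class_attributes)))

x = rng.normal(size=D)
scores = compatibility(x, W, class_attributes)
pred = predict(x, W, class_attributes)
```

Because classes enter only through their attribute vectors, unseen classes can be scored without any labeled images, which is what makes the zero-shot setting possible.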
Akata, Zeynep. "Contributions à l'apprentissage grande échelle pour la classification d'images." Phd thesis, Université de Grenoble, 2014. http://tel.archives-ouvertes.fr/tel-00873807.
Full textPaulin, Mattis. "De l'apprentissage de représentations visuelles robustes aux invariances pour la classification et la recherche d'images." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM007/document.
This dissertation focuses on designing image recognition systems that are robust to geometric variability. Image understanding is a difficult problem, as images are two-dimensional projections of 3D objects, and images that must fall into the same category, for instance objects of the same class in classification, can display significant differences. Our goal is to make systems robust to the right amount of deformation, this amount being automatically determined from data. Our contributions are twofold. We show how to use virtual examples to enforce robustness in image classification systems, and we propose a framework to learn robust low-level descriptors for image retrieval. We first focus on virtual examples, as transformations of real ones. One image generates a set of descriptors, one for each transformation, and we show that data augmentation, i.e., considering them all as i.i.d. samples, is the best-performing way to use them, provided a voting stage with the transformed descriptors is conducted at test time. Because transformations carry various levels of information, can be redundant, and can even harm performance, we propose a new algorithm able to select a set of transformations while maximizing classification accuracy. We show that a small number of transformations is enough to considerably improve performance on this task. We also show how virtual examples can replace real ones at a reduced annotation cost. We report good performance on standard fine-grained classification datasets. In a second part, we aim at improving the local region descriptors used in image retrieval, and in particular we propose an alternative to the popular SIFT descriptor. We propose new convolutional descriptors, called patch-CKN, which are learned without supervision.
We introduce a linked patch- and image-retrieval dataset based on structure from motion of web-crawled images, and design a method to accurately test the performance of local descriptors at patch and image levels. Our approach outperforms both SIFT and all tested approaches with convolutional architectures on our patch and image benchmarks, as well as on several state-of-the-art datasets.
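The test-time voting stage mentioned above, where the scores of transformed copies of a descriptor are pooled before the decision, can be sketched as follows. The classifier weights and the transformation set (identity, reversal, circular shift) are illustrative stand-ins, not the transformations studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained linear classifier: one weight row per class.
n_classes, d = 4, 6
W = rng.normal(size=(n_classes, d))

def transforms(x):
    """Stand-in transformation set acting on a descriptor vector."""
    return [x, x[::-1], np.roll(x, 1)]

def predict_with_voting(x, W):
    """Average class scores over all transformed copies, then take argmax."""
    scores = np.mean([W @ t for t in transforms(x)], axis=0)
    return int(np.argmax(scores))

x = rng.normal(size=d)
pred = predict_with_voting(x, W)
```

Averaging scores over transformed copies is one simple pooling choice; majority voting over per-copy predictions is another.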
Gressin, Adrien. "Mise à jour d’une base de données d’occupation du sol à grande échelle en milieux naturels à partir d’une image satellite THR." Thesis, Paris 5, 2014. http://www.theses.fr/2014PA05S022/document.
Land-cover geospatial databases (LC-DBs) are mandatory inputs for various purposes, such as natural resources monitoring, land planning, and public policies management. To improve this monitoring, users look for both better geometric and better semantic levels of detail. To fulfill such requirements, a large-scale LC-DB is being established at the French National Mapping Agency (IGN). However, to meet users' needs, this DB must be updated as regularly as possible while keeping the initial accuracies. Consequently, automatic updating methods should be set up in order to allow such large-scale computation. Furthermore, Earth-observation satellites have been successfully used for the constitution of LC-DBs at various scales, such as Corine Land Cover (CLC). Nowadays, very high resolution (VHR) sensors, such as the Pléiades satellite, make it possible to produce large-scale LC-DBs. Consequently, the purpose of this thesis is to propose an automatic method for updating such large-scale LC-DBs from a VHR monoscopic satellite image (to limit acquisition costs) while ensuring the robustness of the detected changes. Our proposed method is based on a multilevel supervised learning algorithm, MLMOL, which best takes into account the possibly multiple appearances of each DB class. This algorithm can be applied to various image and DB data sets, independently of the classifier and of the attributes extracted from the input image. Moreover, classification stacking improves the robustness of the method, especially on classes having multiple appearances (e.g., plowed or unplowed fields, stand-alone houses or industrial warehouse buildings). In addition, the learning algorithm is integrated into a processing chain (LUPIN) allowing it, first, to automatically fit the different existing DB themes and, second, to be robust to inhomogeneous areas.
As a result, the method is successfully applied to a Pléiades image of an area near Tarbes (southern France) covered by the IGN large-scale LC-DB. Results show the contribution of Pléiades images (in terms of sub-meter resolution and spectral dynamics). Indeed, thanks to texture and shape attributes (morphological profiles, SFS, etc.), VHR satellite images give good classification results, even on classes such as roads and buildings that usually require specific methods. Moreover, the proposed method provides relevant change indicators in the area. In addition, our method provides significant support for the creation of an LC-DB obtained by merging several existing DBs. Indeed, it allows a decision to be made when the fusion of the initial DBs generates overlapping areas, particularly when such DBs come from different sources with their own specifications. It also allows potential gaps in the coverage of the resulting DB to be filled, and the data to be extended to the coverage of a larger image. Finally, the proposed workflow is applied to different remote sensing data sets in order to assess its versatility and the relevance of such data. Results show that our method is able to deal with data sets of different spatial resolutions (Pléiades at 0.5 m, SPOT 6 at 1.5 m, and RapidEye at 5 m) and to exploit the strengths of each sensor, e.g., the RapidEye red-edge channel for discriminating the forest theme, the good balance of the SPOT 6 resolution for built-up classes, and the capability of VHR Pléiades images to discriminate objects of small spatial extent such as roads or hedges.
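The classification stacking mentioned in this abstract can be illustrated by its simplest variant: several per-pixel label maps (e.g., from classifiers trained on different appearance clusters of a class) are fused by a per-pixel majority vote. MLMOL itself is more elaborate; this is only a minimal sketch with made-up labels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stack of per-pixel classification maps: n_maps classifiers,
# each assigning one of 3 land-cover labels to each of n_pixels pixels.
n_pixels, n_maps = 10, 5
label_maps = rng.integers(0, 3, size=(n_maps, n_pixels))

def majority_vote(label_maps):
    """Per-pixel majority label across stacked classification maps."""
    n_classes = int(label_maps.max()) + 1
    votes = np.array([np.bincount(col, minlength=n_classes)
                      for col in label_maps.T])
    return votes.argmax(axis=1)

fused = majority_vote(label_maps)
```

Voting across independently trained maps dampens the errors of any single classifier, which is the robustness effect the abstract attributes to stacking.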
Cui, Yanwei. "Kernel-based learning on hierarchical image representations : applications to remote sensing data classification." Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS448/document.
Hierarchical image representations have been widely used in the image classification context. Such representations are capable of modeling the content of an image through a tree structure. In this thesis, we investigate kernel-based strategies that make it possible to take input data in structured form and to capture the topological patterns inside each structure by designing structured kernels. We develop a structured kernel dedicated to unordered tree and path (sequence of nodes) structures equipped with numerical features, called the Bag of Subpaths Kernel (BoSK). It is formed by summing kernels computed on subpaths (a bag of all paths and single nodes) between two bags. The direct computation of BoSK yields quadratic complexity w.r.t. both structure size (number of nodes) and amount of data (training size). We also propose a scalable version of BoSK (SBoSK for short), using the Random Fourier Features technique to map the structured data into a randomized finite-dimensional Euclidean space, where the inner product of the transformed feature vectors approximates BoSK. This brings the complexity down from quadratic to linear w.r.t. structure size and amount of data, making the kernel compliant with the large-scale machine-learning context. Thanks to (S)BoSK, we are able to learn from cross-scale patterns in hierarchical image representations. (S)BoSK operates on paths, thus allowing the context of a pixel (a leaf of the hierarchical representation) to be modeled through its ancestor regions at multiple scales. Such a model is used for pixel-based image classification. (S)BoSK also works on trees, making the kernel able to capture the composition of an object (the top of the hierarchical representation) and the topological relationships among its subparts. This strategy allows tile/sub-image classification.
Further relying on (S)BoSK, we introduce a novel multi-source classification approach that performs classification directly from a hierarchical image representation built from two images of the same scene taken at different resolutions, possibly with different modalities. Evaluations on several publicly available remote sensing datasets illustrate the superiority of (S)BoSK over state-of-the-art methods in terms of classification accuracy, and experiments on an urban classification task show the effectiveness of the proposed multi-source classification approach.
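The Random Fourier Features device underlying SBoSK can be demonstrated on a plain RBF kernel (BoSK itself is out of scope here): an explicit random map z makes z(x)·z(y) approximate k(x, y), so kernel methods run in time linear in the data size. All sizes and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def rff_map(X, n_features=2000, gamma=0.5, rng=rng):
    """Random Fourier Features for the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density, N(0, 2*gamma*I).
    Wf = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ Wf + b)

X = rng.normal(size=(5, 3))
Z = rff_map(X)
approx = Z @ Z.T                              # approximate kernel matrix
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
exact = np.exp(-0.5 * sq)                     # exact RBF kernel, gamma = 0.5
```

With enough random features the approximation error shrinks as O(1/sqrt(n_features)), which is the trade-off SBoSK exploits to reach linear complexity.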
Mensink, Thomas. "Learning Image Classification and Retrieval Models." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM113/document.
We are currently experiencing an exceptional growth of visual data; for example, millions of photos are shared daily on social networks. Image understanding methods aim to facilitate access to this visual data in a semantically meaningful manner. In this dissertation, we define several detailed goals of interest for the image understanding tasks of image classification and retrieval, which we address in three main chapters. First, we aim to exploit the multi-modal nature of many databases, wherein documents consist of images with a form of textual description. To do so, we define similarities between the visual content of one document and the textual description of another. These similarities are computed in two steps: first we find the visually similar neighbors in the multi-modal database, and then we use the textual descriptions of these neighbors to define a similarity to the textual description of any document. Second, we introduce a series of structured image classification models, which explicitly encode pairwise label interactions. These models are more expressive than independent label predictors and lead to more accurate predictions, especially in an interactive prediction scenario where a user provides the value of some of the image labels. Such an interactive scenario offers an interesting trade-off between accuracy and manual labeling effort. We explore structured models for multi-label image classification, for attribute-based image classification, and for optimizing specific ranking measures. Finally, we explore k-nearest-neighbor and nearest-class-mean classifiers for large-scale image classification. We propose efficient metric learning methods to improve classification performance, and use these methods to learn on a data set of more than one million training images from one thousand classes.
Since both classification methods allow classes not seen during training to be incorporated at near-zero cost, we study their generalization performance. We show that the nearest-class-mean classifier can generalize from one thousand to ten thousand classes at negligible cost, and still performs competitively with the state of the art.
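A nearest-class-mean classifier of the kind studied in this thesis fits in a few lines: each class is summarized by the mean of its training features, and a test point takes the label of the closest mean. The metric learning component of the thesis is omitted here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def class_means(X, y):
    """One mean feature vector per class label."""
    return {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}

def ncm_predict(x, means):
    """Label of the closest class mean under Euclidean distance."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

# Two well-separated hypothetical classes.
X = np.vstack([rng.normal(0.0, 1.0, size=(20, 2)),
               rng.normal(5.0, 1.0, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

means = class_means(X, y)
# Adding a new class later only requires computing its mean: near-zero cost.
means[2] = np.array([10.0, 10.0])
```

This is exactly why new classes are cheap to incorporate: no retraining is needed, only one mean per added class.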
Mensink, Thomas. "Apprentissage de Modèles pour la Classification et la Recherche d'Images." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00752022.
Full textGressin, Adrien. "Mise à jour d’une base de données d’occupation du sol à grande échelle en milieux naturels à partir d’une image satellite THR." Electronic Thesis or Diss., Paris 5, 2014. http://www.theses.fr/2014PA05S022.
Doan, Thanh-Nghi. "Large scale support vector machines algorithms for visual classification." Thesis, Rennes 1, 2013. http://www.theses.fr/2013REN1S083/document.
We have proposed a novel method for combining multiple different features for image classification. For large-scale learning, we have developed parallel versions of both state-of-the-art linear and nonlinear SVMs. We have also proposed a novel algorithm to extend stochastic gradient descent SVMs to large-scale learning. A class of large-scale incremental SVM classifiers has been developed in order to perform classification tasks on large datasets with a very large number of classes, where the training data cannot fit into memory.
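The stochastic gradient descent SVM that this thesis extends can be sketched in its basic Pegasos-style form. This toy solver is the baseline idea, not the proposed parallel or incremental algorithm; the data and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def sgd_svm(X, y, lam=0.01, epochs=20, rng=rng):
    """Linear SVM trained by SGD on the L2-regularized hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # Pegasos-style decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1 - eta * lam)           # gradient step on the regularizer
            if margin < 1:                 # subgradient step on the hinge loss
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

# Linearly separable toy data with labels in {-1, +1}.
X = np.vstack([rng.normal(-2.0, 0.5, size=(30, 2)),
               rng.normal(+2.0, 0.5, size=(30, 2))])
y = np.array([-1] * 30 + [+1] * 30)

w, b = sgd_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
```

Each update touches a single sample, so the cost per step is independent of the dataset size, which is what makes SGD attractive at large scale.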
Bellet, Valentine. "Intelligence artificielle appliquée aux séries temporelles d'images satellites pour la surveillance des écosystèmes." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES013.
In the context of climate change, ecosystem monitoring is a crucial task. It allows us to better understand the changes that affect ecosystems, and it enables decision-making to preserve them for current and future generations. Land use and land cover (LULC) maps are an essential tool in ecosystem monitoring, providing information on the different types of physical cover of the Earth's surface (e.g., forests, grasslands, croplands). Nowadays, an increasing number of satellite missions generate huge amounts of free and open data. In particular, satellite image time series (SITS), such as those produced by Sentinel-2, offer high temporal, spectral, and spatial resolutions and provide relevant information about vegetation dynamics. Combined with machine learning algorithms, they allow the production of frequent and accurate LULC maps. This thesis focuses on the development of pixel-based supervised classification algorithms for the production of LULC maps at large scale. Four main challenges arise in an operational context. First, unprecedented amounts of data are available, and the algorithms need to be adapted accordingly. Second, with the improvement in spatial, spectral, and temporal resolutions, the algorithms should take into account correlations between the spectro-temporal features in order to extract representations that are meaningful for classification. Third, over wide geographical coverage the problem of non-stationarity of the data arises, so the algorithms should take this spatial variability into account. Fourth, because of different satellite orbits and meteorological conditions, acquisition times are irregular and unaligned between pixels; thus, the algorithms should be able to work with irregular and unaligned SITS. This work is divided into two main parts. The first PhD contribution is the development of stochastic variational Gaussian processes (SVGP) on massive data sets.
The proposed Gaussian process (GP) model can be trained with millions of samples, compared to a few thousand for traditional GP methods. The spatial and spectro-temporal structure of the data is taken into account through the inclusion of the spatial information in bespoke composite covariance functions, which makes the model robust to the spatial variability of the data. However, the time series are linearly resampled independently from the classification. Therefore, the second PhD contribution is an end-to-end learning framework combining a time- and space-informed kernel interpolator with the previous SVGP classifier. The interpolator embeds irregular and unaligned SITS into a fixed, reduced-size latent representation. The obtained latent representation is given to the SVGP classifier, and all the parameters are jointly optimized w.r.t. the classification task. Experiments were run with Sentinel-2 SITS of the full year 2018 over an area of 200 000 km² (about 2 billion pixels) in the south of France (27 MGRS tiles), which is representative of an operational setting. Results show that both methods (i.e., the SVGP classifier with linearly interpolated time series, and the spatial kernel interpolator combined with the SVGP classifier) outperform the method used in current operational systems (i.e., Random Forest with linearly interpolated time series using spatial stratification).
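The linear resampling baseline that the end-to-end interpolator replaces amounts to interpolating each pixel's series onto a common date grid, e.g. with np.interp. The dates and NDVI values below are hypothetical; a real Sentinel-2 pipeline would do this per band and per pixel.

```python
import numpy as np

def resample_linear(dates, values, grid):
    """Linearly interpolate one pixel's time series onto a common date grid."""
    return np.interp(grid, dates, values)

# Hypothetical pixel: cloud-free NDVI samples on uneven acquisition dates
# (day of year), as produced by irregular satellite revisits.
dates = np.array([3.0, 17.0, 42.0, 80.0, 130.0])
ndvi = np.array([0.2, 0.35, 0.6, 0.7, 0.4])
grid = np.linspace(0.0, 130.0, 14)   # fixed 14-step grid shared by all pixels

resampled = resample_linear(dates, ndvi, grid)
```

Every pixel then has a vector of the same length, which standard classifiers require; the drawback, noted in the abstract, is that this resampling is fixed in advance rather than learned jointly with the classifier.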