A selection of scientific literature on the topic "Approches d'apprentissage automatique"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Approches d'apprentissage automatique".
Next to every entry in the bibliography there is an "Add to bibliography" option. If you use it, the bibliographic reference for the selected work is formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.
Journal articles on the topic "Approches d'apprentissage automatique"
Chehata, Nesrine, Karim Ghariani, Arnaud Le Bris, and Philippe Lagacherie. „Apport des images pléiades pour la délimitation des parcelles agricoles à grande échelle“. Revue Française de Photogrammétrie et de Télédétection, no. 209 (January 29, 2015): 165–71. http://dx.doi.org/10.52638/rfpt.2015.220.
Dissertations on the topic "Approches d'apprentissage automatique"
Girard, Nicolas. „Approches d'apprentissage et géométrique pour l'extraction automatique d'objets à partir d'images de télédétection“. Thesis, Université Côte d'Azur, 2020. https://tel.archives-ouvertes.fr/tel-03177997.
Creating a digital double of the Earth in the form of maps has many applications in, e.g., autonomous driving, automated drone delivery, urban planning, telecommunications, and disaster management. Geographic Information Systems (GIS) are the frameworks used to integrate geolocalized data and represent maps. They represent shapes of objects in a vector representation so that it is as sparse as possible while representing shapes accurately, as well as easier to edit than raster data. With the increasing amount of satellite and aerial images being captured every day, automatic methods are being developed to transfer the information found in those remote sensing images into Geographic Information Systems. Deep learning methods for image segmentation are able to delineate the shapes of objects found in images; however, they do so with a raster representation, in the form of a mask. Post-processing vectorization methods then convert that raster representation into a vector representation compatible with GIS. Another challenge in remote sensing is to deal with a certain type of noise in the data: the misalignment between different layers of geolocalized information (e.g., between images and building cadaster data). This type of noise is frequent due to various errors introduced during the processing of remote sensing data. This thesis develops combined learning and geometric approaches with the purpose of improving automatic GIS mapping from remote sensing images.

We first propose a method for correcting misaligned maps over images, with the first motivation being to make them match, but also with the motivation of creating remote sensing datasets for image segmentation with alignment-corrected ground truth. Indeed, training a model on misaligned ground truth would not lead to great performance, whereas aligned ground truth annotations result in better models. During this work we also observed a denoising effect of our alignment model and used it to denoise a misaligned dataset in a self-supervised manner, meaning only the misaligned dataset was used for training.

We then propose a simple approach that uses a neural network to directly output shape information in the vector representation, in order to bypass the post-processing vectorization step. Experimental results on a dataset of solar panels show that the proposed network succeeds in learning to regress polygon coordinates, directly yielding vectorial map outputs. Our simple method is, however, limited to predicting polygons with a fixed number of vertices.

While more recent methods for learning directly in the vector representation do not have this limitation, they still have other limitations in terms of the types of object shapes they can predict. More complex topological cases, such as objects with holes or buildings touching each other (with a common wall, which is very typical of European city centers), are not handled by these fully deep learning methods. We thus propose a hybrid approach alleviating those limitations by training a neural network to output a segmentation probability map as usual and also to output a frame field aligned with the contours of detected objects (buildings in our case). That frame field constitutes additional shape information learned by the network. We then propose a highly parallelizable polygonization method that leverages the frame field information to vectorize the segmentation probability map efficiently.
Because our polygonization method has access to additional information in the form of a frame field, it can be less complex than other advanced vectorization methods and is thus faster. Lastly, requiring an image segmentation network to also output a frame field only adds two convolutional layers and virtually does not increase inference time, making the use of a frame field purely beneficial.
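For readers who want a concrete picture of what "outputting a frame field alongside the segmentation" can look like, the following PyTorch sketch adds a small two-layer convolutional head next to a standard segmentation head. The backbone, the channel count, and the four frame-field coefficients per pixel are illustrative assumptions, not the implementation from the thesis.

```python
# Minimal sketch (not the thesis code): a fully convolutional backbone extended
# with a small frame-field head, illustrating the idea that only a couple of
# extra convolutional layers are needed on top of the usual segmentation output.
import torch
import torch.nn as nn

class SegmentationWithFrameField(nn.Module):
    def __init__(self, backbone: nn.Module, backbone_channels: int):
        super().__init__()
        self.backbone = backbone  # any encoder-decoder producing per-pixel features
        self.seg_head = nn.Conv2d(backbone_channels, 1, kernel_size=1)
        # Hypothetical frame-field head: two convolutions predicting, per pixel,
        # coefficients of a frame field aligned with object contours.
        self.frame_head = nn.Sequential(
            nn.Conv2d(backbone_channels, backbone_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(backbone_channels, 4, kernel_size=1),  # 4 coefficients chosen for illustration
        )

    def forward(self, image: torch.Tensor):
        features = self.backbone(image)
        seg_logits = self.seg_head(features)     # building probability map (pre-sigmoid)
        frame_field = self.frame_head(features)  # extra shape information for polygonization
        return seg_logits, frame_field
```

At inference time, a polygonization step would consume both outputs; the sketch only shows the network side.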
Maggiori, Emmanuel. „Approches d'apprentissage pour la classification à large échelle d'images de télédétection“. Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4041/document.
The analysis of airborne and satellite images is one of the core subjects in remote sensing. In recent years, technological developments have facilitated the availability of large-scale sources of data, which cover significant extents of the Earth's surface, often at impressive spatial resolutions. In addition to the evident computational complexity issues that arise, one of the current challenges is to handle the variability in the appearance of objects across different geographic regions. For this, it is necessary to design classification methods that go beyond the analysis of individual pixel spectra, introducing higher-level contextual information in the process. In this thesis, we first propose a method to perform classification with shape priors, based on the optimization of a hierarchical subdivision data structure. We then delve into the use of the increasingly popular convolutional neural networks (CNNs) to learn deep hierarchical contextual features. We investigate CNNs from multiple angles in order to address the different points required to adapt them to our problem. Among other subjects, we propose different solutions to output high-resolution classification maps and study the acquisition of training data. We also created a dataset of aerial images over dissimilar locations and assessed the generalization capabilities of CNNs. Finally, we propose a technique to polygonize the output classification maps, so as to integrate them into operational geographic information systems, thus completing the typical processing pipeline observed in a wide number of applications. Throughout this thesis, we experiment on hyperspectral, satellite and aerial images, with scalability, generalization and applicability goals in mind.
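As a loose, hedged illustration of producing classification maps at the input resolution (one of the points mentioned above), here is a tiny fully convolutional network that downsamples for context and upsamples back to full resolution; the layer sizes are arbitrary and this is not the architecture studied in the thesis.

```python
# Rough sketch only: dense (per-pixel) classification at input resolution.
import torch
import torch.nn as nn

class DenseClassifier(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        # Per-pixel class scores at the same spatial resolution as the input
        # (for input sizes divisible by 4 in this toy configuration).
        return self.decoder(self.encoder(x))
```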
Motta, Jesus Antonio. „VENCE : un modèle performant d'extraction de résumés basé sur une approche d'apprentissage automatique renforcée par de la connaissance ontologique“. Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/26076.
Several methods and techniques of artificial intelligence for information extraction, pattern recognition and data mining are used for the extraction of summaries. More particularly, new machine learning models incorporating ontological knowledge allow the extraction of the sentences containing the greatest amount of information from a corpus. The corpus is considered as a set of sentences on which different optimization methods are applied to identify the most important attributes. These provide a training set from which a machine learning algorithm can infer a classification function able to discriminate the sentences of a new corpus according to their information content. Currently, even though the results are interesting, the effectiveness of models based on this approach is still low, especially with respect to the discriminating power of the classification functions. In this thesis, a new model based on this approach is proposed and its effectiveness is improved by inserting ontological knowledge into the training set. The originality of this model is described through three papers. The first paper shows how linear techniques can be applied in an original way to optimize the workspace in the context of extractive summarization. The second article explains how to insert ontological knowledge in order to significantly improve the performance of the classification functions; this is done by inserting ontology-based lexical chains into the training set. The third article describes VENCE, the new machine learning model that extracts the sentences with the most information content in order to produce summaries. An assessment of VENCE's performance is carried out by comparing its results with those produced by current commercial and public software, as well as those published in very recent scientific articles. The usual metrics of recall, precision and F-measure, together with the ROUGE toolkit, showed the superiority of VENCE. This model could benefit other information extraction contexts, for instance the definition of models for sentiment analysis.
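To make the sentence-classification view of extractive summarization concrete, the sketch below scores sentences with TF-IDF features and a logistic regression selector. It is only a generic baseline under these assumptions; it does not include the ontology-based lexical chains that distinguish VENCE, and the function names are illustrative.

```python
# Illustrative sketch only: extractive summarization as supervised sentence selection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_sentence_selector(sentences, labels):
    """labels[i] = 1 if sentence i belongs to a reference summary, else 0."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(sentences)
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vectorizer, clf

def summarize(sentences, vectorizer, clf, k=3):
    # Score each sentence, keep the k most "summary-worthy", preserve original order.
    scores = clf.predict_proba(vectorizer.transform(sentences))[:, 1]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(ranked)]
```

Given sentences labelled as summary-worthy or not, `train_sentence_selector` fits the model and `summarize` returns the k highest-scoring sentences of a new document.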
Sayadi, Karim. „Classification du texte numérique et numérisé. Approche fondée sur les algorithmes d'apprentissage automatique“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066079/document.
Different disciplines in the humanities, such as philology or palaeography, face complex and time-consuming tasks whenever it comes to examining their data sources. The introduction of computational approaches in the humanities makes it possible to address issues such as semantic analysis and systematic archiving. The conceptual models developed are based on algorithms that are later hard-coded in order to automate these tedious tasks. In the first part of the thesis we propose a novel method to build a semantic space based on topic modeling. In the second part, in order to classify historical documents according to their script, we propose a novel representation learning method based on stacked convolutional auto-encoders. The goal is to automatically learn representations of the script or the written language.
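A hedged sketch of a single convolutional auto-encoder stage of the kind that can be stacked to learn script representations from document images; the layer sizes and the single input channel are assumptions for illustration, not the thesis code.

```python
# One stage of a convolutional auto-encoder; the encoder output ("code") can be
# fed to a further auto-encoder to stack representations, or to a classifier
# that predicts the script class.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, in_channels, 2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)           # learned representation of the script image
        return self.decoder(code), code  # reconstruction + representation
```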
Loisel, Julie. „Détection des ruptures de la chaîne du froid par une approche d'apprentissage automatique“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASB014.
The cold chain is essential to ensure food safety and avoid food waste. Wireless sensors are increasingly used to monitor the air temperature throughout the cold chain; however, the exploitation of these measurements is still limited. This thesis explores how machine learning can be used to predict the temperature of different types of food products from the air temperature measured in a pallet, and to detect cold chain breaks. We first introduced a definition of a cold chain break based on two main product categories: products that must be kept at a regulated temperature, such as meat and fish, and products for which a temperature is only recommended, such as fruits and vegetables. A cold chain break leads to food poisoning for the first product category and to organoleptic quality degradation for the second one.

For temperature-regulated products, it is crucial to predict the product temperature to ensure that it does not exceed the regulatory temperature. Although several studies have demonstrated the effectiveness of neural networks for this prediction, none has compared synthetic and experimental data for training them. In this thesis, we proposed to compare these two types of data in order to provide guidelines for the development of neural networks. In practice, products and packaging are diverse, and experiments for each application are impossible due to the complexity of implementation. By comparing synthetic and experimental data, we were able to determine best practices for developing neural networks that predict product temperature and help maintain the cold chain. For temperature-regulated products, once a cold chain break is detected, the products are no longer consumable and must be eliminated.

For temperature-recommended products, we compared three different approaches to detect cold chain breaks and implement corrective actions: (a) a method based on a temperature threshold, (b) a method based on a classifier which determines whether the products will be delivered with the expected quality, and (c) a method also based on a classifier but which integrates the cost of the corrective measure into the decision-making process. The performances of the three methods are discussed and prospects for improvement are proposed.
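The three approaches (a)-(c) can be sketched schematically as follows, assuming a trained scikit-learn-style classifier `clf` with `predict` and `predict_proba`, a threshold in degrees Celsius, and rough cost estimates; the actual decision procedures studied in the thesis are more involved.

```python
# Schematic comparison of the three detection strategies; all numbers are placeholders.
import numpy as np

def break_by_threshold(air_temperatures, threshold_celsius=4.0):
    # (a) raise an alarm as soon as the measured air temperature exceeds a threshold
    return np.any(np.asarray(air_temperatures) > threshold_celsius)

def break_by_classifier(clf, features):
    # (b) let a classifier decide whether the products will arrive with the expected quality
    return clf.predict(np.asarray(features).reshape(1, -1))[0] == 1

def break_by_expected_cost(clf, features, cost_of_action, cost_of_loss):
    # (c) act only when the expected loss without action exceeds the cost of the corrective measure
    p_break = clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return p_break * cost_of_loss > cost_of_action
```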
Arman, Molood. „Machine Learning Approaches for Sub-surface Geological Heterogeneous Sources“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG014.
In oil and gas exploration and production, understanding subsurface geological structures, such as well logs and rock samples, is essential to provide predictive and decision-support tools. Gathering and using data from a variety of sources, both structured and unstructured, such as relational databases and digitized reports on the subsurface geology, is critical. The main challenge for structured data is the lack of a global schema that would cross-reference all attributes from the different sources. The challenges are different for unstructured data: most subsurface geological reports are scanned versions of documents. Our dissertation aims to provide a structured representation of the different data sources and to build domain-specific language models for learning named entities related to subsurface geology.
Oum, Oum Sack Pierre Marie. „Contribution à l'étude de la qualité du logiciel : approche à base d'apprentissage automatique et de transformation de modèles“. Littoral, 2009. http://www.theses.fr/2009DUNK0221.
This thesis presents the various works we carried out in the area of software quality definition and evaluation. We consider software quality as a key and transversal concept that must be taken into account by all software development activities, and we must therefore provide tools dealing with the interoperability of these activities. An important part of our work has thus been devoted to this topic through the adoption of GXL (Graph eXchange Language). GXL is, in fact, the medium allowing the interchange of software objects or artefacts. GXL is then used as technological support to implement our approach, which consists of defining and evaluating software quality by combining concepts from Model-Driven Engineering and machine learning. Our aim is to provide an operational platform allowing a precise definition of software quality by means of machine learning algorithms, and an incremental construction of quality models by means of model transformation operations implemented by a graph transformation system.
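As a loose illustration of the machine learning side only (the GXL interchange and model-transformation machinery are not represented), the sketch below fits a small decision tree on made-up software metrics to obtain a readable quality model; metric names, values and labels are invented for the example.

```python
# Toy example: learning a quality model from software metrics with a decision tree.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical metrics per module: [coupling, cohesion, cyclomatic complexity, LOC]
X = [[3, 0.8, 4, 120], [12, 0.2, 25, 900], [5, 0.6, 8, 300], [15, 0.1, 40, 1500]]
y = [1, 0, 1, 0]  # 1 = acceptable quality, 0 = problematic (labels are made up)

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
# Print the learned rules as a human-readable quality model.
print(export_text(model, feature_names=["coupling", "cohesion", "complexity", "loc"]))
```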
Moulet, Lucie. „Modélisation de l'apprenant avec une approche par compétences dans le cadre d'environnement d'apprentissage en ligne“. Paris 6, 2011. http://www.theses.fr/2011PA066636.
Qamar, Ali Mustafa. „Mesures de similarité et cosinus généralisé : une approche d'apprentissage supervisé fondée sur les k plus proches voisins“. Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00591988.
Der volle Inhalt der QuelleQamar, Ali Mustafa. „Mesures de similarité et cosinus généralisé : une approche d'apprentissage supervisé fondée sur les k plus proches voisins“. Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM083.
Almost all machine learning problems depend heavily on the metric used. Many works have proved that it is a far better approach to learn the metric structure from the data rather than to assume a simple geometry based on the identity matrix. This has paved the way for a new research theme called metric learning. Most of the works in this domain have based their approaches on distance learning only; however, other works have shown that similarity should be preferred over distance metrics when dealing with textual as well as non-textual datasets. Being able to efficiently learn appropriate similarity measures, as opposed to distances, is thus of high importance for various collections. While several works have partially addressed this problem for different applications, no previous work is known to have fully addressed it in the context of learning similarity metrics for kNN classification. This is exactly the focus of the current study.

In the case of information filtering systems, where the aim is to filter an incoming stream of documents into a set of predefined topics with little supervision, cosine-based category-specific thresholds can be learned. Learning such thresholds can be seen as a first step towards learning a complete similarity measure. This strategy was used to develop online and batch algorithms for information filtering during the INFILE (Information Filtering) track of the CLEF (Cross Language Evaluation Forum) campaign in 2008 and 2009. However, provided enough supervised information is available, as is the case in classification settings, it is usually beneficial to learn a complete metric rather than thresholds. To this end, we developed numerous algorithms for learning complete similarity metrics for kNN classification. An unconstrained similarity learning algorithm called SiLA is developed, in which the normalization is independent of the similarity matrix. SiLA encompasses, among others, the standard cosine measure as well as the Dice and Jaccard coefficients. SiLA is an extension of the voted perceptron algorithm and allows learning different types of similarity functions (based on diagonal, symmetric or asymmetric matrices). We then compare SiLA with RELIEF, a well-known feature re-weighting algorithm. It has recently been suggested by Sun and Wu that RELIEF can be seen as a distance metric learning algorithm optimizing a cost function which is an approximation of the 0-1 loss. We show here that this approximation is loose, and propose a stricter version closer to the 0-1 loss, leading to a new, and better, RELIEF-based algorithm for classification. We then focus on a direct extension of the cosine similarity measure, defined as a normalized scalar product in a projected space; the associated algorithm is called the generalized Cosine simiLarity Algorithm (gCosLA).

All of the algorithms are tested on many different datasets. A statistical test, the s-test, is employed to assess whether the results are significantly different. gCosLA performed statistically much better than SiLA on many of the datasets. Furthermore, SiLA and gCosLA were compared with many state-of-the-art algorithms, illustrating their well-foundedness.
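To give a concrete sense of what kNN classification with a learned similarity looks like, here is a toy NumPy sketch using a bilinear similarity s_A(x, y) = xᵀ A y, in the spirit of SiLA (gCosLA would additionally normalize in a projected space). The matrix A is assumed to have been learned beforehand; with A equal to the identity this reduces to a plain dot-product kNN.

```python
# Toy sketch with NumPy only: kNN prediction with a learned bilinear similarity.
import numpy as np

def similarity(x, y, A):
    # s_A(x, y) = x^T A y; A is assumed to have been learned from labelled pairs
    return x @ A @ y

def knn_predict(x, X_train, y_train, A, k=3):
    scores = np.array([similarity(x, xi, A) for xi in X_train])
    neighbours = np.argsort(scores)[-k:]  # the k most *similar* training points
    labels, counts = np.unique(np.asarray(y_train)[neighbours], return_counts=True)
    return labels[np.argmax(counts)]      # majority vote among the neighbours
```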