Academic literature on the topic 'Apprentissage automatique – Évaluation'
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Apprentissage automatique – Évaluation.'
Journal articles on the topic "Apprentissage automatique – Évaluation"
Hamida, Soufiane, Bouchaib Cherradi, Abdelhadi Raihani, and Hassan Ouajji. "Evaluation des apprentissages au sein d’un environnement de type MOOC adaptatif." ITM Web of Conferences 39 (2021): 03005. http://dx.doi.org/10.1051/itmconf/20213903005.
Guisset, Manuela, Liesje Coertjens, Dominique De Jaeger, Guillaume Lobet, Olivier Servais, Vincent Wertz, Patrick Willems, and Jean-François Rees. "Évaluations par Cartes Conceptuelles à trous et apprentissage par les pairs." Les Annales de QPES 1, no. 3 (May 27, 2021). http://dx.doi.org/10.14428/qpes.v1i3.62073.
Dissertations / Theses on the topic "Apprentissage automatique – Évaluation"
Bove, Clara. "Conception et évaluation d’interfaces utilisateur explicatives pour systèmes complexes en apprentissage automatique." Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS247.pdf.
This thesis focuses on human-centered eXplainable AI (XAI) and more specifically on the intelligibility of Machine Learning (ML) explanations for non-expert users. The technical context is as follows: on one side, an opaque classifier or regressor provides a prediction, and an XAI post-hoc approach generates pieces of information as explanations; on the other side, the user receives both the prediction and the explanations. Within this XAI technical context, several issues might lessen the quality of explanations. The ones we focus on are: the lack of contextual information in ML explanations, the unguided design of functionalities or of the user's exploration, and the confusion that can be caused by delivering too much information. To solve these issues, we develop an experimental procedure to design XAI functional interfaces and evaluate the intelligibility of ML explanations by non-expert users. In doing so, we investigate the XAI enhancements provided by two types of local explanation components: feature importance and counterfactual examples. We propose generic XAI principles for contextualizing and allowing exploration of feature importance, and for guiding users in their comparative analysis of counterfactual explanations with plural examples. We propose an implementation of these principles in two distinct explanation-based user interfaces, for an insurance scenario and a financial scenario respectively. Finally, we use the enhanced interfaces to conduct user studies in lab settings and to measure two dimensions of intelligibility, namely objective understanding and subjective satisfaction. For local feature importance, we demonstrate that contextualization and exploration improve the intelligibility of such explanations. Similarly for counterfactual examples, we demonstrate that the plural condition also improves intelligibility, and that comparative analysis appears to be a promising tool for user satisfaction.
At a fundamental level, we consider the issue of inconsistency within ML explanations from a theoretical point of view. In the explanation process considered for this thesis, the quality of an explanation relies both on the ability of the Machine Learning system to generate a coherent explanation and on the ability of the end user to interpret these explanations correctly. Thus, there can be limitations: on one side, as reported in the literature, technical limitations of ML systems might produce potentially inconsistent explanations; on the other side, human inferences can be inaccurate, even when users are presented with consistent explanations. Investigating such inconsistencies, we propose an ontology to structure the most common ones from the literature. We advocate that such an ontology can be useful for understanding current XAI limitations and avoiding explanation pitfalls.
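To make the counterfactual-example component discussed in this abstract concrete, here is a minimal, hypothetical sketch of a greedy counterfactual search; the logistic `model_score` stands in for an opaque classifier, and every weight, step size, and threshold is invented for illustration:

```python
import math

# Hypothetical opaque model: a fixed logistic scorer standing in for
# the classifier whose prediction we want to explain.
WEIGHTS = [0.6, -0.4, 0.2]

def model_score(x):
    return 1.0 / (1.0 + math.exp(-sum(w * v for w, v in zip(WEIGHTS, x))))

def greedy_counterfactual(x, target=0.5, step=0.1, max_iter=200):
    """Nudge one feature at a time until the score crosses `target`."""
    x = list(x)
    for _ in range(max_iter):
        if model_score(x) >= target:
            return x
        best, best_score = None, model_score(x)
        # Try every single-feature perturbation, keep the best improvement.
        for i in range(len(x)):
            for delta in (-step, step):
                cand = list(x)
                cand[i] += delta
                s = model_score(cand)
                if s > best_score:
                    best, best_score = cand, s
        if best is None:
            break  # no single-feature change improves the score
        x = best
    return x

original = [-1.0, 1.0, 0.0]
cf = greedy_counterfactual(original)
```

Starting from `original`, which the stand-in model scores below 0.5, the search returns a nearby point whose score crosses the threshold, i.e. a counterfactual example.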
Pomorski, Denis. "Apprentissage automatique symbolique/numérique : construction et évaluation d'un ensemble de règles à partir des données." Lille 1, 1991. http://www.theses.fr/1991LIL10117.
Full textDang, Quang Vinh. "Évaluation de la confiance dans la collaboration à large échelle." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0002/document.
Large-scale collaborative systems, wherein a large number of users collaborate to perform a shared task, attract a lot of attention from both academia and industry. Trust is an important factor for the success of a large-scale collaboration, but it is difficult for end-users to manually assess the trust level of each partner in such a collaboration. We study the trust assessment problem and aim to design a computational trust model for collaborative systems. We focus on three research questions. 1. What is the effect of deploying a trust model and showing partners' trust scores to users? We designed and organized a user experiment based on the trust game, a well-known money-exchange lab-controlled protocol, into which we introduced user trust scores. Our comprehensive analysis of user behavior showed that: (i) showing trust scores to users encourages collaboration between them at a level similar to showing nicknames, and (ii) users follow the trust score in decision-making. The results suggest that a trust model can be deployed in collaborative systems to assist users. 2. How to calculate a trust score between users who have experienced a collaboration? We designed a trust model for the repeated trust game that computes user trust scores based on their past behavior. We validated our trust model against: (i) simulated data, (ii) human opinion, and (iii) real-world experimental data. We extended our trust model to Wikipedia, based on user contributions to the quality of the edited Wikipedia articles. We proposed three machine learning approaches to assess the quality of Wikipedia articles: the first based on random forests with manually designed features, the other two based on deep learning methods. 3. How to predict trust relations between users who did not interact in the past? Given a network in which the links represent the trust/distrust relations between users, we aim to predict future relations.
We proposed an algorithm that takes into account the time at which links were established in the network to predict future user trust/distrust relationships. Our algorithm outperforms state-of-the-art approaches on real-world signed directed social network datasets.
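The abstract's idea of deriving a trust score from past behavior in a repeated trust game can be sketched with an exponentially weighted average; this is an illustrative scheme with an arbitrary decay value, not the model proposed in the thesis:

```python
def trust_score(outcomes, decay=0.8):
    """Exponentially-weighted average of past cooperation outcomes.

    `outcomes` is a list of 0/1 cooperation flags, oldest first; recent
    interactions weigh more. The decay value here is an arbitrary choice.
    """
    if not outcomes:
        return 0.5  # neutral prior for users with no shared history
    num = den = 0.0
    for age, outcome in enumerate(reversed(outcomes)):
        weight = decay ** age
        num += weight * outcome
        den += weight
    return num / den
```

An empty history yields a neutral 0.5, and a recent cooperation raises the score more than an old one would.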
Soumm, Michaël. "Refining machine learning evaluation : statistical insights into model performance and fairness." Electronic Thesis or Diss., université Paris-Saclay, 2024. https://theses.hal.science/tel-04951896.
This thesis addresses limitations in machine learning evaluation methodologies by introducing rigorous statistical approaches adapted from econometrics. Through applications in three distinct machine learning domains, we demonstrate how statistical tools can enhance model evaluation robustness, interpretability, and fairness. In class-incremental learning, we examine the importance of pretraining methods compared to the choice of the incremental algorithm and show that these methods are crucial in determining final performance; in face recognition systems, we quantify demographic biases and show that demographically balanced synthetic data can significantly reduce performance disparities across ethnic groups; in recommender systems, we develop novel information-theoretic measures to analyze performance variations across user profiles, revealing that deep learning methods do not consistently outperform traditional approaches and highlighting the importance of user behavior patterns. These findings demonstrate the value of statistical rigor in machine learning evaluation and provide practical guidelines for improving model assessment across diverse applications.
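The demographic-bias quantification mentioned in this abstract can be illustrated with a simple per-group accuracy gap; the thesis's actual statistical tooling is more elaborate, and the toy data below is invented:

```python
from collections import defaultdict

def group_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the largest gap between any two groups."""
    hits = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        hits[g].append(t == p)
    accs = {g: sum(v) / len(v) for g, v in hits.items()}
    return max(accs.values()) - min(accs.values()), accs

# Toy labels: group "a" is perfectly classified, group "b" only half the time.
gap, accs = group_accuracy_gap([1, 0, 1, 0], [1, 0, 0, 0], ["a", "a", "b", "b"])
```

A nonzero gap flags a performance disparity that an aggregate accuracy number would hide.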
Choquette, Philippe. "Nouveaux algorithmes d'apprentissage pour classificateurs de type SCM." Master's thesis, Québec : Université Laval, 2007. http://www.theses.ulaval.ca/2007/24840/24840.pdf.
Full textBawden, Rachel. "Going beyond the sentence : Contextual Machine Translation of Dialogue." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS524/document.
While huge progress has been made in machine translation (MT) in recent years, the majority of MT systems still rely on the assumption that sentences can be translated in isolation. The result is that these MT models only have access to context within the current sentence; context from other sentences in the same text and information relevant to the scenario in which they are produced remain out of reach. The aim of contextual MT is to overcome this limitation by providing ways of integrating extra-sentential context into the translation process. Context, concerning the other sentences in the text (linguistic context) and the scenario in which the text is produced (extra-linguistic context), is important for a variety of cases, such as discourse-level and other referential phenomena. Successfully taking context into account in translation is challenging. Evaluating such strategies on their capacity to exploit context is also a challenge, standard evaluation metrics being inadequate and even misleading when it comes to assessing such improvement in contextual MT. In this thesis, we propose a range of strategies to integrate both extra-linguistic and linguistic context into the translation process. We accompany our experiments with specifically designed evaluation methods, including new test sets and corpora. Our contextual strategies include pre-processing strategies designed to disambiguate the data on which MT models are trained, post-processing strategies to integrate context by post-editing MT outputs and strategies in which context is exploited during translation proper. We cover a range of different context-dependent phenomena, including anaphoric pronoun translation, lexical disambiguation, lexical cohesion and adaptation to properties of the scenario such as speaker gender and age.
Our experiments, for both phrase-based statistical MT and neural MT, are applied in particular to the translation of English to French and focus specifically on the translation of informal written dialogues.
Ghidalia, Sarah. "Etude sur les mesures d'évaluation de la cohérence entre connaissance et compréhension dans le domaine de l'intelligence artificielle." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2024. http://www.theses.fr/2024UBFCK001.
This thesis investigates the concept of coherence within intelligent systems, aiming to assess how coherence can be understood and measured in artificial intelligence, with a particular focus on the pre-existing knowledge embedded in these systems. This research is funded as part of the European H2020 RESPONSE project and is set in the context of smart cities, where assessing the consistency between AI predictions and real-world data is a fundamental prerequisite for policy initiatives. The main objective of this work is to meticulously examine consistency in the field of artificial intelligence and to conduct a thorough exploration of prior knowledge. To this end, we conduct a systematic literature review to map the current landscape, focusing on the convergence and interaction between machine learning and ontologies, and highlighting, in particular, the algorithmic techniques employed. In addition, our comparative analysis positions our research in the broader context of important work in the field. An in-depth study of different knowledge integration methods is undertaken to analyze how consistency can be assessed based on the learning techniques employed. The overall quality of artificial intelligence systems, with particular emphasis on consistency assessment, is also examined. The whole study is then applied to the coherence evaluation of models concerning the representation of physical laws in ontologies. We present two case studies, one on predicting the motion of a harmonic oscillator and the other on estimating the lifetime of materials, to highlight the importance of respecting physical constraints in consistency assessment. In addition, we propose a new method for formalizing knowledge within an ontology and evaluate its effectiveness. This research aims to provide new perspectives on the evaluation of machine learning algorithms by introducing a coherence evaluation method.
This thesis aspires to make a substantial contribution to the field of artificial intelligence by highlighting the critical role of consistency in the development of reliable and relevant intelligent systems.
Douwes, Constance. "On the Environmental Impact of Deep Generative Models for Audio." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS074.
In this thesis, we investigate the environmental impact of deep learning models for audio generation, and we aim to put computational cost at the core of the evaluation process. In particular, we focus on different types of deep learning models specialized in raw-waveform audio synthesis. These models are now a key component of modern audio systems, and their use has increased significantly in recent years. Their flexibility and generalization capabilities make them powerful tools in many contexts, from text-to-speech synthesis to unconditional audio generation. However, these benefits come at the cost of expensive training sessions on large amounts of data, operated on energy-intensive dedicated hardware, which incurs large greenhouse gas emissions. The measures we use as a scientific community to evaluate our work are at the heart of this problem. Currently, deep learning researchers evaluate their work primarily based on improvements in accuracy, log-likelihood, reconstruction, or opinion scores, all of which overshadow the computational cost of generative models. We therefore propose a new methodology based on Pareto optimality to help the community better evaluate the significance of their work while bringing energy footprint, and ultimately carbon emissions, to the same level of interest as sound quality. In the first part of this thesis, we present a comprehensive report on the use of various evaluation measures for deep generative models in audio synthesis tasks. Even though computational efficiency is increasingly discussed, quality measurements are the most commonly used metrics to evaluate deep generative models, while energy consumption is almost never mentioned. We therefore address this issue by estimating the carbon cost of training generative models and comparing it to other noteworthy carbon costs to demonstrate that it is far from insignificant.
In the second part of this thesis, we propose a large-scale evaluation of pervasive neural vocoders, a class of generative models used for speech generation conditioned on mel-spectrograms. We introduce a multi-objective analysis based on Pareto optimality, covering both quality from human-based evaluation and energy consumption. Within this framework, we show that lighter models can perform better than more costly ones. By proposing a novel definition of efficiency, we intend to provide practitioners with a decision basis for choosing the best model based on their requirements. In the last part of the thesis, we propose a method to reduce the inference costs of neural vocoders, based on quantized neural networks. We show a significant gain in memory size and give some hints for the future use of these models on embedded hardware. Overall, we provide keys to better understand the impact of deep generative models for audio synthesis, as well as a new framework for developing models while accounting for their environmental impact. We hope that this work raises awareness of the need to investigate energy-efficient models alongside high perceived quality.
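The Pareto-optimality framing of quality versus energy used in this thesis can be sketched as follows; the vocoder names and (quality, energy) numbers below are invented for illustration:

```python
def pareto_front(models):
    """Names of models not dominated on (higher quality, lower energy)."""
    front = []
    for name, (q, e) in models.items():
        dominated = any(
            q2 >= q and e2 <= e and (q2 > q or e2 < e)
            for n2, (q2, e2) in models.items()
            if n2 != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

# Invented (quality score, energy in kWh) pairs for hypothetical vocoders.
models = {
    "large": (4.5, 120.0),
    "medium": (4.4, 30.0),
    "small": (4.0, 5.0),
    "weak": (3.5, 50.0),
}
front = pareto_front(models)
```

Models on the front trade quality against energy without being beaten on both axes at once; the dominated `"weak"` model (worse quality and more energy than `"medium"`) is excluded.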
Pavão, Adrien. "Methodology for Design and Analysis of Machine Learning Competitions." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG088.
We develop and study a systematic and unified methodology for organizing and using scientific challenges in research, particularly in the domain of machine learning (data-driven artificial intelligence). Today, challenges are becoming more and more popular as a pedagogic tool and as a means of pushing the state of the art by engaging scientists of all ages, within or outside academia. This can be thought of as a form of citizen science. There is the promise that this form of community involvement in science might contribute to reproducible research and democratize artificial intelligence. However, while the distinction between organizers and participants may mitigate certain biases, there is a risk that biases in data selection, scoring metrics, and other experimental design elements could compromise the integrity of the outcomes and amplify the influence of randomness. In extreme cases, the results could range from useless to detrimental for the scientific community and, ultimately, society at large. Our objective is to structure challenge organization within a rigorous framework and offer the community insightful guidelines. In conjunction with the challenge organization tools that we are developing as part of the CodaLab project, we aim to provide a valuable contribution to the community. This thesis includes fundamental theoretical contributions drawing on experimental design, statistics, and game theory, and practical empirical findings resulting from the analysis of data from previous challenges.