Ready-made bibliography on the topic "Prédiction automatique"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Table of contents
See the lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Prédiction automatique".
An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever such details are available in the metadata.
Journal articles on the topic "Prédiction automatique"
Dias, G., A. Qureshi, S. Saha, and M. Hasanuzzaman. "Prédiction automatique des scores aux questionnaires PHQ-8 par intelligence artificielle". French Journal of Psychiatry 1 (December 2019): S167–S168. http://dx.doi.org/10.1016/j.fjpsy.2019.10.456.
HARINAIVO, A., H. HAUDUC, and I. TAKACS. "Anticiper l’impact de la météo sur l’influent des stations d’épuration grâce à l’intelligence artificielle". Techniques Sciences Méthodes 3 (March 20, 2023): 33–42. http://dx.doi.org/10.36904/202303033.
Ferdynus, C., J. Allyn, and P. Montravers. "Prédiction de la mortalité postopératoire après chirurgie cardiaque : apprentissage automatique versus Euroscore". Revue d'Épidémiologie et de Santé Publique 66 (May 2018): S138. http://dx.doi.org/10.1016/j.respe.2018.03.349.
J.B, Koto, T. R. Ramahefy, and S. Randrianja. "Détection Des Anomalies Du Donnée De Démographie Par Région A Madagascar Par La Méthode Iforest". International Journal of Progressive Sciences and Technologies 34, no. 1 (September 27, 2022): 367. http://dx.doi.org/10.52155/ijpsat.v34.1.4578.
Reys, Victor, and Gilles Labesse. "Profilage in silico des inhibiteurs de protéine kinases". médecine/sciences 36 (October 2020): 38–41. http://dx.doi.org/10.1051/medsci/2020182.
Garnotel, Maël, Hector-Manuel Romero-Ugalde, Thomas Bastian, Maeva Doron, Pierre Jallon, Guillaume Charpentier, Sylvia Franc, Stéphane Blanc, Stéphane Bonnet, and Chantal Simon. "La reconnaissance automatique des activités améliore la prédiction de la dépense énergétique à partir du signal d’accélérométrie". Diabetes & Metabolism 43, no. 2 (March 2017): A102–A103. http://dx.doi.org/10.1016/s1262-3636(17)30401-9.
Foucart, Jean-Michel, Augustin Chavanne, and Jérôme Bourriau. "Intelligence artificielle : le futur de l’Orthodontie ?" Revue d'Orthopédie Dento-Faciale 53, no. 3 (September 2019): 281–94. http://dx.doi.org/10.1051/odf/2019026.
Bourkhime, H., N. Qarmiche, N. Bahra, M. Omari, M. Berraho, N. Tachfouti, S. El Fakir, and N. Otmani. "P36 - La prédiction de la dépression chez les Marocains atteints de maladies respiratoires chroniques - Analyse comparative des algorithmes d'apprentissage automatique". Journal of Epidemiology and Population Health 72 (May 2024): 202476. http://dx.doi.org/10.1016/j.jeph.2024.202476.
Mouchabac, Stéphane, and Christian Guinchard. "Apport des méthodes d'apprentissage automatique dans la prédiction de la transition vers la psychose : quels enjeux pour le patient et le psychiatre ?" L'information psychiatrique 89, no. 10 (2013): 811. http://dx.doi.org/10.3917/inpsy.8910.0811.
Mouchabac, Stéphane, and Christian Guinchard. "Apport des méthodes d'apprentissage automatique dans la prédiction de la transition vers la psychose : quels enjeux pour le patient et le psychiatre ?" L'information psychiatrique Volume 89, no. 10 (January 7, 2014): 811–17. http://dx.doi.org/10.1684/ipe.2013.1131.
Pełny tekst źródłaRozprawy doktorskie na temat "Prédiction automatique"
Elloumi, Zied. "Prédiction de performances des systèmes de Reconnaissance Automatique de la Parole". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM005/document.
In this thesis, we focus on performance prediction for automatic speech recognition (ASR) systems. This is a very useful task for measuring the reliability of transcription hypotheses on a new data collection, when the reference transcription is unavailable and the ASR system used is unknown (black box). Our contribution covers several areas: first, we propose a heterogeneous French corpus for training and evaluating ASR prediction systems. We then compare two prediction approaches: a state-of-the-art (SOTA) performance predictor based on engineered features, and a new strategy based on features learnt by convolutional neural networks (CNNs). While the joint use of textual and signal features did not help the SOTA system, combining inputs for the CNNs leads to the best WER prediction performance. We also show that our CNN predicts the shape of the WER distribution on a collection of speech recordings remarkably well. We then analyze the factors impacting both prediction approaches, and assess the impact of the training-set size of the prediction systems as well as the robustness of systems trained on the outputs of one particular ASR system and used to predict performance on a new data collection. Our experimental results show that both prediction approaches are robust and that the prediction task is more difficult on short speech turns and on spontaneous speech. Finally, we try to understand which information is captured by our neural model and how it relates to different factors. Our experiments show that intermediate representations in the network automatically encode information on speech style, speaker accent, and broadcast program type. To take advantage of this analysis, we propose a multi-task system that is slightly more effective on the performance prediction task.
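The target of the prediction systems described above is the word error rate (WER). As a point of reference, here is a minimal, self-contained Python sketch of how WER is conventionally computed when a reference transcript is available; it illustrates only the metric being predicted, not the thesis's CNN predictor, and all names are ours.

```python
# Minimal word error rate (WER) computation via Levenshtein distance
# over word sequences. Illustrative only: the thesis predicts WER
# *without* a reference transcript; this shows the metric itself.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```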
Kawala, François. "Prédiction de l'activité dans les réseaux sociaux". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM021/document.
This dissertation is devoted to a social-media-mining problem named the activity-prediction problem, in which one aims to predict the number of user-generated contents that will be created about a topic in the near future. The user-generated contents that belong to a topic are not necessarily related to each other. In order to study the activity-prediction problem without referring directly to a particular social medium, a generic framework is proposed that describes various social media in a unified way; with this framework, the activity-prediction problem is defined independently of any actual social medium, and three examples illustrate how the framework describes social media. Three definitions of the activity-prediction problem are proposed. First, the magnitude-prediction problem defines activity prediction as a regression problem, in which one aims to predict the exact activity of a topic. Second, the buzz-classification problem defines activity prediction as a binary classification problem, in which one aims to predict whether a topic will have an activity burst of a predefined amplitude. Third, the rank-prediction problem defines activity prediction as a learning-to-rank problem, in which one aims to rank topics according to their future activity levels. These three definitions are tackled with state-of-the-art machine-learning approaches applied to generic features; since these features are defined with the help of the generic framework, they are easily adaptable to various social media. There are two types of features: those that describe a single topic, and those that describe the interplay between two topics. Our ability to predict activity is tested against an industrial-size multilingual dataset collected over 51 weeks from two sources, Twitter and a bulletin-board system, in three languages: English, French, and German. More than five hundred million user-generated contents were captured, mostly related to computer hardware, video games, and mobile telephony. The data collection required a daily routine, and the data were prepared so that commercial content and technical failures are not sources of noise. A cross-validation method that takes the time of observations into account is used. In addition, an unsupervised method to extract buzz candidates is proposed, since the training sets are very ill-balanced for the buzz-classification problem and candidates must be preselected. The activity-prediction problems are studied in two experimental settings. The first includes data from Twitter and the bulletin-board system, on a long time scale, in three languages. The second is dedicated specifically to Twitter and aims to make the experiments as reproducible as possible: it includes user-generated contents collected for a list of unambiguous English terms, with observations restricted to ten consecutive weeks, minimizing the risk of an unannounced change in the public Twitter API.
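The buzz-classification definition above labels a topic positive when its activity bursts by a predefined amplitude. The following Python sketch shows one plausible way to derive such labels from an activity time series; the horizon and amplitude threshold are illustrative assumptions, not values taken from the dissertation.

```python
from typing import Sequence

def buzz_label(activity: Sequence[int], horizon: int = 7,
               amplitude: float = 3.0) -> bool:
    """Label a topic as a 'buzz' if its mean activity over the last
    `horizon` periods exceeds `amplitude` times its earlier mean.
    Horizon and amplitude are illustrative, not the dissertation's values."""
    past, recent = activity[:-horizon], activity[-horizon:]
    baseline = sum(past) / max(len(past), 1)
    upcoming = sum(recent) / horizon
    return upcoming > amplitude * max(baseline, 1e-9)

# Example: a topic with flat activity followed by a burst.
print(buzz_label([5, 6, 5, 4, 6, 5, 5, 30, 42, 38, 45, 40, 39, 44]))  # True
```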
Hmamouche, Youssef. "Prédiction des séries temporelles larges". Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0480.
Nowadays, storage and data-processing systems are expected to store and process large time series. As the number of observed variables increases very rapidly, their prediction becomes more and more complicated, and using all the variables poses problems for classical prediction models. Univariate prediction models are among the earliest prediction models; to improve them, the use of multiple variables has become common, and multivariate models are increasingly used because they take more information into account. With the growth of interrelated data, however, the application of multivariate models is also questionable, because using all available information does not necessarily lead to the best predictions. The challenge in this situation is therefore to find the factors most relevant to a target variable among all available data. In this thesis, we study this problem by presenting a detailed analysis of the approaches proposed in the literature. We address the problem of prediction and dimensionality reduction for massive data, and we also discuss these approaches in the context of Big Data. The proposed approaches show promising and very competitive results compared to well-known algorithms, and lead to an improvement in the accuracy of the predictions on the data used. We then present our contributions and propose a complete methodology for the prediction of wide time series. We also extend this methodology to big data via distributed computing and parallelism, with an implementation of the proposed prediction process in the Hadoop/Spark environment.
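The core difficulty stated above, finding the most relevant factors among many candidate series, can be illustrated with a naive correlation-based pre-selection step before fitting a multivariate model. This is a deliberately simple stand-in for the relevance measures actually studied in the thesis:

```python
import numpy as np

def select_predictors(X: np.ndarray, y: np.ndarray, k: int = 10) -> np.ndarray:
    """Rank candidate series (columns of X) by absolute lag-1 correlation
    with the target y and keep the top k. A naive stand-in for the
    relevance measures studied in the thesis."""
    scores = []
    for j in range(X.shape[1]):
        # Correlate x_t with y_{t+1}: does this series help predict the target?
        c = np.corrcoef(X[:-1, j], y[1:])[0, 1]
        scores.append(abs(c) if np.isfinite(c) else 0.0)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = np.roll(X[:, 3], 1) + 0.1 * rng.normal(size=200)  # y depends on lagged column 3
print(select_predictors(X, y, k=5))  # column 3 should rank first
```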
Popov, Mihail. "Décomposition automatique des programmes parallèles pour l'optimisation et la prédiction de performance". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV087/document.
In high-performance computing, benchmarks evaluate architectures, compilers, and optimizations. Standard benchmarks are mostly issued from the industrial world and may have very long execution times, so evaluating a new architecture or an optimization is costly. Most benchmarks are composed of independent kernels, and users are usually interested in only a small subset of them. To get faster and easier local optimizations, we should find ways to extract kernels as standalone executables. Benchmarks also contain redundant computational kernels: some calculations bring no new information about the system under study, despite being measured many times. By detecting similar operations and removing redundant kernels, we can reduce the benchmarking cost without losing information about the system. This thesis proposes a method to automatically decompose applications into small kernels called codelets. Each codelet is a standalone executable that can be replayed in different execution contexts to evaluate them. The thesis quantifies how much the decomposition method accelerates optimization and benchmarking processes, and it also quantifies the benefits of local optimizations over global optimizations. Many related works aim to enhance the benchmarking process, notably machine-learning approaches and sampling techniques. Decomposing applications into independent pieces is not a new idea and has been successfully applied to sequential codes; in this thesis we extend it to parallel programs. Evaluating scalability or new micro-architectures is 25× faster with codelets than with full application executions. Codelets predict the execution time with an accuracy of 94% and find local optimizations that outperform the best global optimization by up to 1.06×.
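The accounting behind codelet-based prediction can be pictured as follows: approximate the whole application's running time as the sum of each codelet's standalone replay time weighted by its invocation count. The names, numbers, and the simple additive model below are our illustration, not the thesis's actual methodology:

```python
# Toy model of codelet-based time prediction: the application's running
# time is approximated as the sum of each codelet's standalone replay
# time weighted by how often the application invokes it.
# All names and numbers are hypothetical.
codelets = {
    "stencil_loop":   {"replay_time_s": 0.012, "invocations": 500},
    "reduction_loop": {"replay_time_s": 0.004, "invocations": 1200},
    "copy_kernel":    {"replay_time_s": 0.001, "invocations": 3000},
}

predicted = sum(c["replay_time_s"] * c["invocations"] for c in codelets.values())
measured = 14.2  # hypothetical wall-clock time of the full application (s)

accuracy = 1 - abs(predicted - measured) / measured
print(f"predicted {predicted:.1f}s vs measured {measured:.1f}s "
      f"({accuracy:.0%} accuracy)")
```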
Colombet-Madinier, Isabelle. "Aspects méthodologiques de la prédiction du risque cardiovasculaire : apports de l'apprentissage automatique". Paris 6, 2002. http://www.theses.fr/2002PA066083.
Pełny tekst źródłaHmamouche, Youssef. "Prédiction des séries temporelles larges". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0480.
Pełny tekst źródłaNowadays, storage and data processing systems are supposed to store and process large time series. As the number of variables observed increases very rapidly, their prediction becomes more and more complicated, and the use of all the variables poses problems for classical prediction models.Univariate prediction models are among the first models of prediction. To improve these models, the use of multiple variables has become common. Thus, multivariate models and become more and more used because they consider more information.With the increase of data related to each other, the application of multivariate models is also questionable. Because the use of all existing information does not necessarily lead to the best predictions. Therefore, the challenge in this situation is to find the most relevant factors among all available data relative to a target variable.In this thesis, we study this problem by presenting a detailed analysis of the proposed approaches in the literature. We address the problem of prediction and size reduction of massive data. We also discuss these approaches in the context of Big Data.The proposed approaches show promising and very competitive results compared to well-known algorithms, and lead to an improvement in the accuracy of the predictions on the data used.Then, we present our contributions, and propose a complete methodology for the prediction of wide time series. We also extend this methodology to big data via distributed computing and parallelism with an implementation of the prediction process proposed in the Hadoop / Spark environment
Herry, Sébastien. "Détection automatique de langue par discrimination d'experts". Paris 6, 2007. http://www.theses.fr/2007PA066101.
The purpose of the work presented in this thesis is to automatically detect the language of an audio stream. To this end, we propose a model which, like a bilingual expert, discriminates between pairs of languages using only acoustic information. The system must satisfy several constraints: operate in real time, use databases without phonetic information, and allow a new language to be added without retraining the whole model. We first built an automatic language detection system derived from the state of the art; its results serve as the reference for the rest of the thesis and are compared with those of the proposed model. We then propose a set of discriminators, one per language pair, based on neural networks, processing the whole duration of the speech segment. The outputs of these discriminators are fused to produce the detection; this model is patented. We studied in more detail the influence of various parameters, such as the number of speakers, intra- and inter-corpus variation, and robustness. Next, we compared the proposed discrimination-based modelling with autoregressive and predictive modelling. The system was tested through our participation in the international evaluation campaign organised by NIST in December 2005, in which 17 international teams took part. Following this campaign, we proposed several improvements: normalisation of the database, a modification of the speaker database used for training only, and score improvement with segment duration. In conclusion, the proposed system fulfils the constraints: it runs in real time and uses only acoustic information. Moreover, it is more efficient than the model derived from the state of the art, and it is robust to noise, unknown languages, and new evaluation databases.
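The architecture described, one discriminator per language pair fused into a single decision, is essentially one-vs-one classification. A minimal Python sketch of the fusion step by majority vote (the language set and scores are invented for illustration):

```python
from collections import Counter
from itertools import combinations

LANGUAGES = ["en", "fr", "de"]  # illustrative subset, not the thesis corpus

def detect_language(pair_scores: dict) -> str:
    """Fuse pairwise discriminator outputs by majority vote.
    pair_scores[(a, b)] > 0.5 means the (a, b) expert votes for a."""
    votes = Counter()
    for a, b in combinations(LANGUAGES, 2):
        winner = a if pair_scores[(a, b)] > 0.5 else b
        votes[winner] += 1
    return votes.most_common(1)[0][0]

# Hypothetical scores from three pairwise neural discriminators.
scores = {("en", "fr"): 0.3, ("en", "de"): 0.4, ("fr", "de"): 0.8}
print(detect_language(scores))  # 'fr' wins two of three pairwise votes
```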
Kashnikov, Yuriy. "Une approche holistique pour la prédiction des optimisations du compilateur par apprentissage automatique". Versailles-St Quentin en Yvelines, 2013. http://www.theses.fr/2013VERS0047.
Effective compiler optimizations can greatly improve application performance. These optimizations are numerous and can be applied in any order. Compilers select them using heuristic-driven solutions that may degrade program performance, so developers resort to a tedious manual search for the best optimizations. The combinatorial search space makes this effort intractable, and one can easily fall into a local minimum and miss the best combination. This thesis develops a holistic approach to improving application performance with compiler optimizations and machine learning. A combination of static loop analysis and statistical learning is used to analyze a large corpus of loops and reveal good potential for compiler optimizations. Milepost GCC, a machine-learning-based compiler, is applied to optimize benchmarks and an industrial database application; it uses function-level static features and classification algorithms to predict a good sequence of optimizations. While Milepost GCC can mispredict the best optimizations, in general it obtains considerable speedups and outperforms state-of-the-art compiler heuristics. The culmination of this thesis is the ULM meta-optimization framework. ULM characterizes applications at different levels with static code features and hardware performance counters, and finds the most important combination of program features. By selecting among three classification algorithms and tuning their parameters, ULM builds a sophisticated predictor that can outperform existing solutions. As a result, the ULM framework correctly predicted the best optimization sequence in 92% of cases.
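Milepost-style prediction can be pictured as a classifier that maps static program features to a choice of optimization sequence. The scikit-learn sketch below is schematic: the features, labels, and data are invented and do not correspond to Milepost GCC's real feature vector or flag sets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Invented static features per function (e.g. instruction count, loop
# depth, branch density, memory-op ratio); not Milepost's real features.
X = rng.normal(size=(300, 4))
# Labels: index of the best optimization sequence found by prior search,
# e.g. 0 = "-O2", 1 = "-O3", 2 = "-O3 -funroll-loops" (illustrative).
y = (X[:, 1] > 0).astype(int) + (X[:, 2] > 1).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:250], y[:250])
print("held-out accuracy:", clf.score(X[250:], y[250:]))
```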
Hue, Martial. "Méthodes à noyau pour l'annotation automatique et la prédiction d'interaction de structures de protéine". Paris 7, 2011. http://www.theses.fr/2011PA077151.
As large numbers of protein 3D structures are now routinely solved, there is a need for computational tools to annotate protein structures automatically. In this thesis, we investigate several machine-learning approaches for this purpose, based on the popular support vector machine (SVM) algorithm. The SVM offers several ways to cope with the complexity of protein structures and their interactions; we address both issues by investigating new positive-definite kernels. First, a kernel function for the annotation of protein structures is devised, based on a similarity measure called MAMMOTH. Classification tasks corresponding to Enzyme Classification (EC), Structural Classification of Proteins (SCOP), and Gene Ontology (GO) annotation show that the MAMMOTH kernel significantly outperforms other choices of kernels for protein structures and classifiers. Second, we design a kernel for binary supervised prediction of objects with a specific structure, namely pairs of general objects; the inference of missing edges in a protein-protein interaction network can be cast in this setting. Our results on three benchmarks of interaction between protein structures suggest that the Metric Learning Pairwise Kernel (MLPK), in combination with the MAMMOTH kernel, yields the best performance. Lastly, we introduce a new and efficient learning method for the supervised prediction of protein interaction. This pairwise kernel method is motivated by two previous methods, the Tensor Product Pairwise Kernel (TPPK) and the local model; the connection between the approaches is made explicit, and the two methods are formulated in a new common framework that yields a natural generalization by interpolation.
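The Tensor Product Pairwise Kernel (TPPK) mentioned above has a compact closed form: a pair (a, b) is compared to a pair (c, d) as K((a,b),(c,d)) = K(a,c)·K(b,d) + K(a,d)·K(b,c), which is symmetric in the order within each pair. A direct Python sketch, with a plain RBF kernel standing in for the MAMMOTH similarity used in the thesis:

```python
import numpy as np

def rbf(x: np.ndarray, y: np.ndarray, gamma: float = 0.5) -> float:
    """Stand-in base kernel between two protein feature vectors.
    The thesis uses the MAMMOTH structural similarity instead."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def tppk(a, b, c, d, base=rbf) -> float:
    """Tensor Product Pairwise Kernel: compares the pair (a, b)
    to the pair (c, d), symmetric in the order within each pair."""
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

p1, p2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
q1, q2 = np.array([1.0, 0.1]), np.array([0.1, 1.0])
print(tppk(p1, p2, q1, q2))  # high: the pairs are similar
print(tppk(p1, p2, q2, q1))  # identical value, by symmetry
```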
Grenet, Ingrid. "De l’utilisation des données publiques pour la prédiction de la toxicité des produits chimiques". Thesis, Université Côte d'Azur (ComUE), 2019. http://www.theses.fr/2019AZUR4050.
Currently, chemical safety assessment mostly relies on results obtained from in vivo studies performed on laboratory animals. However, these studies are costly in terms of time, money, and animals used, and are therefore not suited to the evaluation of thousands of compounds. In order to rapidly screen compounds for their potential toxicity and prioritize them for further testing, alternative solutions are envisioned, such as in vitro assays and computational predictive models. The objective of this thesis is to evaluate how the public data from ToxCast and ToxRefDB allow the construction of this type of model to predict the in vivo effects induced by compounds, based only on their chemical structure. To do so, after data pre-processing, we first focus on predicting in vitro bioactivity from chemical structure, and then on predicting in vivo effects from in vitro bioactivity data. For in vitro bioactivity prediction, we build and test various models based on descriptors of the compounds' chemical structure. Since the learning data are highly imbalanced in favor of non-toxic compounds, we test a data-augmentation technique and show that it improves the models' performance. We also perform a large-scale study to predict hundreds of in vitro assays from ToxCast and show that the stacked generalization ensemble method leads to reliable models when used within their applicability domain. For in vivo effects prediction, we evaluate the link between results from in vitro assays targeting pathways known to induce endocrine effects and in vivo effects observed in endocrine organs during long-term studies. We highlight that, unexpectedly, these assays are not predictive of the in vivo effects, which raises the crucial question of the relevance of in vitro assays. We thus hypothesize that the selection of assays able to predict in vivo effects should be based on complementary information such as, in particular, mechanistic data.
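Stacked generalization, the ensemble method the thesis found most reliable, trains a meta-learner on base learners' cross-validated predictions. A schematic scikit-learn sketch (the data and estimator choices are illustrative, not the thesis configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for "chemical descriptors -> active/inactive in one in vitro
# assay", with the class imbalance the thesis reports (mostly inactive).
X, y = make_classification(n_samples=500, n_features=30,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner on out-of-fold predictions
    cv=5,
)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```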
Books on the topic "Prédiction automatique"
Coggeshall, Stephen, ed. Foundations of predictive analytics. Boca Raton: CRC Press, 2012.
Geyer, Tobias. Model Predictive Control of High Power Converters and Industrial Drives. John Wiley & Sons, 2016.
Book chapters on the topic "Prédiction automatique"
AMRAOUI, Asma, and Badr BENMAMMAR. "Optimisation des réseaux à l’aide des techniques de l’intelligence artificielle". In Gestion et contrôle intelligents des réseaux, 71–94. ISTE Group, 2020. http://dx.doi.org/10.51926/iste.9008.ch3.
Conference abstracts on the topic "Prédiction automatique"
Ferreira, Sébastien, Jérôme Farinas, Julien Pinquier, and Stéphane Rabant. "Prédiction a priori de la qualité de la transcription automatique de la parole bruitée". In XXXIIe Journées d’Études sur la Parole. ISCA, 2018. http://dx.doi.org/10.21437/jep.2018-29.
Reports on the topic "Prédiction automatique"
Jacob, Steve, and Sébastien Brousseau. L’IA dans le secteur public : cas d’utilisation et enjeux éthiques. Observatoire international sur les impacts sociétaux de l'IA et du numérique, May 2024. http://dx.doi.org/10.61737/fcxm4981.