A selection of scientific literature on the topic "Probabilités – Prévision"
Journal articles on the topic "Probabilités – Prévision":
Galbraith, John W. "Les progrès dans les prévisions : météorologie et économique." Articles 81, no. 4 (April 12, 2007): 559–93. http://dx.doi.org/10.7202/014910ar.
Moulin, Lætitia, Alain Abonnel, Damien Puygrenier, Audrey Valéry, and Rémy Garçon. "Prévision hydrométéorologique opérationnelle à EDF-DTG – Progrès récents et état des lieux en 2018." La Houille Blanche, no. 2 (April 2019): 44–54. http://dx.doi.org/10.1051/lhb/2019014.
Celie, Sabrina, Guillaume Bontron, David Ouf, and Evelyne Pont. "Apport de l'expertise dans la prévision hydro-météorologique opérationnelle." La Houille Blanche, no. 2 (April 2019): 55–62. http://dx.doi.org/10.1051/lhb/2019015.
Viatgé, Julie, Lionel Berthet, Renaud Marty, François Bourgin, Olivier Piotte, Maria-Helena Ramos, and Charles Perrin. "Vers une production en temps réel d'intervalles prédictifs associés aux prévisions de crue dans Vigicrues en France." La Houille Blanche, no. 2 (April 2019): 63–71. http://dx.doi.org/10.1051/lhb/2019016.
Demargne, Julie, Pierre Javelle, Didier Organde, Léa Garandeau, and Bruno Janet. "Intégration des prévisions immédiates de pluie à haute-résolution pour une meilleure anticipation des crues soudaines." La Houille Blanche, no. 3-4 (October 2019): 13–21. http://dx.doi.org/10.1051/lhb/2019023.
Loumagne, C., J. J. Vidal, C. Feliu, J. P. Torterotot, and P. A. Roche. "Procédures de décision multimodèle pour une prévision des crues en temps réel: Application au bassin supérieur de la Garonne." Revue des sciences de l'eau 8, no. 4 (April 12, 2005): 539–61. http://dx.doi.org/10.7202/705237ar.
Rousseau, Daniel. "Un critère général d'évaluation des prévisions déterministes et probabilistes - Application aux prévisions locales diffusées sur Internet." La Météorologie 8, no. 35 (2001): 48. http://dx.doi.org/10.4267/2042/36194.
Gautrin, J. F., and B. Verdon. "Modèle de prévision et de simulation de l’aide sociale au Québec." Articles 50, no. 1 (July 9, 2009): 3–26. http://dx.doi.org/10.7202/803030ar.
Breton, Éléanor. "Les usagers professionnels des prévisions météorologiques probabilistes : des rapports à l'incertitude variés." La Météorologie, no. 123 (2023): 032. http://dx.doi.org/10.37053/lameteorologie-2023-0092.
Thirel, Guillaume. "Assimilation de débits observés pour des prévisions hydrologiques probabilistes sur la France." La Houille Blanche, no. 2 (April 2011): 87–90. http://dx.doi.org/10.1051/lhb/2011025.
Dissertations on the topic "Probabilités – Prévision":
Houdant, Benoît. "Contribution à l'amélioration de la prévision hydrométéorologique opérationnelle : pour l'usage des probabilités dans la communication entre acteurs." Phd thesis, ENGREF (AgroParisTech), 2004. http://pastel.archives-ouvertes.fr/pastel-00000925.
Akil, Nicolas. "Etude des incertitudes des modèles neuronaux sur la prévision hydrogéologique. Application à des bassins versants de typologies différentes." Electronic Thesis or Diss., IMT Mines Alès, 2021. http://www.theses.fr/2021EMAL0005.
Floods and droughts are the two main natural risks in France and require special attention. As climate change generates increasingly frequent extreme phenomena, modeling these risks is an essential element of water resource management. Currently, discharges and water heights are mainly predicted with physically based or conceptual models. Although efficient and necessary, the calibration and implementation of these models require long and costly studies. Hydrogeological forecasting models often rely on data from incomplete or poorly dimensioned measurement networks, and the behavior of the study basins is in most cases difficult to understand; this also makes it difficult to estimate the uncertainties associated with hydrogeological modeling. In this context, this thesis, supported by IMT Mines Alès and financed by the company aQuasys and the ANRT, aims at developing models based on the systemic paradigm. These models require only basic knowledge of the physical characteristics of the studied basin and can be calibrated from input and output information alone (rainfall and discharge/height). The most widely used such models in the environmental field are neural networks, which are used in this project. The thesis addresses three main goals: 1. development of a model design method adapted to different variables (surface water discharge/height, groundwater height) and to very different types of basins (watersheds or hydrogeological basins); 2. evaluation of the uncertainties associated with these models in relation to the types of basins targeted; 3. reduction of these uncertainties. Several basins are used to address these issues: the Blavet basin in Brittany and the basin of the Southern and Central Champagne Chalk aquifer.
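The input-output calibration idea in this abstract can be illustrated with a toy sketch: a linear model fit by least squares on lagged rainfall stands in for the neural models of the thesis. All data and coefficients below are synthetic, illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic rainfall series and a discharge that responds to the last
# three days of rain (a linear stand-in for the thesis's neural models,
# calibrated from input/output data alone).
rain = rng.gamma(2.0, 2.0, 1000)
discharge = 0.5 * rain[2:] + 0.3 * rain[1:-1] + 0.2 * rain[:-2]
discharge += rng.normal(0, 0.1, len(discharge))

# Design matrix of lagged rainfall; calibrate by least squares.
X = np.column_stack([rain[2:], rain[1:-1], rain[:-2]])
coef, *_ = np.linalg.lstsq(X, discharge, rcond=None)
print(np.round(coef, 2))   # approximately [0.5, 0.3, 0.2]
```

A real model would replace the linear map with a neural network, but the calibration principle — recover the basin's response from rainfall and discharge records only — is the same.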
Carraro, Laurent. "Questions de prédiction pour le mouvement brownien et le processus de Wiener à plusieurs paramètres." Lyon 1, 1985. http://www.theses.fr/1985LYO11660.
Roulin, Emmanuel. "Medium-range probabilistic river streamflow predictions." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209270.
The research began by analyzing medium-range meteorological predictions (up to 10-15 days) and their use in hydrological forecasting. Precipitation forecasts from the ensemble prediction system of the European Centre for Medium-Range Weather Forecasts (ECMWF) were used. A semi-distributed hydrological model transformed these precipitation forecasts into ensemble streamflow predictions, whose performance was analyzed in probabilistic terms. A simple decision model also made it possible to compare the relative economic value of hydrological ensemble predictions and some deterministic alternatives.
Numerical weather prediction models are imperfect. The ensemble forecasts are therefore affected by errors implying the presence of biases and the unreliability of probabilities derived from the ensembles. By comparing the results of these predictions to the corresponding observed data, a statistical model for the correction of forecasts, known as post-processing, has been adapted and shown to improve the performance of probabilistic forecasts of precipitation. This approach is based on retrospective forecasts made by the ECMWF for the past twenty years, providing a sufficient statistical sample.
Besides the errors related to meteorological forcing, hydrological forecasts also display errors related to initial conditions and to modeling errors (errors in the structure of the hydrological model and in the parameter values). The last stage of the research was therefore to investigate, using simple models, the impact of these different sources of error on the quality of hydrological predictions and to explore the possibility of using hydrological reforecasts for post-processing, themselves based on retrospective precipitation forecasts.
River streamflow forecasting is traditionally based on real-time measurements of precipitation over the catchments and of discharge at the outlet and upstream. These data are processed in mathematical models of varying complexity and yield accurate forecasts for short lead times. To extend the forecast horizon to a few days, so as to be able to issue early warnings, meteorological forecasts must be taken into account. However, their dynamics are by nature sensitive to errors in the initial conditions, so for appropriate risk management the forecasts must be considered in probabilistic terms. Currently, ensemble predictions are produced by running a numerical weather prediction model with perturbed initial conditions, which makes it possible to assess the uncertainty.
Doctorate in Sciences
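The post-processing step described in this abstract — fitting a statistical correction on a reforecast archive and applying it to new forecasts — can be sketched minimally. The linear correction and synthetic data below are illustrative assumptions, not the statistical model actually used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reforecast archive: ensemble-mean precipitation forecasts
# with a systematic wet bias relative to observations.
obs = rng.gamma(shape=2.0, scale=3.0, size=500)        # observed precipitation
ens_mean = 1.3 * obs + 1.0 + rng.normal(0, 1.0, 500)   # biased forecasts

# Fit a simple linear correction obs ~ a*forecast + b on the archive.
a, b = np.polyfit(ens_mean, obs, deg=1)

# Apply the correction to a new raw forecast.
raw_forecast = 10.0
corrected = a * raw_forecast + b
print(f"raw={raw_forecast:.1f} mm, corrected={corrected:.2f} mm")
```

The archive plays the role of the twenty years of ECMWF retrospective forecasts mentioned above: a large sample over which the forecast-observation relationship can be estimated.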
Horrigue, Walid. "Prévision non paramétrique dans les modèles de censure via l'estimation du quantile conditionnel en dimension infinie." Thesis, Littoral, 2012. http://www.theses.fr/2012DUNK0511.
In this thesis, we study some asymptotic properties of conditional functional parameters in a nonparametric setting, when the explanatory variable takes its values in an infinite-dimensional space. In this setting, we consider estimators of the usual functional parameters, such as the conditional law, the conditional probability density and the conditional quantile. We are essentially interested in the problem of forecasting in nonparametric conditional models when the data are functional random variables. First, we propose an estimator of the conditional quantile and establish its uniform strong convergence, with rates, over a compact subset. Following the convention in biomedical studies, we consider an identically distributed sequence {Ti, i ≥ 1} with density f, right-censored by a random sequence {Ci, i ≥ 1}, also assumed independent and identically distributed and independent of {Ti, i ≥ 1}. Our study focuses on dependent data, and the covariate X takes its values in an infinite-dimensional space. In a second step, we establish the asymptotic normality of the kernel estimator of the conditional quantile, under an α-mixing assumption and concentration properties of the probability measure of the functional regressors on small balls. Many applications to particular cases are also given.
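A minimal sketch of a kernel conditional quantile estimator of the kind discussed above, restricted to a scalar covariate and uncensored data for simplicity (the thesis treats functional covariates and right-censored responses). The Gaussian kernel, bandwidth and data are illustrative assumptions:

```python
import numpy as np

def conditional_quantile(x0, X, Y, h, alpha):
    """Kernel estimate of the alpha-quantile of Y given X = x0:
    invert a Nadaraya-Watson weighted empirical CDF."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # Gaussian kernel weights
    w /= w.sum()
    order = np.argsort(Y)
    cdf = np.cumsum(w[order])                # weighted empirical CDF of Y
    idx = np.searchsorted(cdf, alpha)
    return Y[order][min(idx, len(Y) - 1)]

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 2000)
Y = 2.0 * X + rng.normal(0, 0.1, 2000)       # median of Y given X=x is 2x

q = conditional_quantile(0.5, X, Y, h=0.05, alpha=0.5)
print(round(q, 2))                            # close to the true median 1.0
```

The functional-covariate case replaces the scalar distance (X - x0) by a semi-metric between curves; the inversion of the weighted CDF is unchanged.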
Candille, Guillem. "Validation des systèmes de prévisions météorologiques probabilistes." Paris 6, 2003. http://www.theses.fr/2003PA066511.
Smadi, Charline. "Modèles probabilistes de populations : branchement avec catastrophes et signature génétique de la sélection." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1035/document.
This thesis is devoted to the probabilistic study of the demographic and genetic responses of a population to pointwise events. In a first part, we are interested in the effect of random catastrophes, which kill a fraction of the population and occur repeatedly, in populations modeled by branching processes. First we construct a new class of processes, the continuous-state branching processes with catastrophes, as the unique strong solution of a stochastic differential equation. Then we describe the conditions for population extinction. Finally, in the case of almost sure absorption, we state the asymptotic rate of absorption. This last result has a direct application to the determination of the number of infected cells in a model of cell infection by parasites: in this model, the parasite population size in a cell lineage follows a branching process, and catastrophes correspond to the sharing of the parasites between the two daughter cells at division. In a second part, we focus on the genetic signature of selective sweeps. The genetic material of an individual (mostly) determines its phenotype, in particular quantitative traits such as birth and intrinsic death rates and interactions with other individuals. But the genotype is not sufficient to determine "adaptation" in a given environment: for example, the life expectancy of a human being depends strongly on his environment (access to drinking water, to medical infrastructure, ...). The eco-evolutionary approach aims at taking the environment into account by modeling interactions between individuals. When a mutation or an environmental modification occurs, some alleles can invade the population to the detriment of other alleles: this phenomenon is called a selective sweep, and it leaves signatures in the neutral diversity in the vicinity of the locus where the allele fixates. Indeed, the sweeping allele "hitchhikes" alleles situated on loci linked to the selected locus.
The only possibility for an allele to escape this "hitchhiking" is the occurrence of a genetic recombination, which associates it with another haplotype in the population. We quantify the signature left by such a selective sweep on the neutral diversity. We first focus on the variation of neutral proportions at loci partially linked with the selected locus, under different selective sweep scenarios. We prove that these scenarios leave distinct signatures on the neutral diversity, which can make it possible to discriminate between them. Then we focus on the linked genealogies of two neutral alleles situated in the vicinity of the selected locus. In particular, we quantify, under different selective sweep scenarios, some statistics that are currently used to detect recent selective events in population genetic data. In these works the population evolves as a multitype birth-and-death process with competition. While such a model is more realistic than branching processes, the non-linearity caused by competition makes its study more complex.
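A discrete-time toy simulation can convey the interplay between branching growth and catastrophes described in the first part of this abstract (the thesis works with continuous-state processes defined by a stochastic differential equation; all parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(n0, steps, growth=1.2, cat_rate=0.3, cat_frac=0.5):
    """Discrete-time sketch of a branching process with catastrophes:
    Poisson reproduction with mean `growth` per individual, and at each
    step a catastrophe occurs with probability `cat_rate`, killing each
    individual independently with probability `cat_frac`."""
    n = n0
    for _ in range(steps):
        if n == 0:
            break
        n = rng.poisson(growth * n)              # branching step
        if rng.random() < cat_rate:              # catastrophe step
            n = rng.binomial(n, 1.0 - cat_frac)
    return n

# The mean per-step growth factor is 1.2 * (1 - 0.3*0.5) = 1.02 > 1, yet
# repeated catastrophes still drive the population to extinction in most runs.
extinct = sum(simulate(2, 200) == 0 for _ in range(500))
print(f"extinct in {extinct}/500 runs")
```

This mirrors the point made in the abstract: conditions for extinction depend on the joint effect of the branching law and the catastrophe mechanism, not on the mean growth rate alone.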
Cifonelli, Antonio. "Probabilistic exponential smoothing for explainable AI in the supply chain domain." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR41.
The key role that AI could play in improving business operations has long been known, but the penetration of this new technology has encountered obstacles within companies, in particular implementation costs: on average, it takes 2.8 years from supplier selection to full deployment of a new solution. Three fundamental points must be considered when developing a new model: misalignment of expectations, the need for understanding and explanation, and performance and reliability issues. For models dealing with supply chain data, there are five additional specific issues.
- Managing uncertainty. Precision is not everything: decision-makers are looking for a way to minimise the risk associated with each decision they have to make in the presence of uncertainty. Obtaining an exact forecast is advantageous; obtaining a fairly accurate forecast and calculating its bounds is realistic and appropriate.
- Handling integer and positive data. Most items sold in retail cannot be sold in subunits. This simple aspect of selling results in a constraint that must be satisfied by the result of any given method or model: the result must be a positive integer.
- Observability. Customer demand cannot be measured directly; only sales can be recorded and used as a proxy.
- Scarcity and parsimony. Sales are a discontinuous quantity: by recording sales by day, an entire year is condensed into just 365 points, and a large proportion of them will be zero.
- Just-in-time optimisation. Forecasting is a key function, but it is only one element in a chain of processes supporting decision-making. Time is a precious resource that cannot be devoted entirely to a single function.
The decision-making process and the associated adaptations must therefore be carried out within a limited time frame, and flexibly enough to be interrupted and restarted if necessary, in order to incorporate unexpected events or adjustments. This thesis fits into this context and results from work carried out at Lokad, a Paris-based software company aiming to bridge the gap between technology and the supply chain. The doctoral research was funded by Lokad in collaboration with the ANRT under a CIFRE contract. The proposed work aims to be a good compromise between new technologies and business expectations, addressing the various aspects presented above. We started forecasting with the exponential smoothing family, which is easy to implement and extremely fast to run. As these models are widely used in industry, they have already won the confidence of users; moreover, they are easy to understand and to explain to a non-specialist audience. By exploiting more advanced AI techniques, some limitations of these models can be overcome. Cross-learning proved to be a relevant approach for extrapolating useful information when very few data were available. Since the common Gaussian assumption is not suitable for discrete sales data, we proposed using a model associated with either a Poisson distribution or a Negative Binomial one, which better corresponds to the nature of the phenomena we seek to model and predict. We also proposed using Monte Carlo simulations to deal with uncertainty: a number of scenarios are generated, sampled and modelled using a distribution, from which confidence intervals of different, adapted sizes can be deduced. Using real company data, we compared our approach with state-of-the-art methods such as the DeepAR model, DeepSSMs and N-Beats, and derived a new model based on the Holt-Winters method. These models were implemented in Lokad's workflow.
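The combination sketched in this abstract — an exponential smoothing point forecast wrapped in a Negative Binomial Monte Carlo step — can be illustrated as follows. The smoothing constant, dispersion parameter and data are illustrative assumptions, not Lokad's actual model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily sales history: integer-valued, sparse, many zeros.
sales = rng.negative_binomial(n=2, p=0.4, size=365)

# Simple exponential smoothing of the demand level (alpha is an
# illustrative choice, not a fitted value).
alpha, level = 0.1, float(sales[0])
for y in sales[1:]:
    level = alpha * y + (1 - alpha) * level

# Monte Carlo step: sample integer demand scenarios from a Negative
# Binomial whose mean matches the smoothed level (dispersion r assumed);
# for NB(r, p), the mean is r*(1-p)/p, so p = r / (r + level).
r = 2.0
p = r / (r + level)
scenarios = rng.negative_binomial(n=r, p=p, size=10_000)

# Prediction interval read off the scenario distribution.
lo, hi = np.percentile(scenarios, [5, 95])
print(f"point={level:.2f}, 90% interval=[{lo:.0f}, {hi:.0f}]")
```

The Negative Binomial respects the integer, non-negative nature of sales, and the sampled scenarios directly yield intervals of any coverage level, addressing the uncertainty-management and integer-data issues listed above.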
Yang, Gen. "Modèles prudents en apprentissage statistique supervisé." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2263/document.
In some areas of supervised machine learning (e.g. medical diagnostics, computer vision), predictive models are evaluated not only on their accuracy but also on their ability to provide a more reliable representation of the data and of the induced knowledge, in order to allow cautious decision making. This is the problem studied in this thesis. Specifically, we examined two existing approaches from the literature for making models and predictions more cautious and reliable: the framework of imprecise probabilities and that of cost-sensitive learning. Few existing studies have attempted to bridge these two frameworks, owing to both theoretical and practical problems; our contributions clarify and resolve these problems. Theoretically, few existing studies have addressed how to quantify the different classification errors when set-valued predictions are produced and the costs of mistakes are unequal (in terms of consequences). Our first contribution was to establish general properties and guidelines for quantifying misclassification costs for set-valued predictions. These properties led us to derive a general formula, which we call the generalized discounted cost (GDC), that allows classifiers to be compared whatever the form of their predictions (singleton or set-valued), in the light of a risk-aversion parameter. Practically, most classifiers based on imprecise probabilities fail to integrate generic misclassification costs efficiently, because the computational complexity increases by an order of magnitude (or more) when non-unitary costs are used. This problem led to our second contribution: the implementation of a classifier that can handle the probability intervals produced by imprecise probabilities, together with generic error costs, with the same order of complexity as when standard probabilities and unitary costs are used.
This is achieved with a binary decomposition technique, nested dichotomies, whose properties and prerequisites have been studied in detail. In particular, we show that nested dichotomies are applicable to all imprecise probabilistic models and that they reduce the imprecision level of imprecise models without loss of predictive power. Various experiments were conducted throughout the thesis to illustrate and support our contributions. We characterized the behavior of the GDC using ordinal data sets. These experiments highlighted the differences between a model producing indeterminate predictions within the standard probability framework and a model based on imprecise probabilities. The latter is generally more capable because it distinguishes two sources of uncertainty (ambiguity and lack of information), although the combined use of the two types of models is also of particular interest, as it can help the decision-maker to improve the data quality or the classifiers. In addition, experiments conducted on a wide variety of data sets showed that the use of nested dichotomies significantly improves the predictive power of an indeterminate model with generic costs.
Atger, Frédéric. "Validation et étude de quelques propriétés de systèmes de prévision météorologique ensemblistes." Toulouse 3, 2003. http://www.theses.fr/2003TOU30051.
Probabilistic meteorological forecasts based on ensemble prediction systems are evaluated. The reliability and resolution components of the decomposition of the Brier score are used to quantify performance. Probabilistic forecasts based on operational ensembles are compared, through a statistical scheme, to those obtained from a single model run. "Poor man's ensembles", consisting of a few deterministic forecasts run at different operational centres, are evaluated as well. The conditions for a realistic estimation of the performance of ensemble-based probabilistic forecasts are also investigated. The spatial and interannual variability of the reliability calls for a strong stratification of the data, which is not always possible with the limited samples available. Another essential issue is the categorization of forecast probabilities, required to carry out the decomposition of the Brier score.
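The reliability/resolution decomposition of the Brier score used in this kind of evaluation can be computed as sketched below (the standard Murphy decomposition over probability bins; the synthetic, perfectly reliable forecasts are an illustrative assumption):

```python
import numpy as np

def brier_decomposition(p, o, bins=10):
    """Murphy decomposition of the Brier score over probability bins.
    Returns (reliability, resolution, uncertainty); lower reliability
    and higher resolution are better."""
    p, o = np.asarray(p, float), np.asarray(o, float)
    obar = o.mean()
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges) - 1, 0, bins - 1)
    rel = res = 0.0
    for k in range(bins):
        mask = idx == k
        if mask.any():
            nk = mask.sum()
            pk, ok = p[mask].mean(), o[mask].mean()
            rel += nk * (pk - ok) ** 2          # forecast vs observed frequency
            res += nk * (ok - obar) ** 2        # bins that differ from climatology
    n = len(p)
    return rel / n, res / n, obar * (1 - obar)

rng = np.random.default_rng(4)
# Synthetic reliable forecasts: the event occurs with the forecast probability.
p = rng.uniform(0, 1, 20_000)
o = (rng.uniform(0, 1, 20_000) < p).astype(float)
rel, res, unc = brier_decomposition(p, o)
bs = np.mean((p - o) ** 2)
print(f"REL={rel:.4f} RES={res:.4f} UNC={unc:.4f} BS={bs:.4f}")
```

Up to small within-bin terms, BS ≈ REL - RES + UNC, which is why the binning (categorization) of forecast probabilities mentioned in the abstract is required to carry out the decomposition.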
Books on the topic "Probabilités – Prévision":
Kaufman, Leonard. Finding groups in data: An introduction to cluster analysis. Hoboken, N.J: Wiley, 2005.
Kaufman, Leonard. Finding groups in data: An introduction to cluster analysis. New York: Wiley, 1990.
Box, George E. P. Time series analysis: Forecasting and control. 4th ed. Hoboken, N.J: John Wiley, 2008.
Box, George E. P. Time series analysis: Forecasting and control. 3rd ed. Englewood Cliffs, N.J: Prentice Hall, 1994.
Taleb, Nassim. The black swan: The impact of the highly improbable. New York: Random House, 2007.
Taleb, Nassim. The black swan: The impact of the highly improbable. 2nd ed. New York: Random House Trade Paperbacks, 2010.
Taleb, Nassim. The black swan: The impact of the highly improbable. New York, NY: Random House, 2005.
Cohler, Bertram J., and E. James Anthony. The Invulnerable child. New York: Guilford Press, 1987.
Book chapters on the topic "Probabilités – Prévision":
BARBU, Vlad Stefan, Alex KARAGRIGORIOU, and Andreas MAKRIDES. "Processus semi-markoviens pour la prévision des tremblements de terre." In Méthodes et modèles statistiques pour la sismogenèse, 315–25. ISTE Group, 2023. http://dx.doi.org/10.51926/iste.9037.ch11.