Dissertations / Theses on the topic 'Prévision des séries temporelles'
Gagnon, Jean-François. "Prévision humaine de séries temporelles." Doctoral thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25243.
Boné, Romuald. "Réseaux de neurones récurrents pour la prévision de séries temporelles." Tours, 2000. http://www.theses.fr/2000TOUR4003.
Cherif, Aymen. "Réseaux de neurones, SVM et approches locales pour la prévision de séries temporelles." Thesis, Tours, 2013. http://www.theses.fr/2013TOUR4003/document.
Time series forecasting has been a widely discussed issue for many years. Researchers from various disciplines have addressed it in several application areas: finance, medicine, transportation, etc. In this thesis, we focus on machine learning methods: neural networks and SVMs. We are also interested in meta-methods that boost predictor performance, and more specifically in local models. In a divide-and-conquer strategy, local models first cluster the data set, then assign a different predictor to each resulting subset. We present a new algorithm that allows recurrent neural networks to be used as local predictors, and we propose two novel clustering techniques suited to local models: the first is based on Kohonen maps, the second on binary trees.
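As an illustrative sketch (not code from the thesis), the divide-and-conquer structure of local models can be reproduced with a generic clusterer and one predictor per cluster; the k-means/ridge choices and all parameters below are placeholder assumptions, since the thesis itself uses Kohonen maps, binary trees and recurrent networks:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
series = np.sin(np.arange(600) * 0.1) + 0.05 * rng.standard_normal(600)

# Build lagged windows: predict x[t] from the previous `lag` values.
lag = 8
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

# Divide: cluster the input windows.
k = 4
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Conquer: fit one local predictor per cluster.
local_models = {}
for c in range(k):
    mask = clusters.labels_ == c
    local_models[c] = Ridge().fit(X[mask], y[mask])

# Predict by routing each new window to its cluster's model.
def local_predict(window):
    c = clusters.predict(window.reshape(1, -1))[0]
    return local_models[c].predict(window.reshape(1, -1))[0]

print(local_predict(series[-lag:]))
```

Swapping KMeans for a self-organizing map and Ridge for a recurrent network recovers the architecture described in the abstract.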
Huard, Malo. "Apprentissage et prévision séquentiels : bornes uniformes pour le regret linéaire et séries temporelles hiérarchiques." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASM009.
This work presents theoretical and practical contributions to the prediction of arbitrary sequences. In this domain, forecasting takes place sequentially, at the same time as learning: at each step, the model is fitted on past data in order to predict the next observation. The goal is to make the best possible predictions, i.e., those that minimize their deviations from the observations, which are known only a posteriori. Sequential learning methods are evaluated by their regret, which measures how close a strategy is to the best possible one, known only after all the data is available. In this thesis, we extend the set of weight vectors a method is compared to in sequential linear regression. We adapt an existing algorithm, improving its theoretical guarantees so that it can be compared to any constant linear combination without restriction on the norm of its mixing weights. A second contribution extends sequential forecasting methods to the case where the forecast data are organized in a hierarchy. We test these hierarchical methods on two practical applications: household power consumption prediction and demand forecasting in e-commerce.
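For intuition about the regret criterion, here is a minimal sketch of sequential linear regression with a generic online ridge baseline (not the thesis's algorithm; the data and the regularization constant are made-up assumptions):

```python
import numpy as np

def sequential_ridge(X, y, lam=1.0):
    """Online ridge: at each step t, fit on rounds 1..t-1, then predict round t."""
    d = X.shape[1]
    A = lam * np.eye(d)               # regularized Gram matrix
    b = np.zeros(d)
    preds = []
    for x_t, y_t in zip(X, y):
        w = np.linalg.solve(A, b)     # current weight estimate
        preds.append(w @ x_t)         # predict before observing y_t
        A += np.outer(x_t, x_t)       # then update with the new observation
        b += y_t * x_t
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true + 0.1 * rng.standard_normal(200)

preds = sequential_ridge(X, y)
# Regret: cumulative loss minus that of the best fixed weights in hindsight.
best_w = np.linalg.lstsq(X, y, rcond=None)[0]
regret = np.sum((y - preds) ** 2) - np.sum((y - X @ best_w) ** 2)
print(round(float(regret), 2))
```

The regret stays small relative to the 200 rounds because the online predictions converge quickly to the best fixed comparator.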
Lefieux, Vincent. "Modèles semi-paramétriques appliqués à la prévision des séries temporelles : cas de la consommation d’électricité." Phd thesis, Rennes 2, 2007. https://theses.hal.science/tel-00179866/fr/.
Réseau de Transport d’Electricité (RTE), in charge of operating the French electricity transmission grid, needs accurate forecasts of power consumption in order to operate it correctly. The forecasts used every day result from a model combining a nonlinear parametric regression and a SARIMA model. In order to obtain an adaptive forecasting model, nonparametric forecasting methods have already been tested without real success. In particular, it is known that a nonparametric predictor behaves badly with a great number of explanatory variables, which is commonly called the curse of dimensionality. Recently, semiparametric methods which improve on the pure nonparametric approach have been proposed to estimate a regression function. Based on the concept of "dimension reduction", one of those methods (called MAVE: Moving Average -conditional- Variance Estimate) can be applied to time series. We study empirically its effectiveness in predicting the future values of an autoregressive time series. We then adapt this method, from a practical point of view, to forecast power consumption. We propose a partially linear semiparametric model, based on the MAVE method, which makes it possible to take into account simultaneously the autoregressive aspect of the problem and the exogenous variables. The proposed estimation procedure is practically efficient.
Tatsa, Sylvestre. "Modélisation et prévision de la consommation horaire d'électricité au Québec : comparaison de méthodes de séries temporelles." Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30329/30329.pdf.
This work explores the dynamics of residential electricity consumption in Quebec using hourly data from January 2006 to December 2010. We estimate three standard models in time series analysis: Holt-Winters exponential smoothing, the seasonal ARIMA model (SARIMA) and the seasonal ARIMA model with exogenous variables (SARIMAX). For the latter model, we focus on the effect of climate variables (temperature, relative humidity, dew point and cloud cover). Climatic factors have a significant impact on short-term electricity consumption. The in-sample and out-of-sample predictive performance of each model is evaluated with various adjustment indicators. Three out-of-sample horizons are tested: 24 hours (one day), 72 hours (three days) and 168 hours (one week). The SARIMA model provides the best out-of-sample predictive performance at the 24-hour horizon, while the SARIMAX model performs best at the 72- and 168-hour horizons. Additional research is needed to obtain predictive models that are fully satisfactory from a methodological point of view. Keywords: modeling, electricity, Holt-Winters, SARIMA, SARIMAX.
Vroman, Philippe. "Prédiction des séries temporelles en milieu incertain : application à la prévision de ventes dans la distribution textile." Lille 1, 2000. http://www.theses.fr/2000LIL10207.
Melzi, Fateh. "Fouille de données pour l'extraction de profils d'usage et la prévision dans le domaine de l'énergie." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1123/document.
Nowadays, countries are called upon to take measures aimed at a better rationalization of electricity resources with a view to sustainable development. Smart metering solutions have been implemented and now allow a fine-grained reading of consumption. The massive spatio-temporal data collected can thus help to better understand consumption behaviors, forecast them and manage them precisely. The aim is to ensure an "intelligent" use of resources, to consume less and consume better, for example by reducing consumption peaks or by using renewable energy sources. This thesis takes place in this context and aims to develop data mining tools in order to better understand electricity consumption behaviors and to predict solar energy production, thereby enabling intelligent energy management. The first part of the thesis focuses on the classification of typical electricity consumption behaviors at the scale of a building and then of a territory. In the first case, typical daily power consumption profiles were identified based on the functional K-means algorithm and a Gaussian mixture model. On a territorial scale and in an unsupervised context, the aim is to identify typical electricity consumption profiles of residential users and to link these profiles to contextual variables and metadata collected on the users. An extension of the classical Gaussian mixture model has been proposed, which allows exogenous variables such as the type of day (Saturday, Sunday, working day, ...) to be taken into account in the classification, leading to a parsimonious model. The proposed model was compared with classical models and applied to an Irish database including both electricity consumption data and user surveys. An analysis of the results over a monthly period made it possible to extract a reduced set of user groups that are homogeneous in terms of their electricity consumption behaviors.
We have also endeavoured to quantify the regularity of users in terms of consumption, as well as the temporal evolution of their consumption behaviors during the year. These two aspects are necessary to evaluate the potential for changes in consumption behavior required by demand response policies (a shift in peak consumption, for example) set up by electricity suppliers. The second part of the thesis concerns the forecasting of solar irradiance over two time horizons: short and medium term. To this end, several approaches were developed, including autoregressive statistical approaches for modelling time series and machine learning approaches based on neural networks, random forests and support vector machines. In order to take advantage of the different models, a hybrid model combining them was proposed. An exhaustive evaluation of the different approaches was conducted on a large database covering four locations (Carpentras, Brasilia, Pamplona and Réunion Island), each characterized by a specific climate, as well as weather parameters, both measured and predicted by NWP (Numerical Weather Prediction) models. The results obtained showed that the hybrid model improves photovoltaic production forecasts for all locations.
Zuo, Jingwei. "Apprentissage de représentations et prédiction pour des séries-temporelles inter-dépendantes." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG038.
Time series are a common data type that appears in numerous real-life applications, such as financial analysis, medical diagnosis, environmental monitoring, astronomical discovery, etc. Due to their complex structure, time series raise several challenges for data processing and mining. The representation of time series plays a key role in data mining tasks and machine learning algorithms. Yet few methods consider the interrelation that may exist between different time series when building the representation. Moreover, time series mining requires considering not only the characteristics of the data in terms of complexity but also the concrete application scenarios in which the mining task is performed, in order to build task-specific representations. In this thesis, we study different time series representation approaches that can be used in various time series mining tasks while capturing the relationships among the series. We focus specifically on modeling the interrelations between different time series when building the representations, which can be the temporal relationship within each data source or the inter-variable relationship between data sources. Accordingly, we study time series collected in various application contexts and under different forms. First, considering the temporal relationship between observations, we learn from time series in a dynamic streaming context, i.e., the time series stream, in which data is continuously generated from the source. Second, for the inter-variable relationship, we study multivariate time series (MTS), with data collected from multiple sources. Finally, we study MTS in the Smart City context, where each data source is given a spatial position. The MTS then becomes a geo-located time series (GTS), for which modeling the inter-variable relationship requires more effort to incorporate the external spatial information.
Therefore, for each type of time series data collected in a distinct context, the interrelations between the observations are emphasized differently, on the temporal and/or variable axis. Apart from the data complexity arising from these interrelations, we study various machine learning tasks on time series in order to validate the learned representations. The high-level learning tasks studied in this thesis are time series classification, semi-supervised time series learning, and time series forecasting. We show how the learned representations connect with different time series learning tasks under distinct application contexts. More importantly, we conduct an interdisciplinary study of time series by leveraging real-life challenges in machine learning tasks, which improves the learning models' performance and allows more complex time series scenarios to be addressed. Concretely, for these time series learning tasks, our main research contributions are the following: (i) we propose a dynamic time series representation learning model in the streaming context, which considers both the characteristics of time series and the challenges of data streams; we claim and demonstrate that the Shapelet, a shape-based time series feature, is the best representation in such a dynamic context; (ii) we propose a semi-supervised model for representation learning in multivariate time series (MTS), where the inter-variable relationship over multiple data sources is modeled in a real-life context with limited data annotations; (iii) we design a geo-located time series (GTS) representation learning model for Smart City applications, studying specifically the traffic forecasting task, with a focus on missing-value treatment within the forecasting algorithm.
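To illustrate contribution (i): a shapelet is a short subsequence whose minimal distance to a series acts as a discriminative feature. A minimal sketch on synthetic data (the bump shape and all numbers are invented for illustration):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimal Euclidean distance between a shapelet and any
    same-length subsequence of the series (its 'match' score)."""
    m = len(shapelet)
    dists = [np.linalg.norm(series[i:i + m] - shapelet)
             for i in range(len(series) - m + 1)]
    return min(dists)

rng = np.random.default_rng(3)
bump = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)  # the shapelet: a bump

with_bump = 0.1 * rng.standard_normal(200)
with_bump[60:80] += bump                                  # class 1 contains the shape
without_bump = 0.1 * rng.standard_normal(200)             # class 0 does not

# A series containing the shape scores a much smaller distance:
print(shapelet_distance(with_bump, bump) < shapelet_distance(without_bump, bump))
```

Classifiers then threshold or learn over these distances; the streaming setting of the thesis additionally maintains the shapelet set as new data arrives.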
Gaillard, Pierre. "Contributions à l’agrégation séquentielle robuste d’experts : Travaux sur l’erreur d’approximation et la prévision en loi. Applications à la prévision pour les marchés de l’énergie." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112133/document.
We are interested in the online forecasting of an arbitrary sequence of observations. At each time step, experts provide predictions of the next observation, and we form our prediction by combining these expert forecasts. This is the setting of online robust aggregation of experts. The goal is to ensure a small cumulative regret: in other words, we want our cumulative loss not to exceed that of the best expert by too much. We look for worst-case guarantees: no stochastic assumption is made on the data to be predicted, and the sequence of observations is arbitrary. A first objective of this work is to improve prediction accuracy. We investigate several possibilities: one example is to design fully automatic procedures that can exploit the simplicity of the data whenever it is present; another relies on working on the expert set so as to improve its diversity. A second objective is to produce probabilistic predictions: we are interested in coupling the point prediction with a measure of uncertainty (i.e., interval forecasts, ...). The real-world applications of this setting are numerous, since very few assumptions are made on the data. Besides, online learning, which deals with data sequentially, is crucial for processing big data sets in real time. In this thesis, we carry out several empirical studies of energy data sets for EDF and achieve good forecasting performance.
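A minimal sketch of the exponentially weighted average forecaster, the canonical aggregation rule in this setting (the synthetic experts and the learning rate are assumptions for illustration, not the thesis's procedures):

```python
import numpy as np

def ewa_aggregate(expert_preds, targets, eta=2.0):
    """Exponentially weighted average aggregation of K experts.
    expert_preds: (T, K) matrix of expert forecasts at each step."""
    T, K = expert_preds.shape
    weights = np.full(K, 1.0 / K)
    agg, cum_loss = [], np.zeros(K)
    for t in range(T):
        agg.append(weights @ expert_preds[t])            # convex combination
        cum_loss += (expert_preds[t] - targets[t]) ** 2  # square loss per expert
        weights = np.exp(-eta * cum_loss)                # downweight bad experts
        weights /= weights.sum()
    return np.array(agg)

rng = np.random.default_rng(4)
y = np.sin(np.arange(300) * 0.05)
experts = np.stack([y + 0.05 * rng.standard_normal(300),  # good expert
                    y + 0.8 * rng.standard_normal(300),   # noisy expert
                    np.zeros(300)], axis=1)               # useless expert

agg = ewa_aggregate(experts, y)
loss = lambda p: np.mean((p - y) ** 2)
# The aggregate tracks the best expert: its loss approaches the best one's.
print(loss(agg) <= loss(experts[:, 1]) and loss(agg) <= loss(experts[:, 2]))
```

The weights concentrate on the good expert after a few rounds, which is exactly the low-regret behavior the abstract refers to.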
Agoua, Xwégnon. "Développement de méthodes spatio-temporelles pour la prévision à court terme de la production photovoltaïque." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEM066/document.
The evolution of the global energy context and the challenges of climate change have led to an increase in the production capacity of renewable energy. Renewable energies are characterized by high variability due to their dependence on meteorological conditions. Controlling this variability is an important challenge for the operators of electricity systems, and also for achieving the European objectives of reducing greenhouse gas emissions, improving energy efficiency and increasing the share of renewable energies in EU energy consumption. In the case of photovoltaics (PV), controlling the variability of production requires predicting future plant output with minimal error. These forecasts contribute to increasing the level of PV penetration and its optimal integration in the power grid, improving PV plant management and enabling participation in electricity markets. The objective of this thesis is to contribute to the improvement of the short-term predictability (less than 6 hours) of PV production. First, we analyze the spatio-temporal variability of PV production and propose a method to reduce the nonstationarity of the production series. We then propose a deterministic prediction model that exploits the spatio-temporal correlations between the power plants of a spatial grid: the plants are used as a network of sensors to anticipate sources of variability. We also propose an automatic variable-selection method to address the dimensionality and sparsity problems of the spatio-temporal model. A probabilistic spatio-temporal model has also been developed to produce efficient forecasts not only of the average level of future production but of its entire distribution. Finally, we propose a model that exploits satellite image observations to improve short-term forecasting of PV production.
Sicard, Pierre. "Caractérisation des retombées atmosphériques en France en zone rurale sous forme de précipitations, gaz et aérosols : analyse des tendances spatio-temporelles et des séries chronologiques." Lille 1, 2006. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2006/50376-2006-Sicard.pdf.
Despagne, Wilfried. "Construction, analyse et implémentation d'un modèle de prévision : déploiement sous forme d'un système de prévision chez un opérateur européen du transport et de la logistique." Phd thesis, Université Européenne de Bretagne, 2010. http://tel.archives-ouvertes.fr/tel-00487327.
Mangeas, Morgan. "Propriétés statistiques des modèles paramétriques non-linéaires de prévisions de séries temporelles : application aux réseaux de neurones à propagation directe." Paris 1, 1996. http://www.theses.fr/1996PA010076.
Todo, William. "Maintenance prédictive des systèmes d'air d'avion : une approche basée sur l'apprentissage automatique et les autoencodeurs variationnels." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30372.
Liebherr-Aerospace, a major player in the aviation industry, specializes in the design, development, and manufacturing of essential systems for aircraft. As an Original Equipment Manufacturer (OEM), the company guarantees the quality of its products, ensures the supply of spare parts, and offers technical support. In addition to these roles, it provides Maintenance, Repair, and Operations (MRO) services and Aircraft On Ground (AOG) services in case of aircraft grounding. In this context, Liebherr-Aerospace Toulouse focuses on developing predictive maintenance tools to reduce aircraft downtime and anticipate failures. This work is the subject of a thesis conducted in collaboration with the Toulouse Institute of Mathematics and ANITI. The thesis consists of five chapters. The first chapter addresses the state of the art in predictive maintenance and its link with anomaly detection, and presents the machine learning and deep learning techniques used in this field. The second chapter discusses the operation of air systems (also known as "bleed air systems") and the challenges they raise for predictive maintenance. The third chapter highlights the role of Variational Autoencoders (VAE) in dimension reduction and compares their performance to that of other techniques. The fourth chapter presents a new method for understanding abnormal behaviors of multivariate time series using a contrastive variational autoencoder (CVAE). Finally, the fifth chapter explains how the CVAE method is adapted to solve predictive maintenance problems: it introduces a semi-supervised approach to train the model even in the presence of censored data and compares its effectiveness to other classic models; the "selective kernels" technique, derived from image classification, is also adapted to time series classification.
Çinar, Yagmur Gizem. "Prédiction de séquences basée sur des réseaux de neurones récurrents dans le contexte des séries temporelles et des sessions de recherche d'information." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM079.
This thesis investigates challenges of sequence prediction in different scenarios, such as sequence prediction using recurrent neural networks (RNNs) in the context of time series and information retrieval (IR) search sessions. Predicting the unknown values that follow some previously observed values is called sequence prediction. It is widely applicable to many domains where a sequential behavior is observed in the data. In this study, we focus on two different types of sequence prediction tasks: time series forecasting and next-query prediction in an information retrieval search session. Time series often display pseudo-periods, i.e., time intervals with strong correlation between values of the series. Seasonal changes in weather time series or day and night electricity usage are examples of pseudo-periods. In a forecasting scenario, pseudo-periods correspond to the difference between the positions of the output being predicted and specific inputs. In order to capture periods in RNNs, one needs a memory of the input sequence. Sequence-to-sequence RNNs with an attention mechanism reuse specific (representations of) input values to predict output values, and thus seem adequate for capturing periods. We therefore first explore the capability of an attention mechanism in that context. However, according to our initial analysis, a standard attention mechanism does not perform well in capturing the periods. Therefore, we propose a period-aware content-based attention RNN model.
This model is an extension of state-of-the-art sequence-to-sequence RNNs with attention mechanism and aims to capture the periods in time series with or without missing values. Our experimental results with period-aware content-based attention RNNs show significant improvements in univariate and multivariate time series forecasting performance on several publicly available data sets. Another challenge in sequence prediction is next-query prediction, which helps users disambiguate their search query, explore different aspects of the information they need, or form a precise and succinct query that leads to higher retrieval performance. A search session is dynamic, and the information need of a user might change over a search session as a result of the search interactions. Furthermore, the interactions of a user with a search engine influence the user's query reformulations. Considering this influence on query formulation, we first analyze where the next query words come from. Using this analysis of the sources of query words, we propose two next-query prediction approaches: a set view and a sequence view. The set view adapts a bag-of-words approach using a novel feature set defined based on the analysis of the sources of next-query words; here, the next query is predicted using learning to rank. The sequence view extends a hierarchical RNN model by considering the sources of next-query words in the prediction, incorporating them through an attention mechanism over the interaction words. We observed that the sequence approach, a natural formulation of the problem, together with the exploitation of all sources of evidence, leads to better next-query prediction.
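For intuition on the attention component, here is a minimal numpy sketch of content-based (dot-product) attention. The period-aware variant of the thesis modifies the scoring function, which is not reproduced here; all data below are synthetic placeholders:

```python
import numpy as np

def content_attention(query, keys, values):
    """Dot-product content-based attention: score each encoder state (key)
    against the decoder query, softmax the scores, return the weighted context."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights

rng = np.random.default_rng(5)
T, d = 48, 8
keys = rng.standard_normal((T, d))
values = rng.standard_normal((T, d))

# Plant a state at position 24 that strongly matches the query, mimicking
# the input one pseudo-period back that a period-aware attention should find.
keys[24] = np.full(d, 2.0)
query = np.ones(d)

context, weights = content_attention(query, keys, values)
print(int(np.argmax(weights)))  # -> 24
```

In a sequence-to-sequence forecaster, `context` would be concatenated with the decoder state before emitting the next value.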
Marsilli, Clément. "Mixed-Frequency Modeling and Economic Forecasting." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA2023/document.
Economic downturn and the recession that many countries experienced in the wake of the global financial crisis demonstrate how important, but difficult, it is to forecast macroeconomic fluctuations, especially within a short time horizon. This doctoral dissertation studies, analyses and develops models for economic growth forecasting. The set of information coming from economic activity is vast and disparate: time series from the real and financial economy do not have the same characteristics, in terms of both sampling frequency and predictive power. Short-term forecasting models should therefore both allow the use of mixed-frequency data and remain parsimonious. The first chapter is dedicated to time series econometrics within a mixed-frequency framework. The second chapter contains two empirical works that shed light on macro-financial linkages by assessing the leading role of daily financial volatility in macroeconomic prediction during the Great Recession. The third chapter extends the mixed-frequency model into a Bayesian framework and presents an empirical study using a stochastic-volatility-augmented mixed data sampling model. The fourth chapter focuses on variable selection techniques in mixed-frequency models for short-term forecasting. We address the selection issue by developing mixed-frequency-based dimension reduction techniques in a cross-validation procedure that allows automatic in-sample selection based on recent forecasting performance. Our model succeeds in constructing an objective variable selection with broad applicability.
Fries, Sébastien. "Anticipative alpha-stable linear processes for time series analysis : conditional dynamics and estimation." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG005.
In the framework of linear time series analysis, we study a class of so-called anticipative strictly stationary processes potentially depending on all the terms of an independent and identically distributed alpha-stable error sequence. Focusing first on autoregressive (AR) processes, it is shown that conditional moments of higher order than the marginal ones exist provided the characteristic polynomial admits at least one root inside the unit circle. The forms of the first- and second-order moments are obtained in special cases. The least squares method is shown to provide a consistent estimator of an all-pass causal representation of the process, the validity of which can be tested by a portmanteau-type test, and a method based on extreme residuals clustering is proposed to recover the original AR representation. The anticipative stable AR(1) is studied in detail in the framework of bivariate alpha-stable random vectors, and the functional forms of its first four conditional moments are obtained under any admissible parameterisation. It is shown that during extreme events these moments become equivalent to those of a two-point distribution charging two polarly opposite future paths: exponential growth or collapse. Parallel results are obtained for the continuous-time counterpart of the AR(1), the anticipative stable Ornstein-Uhlenbeck process. For infinite alpha-stable moving averages, the conditional distribution of future paths given the observed past trajectory during extreme events is derived on the basis of a new representation of stable random vectors on unit cylinders relative to semi-norms. Contrary to the case of norms, such a representation yields a multivariate regularly-varying-tails property appropriate for prediction purposes, but not all stable vectors admit such a representation. A characterisation is provided, and it is shown that finite-length paths of a stable moving average admit such a representation provided the process is "anticipative enough". Processes resulting from the linear combination of stable moving averages are encompassed, and the conditional distribution has a natural interpretation in terms of pattern identification.
Daouayry, Nassia. "Détection d’évènements anormaux dans les gros volumes de données d’utilisation issues des hélicoptères." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI084.
This thesis addresses the normality of the functioning of helicopter component systems through the exploitation of usage data coming from the HUMS (Health and Usage Monitoring System) for maintenance purposes. Helicopters are complex systems subject to strict regulatory requirements imposed by the authorities in charge of flight safety. The analysis of monitoring data is therefore a preferred means of improving helicopter maintenance, and the data produced by the HUMS system are an indispensable resource for assessing the health of the systems after each flight. The data collected are numerous, and the complexity of the different systems makes it difficult to analyze them on a case-by-case basis. The work of this thesis deals mainly with issues related to the use of multivariate series for visualization and the implementation of anomaly detection tools within Airbus Helicopters. We developed different approaches to capture, in the flight data, a relative normality for a given system. A work on the visualization of time series was developed to identify the patterns representing the normal operation of a system. Building on this approach, we developed a "virtual sensor" that estimates the values of a real sensor from a set of flight parameters, in order to detect abnormal events when the values of these two sensors diverge.
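A hedged sketch of the virtual-sensor idea: regress a monitored sensor on other flight parameters, then flag divergence between the real and estimated values. The model choice, threshold rule and data below are invented placeholders, not the thesis's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 2000
flight_params = rng.standard_normal((n, 5))           # e.g. speeds, pressures, temps
sensor = flight_params @ np.array([1.0, -0.5, 2.0, 0.0, 0.3]) \
         + 0.1 * rng.standard_normal(n)

# "Virtual sensor": a model estimating the real sensor from other parameters.
virtual = RandomForestRegressor(n_estimators=50, random_state=0)
virtual.fit(flight_params[:1500], sensor[:1500])

# Monitoring: flag flights where real and virtual sensor values diverge.
test_X, test_y = flight_params[1500:].copy(), sensor[1500:].copy()
test_y[490:] += 5.0                                   # inject a failure at the end
residual = np.abs(test_y - virtual.predict(test_X))
threshold = 3 * np.median(residual[:100])             # crude data-driven threshold

flagged_failure = (residual[490:] > threshold).mean()
flagged_normal = (residual[:490] > threshold).mean()
print(flagged_failure, flagged_normal)
```

The injected drift is flagged almost everywhere while healthy points rarely cross the threshold, which mirrors the divergence-based detection described in the abstract.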
Salotti, Julien. "Méthodes de sélection de voisinage pour la prévision à court-terme du trafic urbain." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI077.
In the context of Smart Cities, there is a growing need to inform drivers, anticipate congestion and take action to manage the state of the traffic flow on the road network. This need has driven the development of a large number of traffic forecasting methods. The last decades have seen a rise in computing power, storage capacity and the ability to process information in real time, and more and more road segments are equipped with traffic sensors. These evolutions are new elements to take into consideration when designing accurate traffic forecasting algorithms. Despite the large amount of research on this topic, there is still no clear understanding of which criteria are required to achieve high forecasting performance at the network scale. In this thesis, we study two real datasets collected in two major French cities: Lyon and Marseille. The Lyon dataset describes the traffic flow on an urban network; the Marseille dataset describes the traffic flow on urban freeways. We evaluate the performance of methods from different fields: time series analysis (autoregressive models) and several subfields of machine learning (support vector machines, neural networks, nearest-neighbors regression). We also study different neighborhood selection strategies in order to improve forecasting accuracy while decreasing the complexity of the models. We evaluate a well-known approach (Lasso) and apply, for the first time on traffic data, a method based on information theory and graphical models (TiGraMITe), which has been shown to be very effective on similar physics applications. Our experimental results confirm the usefulness of neighborhood selection mechanisms in some contexts and illustrate the complementarity of forecasting methods with respect to the type of network (urban, freeway) and the forecasting horizon (from 6 to 30 minutes).
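A minimal sketch of Lasso-based neighborhood selection on synthetic sensor data: the sparse regression keeps only the road segments that actually help predict the target segment (all names, indices and numbers are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
T, n_sensors = 500, 20
flows = rng.standard_normal((T, n_sensors))        # traffic flow at 20 sensors
# The target segment depends only on sensors 2 and 7 one step earlier.
target = 0.8 * flows[:-1, 2] - 0.6 * flows[:-1, 7] \
         + 0.1 * rng.standard_normal(T - 1)

# Neighborhood selection: sparse regression on all lagged sensors keeps
# only the informative neighbors, shrinking the rest to exactly zero.
model = Lasso(alpha=0.05).fit(flows[:-1], target)
neighbors = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print(neighbors.tolist())  # -> [2, 7]
```

The forecasting model is then refit on the selected neighbors only, reducing complexity without hurting accuracy, as the abstract describes.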
Cifonelli, Antonio. "Probabilistic exponential smoothing for explainable AI in the supply chain domain." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR41.
The key role that AI could play in improving business operations has been known for a long time, but the penetration of this new technology has encountered certain obstacles within companies, in particular implementation costs. On average, it takes 2.8 years from supplier selection to full deployment of a new solution. There are three fundamental points to consider when developing a new model: misalignment of expectations, the need for understanding and explanation, and performance and reliability issues. In the case of models dealing with supply chain data, there are five additional, domain-specific issues: - Managing uncertainty. Precision is not everything. Decision-makers are looking for a way to minimise the risk associated with each decision they have to make in the presence of uncertainty. Obtaining an exact forecast is advantageous; obtaining a fairly accurate forecast and calculating its limits is realistic and appropriate. - Handling integer and positive data. Most items sold in retail cannot be sold in subunits. This simple aspect of selling results in a constraint that must be satisfied by the result of any given method or model: the result must be a positive integer. - Observability. Customer demand cannot be measured directly; only sales can be recorded and used as a proxy. - Scarcity and parsimony. Sales are a discontinuous quantity. By recording sales by day, an entire year is condensed into just 365 points. What's more, a large proportion of them will be zero. - Just-in-time optimisation. Forecasting is a key function, but it is only one element in a chain of processes supporting decision-making. Time is a precious resource that cannot be devoted entirely to a single function.
The decision-making process and associated adaptations must therefore be carried out within a limited time frame, and in a sufficiently flexible manner to be able to be interrupted and restarted if necessary in order to incorporate unexpected events or necessary adjustments. This thesis fits into this context and is the result of the work carried out at the heart of Lokad, a Paris-based software company aiming to bridge the gap between technology and the supply chain. The doctoral research was funded by Lokad in collaboration with the ANRT under a CIFRE contract. The proposed work aims to be a good compromise between new technologies and business expectations, addressing the various aspects presented above. We started forecasting with the exponential smoothing family of models, which are easy to implement and extremely fast to run. As they are widely used in the industry, they have already won the confidence of users. What's more, they are easy to understand and explain to a non-specialist audience. By exploiting more advanced AI techniques, some of the limitations of these models can be overcome. Cross-learning proved to be a relevant approach for extrapolating useful information when the amount of available data was very limited. Since the common Gaussian assumption is not suitable for discrete sales data, we proposed using a model associated with either a Poisson distribution or a Negative Binomial one, which better corresponds to the nature of the phenomena we are seeking to model and predict. We also proposed using Monte Carlo simulations to deal with uncertainty: a number of scenarios are generated, sampled and modelled using a distribution, from which confidence intervals of different and adapted sizes can be deduced. Using real company data, we compared our approach with state-of-the-art methods such as the DeepAR model, deep state-space models and N-BEATS. We derived a new model based on the Holt-Winters method. These models were implemented in Lokad's workflow.
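A minimal sketch of the abstract's central idea: simple exponential smoothing whose level feeds a Poisson distribution, with Monte Carlo sampling yielding integer-valued prediction intervals. The smoothing parameter, horizon and data below are illustrative assumptions, not Lokad's model:

```python
import numpy as np

def ses_poisson_forecast(sales, alpha=0.3, horizon=7, n_sims=2000, seed=0):
    """Simple exponential smoothing on integer sales; the smoothed level is
    used as the mean of a Poisson distribution, and Monte Carlo sampling
    yields integer-valued prediction intervals."""
    level = float(sales[0])
    for y in sales[1:]:
        level = alpha * y + (1 - alpha) * level   # SES level update
    rng = np.random.default_rng(seed)
    # simulate integer demand scenarios over the horizon
    sims = rng.poisson(lam=level, size=(n_sims, horizon))
    lo, hi = np.quantile(sims, [0.05, 0.95], axis=0)
    return level, lo, hi

sales = np.array([3, 0, 2, 5, 1, 0, 4, 2, 3, 1])
level, lo, hi = ses_poisson_forecast(sales)
print(level, lo, hi)
```

Sampling from a count distribution rather than a Gaussian guarantees non-negative integer scenarios, which matches the retail constraints listed in the abstract; a Negative Binomial could be swapped in for over-dispersed demand.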
Fries, Sébastien. "Anticipative alpha-stable linear processes for time series analysis : conditional dynamics and estimation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG005/document.
In the framework of linear time series analysis, we study a class of so-called anticipative strictly stationary processes potentially depending on all the terms of an independent and identically distributed alpha-stable error sequence. Focusing first on autoregressive (AR) processes, it is shown that conditional moments of higher order than the marginal ones exist provided the characteristic polynomial admits at least one root inside the unit circle. The forms of the first and second order moments are obtained in special cases. The least squares method is shown to provide a consistent estimator of an all-pass causal representation of the process, the validity of which can be tested by a portmanteau-type test. A method based on extreme residuals clustering is proposed to determine the original AR representation. The anticipative stable AR(1) is studied in detail in the framework of bivariate alpha-stable random vectors, and the functional forms of its first four conditional moments are obtained under any admissible parameterisation. It is shown that during extreme events, these moments become equivalent to those of a two-point distribution charging two polarly-opposite future paths: exponential growth or collapse. Parallel results are obtained for the continuous-time counterpart of the AR(1), the anticipative stable Ornstein-Uhlenbeck process. For infinite alpha-stable moving averages, the conditional distribution of future paths given the observed past trajectory during extreme events is derived on the basis of a new representation of stable random vectors on unit cylinders relative to semi-norms. Contrary to the case of norms, such representations yield a multivariate regularly varying tails property appropriate for prediction purposes, but not all stable vectors admit such a representation. A characterisation is provided and it is shown that finite-length paths of a stable moving average admit such a representation provided the process is "anticipative enough". Processes resulting from the linear combination of stable moving averages are encompassed, and the conditional distribution has a natural interpretation in terms of pattern identification.
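An anticipative (noncausal) stable AR(1) can be simulated by truncating its future-dependent moving-average form, X_t = Σ_{k≥0} ρ^k ε_{t+k}, with symmetric α-stable innovations drawn by the standard Chambers-Mallows-Stuck method. A sketch under these assumptions (truncation length and parameters are arbitrary illustrations):

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable noise."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

def anticipative_ar1(rho, alpha, n, trunc=200, seed=1):
    """Noncausal AR(1): X_t = sum_{k>=0} rho^k eps_{t+k}, truncated at `trunc`
    future terms (the process depends on FUTURE errors, hence 'anticipative')."""
    rng = np.random.default_rng(seed)
    eps = sym_alpha_stable(alpha, n + trunc, rng)
    weights = rho ** np.arange(trunc)
    return np.array([weights @ eps[t:t + trunc] for t in range(n)])

x = anticipative_ar1(rho=0.8, alpha=1.7, n=500)
print(x[:5])
```

In simulated paths, extreme events appear as bubble-like episodes of exponential growth followed by collapse, the behaviour the abstract characterises via conditional moments.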
Rakotoarisoa, Mahefa. "Les risques hydrologiques dans les bassins versants sous contrôle anthropique : modélisation de l'aléa, de la vulnérabilité et des conséquences sur les sociétés. : Cas de la région Sud-ouest de Madagascar." Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0067/document.
Hydrological risks are recurrent on the Fiherenana watershed in Madagascar. The city of Toliara, located at the outlet of the river basin, is subject each year to hurricane hazards and floods. The stakes are of major importance in this part of the island. This study begins with the analysis of the hazard, collecting all existing hydro-climatic data on the catchment. It then seeks to determine trends, despite the significant lack of data, using statistical models (time series). Two approaches are then used to assess the vulnerability of the city of Toliara and its surrounding villages. The first is a static approach, based on field surveys and the use of GIS. The second is based on a multi-agent model. The first step is the mapping of a micro-scale vulnerability index combining several static criteria. For each house, several vulnerability criteria are considered, such as potential water depth or architectural typology. In the second approach, agent scenarios are simulated in order to evaluate the degree of housing vulnerability to flooding. The model aims to estimate the chances of the occupants escaping a catastrophic flood. For this purpose, we compare various settings and scenarios, some of which are designed to take into account the effect of various decisions made by the responsible entities (awareness campaigns, etc.). The simulation consists of two essential parts: the simulation of the rise of water and the simulation of the behaviour of people facing the occurrence of the hazard. Indicators and simulations allow a better understanding of the risks in order to help crisis management.
Boulegane, Dihia. "Machine learning algorithms for dynamic Internet of Things." Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAT048.
With the rapid growth of Internet-of-Things (IoT) devices and sensors, ever more sources are continuously releasing and curating vast amounts of data at a high pace in the form of streams. These ubiquitous data streams are essential for data-driven decision-making in different business sectors, using Artificial Intelligence (AI) and Machine Learning (ML) techniques to extract valuable knowledge and turn it into appropriate actions. Besides, the data being collected is often associated with a temporal indicator, referred to as a temporal data stream: a potentially infinite sequence of observations captured over time, usually but not necessarily at regular intervals. Forecasting is a challenging task in the field of AI; it aims at understanding the process generating the observations over time based on past data in order to accurately predict future behavior. Stream Learning is the emerging research field which focuses on learning from infinite and evolving data streams. This thesis tackles dynamic model combination, which achieves competitive results despite its high computational cost in terms of memory and time. We study several approaches to estimate the predictive performance of individual forecasting models according to the data, and contribute by introducing novel windowing and meta-learning based methods to cope with evolving data streams. Subsequently, we propose different selection methods that aim at constituting a committee of accurate and diverse models. The predictions of these models are then weighted and aggregated. The second part addresses model compression, which aims at building a single model to mimic the behavior of a highly performing and complex ensemble while reducing its complexity. Finally, we present the first streaming competition, "Real-time Machine Learning Competition on Data Streams", held at the IEEE Big Data 2019 conference using the new SCALAR platform.
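Dynamic model combination of the kind described above can be sketched generically: keep a sliding window of each forecaster's recent errors and weight its prediction by the inverse of its windowed error, so weights adapt as the stream evolves. This is a hedged illustration of the principle, not the thesis algorithms:

```python
from collections import deque

class DynamicCombiner:
    """Combine stream forecasters, weighting each by the inverse of its
    mean absolute error over a sliding window (generic sketch)."""
    def __init__(self, models, window=50):
        self.models = models
        self.errors = [deque(maxlen=window) for _ in models]

    def predict(self, history):
        preds = [m(history) for m in self.models]
        weights = []
        for errs in self.errors:
            mae = sum(errs) / len(errs) if errs else 1.0
            weights.append(1.0 / (mae + 1e-8))
        s = sum(weights)
        return sum(w * p for w, p in zip(weights, preds)) / s, preds

    def update(self, preds, actual):
        for errs, p in zip(self.errors, preds):
            errs.append(abs(p - actual))

# two toy forecasters: naive last value, and mean of the last 5 points
models = [lambda h: h[-1], lambda h: sum(h[-5:]) / len(h[-5:])]
comb = DynamicCombiner(models)
series = [float(t % 7) for t in range(200)]   # a seasonal-ish toy stream
for t in range(5, 199):
    yhat, preds = comb.predict(series[:t])
    comb.update(preds, series[t])
print(round(yhat, 3))
```

Because the weights are positive and normalised, the combined forecast is always a convex combination of the committee's predictions, and the bounded deque keeps memory constant, a prerequisite for stream settings.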
Hmamouche, Youssef. "Prédiction des séries temporelles larges." Electronic Thesis or Diss., Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0480.
Nowadays, storage and data processing systems are expected to store and process large time series. As the number of observed variables increases very rapidly, their prediction becomes more and more complicated, and the use of all the variables poses problems for classical prediction models. Univariate prediction models are among the first models of prediction. To improve on these models, the use of multiple variables has become common, and multivariate models are more and more widely used because they consider more information. With the increase of inter-related data, the application of multivariate models is also questionable, because the use of all available information does not necessarily lead to the best predictions. Therefore, the challenge in this situation is to find the most relevant factors among all available data relative to a target variable. In this thesis, we study this problem by presenting a detailed analysis of the approaches proposed in the literature. We address the problem of prediction and dimensionality reduction of massive data, and also discuss these approaches in the context of Big Data. The proposed approaches show promising and very competitive results compared to well-known algorithms, and lead to an improvement in the accuracy of the predictions on the data used. We then present our contributions and propose a complete methodology for the prediction of wide time series. We also extend this methodology to big data via distributed computing and parallelism, with an implementation of the proposed prediction process in the Hadoop/Spark environment.
Hugueney, Bernard. "Représentations symboliques de longues séries temporelles." Paris 6, 2003. http://www.theses.fr/2003PA066161.
Nowakowski, Samuel. "Détection de défauts dans les séries temporelles." Nancy 1, 1989. http://www.theses.fr/1989NAN10074.
Haykal, Vanessa. "Modélisation des séries temporelles par apprentissage profond." Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4019.
Time series prediction is a problem that has been addressed for many years. In this thesis, we are interested in methods based on deep learning. It is well known that when the relationships between the data are temporal, they are difficult to analyze and predict accurately due to non-linear trends and the presence of noise, specifically in financial and electrical series. In this context, we propose a new hybrid noise-reduction architecture that models the recursive error series to improve predictions. The learning process fuses a convolutional neural network (CNN) and a recurrent long short-term memory (LSTM) network. This model is distinguished by its ability to capture a variety of hybrid properties globally: it is able to extract local signal features, to learn long-term and non-linear dependencies, and to offer high noise resistance. The second contribution addresses the limitations of global approaches caused by dynamic switching regimes in the signal. We present a local unsupervised modification of our previous architecture in order to adjust the results by adapting a Hidden Markov Model (HMM). Finally, we also investigate multi-resolution techniques to improve the performance of the convolutional layers, notably by using the variational mode decomposition (VMD) method.
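The residual-modeling idea (a second model fitted to the error series of a first one, whose output corrects the primary forecast) can be illustrated without the CNN-LSTM machinery, using two linear AR fits as stand-ins. This is purely a sketch of the principle, not the thesis architecture:

```python
import numpy as np

def ar_fit(y, p):
    """Least-squares AR(p) fit; returns the coefficient vector (no intercept)."""
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def ar_predict(y, coefs):
    p = len(coefs)
    return float(np.dot(coefs, y[-1:-p - 1:-1]))   # last p values, newest first

rng = np.random.default_rng(0)
n = 400
t = np.arange(n)
series = np.sin(2 * np.pi * t / 25) + 0.1 * rng.normal(size=n)

# stage 1: primary model fitted to the signal
c1 = ar_fit(series, p=5)
fitted = np.array([ar_predict(series[:k], c1) for k in range(5, n)])
resid = series[5:] - fitted

# stage 2: secondary model fitted to the error series corrects stage 1
c2 = ar_fit(resid, p=5)
forecast = ar_predict(series, c1) + ar_predict(resid, c2)
print(forecast)
```

In the thesis, stage 1 is a CNN-LSTM and stage 2 models the recursive error series; the correction step is the same additive composition shown here.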
Jabbari, Ali. "Encodage visuel composite pour les séries temporelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM035/document.
Time series are one of the most common types of recorded data in various scientific, industrial, and financial domains. Depending on the context, time series analysis is used for a variety of purposes: forecasting, estimation, classification, and trend and event detection. Thanks to the outstanding capabilities of human visual perception, visualization remains one of the most powerful tools for data analysis, particularly for time series. With the increase in data sets' volume and complexity, new visualization techniques are clearly needed to improve data analysis. They aim to facilitate visual analysis in specific situations and tasks, or for unguided exploratory analysis. Visualization is based upon visual mapping, which consists in associating data values with visual channels, e.g. the position, size, and color of graphical elements. In this regard, the most familiar form of time series visualization, the line chart, maps data values to the vertical position of the line. However, a single visual mapping is not suitable for all situations and analytical objectives. Our goal is to introduce alternatives to the conventional visual mapping and to identify situations in which the new approach compensates for the simplicity and familiarity of the existing techniques. We present a review of the existing literature on time series visualization and then focus on existing approaches to visual mapping. Next, we present our contributions. Our first contribution is a systematic study of a "composite" visual mapping, which consists in using combinations of visual channels to communicate different facets of a time series. By means of several user studies, we compare our new visual mappings with an existing reference technique and measure users' speed and accuracy in different analytical tasks.
Our results show that the new visual designs lead to analytical performance close to that of the existing techniques without being unnecessarily complex or requiring training. Moreover, some of the proposed mappings outperform the existing techniques in space-constrained situations. Space efficiency is of great importance for the simultaneous visualization of large volumes of data or for visualization on small screens. Both scenarios are among the current challenges in information visualization.
Assaad, Charles. "Découvertes de relations causales entre séries temporelles." Electronic Thesis or Diss., Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALM019.
This thesis aims to give a broad coverage of the central concepts and principles of causation, and in particular those involved in the emerging approaches to causal discovery from time series. After reviewing concepts and algorithms, we first present a new approach that infers a summary graph of the causal system underlying the observational time series while relaxing the idealized setting of equal sampling rates, and we discuss the assumptions underlying its validity. The gist of our proposal lies in the introduction of the causal temporal mutual information measure, which can detect independence and conditional independence between two time series, and in drawing a connection between entropy and the probability-raising principle that can be used to build new rules for orienting the direction of causation. Moreover, building on this base method, we propose several extensions, namely to handle hidden confounders, to infer a window causal graph given a summary graph, and to consider sequences instead of time series. Secondly, we focus on the discovery of causal relations from a statistical distribution that is not entirely faithful to the real causal graph, and on distinguishing a common cause from an intermediate cause even in the absence of a time indicator. The key aspect of our answer to this problem is the reliance on the additive noise principle to infer a directed supergraph that contains the causal graph. To converge toward the causal graph, we use in a second step a new measure called the temporal causation entropy, which prunes, for each node of the directed supergraph, the parents that are conditionally independent of their child. Furthermore, we explore complementary extensions of our second base method involving a pairwise strategy that reduces, through multi-task learning and a denoising technique, the number of functions that need to be estimated.
We perform an extensive experimental comparison of the proposed algorithms on both synthetic and real datasets and demonstrate their promising practical performance: gaining in time complexity while preserving accuracy
Frambourg, Cédric. "Apprentissage d'appariements pour la discrimination de séries temporelles." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00948989.
El, Ghini Ahmed. "Contribution à l'identification de modèles de séries temporelles." Lille 3, 2008. http://www.theses.fr/2008LIL30017.
This PhD dissertation consists of two parts dealing with the problems of identification and selection in econometrics. Two main topics are considered: (1) time series model identification by using (inverse) autocorrelation and (inverse) partial autocorrelation functions; (2) estimation of the inverse autocorrelation function in the framework of nonlinear time series. The two parts are summarized below. In the first part of this work, we consider time series model identification by using (inverse) autocorrelation and (inverse) partial autocorrelation functions. We construct statistical tests based on estimators of these functions and establish their asymptotic distribution. Using the Bahadur and Pitman approaches, we compare the performance of (inverse) autocorrelations and (inverse) partial autocorrelations in detecting the order of moving average and autoregressive models. Next, we study the identification of the inverse process of an ARMA model and its probabilistic properties. Finally, we characterize time reversibility by means of the dual and inverse processes. The second part is devoted to estimation of the inverse autocorrelation function in the framework of nonlinear time series. Under some regularity conditions, we study the asymptotic properties of empirical inverse autocorrelations for stationary and strongly mixing processes. We establish the consistency and the asymptotic normality of the estimators. Next, we consider the case of linear processes with GARCH errors and show by means of some examples that the standard formula can be misleading if the generating process is nonlinear. Finally, we apply our previous results to prove the asymptotic normality of the parameter estimates of weak moving average models. Our results are illustrated by Monte Carlo experiments and real data applications.
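Order identification via the autocorrelation and partial autocorrelation functions can be illustrated with a small NumPy implementation of the sample ACF and the Durbin-Levinson recursion for the PACF: for an MA(1) series the ACF cuts off after lag 1 while the PACF tails off. A textbook sketch, not the dissertation's test statistics:

```python
import numpy as np

def acf(x, max_lag):
    x = np.asarray(x, float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, max_lag + 1)])

def pacf(x, max_lag):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    r = acf(x, max_lag)
    phi = np.zeros((max_lag + 1, max_lag + 1))
    out = [1.0]
    for k in range(1, max_lag + 1):
        if k == 1:
            phi[1, 1] = r[1]
        else:
            num = r[k] - np.dot(phi[k - 1, 1:k], r[k - 1:0:-1])
            den = 1.0 - np.dot(phi[k - 1, 1:k], r[1:k])
            phi[k, k] = num / den
            for j in range(1, k):
                phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
        out.append(phi[k, k])
    return np.array(out)

# MA(1) process x_t = e_t + 0.8 e_{t-1}: ACF cuts off, PACF tails off
rng = np.random.default_rng(0)
e = rng.normal(size=5000)
x = e[1:] + 0.8 * e[:-1]
a, p = acf(x, 3), pacf(x, 3)
print(np.round(a, 2), np.round(p, 2))
```

The mirror-image behaviour of the *inverse* autocorrelations studied in the dissertation (an MA model's inverse process is AR) is what makes the pair of functions jointly useful for order detection.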
Guillemé, Maël. "Extraction de connaissances interprétables dans des séries temporelles." Thesis, Rennes 1, 2019. http://www.theses.fr/2019REN1S102.
Energiency is a company that sells a platform allowing manufacturers to analyze their energy consumption data represented in the form of time series. This platform integrates machine learning models to meet customer needs. Applying such models to time series raises two problems: on the one hand, some classical machine learning approaches were designed for tabular data and must be adapted to time series; on the other hand, the results of some approaches are difficult for end users to understand. In the first part, we adapt a method to search for occurrences of temporal rules in time series from machines and industrial infrastructures. A temporal rule captures succession relationships between behaviors in time series. In industrial series, due to the presence of many external factors, these regular behaviors can be distorted. Current methods for searching for the occurrences of a rule use a distance measure to assess the similarity between sub-series. However, these measures are not suitable for assessing the similarity of distorted series such as those found in industrial settings. The first contribution of this thesis is a method for searching for occurrences of temporal rules that is able to capture this variability in industrial time series. For this purpose, the method relies on elastic distance measures capable of assessing the similarity between slightly deformed time series. The second part of the thesis is devoted to the interpretability of time series classification methods, i.e. the ability of a classifier to return explanations for its results. These explanations must be understandable by a human. Classification is the task of associating a time series with a category. For an end user inclined to make decisions based on a classifier's results, understanding the rationale behind those results is of great importance; otherwise, it amounts to blind confidence in the classifier.
The second contribution of this thesis is an interpretable time series classifier that can directly provide explanations for its results. This classifier uses local information on time series to discriminate between them. The third and last contribution of this thesis is a method to explain a posteriori any result of any classifier. We carried out a user study to evaluate the interpretability of our method.
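Dynamic time warping (DTW) is a representative elastic distance of the kind invoked in this abstract: it aligns locally stretched or shifted series before comparing them, where a point-to-point Euclidean comparison would fail. A textbook implementation on toy data:

```python
import math

def dtw(a, b):
    """Dynamic time warping distance: tolerates local time distortions
    that defeat a plain point-to-point Euclidean comparison."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

pattern = [0, 1, 2, 1, 0]
shifted = [0, 0, 1, 2, 1, 0]    # same shape, locally stretched
other   = [2, 2, 2, 2, 2]
print(dtw(pattern, shifted), dtw(pattern, other))
```

The stretched copy is matched at zero cost because DTW may pair one point with several, which is exactly the robustness to deformation that the thesis exploits when searching for rule occurrences in industrial series.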
Bailly, Adeline. "Classification de séries temporelles avec applications en télédétection." Thesis, Rennes 2, 2018. http://www.theses.fr/2018REN20021/document.
Time Series Classification (TSC) has received an important amount of interest over the past years due to many real-life applications. In this PhD, we create new algorithms for TSC, with a particular emphasis on Remote Sensing (RS) time series data. We first propose the Dense Bag-of-Temporal-SIFT-Words (D-BoTSW) method that uses dense local features based on SIFT features for 1D data. Extensive experiments show that D-BoTSW significantly outperforms nearly all compared standalone baseline classifiers. Then, we propose an enhancement of the Learning Time Series Shapelets (LTS) algorithm called Adversarially-Built Shapelets (ABS), based on the introduction of adversarial time series during the learning process. Adversarial time series provide an additional regularization benefit for the shapelets, and experiments show a performance improvement between the baseline and our proposed framework. Due to the lack of available RS time series datasets, we also present and experiment on two remote sensing time series datasets called TiSeLaC and Brazilian-Amazon.
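The primitive underlying shapelet methods such as LTS is the minimum distance between a short pattern (the shapelet) and all same-length sliding windows of a series, used as a discriminative feature. A toy sketch, with a synthetic shapelet and data (not the D-BoTSW or ABS algorithms themselves):

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and every
    same-length sliding window of the series."""
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

rng = np.random.default_rng(0)
shapelet = np.array([0.0, 1.0, 0.0, -1.0, 0.0])  # candidate discriminative shape

# class A contains the shape somewhere; class B is pure noise
a = rng.normal(0, 0.1, 50); a[20:25] += shapelet
b = rng.normal(0, 0.1, 50)
d_a = shapelet_distance(a, shapelet)
d_b = shapelet_distance(b, shapelet)
print(d_a, d_b)
```

Because the distance is small only when the shape occurs somewhere in the series, thresholding it separates the two classes; LTS learns the shapelets by gradient descent instead of enumerating candidates, and ABS regularizes that learning with adversarial series.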
Gautier, Antony. "Modèles de séries temporelles à coefficients dépendants du temps." Lille 3, 2004. http://www.theses.fr/2004LIL30034.
Dola, Béchir. "Problèmes économétriques d'analyse des séries temporelles à mémoire longue." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2012. http://tel.archives-ouvertes.fr/tel-00794676.
Ahmad, Ali. "Contribution à l'économétrie des séries temporelles à valeurs entières." Thesis, Lille 3, 2016. http://www.theses.fr/2016LIL30059/document.
The framework of this PhD dissertation is conditional mean count time series models. We propose the Poisson quasi-maximum likelihood estimator (PQMLE) for the conditional mean parameters. We show that, under quite general regularity conditions, this estimator is consistent and asymptotically normal for a wide class of count time series models. Since the conditional mean parameters of some models are positively constrained, as, for example, in the integer-valued autoregressive (INAR) and integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH) models, we study the asymptotic distribution of this estimator when the parameter lies at the boundary of the parameter space. We deduce a Wald-type test for the significance of the parameters and another Wald-type test for the constancy of the conditional mean. Subsequently, we propose a robust and general goodness-of-fit test for count time series models. We derive the joint distribution of the PQMLE and of the empirical residual autocovariances. We then deduce the asymptotic distribution of the estimated residual autocovariances, and also of a portmanteau test. Finally, we propose the PQMLE for estimating, equation by equation (EbE), the conditional mean parameters of a multivariate time series of counts. Using slightly different assumptions from those given for the PQMLE, we show the consistency and the asymptotic normality of this estimator for a considerable variety of multivariate count time series models.
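The PQMLE idea can be sketched on the simplest INGARCH-type case, an INARCH(1) with conditional mean λ_t = ω + a·y_{t−1}: maximize the Poisson quasi-log-likelihood Σ(y_t log λ_t − λ_t) over the mean parameters. The crude grid search below stands in for a proper numerical optimiser, and the true parameters are synthetic assumptions:

```python
import numpy as np

def poisson_qll(params, y):
    """Poisson quasi-log-likelihood for the INARCH(1) mean
    lambda_t = omega + a * y_{t-1} (constant terms dropped)."""
    omega, a = params
    lam = omega + a * y[:-1]
    return np.sum(y[1:] * np.log(lam) - lam)

# simulate an INARCH(1) count series with omega = 1.0, a = 0.5
rng = np.random.default_rng(0)
n = 3000
y = np.zeros(n, dtype=int)
for t in range(1, n):
    y[t] = rng.poisson(1.0 + 0.5 * y[t - 1])

# crude PQMLE by grid search over the positively constrained parameters
grid = [(w, a) for w in np.linspace(0.2, 2.0, 37)
               for a in np.linspace(0.05, 0.9, 35)]
omega_hat, a_hat = max(grid, key=lambda p: poisson_qll(p, y))
print(omega_hat, a_hat)
```

The "quasi" in PQMLE is the point of the dissertation: the estimator remains consistent for the conditional mean even when the counts are not truly Poisson, only their mean specification needs to be correct.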
Jebreen, Kamel. "Modèles graphiques pour la classification et les séries temporelles." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0248/document.
First, in this dissertation, we show that Bayesian network classifiers are very accurate models when compared to other classical machine learning methods. Discretising input variables often increases the performance of Bayesian network classifiers, as does a feature selection procedure. Different types of Bayesian networks may be used for supervised classification. We combine such approaches with feature selection and discretisation to show that this combination gives rise to powerful classifiers. A large selection of data sets from the UCI machine learning repository is used in our experiments, and an application to epilepsy type prediction based on PET scan data confirms the efficiency of our approach. Second, we also consider modelling the interactions between a set of variables in the context of time series and high dimension. We suggest two approaches: the first is similar to the neighbourhood lasso, with the lasso model replaced by Support Vector Machines (SVMs); the second is a restricted Bayesian network for time series. We demonstrate the efficiency of our approaches through simulations using linear and nonlinear data sets and a mixture of both.
Desrues, Mathilde. "Surveillance opérationnelle de mouvements gravitaires par séries temporelles d'images." Thesis, Strasbourg, 2021. http://www.theses.fr/2021STRAH002.
Understanding the dynamics and behavior of gravitational slope movements is essential to anticipate catastrophic failures and thus to protect lives and infrastructure. Several geodetic techniques already provide information on the displacement/deformation fields of unstable slopes. These techniques allow the analysis of the geometrical properties of the moving masses and of the mechanical behavior of the slopes. By combining time series of passive terrestrial imagery with these classical techniques, the amount of collected information is densified and spatially distributed. Digital passive sensors are increasingly used for the detection and monitoring of gravitational motion. They provide both qualitative information, such as the detection of surface changes, and quantitative characterization, such as the quantification of soil displacement by correlation techniques. Our approach consists in analyzing time series of terrestrial images from either a single fixed camera or camera pairs, the latter providing redundant and additional information. The time series are processed to detect the areas in which the kinematic behavior is homogeneous. Slope properties, such as the sliding volume and the thickness of the moving mass, are part of the analysis results, yielding an overview that is as complete as possible. This work is presented around the analysis of four landslides located in the French Alps. It is part of a CIFRE/ANRT agreement between the SAGE Society - Société Alpine de Géotechnique (Gières, France) and the IPGS - Institut de Physique du Globe de Strasbourg / CNRS UMR 7516 (Strasbourg, France).
Buiatti, Marco. "Correlations à longue distance dans les séries temporelles biologiques." Paris 6, 2006. http://www.theses.fr/2006PA066242.
A large number of biological systems exhibit scale-free behaviour of one or more variables. Scale-free behaviour reflects a tendency of complex systems to develop long-range correlations, i.e. correlations that decay very slowly in time and extend over very large distances in space. However, the properties and the functional role of long-range correlations in biological systems are still poorly understood. The aim of this thesis is to shed new light on this issue with three studies in three different biological domains, both by exploring the relationship between the function of the system and its long-range statistical structure, and by investigating how biological systems adapt to a long-range correlated environment. The first study explores how a reasoning task modulates the temporal long-range correlations of the associated brain electrical activity as recorded by EEG. The task consists in searching for a rule in triplets of numbers, and hypotheses are tested on the basis of a performance feedback. We demonstrate that negative feedback elicits significantly stronger long-range correlations than positive feedback in wide brain areas. In the second study, we develop a high-order measure to investigate the long-range statistical structure of DNA sequences of prokaryotes. We test the hypothesis that prokaryotic DNA statistics are described by a model consisting in the superposition of a long-range correlated component and random noise. We show that the model fits the long-range statistics of several prokaryotic DNA sequences, and suggest a functional explanation of the result. The main aim of the third study is to investigate how neurons in the retina adapt to the wide-range, long-range correlated temporal statistics of natural scenes. Adaptation is modelled as the cascade of the two major mechanisms of adaptation in the retina - light adaptation and contrast adaptation - predicting the mean and the variance of the input from past input values.
By testing the model on time series of natural light intensities, we show that such cascade is indeed sufficient to adapt to the natural stimulus by removing most of its long-range correlations, while no linear filtering alone achieves the same goal. This result suggests that contrast adaptation has efficiently developed to exploit the long-range temporal correlations of natural scenes in an optimal way
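Long-range temporal correlations of the kind studied in this thesis are commonly quantified with detrended fluctuation analysis (DFA), whose scaling exponent is about 0.5 for uncorrelated noise and larger for long-range correlated series. The following is a generic sketch of DFA, not code from the thesis; the scale choices and the white-noise example are illustrative assumptions.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha.
    alpha ~ 0.5 for white noise; alpha > 0.5 signals long-range correlations."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segments = y[:n * s].reshape(n, s)     # non-overlapping windows of size s
        t = np.arange(s)
        rms = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)       # linear detrend in each window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # scaling exponent = slope of the log-log fluctuation curve
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha

rng = np.random.default_rng(0)
alpha = dfa(rng.standard_normal(4096), [16, 32, 64, 128, 256])
```

For an uncorrelated input as above, the estimated exponent should lie close to 0.5, whereas a long-range correlated signal such as those analysed in the EEG study would yield a markedly larger value.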
Gueguen, Lionel. "Extraction d'information et compression conjointes de Séries Temporelles d'Images Satellitaires." Phd thesis, Télécom ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003146.
Partouty, S. "Interprétation des séries temporelles altimétriques sur la calotte polaire Antarctique." Phd thesis, Université Paul Sabatier - Toulouse III, 2009. http://tel.archives-ouvertes.fr/tel-01018319.
Khiali, Lynda. "Fouille de données à partir de séries temporelles d’images satellites." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS046/document.
Nowadays, remotely sensed images constitute a rich source of information that can be leveraged to support several applications, including risk prevention, land use planning, land cover classification and many other tasks. In this thesis, Satellite Image Time Series (SITS) are analysed to depict the dynamics of natural and semi-natural habitats. The objective is to identify, organize and highlight the evolution patterns of these areas. We introduce an object-oriented method to analyse SITS that considers segmented satellite images. Firstly, we identify the evolution profiles of the objects in the time series. Then, we analyse these profiles using machine learning methods. To identify the evolution profiles, we explore all the objects to select a subset of objects (spatio-temporal entities/reference objects) to be tracked. The evolution of the selected spatio-temporal entities is described using evolution graphs. To analyse these evolution graphs, we introduce three contributions. The first contribution explores annual SITS. It analyses the evolution graphs using clustering algorithms to identify similar evolutions among the spatio-temporal entities. In the second contribution, we perform a multi-annual cross-site analysis: we consider several study areas described by multi-annual SITS and use clustering algorithms to identify intra- and inter-site similarities. In the third contribution, we introduce a semi-supervised method based on constrained clustering. We propose a method to select the constraints that guide the clustering and adapt the results to the user's needs. Our contributions were evaluated on several study areas. The experimental results allow us to pinpoint relevant landscape evolutions in each study site. We also identify the common evolutions among the different sites. In addition, the constraint selection method proposed for the constrained clustering allows us to identify relevant entities.
Thus, the results obtained using unsupervised learning were improved and adapted to meet the user's needs.
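Constrained clustering of the kind used in the third contribution is often illustrated with a COP-KMeans-style assignment step, in which must-link and cannot-link pairs restrict which cluster a point may join. The sketch below is a minimal generic illustration on assumed toy data, not the thesis's algorithm or constraint-selection method.

```python
import numpy as np

def violates(i, c, labels, must_link, cannot_link):
    """True if assigning point i to cluster c breaks a pairwise constraint."""
    for a, b in must_link:
        if i in (a, b):
            j = b if a == i else a
            if labels[j] not in (-1, c):   # must-link partner already elsewhere
                return True
    for a, b in cannot_link:
        if i in (a, b):
            j = b if a == i else a
            if labels[j] == c:             # cannot-link partner already in c
                return True
    return False

def constrained_assign(X, centers, must_link=(), cannot_link=()):
    """Greedy COP-KMeans-style step: each point takes its nearest
    center that violates no constraint with already-assigned points."""
    labels = -np.ones(len(X), dtype=int)
    for i, x in enumerate(X):
        for c in np.argsort(((centers - x) ** 2).sum(axis=1)):
            if not violates(i, int(c), labels, must_link, cannot_link):
                labels[i] = int(c)
                break
    return labels

X = np.array([[0.0], [0.1], [5.0], [5.1]])
centers = np.array([[0.0], [5.0]])
# the cannot-link pair (0, 1) forces points 0 and 1 into different clusters,
# even though both lie nearest to the first center
labels = constrained_assign(X, centers, cannot_link=[(0, 1)])
```

A full constrained k-means would alternate this assignment step with center updates; the point here is only how user-supplied constraints override pure distance-based assignment.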
Parouty, Soazig. "Interprétation des séries temporelles altimétriques sur la calotte polaire Antarctique." Toulouse 3, 2009. http://thesesups.ups-tlse.fr/900/.
This work aims at improving our understanding of the altimetric time series acquired over the Antarctic Ice Sheet. Dual-frequency data (S band, 3.2 GHz, and Ku band, 13.6 GHz) from the altimeter onboard the ENVISAT satellite are used over a five-year period from January 2003 until December 2007. These data cover around 80% of the surface of the Antarctic continent, up to 82°S. Having data at two different frequencies is valuable when it comes to better estimating the altimeter's sensitivity to snow surface property changes. Over the Antarctic ice sheet, the snow surface changes in space and time, being affected by meteorological conditions close to the surface, and especially winds. The altimetric wave penetrates more or less deeply beneath the surface, depending on snow surface and subsurface properties. As a result, when the wave comes back to the satellite, the recorded signal, named the waveform, is more or less distorted. The accuracy of the ice sheet topographic changes computed with satellite altimetric techniques depends on our knowledge of the processes inducing this distortion. The purpose of the present work is to better understand the effect of changing wind conditions on altimetric data; winds in Antarctica are indeed famous for their strength and their impact on the snow surface state. First, the spatial and temporal variability of the altimetric data on the one hand, and of wind speed reanalysis fields (from the ERA-Interim, NCEP/NCAR and NCEP/DOE projects) on the other hand, are studied. We estimate typical spatial and temporal length scales for all datasets. As a result, we are able to smooth the data so that all datasets share the same characteristic spatial and temporal length scales. Furthermore, we note that our time series are well described by an annual signal. This annual cycle shows that whereas wind speed is always maximum in austral winter, altimetric seasonal cycles have very different behaviours depending on the location.
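The annual signal mentioned in this abstract is typically extracted by least-squares fitting of a yearly harmonic (mean plus cosine and sine terms). The following is a generic sketch on synthetic data, not the thesis's processing chain; the sampling and amplitudes are illustrative assumptions.

```python
import numpy as np

def fit_annual_cycle(t_days, y):
    """Least-squares fit of y ~ a + b*cos(w t) + c*sin(w t) with a one-year
    period; returns the coefficients and the annual-cycle amplitude."""
    w = 2 * np.pi / 365.25
    A = np.column_stack([np.ones_like(t_days),
                         np.cos(w * t_days),
                         np.sin(w * t_days)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, float(np.hypot(coef[1], coef[2]))

# synthetic 5-year series sampled every 10 days: mean 2, annual amplitude 3
t = np.arange(0.0, 1826.0, 10.0)
y = 2.0 + 3.0 * np.cos(2 * np.pi * t / 365.25)
coef, amplitude = fit_annual_cycle(t, y)
```

The recovered amplitude and phase (from `coef[1]` and `coef[2]`) are what allow seasonal cycles of different datasets, such as altimetric heights and wind speeds, to be compared location by location.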
Goldfarb, Bernard. "Etude structurelle des séries temporelles : les moyens de l'analyse spectrale." Paris 9, 1997. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1997PA090007.
Esstafa, Youssef. "Modèles de séries temporelles à mémoire longue avec innovations dépendantes." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD021.
We first consider, in this thesis, the problem of statistical analysis of FARIMA (Fractionally AutoRegressive Integrated Moving-Average) models endowed with uncorrelated but non-independent error terms. These models are called weak FARIMA models and can be used to fit long-memory processes with general nonlinear dynamics. Relaxing the independence assumption on the noise, a standard assumption usually imposed in the literature, allows weak FARIMA models to cover a large class of nonlinear long-memory processes. The weak FARIMA models are dense in the set of purely non-deterministic stationary processes; the class of these models encompasses that of FARIMA processes with independent and identically distributed (iid) noise. We thereafter call strong FARIMA models those in which the error term is assumed to be an iid innovation. We establish procedures for estimating and validating weak FARIMA models. We show, under weak assumptions on the noise, that the least squares estimator of the parameters of weak FARIMA(p,d,q) models is strongly consistent and asymptotically normal. The asymptotic variance matrix of the least squares estimator of weak FARIMA(p,d,q) models has the "sandwich" form. This matrix can be very different from the asymptotic variance obtained in the strong case (i.e. in the case where the noise is assumed to be iid). We propose, by two different methods, a convergent estimator of this matrix. An alternative method based on a self-normalization approach is also proposed to construct confidence intervals for the parameters of weak FARIMA(p,d,q) models. We then pay particular attention to the problem of validation of weak FARIMA(p,d,q) models. We show that the residual autocorrelations have a normal asymptotic distribution with a covariance matrix different from the one obtained in the strong FARIMA case.
This allows us to deduce the exact asymptotic distribution of portmanteau statistics and thus to propose modified versions of portmanteau tests. It is well known that the asymptotic distribution of portmanteau tests is correctly approximated by a chi-squared distribution when the error term is assumed to be iid. In the general case, we show that this asymptotic distribution is a mixture of chi-squared distributions, which can be very different from the usual chi-squared approximation of the strong case. We adopt the same self-normalization approach used for constructing the confidence intervals of weak FARIMA model parameters to test the adequacy of weak FARIMA(p,d,q) models. This method has the advantage of avoiding the problem of estimating the asymptotic variance matrix of the joint vector of the least squares estimator and the empirical autocovariances of the noise. Secondly, we deal in this thesis with the problem of estimating autoregressive models of order 1 endowed with fractional Gaussian noise when the Hurst parameter H is assumed to be known. More precisely, we study the convergence and the asymptotic normality of the generalized least squares estimator of the autoregressive parameter of these models.
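The fractional-differencing operator (1-B)^d at the heart of FARIMA models has a simple recursive binomial expansion, which is how the long-memory filter is applied in practice. The sketch below is a generic illustration of that operator, not code from the thesis; note that for d = 1 it reduces to the ordinary first difference.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of the (1 - B)^d expansion:
    w_0 = 1 and w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the fractional-differencing filter to a series x,
    truncating the expansion at the available history."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

w = frac_diff_weights(0.4, 4)                         # long-memory case, d = 0.4
diffed = frac_diff(np.array([1.0, 2.0, 4.0]), 1.0)    # d = 1: first differences
```

Fitting a FARIMA(p,d,q) model then amounts to combining this filter with an ordinary ARMA(p,q) recursion; the thesis's contribution concerns the asymptotics of the least squares estimator when the resulting noise is merely uncorrelated rather than iid.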
Gueguen, Lionel. "Extraction d'information et compression conjointes des séries temporelles d'images satellitaires." Paris, ENST, 2007. http://www.theses.fr/2007ENST0025.
Nowadays, new data which contain rich information can be produced: Satellite Image Time Series, which are observations of the evolution of the Earth's surface. These series constitute huge data volumes and contain complex types of information. For example, numerous spatio-temporal events, such as harvests or urban area expansion, can be observed in these series and serve for remote surveillance. In this framework, this thesis deals with automatic information extraction from Satellite Image Time Series, in order to help the comprehension of spatio-temporal events, and with their compression, in order to reduce storage space. Thus, this work aims to provide methodologies which jointly extract information from and compress these series. This joint processing provides a compact representation which contains an index of the informational content. First, the concept of joint extraction and compression is described, where information extraction is cast as a lossy compression of the information. Secondly, two methodologies are developed based on this concept. The first one provides an informational content index based on the Information Bottleneck principle. The second one provides a code, or compact representation, which integrates an informational content index. Finally, both methodologies are validated and compared on synthetic data, then put into practice successfully on Satellite Image Time Series.
Hili, Ouagnina. "Contribution à l'estimation des modèles de séries temporelles non linéaires." Université Louis Pasteur (Strasbourg) (1971-2008), 1995. http://www.theses.fr/1995STR13169.