Dissertations on the topic "Classification de séries temporelles biomédicales"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 49 dissertations for research on the topic "Classification de séries temporelles biomédicales".
Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are provided in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Khessiba, Souhir. „Stratégies d’optimisation des hyper-paramètres de réseaux de neurones appliqués aux signaux temporels biomédicaux“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAE003.
This thesis focuses on optimizing the hyperparameters of convolutional neural networks (CNNs) in the medical domain, proposing an innovative approach to improve the performance of decision-making models in the biomedical field. Through the use of a hybrid approach, GS-TPE, to effectively adjust the hyperparameters of complex neural network models, this research has demonstrated significant improvements in the classification of temporal biomedical signals, such as vigilance states, from physiological signals such as the electroencephalogram (EEG). Furthermore, by introducing a new DNN architecture, STGCN, for the classification of gestures associated with pathologies such as knee osteoarthritis and Parkinson's disease from video gait analysis, this work offers new perspectives for enhancing medical diagnosis and management through advances in artificial intelligence.
Bailly, Adeline. „Classification de séries temporelles avec applications en télédétection“. Thesis, Rennes 2, 2018. http://www.theses.fr/2018REN20021/document.
Time Series Classification (TSC) has received an important amount of interest over the past years due to many real-life applications. In this PhD, we create new algorithms for TSC, with a particular emphasis on Remote Sensing (RS) time series data. We first propose the Dense Bag-of-Temporal-SIFT-Words (D-BoTSW) method that uses dense local features based on SIFT features for 1D data. Extensive experiments exhibit that D-BoTSW significantly outperforms nearly all compared standalone baseline classifiers. Then, we propose an enhancement of the Learning Time Series Shapelets (LTS) algorithm called Adversarially-Built Shapelets (ABS), based on the introduction of adversarial time series during the learning process. Adversarial time series provide an additional regularization benefit for the shapelets, and experiments show a performance improvement between the baseline and our proposed framework. Due to the lack of available RS time series datasets, we also present and experiment on two remote sensing time series datasets called TiSeLaC and Brazilian-Amazon.
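Shapelet methods such as the LTS variant extended in this thesis rest on one primitive: the distance between a short candidate pattern and its best-matching subsequence in a longer series. A minimal sketch of that primitive (the function name is illustrative, not from the thesis):

```python
def shapelet_distance(shapelet, series):
    """Minimum Euclidean distance between `shapelet` and any
    equally long subsequence of `series` (best-match distance)."""
    m, n = len(shapelet), len(series)
    assert m <= n, "shapelet must not be longer than the series"
    best = float("inf")
    for start in range(n - m + 1):
        # Euclidean distance to the subsequence starting at `start`
        d = sum((series[start + i] - shapelet[i]) ** 2 for i in range(m)) ** 0.5
        best = min(best, d)
    return best
```

In shapelet classifiers, such distances to a small set of learned patterns become the feature vector fed to a standard classifier.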
Jebreen, Kamel. „Modèles graphiques pour la classification et les séries temporelles“. Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0248/document.
First, in this dissertation, we show that Bayesian network classifiers are very accurate models when compared to other classical machine learning methods. Discretising input variables often increases the performance of Bayesian network classifiers, as does a feature selection procedure. Different types of Bayesian networks may be used for supervised classification. We combine such approaches together with feature selection and discretisation to show that such a combination gives rise to powerful classifiers. A large choice of data sets from the UCI machine learning repository is used in our experiments, and the application to epilepsy type prediction based on PET scan data confirms the efficiency of our approach. Second, in this dissertation we also consider modelling the interaction between a set of variables in the context of time series and high dimension. We suggest two approaches: the first is similar to the neighbourhood lasso, where the lasso model is replaced by Support Vector Machines (SVMs); the second is a restricted Bayesian network for time series. We demonstrate the efficiency of our approaches through simulations using linear and non-linear data sets and a mixture of both.
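The combination described above — discretising continuous inputs, then fitting a Bayesian network classifier — can be illustrated with its simplest member, a naive Bayes classifier over binned features. A hedged sketch (class and function names are invented for illustration; the thesis uses richer network structures than the naive independence assumption shown here):

```python
from collections import defaultdict
import math

def discretise(x, edges):
    """Map a continuous value to a bin index given sorted cut points."""
    return sum(x > e for e in edges)

class DiscreteNB:
    """Naive Bayes on discretised features, with Laplace smoothing."""
    def fit(self, X, y, edges):
        self.edges = edges                      # one cut-point list per feature
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.counts = defaultdict(lambda: defaultdict(int))
        for row, c in zip(X, y):
            for j, x in enumerate(row):
                self.counts[(c, j)][discretise(x, edges[j])] += 1
        return self

    def predict(self, row):
        def logpost(c):
            lp = math.log(self.prior[c])
            for j, x in enumerate(row):
                b = discretise(x, self.edges[j])
                n_cj = sum(self.counts[(c, j)].values())
                k = len(self.edges[j]) + 1      # number of bins
                lp += math.log((self.counts[(c, j)][b] + 1) / (n_cj + k))
            return lp
        return max(self.classes, key=logpost)
```

Feature selection would simply restrict which columns of `X` enter `fit`.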
Ziat, Ali Yazid. „Apprentissage de représentation pour la prédiction et la classification de séries temporelles“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066324/document.
This thesis deals with the development of time series analysis methods. Our contributions focus on two tasks: time series forecasting and classification. Our first contribution presents a method for the prediction and completion of multivariate and relational time series. The aim is to be able to simultaneously predict the evolution of a group of time series connected to each other according to a graph, as well as to complete the missing values in these series (which may correspond, for example, to a failure of a sensor during a given time interval). We propose to use representation learning techniques to forecast the evolution of the series while completing the missing values and taking into account the relationships that may exist between them. Extensions of this model are proposed and described: first in the context of the prediction of heterogeneous time series, and then in the case of the prediction of time series with an expressed uncertainty. A prediction model for spatio-temporal series is then proposed, in which the relations between the different series can be expressed more generally, and where these relations can be learned. Finally, we are interested in the classification of time series. A joint model of metric learning and time series classification is proposed and an experimental comparison is conducted.
Dilmi, Mohamed Djallel. „Méthodes de classification des séries temporelles : application à un réseau de pluviomètres“. Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS087.pdf.
The impact of climate change on the temporal evolution of precipitation, as well as the impact of the Parisian heat island on the spatial distribution of precipitation, motivates studying the variability of the water cycle at a small scale over Île-de-France. One way to analyse this variability using the data from a rain gauge network is to perform a clustering on the time series measured by this network. In this thesis, we have explored two approaches for time series clustering. For the first approach, based on the description of series by characteristics, an algorithm for selecting characteristics based on genetic algorithms and topological maps has been proposed. For the second approach, based on shape comparison, a measure of dissimilarity (iterative downscaling time warping) was developed to compare two rainfall time series. The limits of the two approaches were then discussed, followed by the proposition of a mixed approach that combines the advantages of each. The approach was first applied to the evaluation of the spatial variability of precipitation over Île-de-France. For the evaluation of the temporal variability of precipitation, a clustering of the precipitation events observed by one station was carried out and then extended to the whole rain gauge network. The application to the historical series of Paris-Montsouris (1873-2015) makes it possible to automatically discriminate years that are "remarkable" from a meteorological point of view.
Rhéaume, François. „Une méthode de machine à état liquide pour la classification de séries temporelles“. Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/28815/28815.pdf.
There are a number of reasons that motivate the interest in computational neuroscience for engineering applications of artificial intelligence. Among them is the speed at which the domain is growing and evolving, promising further capabilities for artificial intelligent systems. In this thesis, a method that exploits the recent advances in computational neuroscience is presented: the liquid state machine. A liquid state machine is a biologically inspired computational model that aims at learning on input stimuli. The model constitutes a promising temporal pattern recognition tool and has been shown to perform very well in many applications. In particular, temporal pattern recognition is a problem of interest in military surveillance applications such as automatic target recognition. Until now, most liquid state machine implementations for spatiotemporal pattern recognition have remained fairly similar to the original model. From an engineering perspective, the challenge is to adapt liquid state machines to increase their ability to solve practical temporal pattern recognition problems. Solutions are proposed. The first concentrates on the sampling of the liquid state. Here, a method that exploits frequency features of neurons is defined. The combination of different liquid state vectors is also discussed. Secondly, a method for training the liquid is developed. The method implements synaptic spike-timing-dependent plasticity to shape the liquid. A new class-conditional approach is proposed, where different networks of neurons are trained exclusively on particular classes of input data. For the suggested liquid sampling methods and the liquid training method, comparative tests were conducted with both simulated and real data sets from different application areas. The tests reveal that the methods outperform the conventional liquid state machine approach.
The methods are even more promising in that the results are obtained without optimization of many internal parameters for the different data sets. Finally, measures of the liquid state are investigated for predicting the performance of the liquid state machine.
Petitjean, François. „Dynamic time warping : apports théoriques pour l'analyse de données temporelles : application à la classification de séries temporelles d'images satellites“. Thesis, Strasbourg, 2012. http://www.theses.fr/2012STRAD023.
Satellite Image Time Series are becoming increasingly available and will continue to do so in the coming years thanks to the launch of space missions which aim at providing a coverage of the Earth every few days with high spatial resolution (ESA's Sentinel program). In the case of optical imagery, it will be possible to produce land use and cover change maps with detailed nomenclatures. However, due to meteorological phenomena, such as clouds, these time series will become irregular in terms of temporal sampling. In order to consistently handle the huge amount of information that will be produced (for instance, Sentinel-2 will cover the entire Earth's surface every five days, with 10m to 60m spatial resolution and 13 spectral bands), new methods have to be developed. This Ph.D. thesis focuses on the "Dynamic Time Warping" similarity measure, which is able to make the most of the temporal structure of the data, in order to provide an efficient and relevant analysis of the remotely observed phenomena.
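The Dynamic Time Warping measure at the heart of this thesis is a classic dynamic program over all monotone alignments of two series, which makes it tolerant of the irregular temporal sampling described above. A minimal reference implementation for 1-D series (without the windowing constraints often used in practice):

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D series,
    with absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Note how repeating samples (a coarser or finer temporal sampling of the same signal) leaves the distance unchanged, which Euclidean distance cannot do.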
Benkabou, Seif-Eddine. „Détection d’anomalies dans les séries temporelles : application aux masses de données sur les pneumatiques“. Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1046/document.
Anomaly detection is a crucial task that has attracted the interest of several research studies in the machine learning and data mining communities. The complexity of this task depends on the nature of the data, the availability of their labeling and the application framework on which they depend. As part of this thesis, we address this problem for complex data and particularly for uni- and multivariate time series. The term "anomaly" can refer to an observation that deviates from other observations so as to arouse suspicion that it was generated by a different generation process. More generally, the underlying problem (also called novelty detection or outlier detection) aims to identify, in a set of data, those which differ significantly from the others, which do not conform to an "expected behavior" (which could be defined or learned), and which indicate a different mechanism. The "abnormal" patterns thus detected often carry critical information. We focus specifically on two particular aspects of anomaly detection from time series in an unsupervised fashion. The first is global and consists in detecting abnormal time series compared to an entire database, whereas the second is called contextual and aims to detect, locally, the abnormal points with respect to the global structure of the relevant time series. To this end, we propose optimization approaches based on weighted clustering and time warping for global detection, and matrix-based modeling for contextual detection. Finally, we present several empirical studies on public data to validate the proposed approaches and compare them with other known approaches in the literature. In addition, an experimental validation is provided on a real problem, concerning the detection of outlier price time series in the tyre data, to meet the needs expressed by LIZEO, the industrial partner of this thesis.
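The "global" setting described above — flagging whole series that deviate from the rest of a database — can be sketched with a much simpler stand-in than the weighted clustering and time warping actually proposed in the thesis: distance to the pointwise mean series, thresholded at a few standard deviations. Names and the threshold rule are illustrative only:

```python
def global_outliers(db, k=2.0):
    """Indices of series whose Euclidean distance to the pointwise-mean
    series exceeds mean + k * std of all such distances.
    A crude stand-in for the clustering/warping criterion of the thesis."""
    n = len(db)
    length = len(db[0])
    centroid = [sum(s[i] for s in db) / n for i in range(length)]
    dists = [sum((x - c) ** 2 for x, c in zip(s, centroid)) ** 0.5 for s in db]
    mu = sum(dists) / n
    sd = (sum((d - mu) ** 2 for d in dists) / n) ** 0.5
    return [i for i, d in enumerate(dists) if d > mu + k * sd]
```

Replacing the Euclidean distance with a warping-aware dissimilarity, and the mean with cluster centroids, recovers the general shape of the global approach.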
Varasteh, Yazdi Saeed. „Représentations parcimonieuses et apprentissage de dictionnaires pour la classification et le clustering de séries temporelles“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM062/document.
Learning dictionaries for the sparse representation of time series is an important issue for extracting latent temporal features, revealing salient primitives and sparsely representing complex temporal data. This thesis addresses the sparse coding and dictionary learning problem for time series classification and clustering under time warp. For that, we propose a time-warp-invariant sparse coding and dictionary learning framework where both input samples and atoms define time series of different lengths that involve varying delays. In the first part, we formalize an L0 sparse coding problem and propose a time-warp-invariant orthogonal matching pursuit based on a new cosine maximization time warp operator. For the dictionary learning stage, a non-linear time-warp-invariant kSVD (TWI-kSVD) is proposed. Thanks to a rotation transformation between each atom and its sibling atoms, a singular value decomposition is used to jointly approximate the coefficients and update the dictionary, similarly to the standard kSVD. In the second part, a time-warp-invariant dictionary learning for time series clustering is formalized and a gradient descent solution is proposed. The proposed methods are compared with major shift-invariant, convolved and kernel dictionary learning methods on several public and real temporal data sets. The conducted experiments show the potential of the proposed frameworks to efficiently sparsely represent, classify and cluster time series under time warp.
Cano, Emmanuelle. „Cartographie des formations végétales naturelles à l’échelle régionale par classification de séries temporelles d’images satellitaires“. Thesis, Rennes 2, 2016. http://www.theses.fr/2016REN20024/document.
Forest cover mapping is an essential tool for forest management. Detailed maps, characterizing forest types at a regional scale, are needed. This need can be fulfilled by medium spatial resolution optical satellite image time series. This thesis aims at improving the supervised classification procedure applied to a time series, to produce maps detailing forest types at a regional scale. To meet this goal, the improvement of the results obtained by the classification of a MODIS time series, performed with a stratification of the study area, was assessed. An improvement of classification accuracy due to stratification built by object-based image analysis was observed, with an increase of the Kappa index value and an increase of the reject fraction rate. These two phenomena are correlated to the classified vegetation area. A minimal and a maximal value were identified, respectively related to a too high reject fraction rate and a neutral stratification impact. We carried out a second study, aiming at assessing the influence of the medium spatial resolution time series organization and of the algorithm on classification quality. Three distinct classification algorithms (maximum likelihood, Support Vector Machine, Random Forest) and several time series were studied. A significant improvement due to temporal and radiometric effects and the superiority of Random Forest were highlighted by the results. Thematic confusions and low user's and producer's accuracies were still observed for several classes. We finally studied the improvement brought by a spatial resolution change for the images composing the time series to discriminate classes of mixed forest species. The conclusions of the former study (MODIS images) were confirmed with DEIMOS images. We can conclude that these effects are independent from the input data and their spatial resolution. A significant improvement was also observed, with an increase of the Kappa index value from 0.60 with MODIS data to 0.72 with DEIMOS data, due to a decrease of the mixed pixels rate.
Bellet, Valentine. „Intelligence artificielle appliquée aux séries temporelles d'images satellites pour la surveillance des écosystèmes“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES013.
In the context of climate change, ecosystem monitoring is a crucial task. It allows us to better understand the changes that affect ecosystems and also enables decision-making to preserve them for current and future generations. Land use and land cover (LULC) maps are an essential tool in ecosystem monitoring, providing information on different types of physical cover of the Earth's surface (e.g. forests, grasslands, croplands). Nowadays, an increasing number of satellite missions generate huge amounts of free and open data. In particular, satellite image time series (SITS), such as the ones produced by Sentinel-2, offer high temporal, spectral and spatial resolutions and provide relevant information about vegetation dynamics. Combined with machine learning algorithms, they allow the production of frequent and accurate LULC maps. This thesis focuses on the development of pixel-based supervised classification algorithms for the production of LULC maps at large scale. Four main challenges arise in an operational context. Firstly, unprecedented amounts of data are available and the algorithms need to be adapted accordingly. Secondly, with the improvement in spatial, spectral and temporal resolutions, the algorithms should be able to take into account correlations between the spectro-temporal features to extract meaningful representations for the purpose of classification. Thirdly, over wide geographical coverage, the problem of non-stationarity of the data arises, so the algorithms should be able to take this spatial variability into account. Fourthly, because of the different satellite orbits and meteorological conditions, the acquisition times are irregular and unaligned between pixels; thus, the algorithms should be able to work with irregular and unaligned SITS. This work has been divided into two main parts. The first PhD contribution is the development of stochastic variational Gaussian Processes (SVGP) on massive data sets.
The proposed Gaussian Process (GP) model can be trained with millions of samples, compared to a few thousand for traditional GP methods. The spatial and spectro-temporal structure of the data is taken into account thanks to the inclusion of the spatial information in bespoke composite covariance functions. This development makes it possible to take the spatial information into account and thus to be robust to the spatial variability of the data. However, the time series are linearly resampled independently from the classification. Therefore, the second PhD contribution is the development of end-to-end learning, combining a time- and space-informed kernel interpolator with the previous SVGP classifier. The interpolator embeds irregular and unaligned SITS into a fixed and reduced-size latent representation. The obtained latent representation is given to the SVGP classifier and all the parameters are jointly optimized w.r.t. the classification task. Experiments were run with Sentinel-2 SITS of the full year 2018 over an area of 200 000 km^2 (about 2 billion pixels) in the south of France (27 MGRS tiles), which is representative of an operational setting. Results show that both methods (i.e. the SVGP classifier with linearly interpolated time series, and the spatial kernel interpolator combined with the SVGP classifier) outperform the method used in current operational systems (i.e. Random Forest with linearly interpolated time series using spatial stratification).
Plaud, Angéline. „Classification ensembliste des séries temporelles multivariées basée sur les M-histogrammes et une approche multi-vues“. Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC047.
Recording measurements about various phenomena and exchanging information about them participates in the emergence of a type of data called time series. Today, huge quantities of such data are collected. A time series is characterized by numerous points, and interactions can be observed between those points. A time series is multivariate when multiple measures are recorded at each timestamp, meaning a point is, in fact, a vector of values. Even if univariate time series, with one value at each timestamp, are well studied and defined, this is not the case for multivariate ones, whose analysis is still challenging. Indeed, it is not possible to directly apply classification techniques developed for univariate data to the multivariate case. For the latter, we have to take into consideration the interactions not only between points but also between dimensions. Moreover, in industrial cases, as at the Michelin company, the data are big and the series also differ in length, in terms of the number of points composing them. This brings a new complexity to deal with during the analysis. None of the current techniques for classifying multivariate time series satisfies the following criteria: a low computational complexity, the ability to deal with variation in the number of points, and good classification results. In our approach, we explored a new tool, which had not been applied before to MTS classification, called the M-histogram. An M-histogram is a visualization tool using M axes to project the density function underlying the data. We have employed it here to produce a new representation of the data that allows us to bring out the interactions between dimensions. Searching for links between dimensions corresponds in particular to a branch of learning techniques called multi-view learning. A view is an extraction of the dimensions of a dataset that are of the same nature or type.
The goal is then to exploit the links between the dimensions inside each view in order to classify all the data, using an ensemble classifier. We therefore propose a multi-view ensemble model to classify multivariate time series. The model creates multiple M-histograms from different groups of dimensions. Each view then yields a prediction, and these predictions are aggregated to obtain a final prediction. In this thesis, we show that the proposed model allows a fast classification of multivariate time series of different sizes. In particular, we applied it to a Michelin use case.
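The aggregation step of such a multi-view ensemble — one class prediction per view, combined into a final label — can be sketched as a plain majority vote. The per-view M-histogram classifiers themselves are not shown, and tie-breaking by smallest label is an arbitrary choice for determinism:

```python
from collections import Counter

def majority_vote(view_predictions):
    """Aggregate per-view class predictions into one final label.
    Ties are resolved by the smallest label, for determinism."""
    counts = Counter(view_predictions)
    top = max(counts.values())
    return min(label for label, n in counts.items() if n == top)
```

Weighted votes (e.g. by each view's validation accuracy) are a common refinement of this scheme.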
Régis, Sébastien. „Segmentation, classification et fusion de séries temporelles multi-sources : application à des signaux dans un bio-procédé“. Antilles-Guyane, 2004. http://www.theses.fr/2004AGUY0121.
This PhD is devoted to knowledge discovery using signal analysis and classification tools on time series. The application is the detection of new, known or abnormal physiological states in an alcoholic bioprocess. Analysis, classification and fusion of data from time series are carried out. First, the wavelet transform and the Hölder exponent (linked to the singularities of the time series) are used to detect phenomena and physiological states of the system. A new approach combining the wavelet transform and differential evolutionary methods is proposed and gives better results than other classical evaluation methods of this Hölder exponent. Then the LAMDA classification method and its tools are presented. The aggregation operators of LAMDA are presented and a new operator is proposed. A comparison with other classifiers shows that LAMDA gives better results for this application. The relevance of data sources is also studied: a method based on evidence theory is proposed, and experimental results show that the relevance evaluations are quite interesting. This approach, using signal processing, classification and evidence theory, enables the analysis and characterisation of biological systems without using a deterministic model. The combination of these tools thus makes it possible to discover new knowledge and to confirm the knowledge of the expert, mainly by using time series describing biological systems.
Bergomi, Mattia Giuseppe. „Dynamical and topological tools for (modern) music analysis“. Electronic Thesis or Diss., Paris 6, 2015. http://www.theses.fr/2015PA066465.
In this work, we suggest a collection of novel models for the representation of music. These models are endowed with two main features. First, they originate from a topological and geometrical inspiration; second, their low dimensionality allows building simple and informative visualisations. We tackle the problem of music representation following three non-orthogonal directions. First, we propose an interpretation of counterpoint as a multivariate time series of partial permutation matrices, whose observations are characterised by a degree of complexity. After providing both a static and a dynamic representation of counterpoint, voice leadings are reinterpreted as a special class of partial singular braids, and their main features are visualised. Thereafter, we give a topological interpretation of the Tonnetz (a graph commonly used in computational musicology), whose vertices are deformed by both a harmonic and a consonance-oriented function. The shapes derived from these deformations are classified using the formalism of persistent homology. This novel representation of music is then evaluated on a collection of heterogeneous musical datasets. Finally, a combination of the two approaches is proposed. A model at the crossroads between the signal and symbolic analysis of music uses multiple sequence alignment to provide an encompassing, novel viewpoint on the transfer of musical inspiration among compositions belonging to different artists, genres and times. Music is then represented as a time series of topological fingerprints, allowing the comparison of pairs of time-varying shapes in both topological and musical terms.
Do, Cao Tri. „Apprentissage de métrique temporelle multi-modale et multi-échelle pour la classification robuste de séries temporelles par plus proches voisins“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM028/document.
The definition of a metric between time series is inherent to several data analysis and mining tasks, including clustering, classification and forecasting. Time series naturally present several characteristics, called modalities, covering their amplitude, behavior or frequential spectrum, that may be expressed with varying delays and at different temporal granularities and localizations, exhibited globally or locally. Combining several modalities at multiple temporal scales to learn a holistic metric is a key challenge for many real temporal data applications. This PhD proposes a Multi-modal and Multi-scale Temporal Metric Learning (M2TML) approach for robust time series nearest neighbors classification. The solution is based on the embedding of pairs of time series into a pairwise dissimilarity space, in which a large-margin optimization process is performed to learn the metric. The M2TML solution is proposed for both linear and non-linear contexts, and is studied for different regularizers. A sparse and interpretable variant of the solution shows the ability of the learned temporal metric to accurately localize discriminative modalities as well as their temporal scales. A wide range of 30 public and challenging datasets, encompassing images, traces and ECG data, that are linearly or non-linearly separable, is used to show the efficiency and the potential of M2TML for time series nearest neighbors classification.
Leverger, Colin. „Investigation of a framework for seasonal time series forecasting“. Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S033.
To deploy web applications, web servers are paramount. If there are too few of them, application performance can quickly deteriorate; if there are too many, resources are wasted and costs increase. In this context, engineers use capacity-planning tools to follow server performance, collect time series data and anticipate future needs, so the need for reliable forecasts is clear. Data generated by the infrastructure often exhibit seasonality: the activity cycle followed by the infrastructure is determined by seasonal cycles (for example, users' daily rhythms). This thesis introduces a framework for seasonal time series forecasting. The framework combines two machine learning tasks (clustering and classification) and aims to produce reliable mid-term forecasts with a limited number of parameters. Three instantiations of the framework are presented: a baseline, a deterministic one and a probabilistic one. The baseline combines the K-means clustering algorithm with Markov models. The deterministic version combines several clustering algorithms (K-means, K-shape, GAK and MODL) with several classifiers (naive Bayes, decision trees, random forests and logistic regression). The probabilistic version relies on coclustering to create probabilistic time series grids, which describe the data in an unsupervised way. The performance of the various implementations is compared with several state-of-the-art models, including autoregressive models, ARIMA and SARIMA, Holt-Winters, and Prophet for the probabilistic paradigm. The results of the baseline are encouraging and confirm the interest of the proposed framework. Good results are observed for the deterministic implementation, and correct results for the probabilistic version. An Orange use case is studied, and the interest and limits of the methodology are discussed.
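A minimal sketch of the baseline instantiation described above (K-means on daily profiles, then a Markov chain over the day-cluster sequence). The toy data, function names and deterministic initialisation are invented for illustration, not taken from the thesis:

```python
import numpy as np

def fit_baseline(days, k, iters=20):
    """Cluster daily profiles with a tiny Lloyd's K-means, then fit a
    first-order Markov chain on the resulting day-cluster sequence."""
    centers = days[:k].copy()          # deterministic init, fine for a sketch
    for _ in range(iters):
        labels = np.argmin(((days[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = days[labels == j].mean(axis=0)
    trans = np.full((k, k), 1e-9)      # transition counts, tiny smoothing
    for a, b in zip(labels[:-1], labels[1:]):
        trans[a, b] += 1.0
    trans /= trans.sum(axis=1, keepdims=True)
    return centers, labels, trans

def forecast_next_day(centers, trans, today_label):
    """Mid-term forecast: profile of the most probable next cluster."""
    return centers[np.argmax(trans[today_label])]

# Toy seasonal history: days alternate between a high and a low profile.
rng = np.random.default_rng(0)
high, low = np.full(24, 5.0), np.full(24, 1.0)
days = np.stack([(high if i % 2 == 0 else low) + 0.01 * rng.standard_normal(24)
                 for i in range(14)])
centers, labels, trans = fit_baseline(days, k=2)
tomorrow = forecast_next_day(centers, trans, labels[-1])
```

On this alternating toy history the learned chain predicts the opposite cluster for tomorrow, so the forecast profile is close to the high curve.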
Mure, Simon. „Classification non supervisée de données spatio-temporelles multidimensionnelles : Applications à l’imagerie“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI130/document.
Due to the dramatic increase of longitudinal acquisitions in the past decades, such as video sequences, global positioning system (GPS) tracking or medical follow-up, many applications for time series data mining have been developed. Unsupervised time series data mining has thus become highly relevant, with the aim of automatically detecting and identifying similar temporal patterns between time series. In this work, we propose a new spatio-temporal filtering scheme based on the mean-shift procedure, a state-of-the-art approach in the field of image processing, which clusters multivariate spatio-temporal data. We also propose a hierarchical time series clustering algorithm based on the dynamic time warping measure that identifies similar but asynchronous temporal patterns. Our choices have been motivated by the need to analyse magnetic resonance images acquired from people affected by multiple sclerosis. The genetic and environmental factors triggering and governing the disease's evolution, as well as the occurrence and evolution of individual lesions, are still mostly unknown and under intense investigation. There is therefore a strong need to develop new methods allowing automatic extraction and quantification of lesion characteristics. This motivated our work on time series clustering methods, which are not yet widely used in image processing and allow image sequences to be processed without prior knowledge of the final results.
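The role dynamic time warping plays in matching similar but asynchronous patterns can be seen in a few lines. This is the textbook DTW recursion, not the thesis's own implementation:

```python
import numpy as np

def dtw(x, y):
    """Dynamic time warping distance via the classic O(n*m) dynamic program."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# The same bump, shifted in time: the Euclidean distance is large while the
# DTW distance is zero, which is why a DTW-based hierarchy can group
# asynchronous temporal patterns (e.g. lesion dynamics) together.
a = np.array([0, 0, 1, 2, 1, 0, 0, 0, 0], dtype=float)
b = np.array([0, 0, 0, 0, 0, 1, 2, 1, 0], dtype=float)
```

A pairwise DTW distance matrix computed this way can then feed any standard agglomerative (hierarchical) clustering routine.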
Nicolae, Maria-Irina. „Learning similarities for linear classification : theoretical foundations and algorithms“. Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES062/document.
The notion of metric plays a key role in machine learning problems such as classification, clustering and ranking. Learning metrics from training data in order to adapt them to the task at hand has attracted growing interest in the past years. This research field, known as metric learning, usually aims at finding the best parameters for a given metric under some constraints from the data. The learned metric is used in a machine learning algorithm in the hope of improving performance. Most metric learning algorithms focus on learning the parameters of Mahalanobis distances for feature vectors, and current state-of-the-art methods scale well for datasets of significant size. On the other hand, the more complex topic of multivariate time series has received only limited attention, despite the omnipresence of this type of data in applications. An important part of the research on time series is based on dynamic time warping (DTW), which computes the optimal alignment between two time series. The current state of metric learning suffers from some significant limitations which we aim to address in this thesis. The most important one is probably the lack of theoretical guarantees for the learned metric and its classification performance. The theory of (ε, γ, τ)-good similarity functions was one of the first results relating the properties of a similarity to its classification performance. A second limitation comes from the fact that most methods work with metrics that enforce distance properties, which are computationally expensive and often not justified. In this thesis, we address these limitations through two main contributions. The first one is a novel general framework for jointly learning a similarity function and a linear classifier. This formulation is inspired by the (ε, γ, τ)-good theory, providing a link between the similarity and the linear classifier.
It is also convex for a broad range of similarity functions and regularizers. We derive two equivalent generalization bounds through the frameworks of algorithmic robustness and uniform convergence using the Rademacher complexity, proving the good theoretical properties of our framework. Our second contribution is a method for learning similarity functions based on DTW for multivariate time series classification. The formulation is convex and makes use of the (ε, γ, τ)-good framework to relate the performance of the metric to that of its associated linear classifier. Using uniform stability arguments, we prove the consistency of the learned similarity, leading to the derivation of a generalization bound.
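The (ε, γ, τ)-good viewpoint is constructive: a good similarity lets one map each point to its similarities with a few landmark points and train an ordinary linear separator on that map. A hedged sketch with a Gaussian similarity and a plain perceptron standing in for the thesis's joint convex solver; the data and all names are invented:

```python
import numpy as np

def similarity_map(X, landmarks, gamma=0.5):
    """phi(x)_j = K(x, landmark_j) with a Gaussian similarity K."""
    d2 = ((X[:, None, :] - landmarks[None]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def perceptron(phi, y, epochs=200, lr=0.1):
    """Plain perceptron on the similarity map (labels in {-1, +1})."""
    w, b = np.zeros(phi.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(phi, y):
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)
landmarks = X[[0, 20]]              # one landmark per class
phi = similarity_map(X, landmarks)  # each point becomes 2 similarity scores
w, b = perceptron(phi, y)
acc = np.mean(np.sign(phi @ w + b) == y)
```

The linear classifier operates entirely in the low-dimensional similarity space, which is what lets the theory tie the similarity's quality to the classifier's error.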
Mousheimish, Raef. „Combinaison de l’Internet des objets, du traitement d’évènements complexes et de la classification de séries temporelles pour une gestion proactive de processus métier“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLV073/document.
Internet of things is at the core of smart industrial processes thanks to its capacity of event detection from data conveyed by sensors. However, much remains to be done to make the most out of this recent technology and make it scale. This thesis aims at filling the gap between the massive data flow collected by sensors and their effective exploitation in business process management. It proposes a global approach, which combines stream data processing, supervised learning and/or use of complex event processing rules allowing to predict (and thereby avoid) undesirable events, and finally business process management extended to these complex rules. The scientific contributions of this thesis lie in several topics: making the business process more intelligent and more dynamic; automation of complex event processing by learning the rules; and last but not least, data mining for multivariate time series by early prediction of risks. The target application of this thesis is the instrumented transportation of artworks.
Phan, Thi-Thu-Hong. „Elastic matching for classification and modelisation of incomplete time series“. Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0483/document.
Missing data are a prevalent problem in many domains of pattern recognition and signal processing. Most existing techniques in the literature suffer from one major drawback: their inability to process incomplete datasets. Missing data produce a loss of information and thus yield inaccurate data interpretation, biased results or unreliable analysis, especially for large missing sub-sequences. This thesis therefore focuses on dealing with large consecutive missing values in univariate and low- or un-correlated multivariate time series. We begin by investigating an imputation method to overcome these issues in univariate time series, based on the combination of a shape-feature extraction algorithm and the Dynamic Time Warping method. A new R package, DTWBI, is then developed. In the following work, the DTWBI approach is extended to complete large successive missing data in low- or un-correlated multivariate time series (DTWUMI), and a DTWUMI R package is also established. The key of these two proposed methods is the use of elastic matching to retrieve similar values in the series before and/or after the missing values. This preserves as much as possible the dynamics and shape of the known data, while applying the shape-feature extraction algorithm reduces the computing time. Subsequently, we introduce a new method for filling large successive missing values in low- or un-correlated multivariate time series, FSMUMI, which is able to manage a high level of uncertainty; to this end, we propose novel fuzzy grades of basic similarity measures and fuzzy logic rules. Finally, we employ DTWBI to (i) complete the MAREL Carnot dataset, on which we then perform a detection of rare/extreme events, and (ii) forecast various meteorological univariate time series collected in Vietnam.
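The query-and-copy principle behind this family of imputation methods can be sketched as follows. The thesis matches windows with DTW-based elastic matching; plain Euclidean matching is used here only to keep the sketch short, and the function name and toy signal are invented:

```python
import numpy as np

def impute_gap(series, gap_start, gap_len, win):
    """Fill series[gap_start : gap_start+gap_len] by (1) taking the window
    just before the gap as a query, (2) finding its most similar earlier
    window, and (3) copying the values that followed that match."""
    query = series[gap_start - win:gap_start]
    best_s, best_cost = 0, np.inf
    for s in range(0, gap_start - win - gap_len):
        cost = np.sum((series[s:s + win] - query) ** 2)
        if cost < best_cost:
            best_s, best_cost = s, cost
    out = series.copy()
    out[gap_start:gap_start + gap_len] = series[best_s + win:best_s + win + gap_len]
    return out

# A periodic signal with a 5-point hole: the best pre-gap match lies whole
# periods earlier, so the copied continuation restores the missing values.
truth = np.sin(2 * np.pi * np.arange(100) / 20)
holed = truth.copy()
holed[50:55] = np.nan
filled = impute_gap(holed, gap_start=50, gap_len=5, win=10)
```

Elastic (DTW) matching generalises the same idea to queries that are similar in shape but locally stretched or compressed in time.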
Pelletier, Charlotte. „Cartographie de l'occupation des sols à partir de séries temporelles d'images satellitaires à hautes résolutions : identification et traitement des données mal étiquetées“. Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30241/document.
Land surface monitoring is a key challenge for diverse applications such as environment, forestry, hydrology and geology. Such monitoring is particularly helpful for the management of territories and the prediction of climate trends. For this purpose, mapping approaches that employ satellite-based Earth observations at different spatial and temporal scales are used to obtain the land surface characteristics. More precisely, supervised classification algorithms that exploit satellite data present many advantages compared to other mapping methods. In addition, the recent launches of new satellite constellations, Landsat-8 and Sentinel-2, enable the acquisition of satellite image time series at high spatial and spectral resolutions, which are of great interest for describing vegetation land cover. These satellite data open new perspectives, but also question the choice of classification algorithms and input data. In addition, learning classification algorithms over large areas requires a substantial number of instances per land cover class to describe landscape variability. Training data can accordingly be extracted from existing maps or specific existing databases, such as crop parcels declared by farmers or government databases. When using these databases, the main drawbacks are the lack of accuracy and update problems due to long production times. Unfortunately, the use of such imperfect training data leads to the presence of mislabeled training instances that may impact the classification performance, and hence the quality of the produced land cover map. Taking into account the above challenges, this Ph.D. work aims at improving the classification of new satellite image time series at high resolutions. The work has been divided into two main parts. The first goal consists in studying different classification systems by evaluating two classification algorithms with several input datasets.
In addition, the stability and robustness of the classification methods are discussed. The second goal deals with the errors contained in the training data. Firstly, methods for the detection of mislabeled data are proposed and analyzed. Secondly, a filtering method is proposed to take the mislabeled data into account in the classification framework. The objective is to reduce the influence of mislabeled data on the classification performance, and thus to improve the produced land cover map.
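One classical detection scheme in this family, given as an illustrative stand-in for the methods the thesis analyzes: flag every training instance whose out-of-fold prediction disagrees with its given label. The classifier, data and names below are invented for the sketch:

```python
import numpy as np

def flag_mislabeled(X, y, n_folds=5, seed=0):
    """Cross-validated consensus filter: train a nearest-centroid classifier
    on each fold's complement and flag held-out instances whose prediction
    contradicts their label."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    flags = np.zeros(len(X), dtype=bool)
    for fold in np.array_split(order, n_folds):
        train = np.ones(len(X), dtype=bool)
        train[fold] = False
        classes = np.unique(y[train])
        cents = np.stack([X[train & (y == c)].mean(axis=0) for c in classes])
        d2 = ((X[fold][:, None, :] - cents[None]) ** 2).sum(-1)
        flags[fold] = classes[d2.argmin(axis=1)] != y[fold]
    return flags

# Two clean clusters with three deliberately flipped labels.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (15, 2)), rng.normal(5, 0.3, (15, 2))])
y = np.array([0] * 15 + [1] * 15)
y_noisy = y.copy()
y_noisy[[0, 7, 20]] = 1 - y_noisy[[0, 7, 20]]
flags = flag_mislabeled(X, y_noisy)
```

Flagged instances can then be removed or down-weighted before training the final land cover classifier.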
Hedhli, Ihsen. „Modèles de classification hiérarchiques d'images satellitaires multi-résolutions, multi-temporelles et multi-capteurs. Application aux désastres naturels“. Thesis, Nice, 2016. http://www.theses.fr/2016NICE4006/document.
The capability to monitor the Earth's surface, notably in urban and built-up areas, for example in the framework of protection from environmental disasters such as floods or earthquakes, plays important roles from multiple social, economic, and human viewpoints. In this framework, accurate and time-efficient classification methods are important tools required to support the rapid and reliable assessment of ground changes and damages induced by a disaster, in particular when an extensive area has been affected. Given the substantial amount and variety of data currently available from last-generation very-high-resolution (VHR) satellite missions such as Pléiades, COSMO-SkyMed, or RadarSat-2, the main methodological difficulty is to develop classifiers that are powerful and flexible enough to utilize the benefits of multiband, multiresolution, multi-date, and possibly multi-sensor input imagery. In the proposed approaches, multi-date/multi-sensor and multi-resolution fusion are based on explicit statistical modeling. The method combines a joint statistical model of multi-sensor and multi-temporal images with hierarchical Markov random field (MRF) modeling, leading to supervised statistical classification approaches. We have developed novel hierarchical Markov random field models, based on the marginal posterior mode (MPM) criterion, that support information extraction from multi-temporal and/or multi-sensor data and allow the joint supervised classification of multiple images taken over the same area at different times, from different sensors, and/or at different spatial resolutions. The developed methods have been experimentally validated with complex optical multispectral (Pléiades), X-band SAR (COSMO-SkyMed), and C-band SAR (RadarSat-2) imagery of the Haiti site.
Breton, Marc. „Application de méthodes de classification par séries temporelles au diagnostic médical et à la détection de changements statistiques et étude de la robustesse“. Ecole Centrale de Lille, 2004. http://www.theses.fr/2004ECLI0005.
Melzi, Fateh. „Fouille de données pour l'extraction de profils d'usage et la prévision dans le domaine de l'énergie“. Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1123/document.
Nowadays, countries are called upon to take measures aimed at a better rationalization of electricity resources with a view to sustainable development. Smart metering solutions have been implemented and now allow a fine-grained reading of consumption. The massive spatio-temporal data collected can thus help to better understand consumption behaviors, forecast them and manage them precisely. The aim is to ensure an "intelligent" use of resources so as to consume less and consume better, for example by reducing consumption peaks or by using renewable energy sources. The thesis work takes place in this context and aims to develop data mining tools in order to better understand electricity consumption behaviors and to predict solar energy production, thus enabling intelligent energy management. The first part of the thesis focuses on the classification of typical electricity consumption behaviors at the scale of a building and then of a territory. In the first case, typical daily power consumption profiles were identified using the functional K-means algorithm and a Gaussian mixture model. On a territorial scale and in an unsupervised context, the aim is to identify typical electricity consumption profiles of residential users and to link these profiles to contextual variables and metadata collected on users. An extension of the classical Gaussian mixture model has been proposed, which allows exogenous variables such as the type of day (Saturday, Sunday, working day, etc.) to be taken into account in the classification, leading to a parsimonious model. The proposed model was compared with classical models and applied to an Irish database including both electricity consumption data and user surveys. An analysis of the results over a monthly period made it possible to extract a reduced set of user groups that are homogeneous in terms of their electricity consumption behaviors.
We have also endeavoured to quantify the regularity of users in terms of consumption, as well as the temporal evolution of their consumption behaviors during the year. These two aspects are necessary to evaluate the potential for changing consumption behavior required by a demand-response policy (a shift in peak consumption, for example) set up by electricity suppliers. The second part of the thesis concerns the forecasting of solar irradiance over two time horizons: short and medium term. To this end, several approaches have been developed, including autoregressive statistical approaches for modelling time series and machine learning approaches based on neural networks, random forests and support vector machines. In order to take advantage of the different models, a hybrid model combining them was proposed. An exhaustive evaluation of the different approaches was conducted on a large database including four locations (Carpentras, Brasilia, Pamplona and Reunion Island), each characterized by a specific climate, as well as weather parameters, both measured and predicted by NWP (Numerical Weather Prediction) models. The results obtained showed that the hybrid model improves photovoltaic production forecasts for all locations.
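The mixture-model clustering of daily profiles mentioned above can be sketched with a minimal diagonal-covariance Gaussian mixture fitted by EM. This is a generic stand-in for the thesis's exogenous-variable-aware extension; the morning-peak/evening-peak toy profiles and names are invented:

```python
import numpy as np

def gmm_em(X, k, iters=60):
    """EM for a diagonal-covariance Gaussian mixture (deterministic init
    on the first k rows, adequate for this sketch)."""
    n, d = X.shape
    mu = X[:k].copy()
    var = np.ones((k, d))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: log responsibilities, stabilised by the row-wise max
        logp = (-0.5 * (((X[:, None] - mu[None]) ** 2) / var[None]
                        + np.log(2 * np.pi * var[None])).sum(-1) + np.log(pi)[None])
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances and mixing proportions
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
        pi = nk / n
    return mu, r.argmax(axis=1)

# Interleaved toy daily load curves: morning peak vs evening peak.
rng = np.random.default_rng(3)
morning = np.array([1, 4, 4, 1, 1, 1, 1, 1], dtype=float)
evening = np.array([1, 1, 1, 1, 1, 4, 4, 1], dtype=float)
X = np.stack([(morning if i % 2 == 0 else evening) + 0.1 * rng.standard_normal(8)
              for i in range(20)])
mu, labels = gmm_em(X, k=2)
```

The thesis's extension makes the mixing proportions depend on exogenous variables such as the type of day, which this plain mixture does not model.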
Goffinet, Étienne. „Clustering multi-blocs et visualisation analytique de données séquentielles massives issues de simulation du véhicule autonome“. Thesis, Paris 13, 2021. http://www.theses.fr/2021PA131090.
Advanced driving-assistance systems validation remains one of the biggest challenges car manufacturers must tackle to provide safe driverless cars. The reliable validation of these systems requires assessing the quality and consistency of their reaction to a broad spectrum of driving scenarios. In this context, large-scale simulation systems bypass the physical "on-track" limitations and produce important quantities of high-dimensional time series data. The challenge is to find valuable information in these multivariate unlabelled datasets, which may contain noisy, sometimes correlated or non-informative variables. This thesis proposes several model-based tools for univariate and multivariate time series clustering, based on a dictionary approach or a Bayesian nonparametric framework. The objective is to automatically find relevant and natural groups of driving behaviors and, in the multivariate case, to perform model selection and multivariate time series dimension reduction. The methods are experimented on simulated datasets and applied to industrial use cases from Groupe Renault.
Morales Quinga, Katherine Tania. „Generative Markov models for sequential Bayesian classification“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAS019.
This thesis explores and models sequential data by applying various probabilistic models with latent variables, complemented by deep neural networks. The motivation for this research is the development of dynamic models that adeptly capture the complex temporal dynamics inherent in sequential data. Designed to be versatile, these models aim to be applicable across domains including classification, prediction, and data generation, and adaptable to diverse data types. The research focuses on several key areas, each detailed in its respective chapter. Initially, the fundamental principles of deep learning and Bayesian estimation are introduced. Sequential data modeling is then explored, emphasizing Markov chain models, which set the stage for the generative models discussed in subsequent chapters. In particular, the research delves into the sequential Bayesian classification of data in supervised, semi-supervised, and unsupervised contexts. The integration of deep neural networks with well-established probabilistic models is a key strategic aspect of this research, leveraging the strengths of both approaches to address complex sequential data problems more effectively. This integration exploits the capability of deep neural networks to capture complex nonlinear relationships, significantly improving the applicability and performance of the models. In addition to these contributions, this thesis also proposes novel approaches to address specific challenges posed by the Groupe Européen de Recherche sur les Prothèses Appliquées à la Chirurgie Vasculaire (GEPROMED). These proposed solutions reflect the practical and potentially impactful application of this research, demonstrating its possible contribution to the field of vascular surgery.
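The generative-classification principle with Markov models can be illustrated with a plain Gaussian hidden Markov model: posit (or learn) one model per class, then assign a sequence to the class whose model gives the highest likelihood, computed by the forward algorithm. A toy sketch in which the two class models are hand-set rather than learned:

```python
import numpy as np

def forward_loglik(obs, pi, A, means, var=0.25):
    """Scaled forward algorithm: log p(obs | model) for a 1-D Gaussian HMM
    with initial distribution pi, transition matrix A and state means."""
    B = np.exp(-0.5 * (obs[:, None] - means[None, :]) ** 2 / var) \
        / np.sqrt(2 * np.pi * var)
    alpha = pi * B[0]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()               # scaling keeps alpha from underflowing
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[t]
    return ll + np.log(alpha.sum())

# Two hand-set class models: one alternates between low and high emissions,
# the other stays low; classify the sequence by the larger log-likelihood.
obs = np.array([0.0, 1.0] * 5)
alternating = (np.array([1.0, 0.0]),
               np.array([[0.1, 0.9], [0.9, 0.1]]),
               np.array([0.0, 1.0]))
steady = (np.array([1.0, 0.0]),
          np.array([[0.9, 0.1], [0.1, 0.9]]),
          np.array([0.0, 0.0]))
ll_alt = forward_loglik(obs, *alternating)
ll_flat = forward_loglik(obs, *steady)
```

In the thesis's setting, neural networks parameterise the transition and emission distributions, but the Bayesian decision rule (pick the class model with the highest sequence likelihood or posterior) is the same.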
Flocon-Cholet, Joachim. „Classification audio sous contrainte de faible latence“. Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S030/document.
This thesis focuses on audio classification under low-latency constraints. Audio classification has been widely studied for the past few years; however, a large majority of the existing work presents classification systems that are not subject to temporal constraints: the audio signal can be scanned freely in order to gather the information needed to make a decision (in that case, we may refer to offline classification). Here, we consider audio classification in the telecommunication domain, where the working conditions are more severe: algorithms work in real time and the analysis and processing steps are operated on the fly, while the signal is transmitted. Hence, the audio classification step has to meet the real-time constraints, which can modify its behaviour in different ways: only the current and past observations of the signal are available, and despite this the classification system has to remain reliable and reactive. Thus, the first question that arises is: what classification strategy can we adopt in order to tackle the real-time constraints? In the literature, we can find two main approaches: frame-level classification and segment-level classification. In frame-level classification, the decision is made using only the information extracted from the current audio frame. In segment-level classification, we exploit short-term information using data computed from the current and a few past frames. Data fusion here is obtained through temporal feature integration, which consists of deriving relevant information from the temporal evolution of the audio features. Based on this, several questions need to be answered. What are the limits of these two classification frameworks? Can frame-level and segment-level classification be used efficiently for any classification task?
Is it possible to obtain good performance with these approaches? Which classification framework leads to the best trade-off between accuracy and reactivity? Furthermore, for the segment-level classification framework, the temporal feature integration process is mainly based on statistical models; would it be possible to propose other methods? Throughout this thesis, we investigate these questions by working on several concrete case studies. First, we contribute to the development of a novel audio algorithm dedicated to audio protection, whose purpose is to detect and suppress, very quickly, sounds that are potentially dangerous for the listener. Our method, which relies on three proposed features, shows a high detection rate and a low false alarm rate in many use cases. Then, we focus on temporal feature integration in a low-latency framework: we propose and evaluate several methodologies for temporal integration that lead to a good compromise between performance and reactivity. Finally, we propose a novel approach that exploits the temporal evolution of the features, based on a symbolic representation that can capture their temporal structure. The idea is to find temporal patterns that are specific to each audio class. The experiments performed with this approach show promising results.
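One well-known symbolisation scheme in this family is SAX; a compact version is sketched below as an illustration of turning a feature trajectory into a symbolic word, without claiming it is the exact representation adopted in the thesis:

```python
import numpy as np

def sax(series, n_segments, alphabet="abcd"):
    """SAX symbolisation: z-normalise, average over equal-width segments
    (piecewise aggregate approximation), then quantise each segment mean
    with standard-normal breakpoints matching the 4-letter alphabet."""
    x = (series - series.mean()) / (series.std() + 1e-12)
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    breakpoints = np.array([-0.6745, 0.0, 0.6745])   # quartiles of N(0, 1)
    return "".join(alphabet[int(np.searchsorted(breakpoints, v))] for v in paa)

word_up = sax(np.arange(16.0), 4)        # steadily rising feature trajectory
word_down = sax(np.arange(16.0)[::-1], 4)  # steadily falling trajectory
```

A rising trajectory maps to "abcd" and a falling one to "dcba", so class-specific temporal patterns become short strings that standard pattern-mining tools can search.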
Masse, Antoine. „Développement et automatisation de méthodes de classification à partir de séries temporelles d'images de télédétection - Application aux changements d'occupation des sols et à l'estimation du bilan carbone“. Phd thesis, Université Paul Sabatier - Toulouse III, 2013. http://tel.archives-ouvertes.fr/tel-00921853.
Masse, Antoine. „Développement et automatisation de méthodes de classification à partir de séries temporelles d'images de télédétection : application aux changements d'occupation des sols et à l'estimation du bilan carbone“. Phd thesis, Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2106/.
As acquisition technology progresses, remote sensing data contains an ever-increasing amount of information. Future projects in remote sensing, like Copernicus, will give a high temporal repeatability of acquisitions and will cover large geographical areas. As part of the Copernicus project, Sentinel-2 combines a large swath, frequent revisits (5 days), and systematic acquisition of all land surfaces at high spatial resolution with a large number of spectral bands. My research activities have involved the automation and improvement of classification processes for land use and land cover mapping in view of these new satellite characteristics. This research has focused on four main axes: selection of the input data for the classification processes, improvement of classification systems through the introduction of ancillary data, fusion of multi-sensor, multi-temporal and multi-spectral classification results, and classification without ground truth data. These new methodologies have been validated on a wide range of available images: various sensors (optical: Landsat 5/7, Worldview-2, Formosat-2, Spot 2/4/5, Pleiades; and radar: Radarsat, TerraSAR-X), various spatial resolutions (30 meters to 0.5 meters), various revisit times (up to 46 images per year) and various geographical areas (an agricultural area near Toulouse, France, the Pyrenean mountains, and arid areas in Morocco and Algeria). These methodologies are applicable to a wide range of thematic applications, such as land cover mapping, carbon flux estimation and greenbelt mapping.
Soheily-Khah, Saeid. „Generalized k-means-based clustering for temporal data under time warp“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM064/document.
Temporal alignment of multiple time series is an important unresolved problem in many scientific disciplines. Major challenges for an accurate temporal alignment include determining and modeling the common and differential characteristics of classes of time series. This thesis is motivated by recent works extending dynamic time warping to align multiple time series in several applications, including speech recognition, curve matching, micro-array data analysis, temporal segmentation and human motion. However, these DTW-based works suffer from several limitations: 1) they address the problem of aligning two time series regardless of the remaining time series, 2) they involve the features of the multiple time series uniformly, 3) the time series are aligned globally by including the whole set of observations. The aim of this thesis is to explore a generalized dynamic time warping for time series clustering. This work addresses first the problem of prototype extraction, then the alignment of multiple and multidimensional time series.
Shalaeva, Vera. „Arbre de décision temporel multi-opérateur“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM069/document.
Rising interest in mining and analyzing time series data in many domains motivates the design of machine learning (ML) algorithms capable of tackling such complex data. Beyond the need to modify, improve and create novel ML algorithms that initially work with static data, criteria of interpretability, accuracy and computational efficiency have to be fulfilled. For a domain expert, it is crucial to extract knowledge from data, and it is appealing when the resulting model is transparent and interpretable, so that no preliminary knowledge of ML is required to read and understand the results. Indeed, as emphasized by many recent works, it is more and more necessary for domain experts to obtain a transparent and interpretable model from the learning tool, thus allowing them to use it even if they have little knowledge of ML theory. The decision tree is an algorithm that focuses on providing an interpretable and fairly accurate classification model. More precisely, in this research we address the problem of interpretable time series classification by the decision tree (DT) method. Firstly, we present the Temporal Decision Tree, a modification of the classical DT algorithm in which the definition of a node's split is changed. Secondly, we propose an extension of the modified algorithm for temporal data, called the Multi-operator Temporal Decision Tree (MTDT), which is able to capture different geometrical class structures. The resulting algorithm improves model readability while preserving classification accuracy. Furthermore, we explore two complementary issues: the computational efficiency of the extended algorithm and its classification accuracy. We show that the former can be improved by using a local search approach to build nodes, and the latter can be preserved by discovering and weighting discriminative time stamps of the time series.
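The kind of interpretable node split involved can be illustrated with an exhaustive (time stamp, threshold) search, a simplified stand-in for the split operators studied in the thesis; the toy series and names are invented:

```python
import numpy as np

def gini(labels):
    """Gini impurity of an integer label array."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - float((p ** 2).sum())

def best_temporal_split(X, y):
    """Exhaustive search of a (time stamp, threshold) pair minimising the
    weighted Gini impurity of the two children: 'value at time t <= thr'
    is a split a domain expert can read directly."""
    best = (None, None, np.inf)
    n, T = X.shape
    for t in range(T):
        for thr in np.unique(X[:, t]):
            left, right = y[X[:, t] <= thr], y[X[:, t] > thr]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (t, thr, score)
    return best

# Six toy series that differ only at time stamp 3.
X = np.zeros((6, 5))
X[3:, 3] = 1.0
y = np.array([0, 0, 0, 1, 1, 1])
t_star, thr_star, impurity = best_temporal_split(X, y)
```

The search recovers the single discriminative time stamp (t = 3) with zero residual impurity; the multi-operator extension enriches this vocabulary of split tests beyond simple thresholds.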
Karasiak, Nicolas. „Cartographie des essences forestières à partir de séries temporelles d’images satellitaires à hautes résolutions : stabilité des prédictions, autocorrélation spatiale et cohérence avec la phénologie observée in situ“. Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0115.
Forests have a key role on Earth, whether in storing carbon, and so reducing global warming, or in providing a habitat for many species. However, the composition of the forest (the location of the tree species or their diversity) has an influence on the ecological services provided. In this context, it seems critical to map the tree species that make up the forest. Remote sensing, especially from satellite images, appears to be the most appropriate way to map large areas. Thanks to satellite constellations such as Sentinel-2 or Landsat-8 and their free availability to users, the use of satellite image time series with high spatial, spectral and temporal resolution together with machine learning algorithms is now easy to access. While many works have studied the potential of satellite images to identify tree species, few use time series (several images per year) at high spatial resolution while taking into account the spatial autocorrelation of the references, i.e., the spectral similarity of spatially close samples. By not taking this phenomenon into account, evaluation biases may occur and the quality of the learning models may be overestimated. It is also a question of better identifying the methodological obstacles in order to understand why it can be easy or complicated for an algorithm to distinguish one species from another. The general objective of the thesis is to study the potential of, and obstacles to, the identification of forest tree species from satellite image time series with high spatial, spectral and temporal resolution. The first objective is to study the temporal stability of predictions using a nine-year archive of the Formosat-2 satellite; more specifically, the work focuses on the implementation of a validation method that is as faithful as possible to the observed quality of the maps.
The second objective focuses on the link between in situ phenological events (leaf growth at the beginning of the season, or leaf loss and coloration at the end of the season) and what can be observed by remote sensing. Beyond the ability to detect these events, it is studied whether what allows the algorithms to distinguish tree species from one another is related to species-specific behaviors.
Hadj, Amor Khaoula. „Classification et inférence de réseaux de gènes à partir de séries temporelles très courtes : application à la modélisation de la mémoire transcriptionnelle végétale associée à des stimulations sonores répétées“. Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES037.
Der volle Inhalt der QuelleAdvancements in new sequencing technologies have paved the way for accessing dynamic gene expression data on a genome-wide scale. Classical ensemble approaches traditionally used in biology fall short of comprehending the underlying complex molecular mechanisms. Consequently, developing analytical methods to further understand such data poses a significant challenge for current biology. However, the technical and experimental costs associated with transcriptomic data severely limit the dimension of real datasets and their analytical methods. Throughout my thesis, at the intersection of applied mathematics and plant biology, I focused on implementing an inference method for dynamic regulatory networks tailored to a real and original dataset describing the effect of repeated acoustic stimulations on the gene expression of Arabidopsis thaliana. I proposed a clustering method adapted to very short time series that groups genes based on temporal variations, adjusting the data dimension for network inference. The comparison of this method with classical methods showed that it was the most suitable for very short time series with irregular time points. For the network inference, I proposed a model that takes into account intra-class variability and integrates a constant term explicitly describing the external stimulation of the system. The evaluation of these classification and inference methods was conducted on simulated and real time-series data, which established their high performance in terms of accuracy, recall, and prediction error. These methods were then applied to study the priming of the immune response of Arabidopsis thaliana through repeated sound waves. We demonstrated the formation of a transcriptional memory associated with the stimulations, transitioning the plant from a naïve state to a primed and more resistant state within 3 days.
This resistant state, maintained by the stimulations and transcription factor cascades, enhances the plant's immune resistance by triggering the expression of resistance genes in healthy plants, diversifying the number of genes involved in the immune response, and intensifying the expression of numerous resistance genes. The inference of the network describing the transcriptional memory associated with repeated sound stimulations allowed us to identify the properties conferred on the plants. Experimentally validated predictions showed that increasing the frequency of the stimulations does not result in a greater resistance gain, and that the transcriptional memory lasts only 1.5 days after the last stimulation.
Dachraoui, Asma. „Cost-Sensitive Early classification of Time Series“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLA002/document.
Der volle Inhalt der QuelleEarly classification of time series is increasingly becoming a valuable task for assisting decision making in many application domains. In this setting, information can be gained by waiting for more evidence to arrive, helping to make better decisions that incur lower misclassification costs; meanwhile, however, the cost associated with delaying the decision generally increases, rendering the decision less attractive. Making early yet accurate predictions therefore requires solving an optimization problem that combines two types of competing costs. This thesis introduces a new general framework for the early classification of time series. Unlike classical approaches, which implicitly assume that all misclassification errors cost the same and that the cost of delaying the decision is constant over time, we cast the problem as a cost-sensitive online decision-making problem where delaying the decision is costly. We then propose a new formal criterion, along with two approaches that estimate the optimal decision time for a new, incoming, yet incomplete time series. In particular, they capture the evolutions of typical complete time series in the training set thanks to a segmentation technique that forms meaningful groups, and leverage this complete information to estimate the costs for all future time steps where data points are still missing. These approaches are interesting in two ways: (i) they estimate, online, the earliest time in the future where a minimization of the criterion can be expected; they thus go beyond classical approaches that myopically decide at each time step whether to make a decision or to postpone it one more time step, and (ii) they are adaptive, in that the properties of the incoming time series are taken into account to decide when is the optimal time to output a prediction.
Results of extensive experiments on synthetic and real data sets show that both approaches successfully exhibit the behaviors expected from early classification systems.
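The trade-off described in this abstract — waiting reduces the expected misclassification cost while the delay cost keeps growing — can be sketched as a toy criterion. This is an illustrative simplification, not the thesis's actual criterion; the function name and cost values are hypothetical:

```python
# Toy early-classification criterion: total expected cost at step t is
# the estimated misclassification cost plus a delay penalty growing with t.

def optimal_decision_time(error_estimates, delay_cost_per_step,
                          misclassification_cost=1.0):
    """Return the time step minimizing the estimated total cost.

    error_estimates[t] is an estimate of the probability of
    misclassifying the series if we decide at step t.
    """
    costs = [misclassification_cost * p + delay_cost_per_step * t
             for t, p in enumerate(error_estimates)]
    return min(range(len(costs)), key=costs.__getitem__)

# Waiting lowers the error estimate, but each step adds delay cost,
# so the optimum lies somewhere in the middle:
errors = [0.40, 0.25, 0.15, 0.10, 0.08, 0.07]
t_star = optimal_decision_time(errors, delay_cost_per_step=0.05)
```

The non-myopic aspect emphasized in the abstract corresponds to evaluating this criterion over all future steps at once, rather than only comparing "decide now" against "wait one step".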
Derksen, Dawa. „Classification contextuelle de gros volumes de données d'imagerie satellitaire pour la production de cartes d'occupation des sols sur de grandes étendues“. Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30290.
Der volle Inhalt der QuelleThis work studies the application of supervised classification for the production of land cover maps using time series of satellite images at high spatial, spectral, and temporal resolutions. In this problem, certain classes, such as urban cover, depend more on the context of the pixel than on its content. The aim of this Ph.D. work is therefore to take into account the neighborhood of the pixel in order to improve the recognition rates of these classes. This research first leads to questioning the definition of the context and to imagining different possible shapes for it. The next step is describing the context, that is to say, creating a representation or a model that allows the target classes to be recognized. The combinations of these two aspects are evaluated on two experimental data sets, one of Sentinel-2 images and the other of SPOT-7 images.
Al, Saleh Mohammed. „SPADAR : Situation-aware and proactive analytics for dynamic adaptation in real time“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG060.
Der volle Inhalt der QuelleBecause radiation level is a serious concern that requires continuous monitoring, many systems have been designed to perform this task. The Radiation Early Warning System (REWS) is one of these systems; it monitors the gamma radiation level in the air. Such a system requires considerable manual intervention, depends entirely on experts' analysis, and has some shortcomings that can at times be risky. In this thesis, the RIMI (Refining Incoming Monitored Incidents) approach is introduced, which aims to improve this system by making it more autonomous while leaving the final decision to the experts. A new method is presented which helps this system become more intelligent by learning from the past incidents of each specific system.
Cherdo, Yann. „Détection d'anomalie non supervisée sur les séries temporelle à faible coût énergétique utilisant les SNNs“. Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ4018.
Der volle Inhalt der QuelleIn the context of predictive maintenance for the car manufacturer Renault, this thesis aims at providing low-power solutions for unsupervised anomaly detection on time series. With the recent evolution of cars, more and more data are produced and need to be processed by machine learning algorithms. This processing can be performed in the cloud or directly at the edge, inside the car. In the latter case, network bandwidth, cloud service costs, data privacy management and data loss can be saved. Embedding a machine learning model inside a car is challenging, as it requires frugal models due to memory and processing constraints. To this aim, we study the usage of spiking neural networks (SNNs) for anomaly detection, prediction and classification on time series. SNN models' performance and energy costs are evaluated in an edge scenario using generic hardware models that consider all calculation and memory costs. To leverage the sparsity of SNNs as much as possible, we propose a model with trainable sparse connections that consumes half the energy of its non-sparse version. This model is evaluated on public anomaly detection benchmarks, a real anomaly detection use case from Renault Alpine cars, weather forecasts and the Google Speech Commands dataset. We also compare its performance with other existing SNN and non-spiking models. We conclude that, for some use cases, spiking models can provide state-of-the-art performance while consuming 2 to 8 times less energy. Yet, further studies should be undertaken to evaluate these models once embedded in a car. Inspired by neuroscience, we argue that other bio-inspired properties such as attention, sparsity, hierarchy or the dynamics of neural assemblies could be exploited to obtain even better energy efficiency and performance with spiking models. Finally, we end this thesis with an essay dealing with cognitive neuroscience, philosophy and artificial intelligence.
Diving into conceptual difficulties linked to consciousness and considering the deterministic mechanisms of memory, we argue that consciousness and the self could be constitutively independent of memory. The aim of this essay is to question the nature of humans by contrast with that of machines and AI.
Olteanu, Madalina. „Modèles à changements de régime : applications aux données financières“. Phd thesis, Université Panthéon-Sorbonne - Paris I, 2006. http://tel.archives-ouvertes.fr/tel-00133132.
Der volle Inhalt der QuelleWe propose to study these questions through two approaches. The first consists in showing the weak consistency of a penalized maximum likelihood estimator under stationarity and weak dependence conditions. The hypotheses introduced on the bracketing entropy of the class of generalized score functions are then verified in a linear Gaussian framework. The second, more empirical, approach stems from unsupervised classification methods and combines Kohonen maps with a hierarchical classification, for which a new dispersion measure based on the residual sum of squares is introduced.
Benacchio, Véronique. „Etude par imagerie in situ des processus biophysiques en milieu fluvial : éléments méthodologiques et applications“. Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE2056/document.
Der volle Inhalt der QuelleRemote sensing is increasingly used in river sciences, mainly through satellite and airborne imagery. Ground imagery constitutes a complementary tool which presents numerous advantages for the study of rivers: for example, it is easy to set up, costs are limited, and it allows an oblique viewing angle. It also offers the possibility of triggering acquisitions at very high frequency, ranging, for instance, from a few seconds to a few hours. The possibility of monitoring events at the instant they occur makes ground imagery extremely advantageous compared to aerial or spatial imagery (whose highest acquisition frequency corresponds to a few days). Such frequencies produce huge datasets, which require automated analyses; this is one of the challenges addressed in this thesis. Processing and analysis of data acquired at five study sites located in France and Québec, Canada, facilitated the evaluation of the potential of ground imagery, as well as its limitations, with respect to the study of fluvial systems. The identification of optimal conditions for setting up the cameras and acquiring images is the first step of a global approach, presented as a chain of optional modules, each to be taken into account according to the objectives of the study. The extraction of radiometric information and the subsequent statistical analysis of the signal were tested in several situations. In particular, random forests were applied as a supervised object-oriented classification method. The datasets were principally exploited using high-frequency time series analyses, which allowed demonstrating the strengths and weaknesses of this approach, as well as some potential applications. Ground imagery is a powerful tool for monitoring fluvial systems, as it facilitates the definition of various kinds of time characteristics linked with fluvial biophysical processes. However, it is necessary to optimize the quality of the data produced.
In particular, it is necessary to minimize the acquisition angle and to limit the variability of luminosity conditions between shots in order to acquire fully exploitable datasets.
Warembourg, Caroline. „Analyse temporelle du mésozooplancton dans la rade de Villefranche-sur-Mer à l'aide d'un nouveau système automatique d'imagerie numérique, le Zooscan : influence des apports particulaires, de la production primaire et des facteurs environnementaux“. Paris 6, 2005. http://www.theses.fr/2005PA066469.
Der volle Inhalt der Quelle
Wagner, Nicolas. „Détection des modifications de l’organisation circadienne des activités des animaux en relation avec des états pré-pathologiques, un stress, ou un événement de reproduction“. Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC032.
Der volle Inhalt der QuellePrecision livestock farming consists of recording parameters on the animals or their environment using various sensors. In this thesis, the aim is to monitor the behaviour of dairy cows via a real-time localisation system. The data are collected as a sequence of values at regular intervals, a so-called time series. The problems associated with the use of sensors are the large amount of data generated and the quality of these data; machine learning (ML) helps to alleviate them. The aim of this thesis is to detect abnormal cow behaviour. The working hypothesis, supported by the biological literature, is that the circadian rhythm of a cow's activity changes at a very early stage if it goes from a normal state to a state of disease, stress or a specific physiological stage (oestrus, calving). The detection of a behavioural anomaly would allow decisions to be taken more quickly in breeding. To do this, there are Time Series Classification (TSC) tools. The problem with behavioural data is that the so-called normal behavioural pattern of the cow varies from cow to cow, day to day, farm to farm, season to season, and so on. Finding a normal pattern common to all cows is therefore impossible. However, most TSC tools rely on learning a global model to decide whether a given behaviour is close to this model or not. This thesis is structured around two major contributions. The first is the development of a new TSC method, FBAT. It is based on Fourier transforms to identify a pattern of activity over 24 hours and compare it to another, consecutive 24-hour period, in order to overcome the lack of a pattern common to all normal cows. The second contribution is the use of fuzzy labels. Indeed, around the days considered abnormal, it is possible to define an uncertain zone where the cow would be in an intermediate state.
We show that fuzzy logic improves results when labels are uncertain, and we introduce a fuzzy variant of FBAT: F-FBAT.
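The idea of comparing consecutive 24-hour activity profiles through their Fourier representation can be sketched as follows. This is a minimal illustration of the principle, not the actual FBAT method; the function name and the synthetic rhythm values are invented:

```python
import numpy as np

def daily_pattern_distance(day_a, day_b, n_coeffs=5):
    """Compare two 24-hour activity profiles through the magnitudes of
    their low-frequency Fourier coefficients; a large distance between
    consecutive days flags a possible change of circadian rhythm."""
    fa = np.abs(np.fft.rfft(day_a))[:n_coeffs]
    fb = np.abs(np.fft.rfft(day_b))[:n_coeffs]
    return float(np.linalg.norm(fa - fb))

# Synthetic hourly activity: a regular circadian rhythm vs a flattened one.
hours = np.arange(24)
normal = 10 + 5 * np.sin(2 * np.pi * hours / 24)
disturbed = 10 + 1 * np.sin(2 * np.pi * hours / 24)

score = daily_pattern_distance(normal, disturbed)
```

Comparing each day only to the adjacent one, rather than to a global model, reflects the abstract's point that no normal pattern common to all cows exists.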
Dridi, Aicha. „A novel efficient time series deep learning approach using classification, prediction and reinforcement : energy and telecom use case“. Electronic Thesis or Diss., Institut polytechnique de Paris, 2022. http://www.theses.fr/2022IPPAS010.
Der volle Inhalt der QuelleThe massive growth of sensors (temperature, humidity, accelerometer, position) and mobile devices (smartphones, tablets, smartwatches) is increasing the amount of generated data explosively, and this immense amount of data must be collected and managed. The work carried out during this thesis first aims to propose an approach that deals with a specific type of data, namely time series. First, we used classification methods based on convolutional neural networks and multilayer perceptrons to extract the relevant information. We then used recurrent neural networks to make predictions. We treated several kinds of time series data: energy, cellular, and GPS taxi-track data. We also investigated several other methods, such as semantic compression and transfer learning. The former allows us to transmit only the weights of the neural networks or, if an anomaly is detected, to send the anomalous data; transfer learning allows us to make good predictions even if data are missing or noisy. These methods allowed us to set up dynamic anomaly detection mechanisms. The objective of the last part of the thesis is to develop and implement a resource management solution taking as input the results of the previous phases. We used several methods to implement this resource management solution, such as reinforcement learning, exact resolution, and recurrent neural networks. The first application is the implementation of an energy management system. The second application is the management of the deployment of drones to assist cellular networks when an anomaly occurs.
Lerogeron, Hugo. „Approximation de Dynamic Time Warping par réseaux de neurones pour la compression de signaux EEG et l'analyse de l'insomnie induite par le COVID long“. Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMR098.
Der volle Inhalt der QuelleThis manuscript presents the work carried out within the framework of a CIFRE thesis conducted in partnership between LITIS and Saagie, as part of the PANDORE-IA project in association with the VIFASOM sleep center. Electroencephalographic (EEG) signals are very useful in helping experts identify various abnormalities such as sleep disorders. Recently, the community has shown great interest in long COVID and its various impacts on sleep. However, these signals are voluminous: compression reduces storage and transfer costs. Recent compression approaches are based on autoencoders, which learn using a cost function. It is usually the Mean Squared Error (MSE), but there are metrics better suited to time series, particularly Dynamic Time Warping (DTW). However, DTW is not differentiable and thus cannot be used as a loss for end-to-end learning. To solve this problem, we propose in this thesis two approaches to approximate DTW based on neural networks. The first approach uses a Siamese network to project the signals so that the Euclidean distance between the projected signals is as close as possible to the DTW of the original signals. The second approach attempts to predict the DTW value directly. We show that these approaches are faster than other differentiable approximations of DTW while obtaining results similar to DTW in querying or classification on sleep data. We then demonstrate that the Siamese approximation can be used as a cost function for learning a sleep-signal compression system based on an autoencoder. We justify the choice of the network architecture by the fact that it allows us to vary the compression rate. We evaluate this compression system by classification on the compressed and then reconstructed signals, and show that the usual measures of compression quality do not properly assess a compression system's ability to retain discriminative information.
We show that our DTW approximations yield better performance on the reconstructed data than conventional compression algorithms and other reconstruction losses. Finally, to study the impact of long COVID on insomnia, we collect and provide the community with a dataset named COVISLEEP, containing polysomnographies of individuals who developed chronic insomnia after COVID infection, and of individuals suffering from chronic insomnia who have not been infected by the virus. We compare various state-of-the-art approaches for sleep staging and use the best one to learn to detect long COVID. We highlight the difficulty of the task, especially due to the high variability among patients. This offers the community a challenging dataset that allows for the development of more effective methods.
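For reference, the DTW quantity that these networks approximate is itself computed by dynamic programming, and the `min` in the recurrence is precisely what makes it non-differentiable. A minimal textbook implementation (not the thesis's code):

```python
import numpy as np

def dtw(x, y):
    """Classic Dynamic Time Warping distance between two 1-D series,
    via dynamic programming over the alignment cost matrix."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # The hard min over the three predecessors is the
            # non-differentiable step that motivates a neural surrogate.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# DTW tolerates time shifts that Euclidean distance penalizes:
a = [0, 0, 1, 2, 1, 0]
b = [0, 1, 2, 1, 0, 0]
```

A Siamese surrogate in the spirit of the first approach would train an encoder f so that ||f(a) - f(b)|| approximates dtw(a, b), which is differentiable end to end.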
Claeys, Emmanuelle. „Clusterisation incrémentale, multicritères de données hétérogènes pour la personnalisation d’expérience utilisateur“. Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD039.
Der volle Inhalt der QuelleIn many activity sectors (health, online sales, ...), designing an optimal solution for a given problem from scratch (finding a protocol to increase the cure rate, designing a web page to promote the purchase of one or more products, ...) is often very difficult, if not impossible. To address this difficulty, designers (doctors, web designers, production engineers, ...) often work incrementally, through successive improvements of an existing solution. However, identifying the most relevant changes remains a difficult problem. Therefore, a solution adopted more and more frequently is to systematically compare different alternatives (also called variations) in order to determine the best one through an A/B test. The idea is to implement these alternatives and compare the results obtained, i.e. the respective rewards obtained by each variation. To identify the optimal variation in the shortest possible time, many test methods use an automated dynamic allocation strategy, which quickly and automatically allocates the tested subjects to the most efficient variation through reinforcement learning algorithms (such as multi-armed bandit methods). These methods have shown their value in practice but also their limitations, including in particular an excessively long latency (i.e. the delay between the arrival of a subject to be tested and its allocation), a lack of explainability of the choices made, and the difficulty of integrating an evolving context describing the subject's behaviour before being tested. The overall objective of this thesis is to propose an understandable, generic A/B testing method allowing dynamic real-time allocation that takes into account both the static and the temporal characteristics of the subjects.
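A common dynamic allocation strategy of the bandit kind mentioned above is Thompson sampling for binary rewards. The sketch below is a generic illustration of the mechanism, not the method developed in the thesis; names and numbers are invented:

```python
import random

def thompson_allocate(successes, failures, rng=random):
    """Pick the variation to show to the next subject by Thompson
    sampling: draw once from each arm's Beta posterior and take
    the arm with the best draw."""
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)

# Variation 1 converts far better (80/100 vs 10/100), so the
# allocation quickly concentrates on it:
random.seed(0)
picks = [thompson_allocate([10, 80], [90, 20]) for _ in range(1000)]
```

This is exactly the exploration/exploitation trade-off the abstract refers to: the worse arm is still occasionally drawn, but most subjects go to the empirically better variation.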
Ternynck, Camille. „Contributions à la modélisation de données spatiales et fonctionnelles : applications“. Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30062/document.
Der volle Inhalt der QuelleIn this dissertation, we are interested in the nonparametric modeling of spatial and/or functional data, based more specifically on the kernel method. Generally, the samples we have considered for establishing the asymptotic properties of the proposed estimators consist of dependent variables. The specificity of the studied methods lies in the fact that the estimators take into account the dependence structure of the considered data. In a first part, we study real-valued, spatially dependent variables. We propose a new kernel approach to estimating the spatial probability density, the mode and the regression function. The distinctive feature of this approach is that it takes into account both the proximity between observations and that between sites. We study the asymptotic behavior of the proposed estimates as well as their applications to simulated and real data. In a second part, we are interested in modeling data valued in a space of infinite dimension, so-called "functional data". As a first step, we adapt the nonparametric regression model introduced in the first part to the framework of spatially dependent functional data, obtaining convergence results as well as numerical results. We then study a time series regression model in which the explanatory variables are functional and the innovation process is autoregressive, and propose a procedure which allows us to take into account the information contained in the error process. After showing the asymptotic behavior of the proposed kernel estimate, we study its performance on simulated and real data. The third part is devoted to applications. First of all, we present unsupervised classification results on simulated and real (multivariate) spatial data. The considered classification method is based on the estimation of the spatial mode, obtained from the spatial density function introduced in the first part of this thesis.
We then apply this mode-based classification method, as well as other unsupervised classification methods from the literature, to hydrological data of a functional nature. Lastly, this classification of hydrological data led us to apply change-point detection tools to these functional data.
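The mode of a kernel density estimate, on which the classification method above is built, can be illustrated in a simplified one-dimensional, non-spatial setting. This sketch and its data are purely illustrative; the thesis's estimator additionally weights observations by the proximity between sites:

```python
import numpy as np

def kde_mode(sample, grid, bandwidth=0.5):
    """Estimate the mode as the argmax over a grid of a Gaussian
    kernel density estimate (simplified, non-spatial sketch)."""
    s = np.asarray(sample, dtype=float)[:, None]   # shape (n, 1)
    g = np.asarray(grid, dtype=float).ravel()      # shape (m,)
    # Sum of Gaussian kernels centered on each observation:
    dens = np.exp(-0.5 * ((g[None, :] - s) / bandwidth) ** 2).sum(axis=0)
    return float(g[np.argmax(dens)])

# Points concentrated around 2.0 yield a mode near 2.0, even with
# an outlying observation at 5.0:
data = [1.8, 1.9, 2.0, 2.1, 2.2, 5.0]
grid = np.linspace(0, 6, 601)
mode = kde_mode(data, grid)
```

Mode-based clustering then assigns each observation to the nearest local maximum of the density, rather than to a centroid as in k-means.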
Abidi, Azza. „Investigating Deep Learning and Image-Encoded Time Series Approaches for Multi-Scale Remote Sensing Analysis in the context of Land Use/Land Cover Mapping“. Electronic Thesis or Diss., Université de Montpellier (2022-....), 2024. http://www.theses.fr/2024UMONS007.
Der volle Inhalt der QuelleIn this thesis, the potential of machine learning (ML) for enhancing the mapping of complex Land Use and Land Cover (LULC) patterns using Earth Observation data is explored. Traditionally, mapping methods relied on the manual, time-consuming classification and interpretation of satellite images, which is susceptible to human error. The application of ML, particularly through neural networks, has automated and improved the classification process, resulting in more objective and accurate results. Additionally, the integration of Satellite Image Time Series (SITS) data adds a temporal dimension to spatial information, offering a dynamic view of the Earth's surface over time. This temporal information is crucial for accurate classification and informed decision-making in various applications. The precise and current LULC information derived from SITS data is essential for guiding sustainable development initiatives, resource management, and the mitigation of environmental risks. The LULC mapping process using ML involves data collection, preprocessing, feature extraction, and classification using various ML algorithms. Two main classification strategies for SITS data have been proposed: pixel-level and object-based approaches. While both approaches have shown effectiveness, they also pose challenges, such as the inability to capture contextual information in pixel-based approaches and the complexity of segmentation in object-based approaches. To address these challenges, this thesis implements a method based on multi-scale information to perform LULC classification, coupling spectral and temporal information through a combined pixel-object methodology, and applies a methodological approach to efficiently represent multivariate SITS data, with the aim of reusing the many research advances proposed in the field of computer vision.
Douzal-Chouakria, Ahlame. „Contribution à l'analyse de données temporelles“. Habilitation à diriger des recherches, 2012. http://tel.archives-ouvertes.fr/tel-00908426.
Der volle Inhalt der Quelle