Dissertations on the topic "Classification and spatiotemporal forecasting"

To see the other types of publications on this topic, follow this link: Classification and spatiotemporal forecasting.

Consult the top 50 dissertations for research on the topic "Classification and spatiotemporal forecasting".

Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Kirchmeyer, Matthieu. „Out-of-distribution Generalization in Deep Learning : Classification and Spatiotemporal Forecasting“. Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS080.

Abstract:
Deep learning has emerged as a powerful approach for modelling static data like images and, more recently, for modelling dynamical systems such as those underlying time series, videos or physical phenomena. Yet neural networks have been observed not to generalize well outside the training distribution, in other words out-of-distribution. This lack of generalization limits the deployment of deep learning in autonomous systems or online production pipelines, which are faced with constantly evolving data. In this thesis, we design new strategies for out-of-distribution generalization. These strategies handle the specific challenges posed by two main application tasks: classification of static data and spatiotemporal dynamics forecasting. The first two parts of this thesis consider the classification problem. We first investigate how to efficiently leverage limited observed training data from a target domain for adaptation. We then explore how to generalize to unobserved domains without access to such data. The last part of this thesis handles various generalization problems specific to spatiotemporal forecasting.
2

Fu, Kaiqun. „Spatiotemporal Event Forecasting and Analysis with Ubiquitous Urban Sensors“. Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/104165.

Abstract:
The study of information extraction and knowledge exploration in the urban environment is gaining popularity. Ubiquitous sensors and a plethora of statistical reports provide an immense amount of heterogeneous urban data, such as traffic data, crime activity statistics, social media messages, and street imagery. The development of methods for event identification and impact analysis based on heterogeneous urban data, across a variety of event topics and assumptions, is the subject of this dissertation. Four methods are proposed: a graph convolutional neural network for crime prediction, a multitask learning system for traffic incident prediction with spatiotemporal feature learning, social media-based transportation event detection, and a graph convolutional network-based cyberbullying detection algorithm. Additionally, given the sensitivity of these urban sensor data, a comprehensive discussion of the ethical issues of urban computing is presented. This work makes the following contributions in urban perception prediction: 1) creating a preference learning system for inferring crime rankings from street view images using a bidirectional convolutional neural network (bCNN); 2) proposing a graph convolutional network-based solution to the urban crime perception problem; and 3) developing street view image retrieval algorithms to demonstrate real city perception. In traffic incident effect analysis, this work contributes: 1) a novel machine learning system for predicting traffic incident duration using temporal features; 2) a model of traffic speed similarity among road segments using spatial connectivity in feature space; and 3) a sparse feature learning method for identifying groups of temporal features at a higher level.
In transportation-related incident detection, this work contributes: 1) a real-time social media-based traffic incident detection platform; 2) a query expansion algorithm for traffic-related tweets; and 3) a text summarization tool for redundant traffic-related tweets. Cyberbullying detection on social media platforms is another major focus of this work: 1) developing an online Dynamic Query Expansion (DQE) process using concatenated keyword search; 2) formulating a graph structure of tweet embeddings and implementing a Graph Convolutional Network for fine-grained cyberbullying classification; and 3) curating a balanced multiclass cyberbullying dataset from DQE and making it publicly available. Additionally, this work seeks to identify ethical vulnerabilities in three primary research directions of urban computing: urban safety analysis, urban transportation analysis, and social media analysis for urban events. Visions for future improvements from the perspective of ethics are addressed.
Doctor of Philosophy
Ubiquitously deployed urban sensors such as traffic speed meters, street-view cameras, and even the smartphones in everybody's pockets generate terabytes of data every hour. How to refine valuable intelligence out of such an explosion of urban data and information has become one of the fruitful questions in the field of data mining and urban computing. In this dissertation, four innovative applications are proposed to solve real-world problems with big data from urban sensors. In addition, foreseeable ethical vulnerabilities in the research fields of urban computing and event prediction are addressed. The first work explores the connection between urban perception and crime inference. StreetNet is proposed to learn crime rankings from street view images. This work presents the design of a street view image retrieval algorithm to improve the representation of urban perception. A data-driven, spatiotemporal algorithm is proposed to find unbiased label mappings between the street view images and the crime ranking records. The second work proposes a traffic incident duration prediction model that simultaneously predicts the impact of traffic incidents and identifies the critical groups of temporal features via a multi-task learning framework. The functionality provided by this model helps transportation operators and first responders judge the influence of traffic incidents. In the third work, a social media-based traffic status monitoring system is established. The system is initiated by a transportation-related keyword generation process. A state-of-the-art tweet summarization algorithm is designed to eliminate redundant tweet information. In addition, we show that the proposed tweet query expansion algorithm outperforms previous methods.
The fourth work investigates the viability of an automatic multiclass cyberbullying detection model that classifies whether a cyberbully is targeting a victim's age, ethnicity, gender, religion, or other quality. This work represents a step towards establishing an active anti-cyberbullying presence in social media and towards a future without cyberbullying. Finally, a discussion of the ethical issues in the urban computing community is presented. This work seeks to identify ethical vulnerabilities in three primary research directions of urban computing: urban safety analysis, urban transportation analysis, and social media analysis for urban events. Visions for future improvements from the perspective of ethics are pointed out.
3

Khalid, Shehzad. „Motion classification using spatiotemporal approximation of object trajectories“. Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492915.

Abstract:
Techniques for understanding the motion activity of video objects are becoming increasingly important with the widespread adoption of CCTV surveillance systems. Motion trajectories provide rich spatiotemporal information about an object's activity. This thesis presents a novel technique for the clustering and classification of object-trajectory-based video motion clips using basis function approximation.
4

Lau, Ada. „Probabilistic wind power forecasts : from aggregated approach to spatiotemporal models“. Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:f5a66568-baac-4f11-ab1e-dc79061cfb0f.

Abstract:
Wind power is one of the most promising renewable energy resources for replacing conventional generation, which carries a high carbon footprint. Given the abundance of wind and its relatively cheap installation costs, it is likely that wind power will become the most important energy resource in the near future. The successful development of wind power relies heavily on the ability to integrate it efficiently into electricity grids. To optimize the value of wind power through careful power dispatches, techniques for forecasting the level of wind power and the associated variability are critical. Ideally, one would like to obtain reliable probability density forecasts for the wind power distributions. As wind is intermittent and wind turbines have non-linear power curves, this is a challenging task, and many ongoing studies address the topic of wind power forecasting. This thesis therefore contributes to the literature on wind power forecasting by constructing and analyzing various time series models and spatiotemporal models for wind power production. By exploring the key features of a portfolio of wind power data from Ireland and Denmark, we investigate different types of appropriate models. For instance, we develop anisotropic spatiotemporal correlation models to account for the propagation of weather fronts. We also develop two-stage models to accommodate the probability masses that occur in wind power distributions due to chains of zeros. We apply the models to generate multi-step probability forecasts for both individual and aggregated wind power using extensive data sets from Ireland and Denmark. The evaluation of these probability forecasts yields valuable insights, and a deeper understanding of the strengths of the various models can be applied to improve wind power forecasts in the future.
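As an aside, the two-stage idea mentioned in this abstract can be sketched in a few lines. This is our own minimal construction, not the thesis's model: a Bernoulli component estimates the probability mass at zero power, and a crude positive-part mean stands in for the conditional distribution of strictly positive output; all function names are illustrative.

```python
# Hypothetical sketch of a two-stage wind power density:
# stage 1 models P(power == 0), stage 2 models the positive part.
def fit_two_stage(powers):
    """Estimate P(zero) and the mean of the positive part from history."""
    zeros = sum(1 for p in powers if p == 0.0)
    p_zero = zeros / len(powers)
    positives = [p for p in powers if p > 0.0]
    mean_pos = sum(positives) / len(positives) if positives else 0.0
    return p_zero, mean_pos

def expected_power(p_zero, mean_pos):
    """Point forecast implied by the mixed (discrete + continuous) density."""
    return (1.0 - p_zero) * mean_pos
```

A full implementation would replace the positive-part mean with a proper conditional density and add the spatiotemporal correlation structure the thesis develops.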
5

Leasor, Zachary T. „Spatiotemporal Variations of Drought Persistence in the South-Central United States“. The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1497444478957738.

6

Rosswog, James. „Improving classification of spatiotemporal data using adaptive history filtering“. Diss., online access via UMI, 2007.

7

Lo, Shin-Lian. „High-dimensional classification and attribute-based forecasting“. Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37193.

Abstract:
This thesis consists of two parts. The first part focuses on high-dimensional classification problems in microarray experiments. The second part deals with forecasting problems with a large number of categories in predictors. Classification problems in microarray experiments refer to discriminating subjects with different biologic phenotypes or known tumor subtypes as well as to predicting the clinical outcomes or the prognostic stages of subjects. One important characteristic of microarray data is that the number of genes is much larger than the sample size. The penalized logistic regression method is known for simultaneous variable selection and classification. However, the performance of this method declines as the number of variables increases. Motivated by this concern, in the first study we propose a new classification approach that employs the penalized logistic regression method iteratively with a controlled size of gene subsets to maintain variable selection consistency and classification accuracy. The second study is motivated by a modern microarray experiment that includes two layers of replicates. This new experimental setting makes most existing classification methods, including penalized logistic regression, inappropriate to apply directly, because the assumption of independent observations is violated. To solve this problem, we propose a new classification method that incorporates random effects into penalized logistic regression, so that the heterogeneity among different experimental subjects and the correlations from repeated measurements can be taken into account. An efficient hybrid algorithm is introduced to tackle the computational challenges in estimation and integration. Applications to a breast cancer study show that the proposed classification method obtains smaller models with higher prediction accuracy than the method based on the assumption of independent observations.
The second part of this thesis develops a new forecasting approach for large-scale datasets associated with a large number of predictor categories and with predictor structures. The new approach, beyond conventional tree-based methods, incorporates a general linear model and hierarchical splits to make trees more comprehensive, efficient, and interpretable. Through an empirical study in the air cargo industry and a simulation study containing several different settings, the new approach produces higher forecasting accuracy and higher computational efficiency than existing tree-based methods.
8

Haensly, Paul J. „The Application of Statistical Classification to Business Failure Prediction“. Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc278187/.

Abstract:
Bankruptcy is a costly event. Holders of publicly traded securities can rely on security prices to reflect their risk. Other stakeholders have no such mechanism. Hence, methods for accurately forecasting bankruptcy would be valuable to them. A large body of literature has arisen on bankruptcy forecasting with statistical classification since Beaver (1967) and Altman (1968). Reported total error rates typically are 10%-20%, suggesting that these models reveal information which otherwise is unavailable and has value after financial data is released. This conflicts with evidence on market efficiency which indicates that securities markets adjust rapidly and actually anticipate announcements of financial data. Efforts to resolve this conflict with event study methodology have run afoul of market model specification difficulties. A different approach is taken here. Most extant criticism of research design in this literature concerns inferential techniques but not sampling design. This paper attempts to resolve major sampling design issues. The most important conclusion concerns the usual choice of the individual firm as the sampling unit. While this choice is logically inconsistent with how a forecaster observes financial data over time, no evidence of bias could be found. In this paper, prediction performance is evaluated in terms of expected loss. Most authors calculate total error rates, which fail to reflect documented asymmetries in misclassification costs and prior probabilities. Expected loss overcomes this weakness and also offers a formal means to evaluate forecasts from the perspective of stakeholders other than investors. This study shows that cost of misclassifying bankruptcy must be at least an order of magnitude greater than cost of misclassifying nonbankruptcy before discriminant analysis methods have value. This conclusion follows from both sampling experiments on historical financial data and Monte Carlo experiments on simulated data. 
However, the Monte Carlo experiments reveal that as the cost ratio increases, robustness of linear discriminant rules improves; performance appears to depend more on the cost ratio than form of the distributions.
9

Albanwan, Hessah AMYM. „Remote Sensing Image Enhancement through Spatiotemporal Filtering“. The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1492011122078055.

10

Wei, Xinyu. „Modelling and predicting adversarial behaviour using large amounts of spatiotemporal data“. Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/101959/1/Xinyu_Wei_Thesis.pdf.

Abstract:
This research represents pioneering work exploiting new and rich data from tracking systems to model player behaviour in sports. Novel methods for understanding and predicting player behaviour are proposed. The key contribution is the development of an algorithm that captures the "style" of players from trajectory data. Experimental results show improved prediction performance in various sports, including tennis, basketball and soccer.
11

Worthy, Paul James. „Investigation of artificial neural networks for forecasting and classification“. Thesis, City University London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264247.

12

Lopez, Farias Rodrigo. „Time series forecasting based on classification of dynamic patterns“. Thesis, IMT Alti Studi Lucca, 2015. http://e-theses.imtlucca.it/187/1/Farias_phdthesis.pdf.

Abstract:
This thesis addresses the problem of designing short-term forecasting models for water demand time series that exhibit nonlinear behaviour difficult to fit with a single linear model. These behaviours can be identified and classified to build specialised models that perform local predictions given an estimated operational regime. Each behaviour class is seen as a forecasting operation mode that activates a forecasting model. For this purpose we developed a general modular framework with three different implementations. The first is a Multi-Model predictor that combines machine learning regressors, clustering algorithms, classification, and function approximation, with the objective of producing accurate forecasts for short horizons. The second and third implementations are hybrid algorithms that use qualitative and quantitative information from the time series: the quantitative component contains the aggregated magnitude of each period of time, and the qualitative component contains the patterns associated with the operation modes. For the quantitative component we used a low-order Seasonal ARIMA model, and for the qualitative component a k-Nearest Neighbours classifier that predicts the next pattern, which is used to distribute the aggregated magnitude given by the Seasonal ARIMA. The third implementation is based on the same architecture, but assumes the existence of an accurate activity calendar with a sequence of working and rest days related to the forecast patterns. This scheme is extended with a nonlinear filter module for the prediction of pattern mismatches.
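The hybrid scheme described above can be caricatured in a few lines. In this sketch (our own construction, not the thesis's code), the Seasonal ARIMA component is replaced by a seasonal-naive total for brevity, and a 1-nearest-neighbour lookup stands in for the k-NN pattern predictor; all names are illustrative.

```python
# Hypothetical hybrid forecast: a quantitative model predicts the day's
# total demand, a nearest-neighbour lookup predicts the intraday pattern,
# and the total is distributed over the hours according to that pattern.
def forecast_day(history_days, last_day):
    """history_days: list of past intraday profiles (lists of floats)."""
    # Quantitative part: next day's total ~ total of the most recent day.
    total = sum(last_day)

    def pattern(day):
        s = sum(day)
        return [v / s for v in day]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Qualitative part: nearest past profile to the last observed day,
    # compared by squared distance between normalised patterns.
    ref = pattern(last_day)
    nearest = min(history_days, key=lambda d: dist(pattern(d), ref))
    # Distribute the aggregate magnitude over the predicted pattern.
    return [total * w for w in pattern(nearest)]
```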
13

Erler, Frido. „Spatiotemporal calcium-dynamics in presynaptic terminals“. Doctoral thesis, Technische Universität Dresden, 2004. https://tud.qucosa.de/id/qucosa%3A24527.

Abstract:
This thesis deals with a newly-developed model for the spatiotemporal calcium dynamics within presynaptic terminals. The model is based on single-protein kinetics and has been used to successfully describe different neuron types such as pyramidal neurons in the rat neocortex and the Calyx of Held of neurons from the rat brainstem. A limited number of parameters had to be adjusted to fluorescence measurements of the calcium concentration. These values can be interpreted as a prediction of the model, and in particular the protein densities can be compared to independent experiments. The contribution of single proteins to the total calcium dynamics has been analysed in detail for voltage-dependent calcium channel, plasma-membrane calcium ATPase, sodium-calcium exchanger, and endogenous as well as exogenous buffer proteins. The model can be used to reconstruct the unperturbed calcium dynamics from measurements using fluorescence indicators. The calcium response to different stimuli has been investigated in view of its relevance for synaptic plasticity. This work provides a first step towards a description of the complete synaptic transmission using single-protein data.
14

Strasser, Klaus-Peter. „Kinetic oscillations and spatiotemporal self-organization in electrocatalytic reactions experimental analysis, modeling and classification /“. [S.l. : s.n.], 1999. http://www.diss.fu-berlin.de/1999/25/index.html.

15

Lundkvist, Emil. „Decision Tree Classification and Forecasting of Pricing Time Series Data“. Thesis, KTH, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-151017.

Abstract:
Many companies today, across different fields of operation and sizes, have access to a vast amount of data that was not available only a couple of years ago. This situation raises questions about how to organize and use the data in the best way possible. In this thesis a large database of pricing data for products within various market segments is analysed. The pricing data comes from both external and internal sources and is therefore confidential. Because of this confidentiality, the labels from the database are substituted with generic ones in this thesis and the company is not referred to by name, but the analysis is carried out on the real data set. The data is initially unstructured and difficult to survey, and is therefore first classified. This is performed by feeding manually labelled training data into an algorithm which builds a decision tree. The decision tree is used to divide the remaining products in the database into classes. Then, for each class, a multivariate time series model is built, and each product's future price within the class can be predicted. A front end is also developed for interacting with the classification and price prediction. The results show that the classification algorithm is both fast enough to operate in real time and performs well. The time series analysis shows that it is possible to use the information within each class to make predictions, and a simple vector autoregressive model shows good predictive results.
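A hedged miniature of the pipeline described above (classify first, then forecast per class): the decision tree is stubbed as a single split and the vector autoregression is reduced to a univariate AR(1). All names, features and thresholds here are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical two-step pipeline: assign a product to a class, then
# apply that class's autoregressive one-step price forecast.
def classify(product_features):
    """Stand-in for the trained decision tree: split on one feature."""
    return "segment_A" if product_features["volume"] >= 100 else "segment_B"

def ar1_forecast(prices, phi):
    """One-step AR(1) prediction around the series mean."""
    mean = sum(prices) / len(prices)
    return mean + phi * (prices[-1] - mean)
```

In the thesis's setting, a separate multivariate model is fitted per class; here each class would simply carry its own `phi`.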
16

Leverger, Colin. „Investigation of a framework for seasonal time series forecasting“. Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S033.

Abstract:
To deploy web applications, web servers are paramount. If there are too few of them, application performance can quickly deteriorate; if there are too many, resources are wasted and costs increase. In this context, engineers use capacity planning tools to follow the performance of the servers, to collect the time series data generated by the infrastructure, and to anticipate future needs. The necessity of creating reliable forecasts is clear. Data generated by such infrastructures often exhibit seasonality: the activity cycle followed by the infrastructure is determined by seasonal cycles (for example, the users' daily rhythms). This thesis introduces a framework for seasonal time series forecasting. The framework is composed of two machine learning models (one for clustering, one for classification) and aims to produce reliable mid-term forecasts with a limited number of parameters. Three instantiations of the framework are presented: a baseline, a deterministic version and a probabilistic version. The baseline combines the K-means clustering algorithm with Markov models. The deterministic version combines several clustering algorithms (K-means, K-shape, GAK and MODL) with several classifiers (naive Bayes, decision trees, random forests and logistic regression). The probabilistic version relies on coclustering to create probabilistic grids of time series, which are used to describe the data in an unsupervised way. The performance of the various implementations is compared with several state-of-the-art models, including autoregressive models, ARIMA and SARIMA, Holt-Winters, and Prophet for the probabilistic paradigm. The results of the baseline are encouraging and confirm the interest of the proposed framework. Good results are observed for the deterministic implementation, and correct results for the probabilistic version. An Orange use case is studied, showing both the interest and the limits of the methodology.
17

Tang, Adelina Lai Toh. „Application of the tree augmented naive Bayes network to classification and forecasting /“. [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe.pdf.

18

Hovey, Erik P. „Forecasting the Marine Corps Enlisted Classification Plan Assessment of An Alternative Model“. Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6810.

Abstract:
In a given fiscal year, the United States Marine Corps accesses approximately 30,000 enlisted personnel into its ranks. This labor supply of recruits is classified into various Military Occupational Specialties (MOSs) according to the forecasted requirement for new personnel in each MOS. The Classification Plan is the primary initial-training input into the Training Input Plan, which allocates all training resources for Training and Education Command. The current Classification Model is based on a steady-state Markov model that estimates the first-term personnel inventory for each initial-training MOS. A performance comparison was made against a transient Markov model that solves for an optimal classification plan over the course of a four-year planning horizon. First, the validity of the steady-state assumption is tested and found to produce a variance in annual targets for each MOS throughout the Future Years Defense Plan that is prohibitively high. Next, each model's ability to forecast annual attrition by MOS between the years 2001 and 2011 is tested. Results indicate that the transient model produced a more accurate forecast for 5,321 out of 7,379 design points (approximately 72% of the observations). The transient model achieved a Mean Absolute Proportional Error that was on average 14 percentage points smaller than that of the steady-state model; in over 25% of the cases, this difference exceeded 20 percentage points. Based upon this improved performance, it is recommended that the Marine Corps adopt the enhanced transient Markov model as the foundation for forecasting its annual Enlisted Classification Plan.
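A schematic of the kind of Markov projection such a classification model rests on (not the thesis's actual model; the retention rates and counts below are invented for illustration): each year a fraction of the cohort attrites, the remainder ages one year of service, and new accessions enter at year one.

```python
# Hypothetical transient Markov projection of an enlisted inventory.
def project_inventory(inventory, retention, accessions, years):
    """inventory: counts by year of service; retention[i]: share of
    year-i personnel who continue into year i+1; accessions: annual
    intake at year of service 1."""
    inv = list(inventory)
    for _ in range(years):
        nxt = [accessions]  # new classification-plan intake at YOS 1
        for i in range(len(inv) - 1):
            nxt.append(inv[i] * retention[i])
        inv = nxt
    return inv
```

A steady-state variant would instead solve for the fixed point of this recursion; the thesis's finding is that the transient, year-by-year projection forecasts attrition more accurately.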
19

Afolabi, David Olalekan. „Interference reduction in classification and forecasting tasks through cluster and trend analysis“. Thesis, University of Liverpool, 2018. http://livrepository.liverpool.ac.uk/3027593/.

Abstract:
In order to classify or predict accurately in classification or time series tasks, the model building process depends substantially on the quality of the data available for training such models. Consequently, reduced performance can be observed when input attributes/patterns have a conflicting influence on the learning process, either due to an intrinsically weak discrimination factor of some specific input attributes or as a result of outliers/anomalies picked up during data acquisition/entry. Several hypotheses are proposed, defined, and empirically tested to achieve an interference-less machine learning process using meta-assisted learning in data classification and time series forecasting. Meta-learning is a branch of machine learning that focuses on the automatic and flexible learning of informative concepts/knowledge mined from given data in an efficient manner to improve performance, whereby such a system includes a process to monitor the learning progress. The two domains on which this research focuses are classification tasks and time series forecasting tasks. Within these two domains, two further learning methods are explored, whereby both traditionally flat artificial neural network models and hierarchically structured artificial neural network models are modified to tackle the machine learning interference problem by using derived meta-information to reduce classification and forecasting error. The simulation experiments are performed with the multi-layer perceptron and a variant known as the constructive backpropagation artificial neural network for classification tasks; similarly, the nonlinear autoregressive exogenous model and long short-term memory artificial neural networks are used in time series forecasting tasks. This thesis is established on the following key hypotheses:
i. Utilising the 'cluster assumption' for noise identification and extraction, based on the intuition that samples of the dataset with higher similarity are inextricable and should therefore be clustered with other neighbouring samples that have similar labels. Clustered data from algorithms such as density-based spatial clustering of applications with noise (DBSCAN) are analysed and are essential for the derivation of meta-information.
ii. Detection of repeating trend patterns by decomposing the input signal into several building-block components over a range of frequencies, enabling a distinction between information and error/noise/anomaly. To filter or decompose time series trends, we apply the moving average and empirical mode decomposition respectively.
iii. The guided meta-learning process, in which techniques are derived and introduced into the traditional learning process based on the inherent structure/distribution of pattern clusters or component signal trends within the data, to tackle the problem of interference and noise within input attributes as the modified machine learner builds an accurate model.
iv. Hierarchical learning of local and global clusters/trends, as real-world information tends to be structured in a hierarchy of concepts. It is therefore intuitive to learn on small/uncomplicated clusters before tackling a complex/encompassing cluster, or, in the case of time series, to learn short-term patterns before long-term trends.
This novel approach to noise elimination is shown to statistically increase the performance of a machine learning algorithm modified to carry out meta-learning on the training data. It is applicable to various benchmark and real-world datasets, with significant improvement on data that contains known/unknown structure or patterns. The methods put forward in this thesis therefore have the potential to complement or reinforce existing machine learning algorithms.
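The trend-versus-noise idea in hypothesis ii can be sketched in a few lines: smooth the series with a moving average (one of the two filters the abstract names), then flag points whose residual from the trend is unusually large. This is only an illustrative sketch under simple assumptions; the thesis's actual meta-learning pipeline and any function names here are not from the source.

```python
import statistics

def moving_average(series, window):
    """Smooth a series with a centered moving average (a simple trend filter)."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        trend.append(sum(series[lo:hi]) / (hi - lo))
    return trend

def residual_outliers(series, window=5, k=3.0):
    """Flag points whose deviation from the trend exceeds k standard
    deviations of the residuals; these are candidate noise/anomaly samples."""
    trend = moving_average(series, window)
    residuals = [x - t for x, t in zip(series, trend)]
    sd = statistics.pstdev(residuals)
    return [abs(r) > k * sd for r in residuals]
```

The flags could then feed the meta-information that steers the modified learner away from interfering samples.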
APA, Harvard, Vancouver, ISO und andere Zitierweisen
20

Karimi, Ahmad Maroof. „DATA SCIENCE AND MACHINE LEARNING TO PREDICT DEGRADATION AND POWER OF PHOTOVOLTAIC SYSTEMS: CONVOLUTIONAL AND SPATIOTEMPORAL GRAPH NEURAL NETWORK“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1601082841477951.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
21

Zhou, Enwang. „Evolutionary intelligent systems for pattern classification and price based electric load forecasting applications“. Ann Arbor, Mich. : ProQuest, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3258041.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph.D. in Electrical Engineering)--S.M.U., 2007.
Title from PDF title page (viewed Mar. 18, 2008). Source: Dissertation Abstracts International, Volume: 68-03, Section: B, page: 1852. Adviser: Alireza Khotanzad. Includes bibliographical references.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
22

Oliveira, Adriano Lorena Inácio de. „Neural networks forecasting and classification-based techniques for novelty detection in time series“. Universidade Federal de Pernambuco, 2011. https://repositorio.ufpe.br/handle/123456789/1825.

Der volle Inhalt der Quelle
Annotation:
Made available in DSpace on 2014-06-12T15:52:37Z (GMT). No. of bitstreams: 2 arquivo4525_1.pdf: 1657788 bytes, checksum: 5abba3555b6cbbc4fa073f1b718d6579 (MD5) license.txt: 1748 bytes, checksum: 8a4605be74aa9ea9d79846c1fba20a33 (MD5) Previous issue date: 2011
The problem of novelty detection can be defined as the identification of new or unknown data to which a machine learning system did not have access during training. Novelty detection algorithms are designed to classify a given input pattern as normal or novel. These algorithms are used in several areas, such as computer vision, machine fault detection, computer network security, and fraud detection. A large number of systems can have their behaviour modelled by time series. Recently, the problem of novelty detection in time series has received considerable attention. Several techniques have been proposed, including techniques based on time series forecasting with artificial neural networks and on the classification of time series windows. Forecasting-based techniques for novelty detection in time series have been criticised for what is considered unsatisfactory performance. In many practical problems, the amount of data available in the series is quite small, making forecasting an even more complex problem. This is the case in some important auditing problems, such as accounting and payroll auditing. As an alternative to forecasting-based methods, some classification-based methods have recently been proposed for novelty detection in time series, including methods based on artificial immune systems, wavelets, and one-class support vector machines. This thesis proposes a set of methods based on artificial neural networks for novelty detection in time series. The proposed methods were designed specifically for the detection of frauds arising from relatively small deviations, which are quite important in fraud detection applications in financial systems.
The first method was proposed to improve the performance of forecasting-based novelty detection. It is based on robust confidence intervals, which are used to define adequate values for the thresholds used for novelty detection. The proposed method was applied to several financial time series and obtained much better results than previous forecasting-based methods. This thesis also proposes two different classification-based methods for novelty detection in time series. The first method is based on negative samples, whereas the second is based on RBF-DDA artificial neural networks and does not use negative samples in the training phase. Simulation results using several time series extracted from real-world applications showed that the second method achieves better performance than the first. Moreover, the performance of the second method does not depend on the size of the test set, in contrast to the first method. In addition to the methods for novelty detection in time series, this thesis proposes and investigates four different methods for improving the performance of RBF-DDA neural networks. The proposed methods were evaluated on six datasets from the UCI repository, and the results showed that they considerably improve the performance of RBF-DDA networks and that they outperform MLP networks and the AdaBoost method. Furthermore, we show that the proposed methods obtain results similar to k-NN. The methods proposed for improving RBF-DDA were also used together with the negative-sample-based method proposed in this thesis for novelty detection in time series. The results of several experiments showed that these methods also greatly improve the performance of fraud detection in time series, which is the main focus of this thesis.
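The thresholding idea behind forecasting-based novelty detection can be sketched with robust statistics: derive an interval from past forecast errors using the median and MAD, and call any error outside it a novelty. This is a generic sketch in the spirit of the robust confidence intervals the abstract mentions, not the thesis's exact formulation; the function names are my own.

```python
import statistics

def robust_threshold(errors, k=3.0):
    """Robust interval for forecast errors: median +/- k * MAD.
    Median and MAD resist distortion by the very anomalies we hunt for."""
    med = statistics.median(errors)
    mad = statistics.median([abs(e - med) for e in errors])
    return med - k * mad, med + k * mad

def is_novelty(error, past_errors, k=3.0):
    """Flag a new forecast error as novel if it falls outside the interval."""
    lo, hi = robust_threshold(past_errors, k)
    return not (lo <= error <= hi)
```

In an auditing setting, the forecast would come from a neural network trained on the series, and flagged errors would mark transactions for inspection.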
APA, Harvard, Vancouver, ISO und andere Zitierweisen
23

Lack, Steven A. „Cell identification, verification, and classification using shape analysis techniques“. Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/6017.

Der volle Inhalt der Quelle
Annotation:
Thesis (Ph. D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 11, 2008). Includes bibliographical references.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
24

Krauße, Thomas. „Development of a Class Framework for Flood Forecasting“. Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A26441.

Der volle Inhalt der Quelle
Annotation:
Aus der Einleitung: The calculation and prediction of river flow is a very old problem. Extremely high runoff values in particular can cause enormous economic damage. A system that precisely predicts the runoff and warns in case of a flood event can prevent a large share of this damage. On the basis of a good flood forecast, action can be taken through preventive measures and warnings. Efficient structural flood retention can reduce the effects of a flood event enormously. With a precise runoff prediction at longer lead times (>48 h), the dam administration can order its gatekeepers to empty dams and reservoirs quickly, following a smart strategy. With good timing, this enables the dams to store and retain the peak of the flood and to reduce all damaging effects downstream. Warning people in potentially flooded areas with greater lead time enables them to evacuate movable property such as cars, computers, and important documents. Additionally, the underlying rainfall-runoff model can be used to perform runoff simulations to find out which areas are threatened at which precipitation events and associated runoff in the river. Altogether, these methods can avoid a huge amount of economic damage.
Contents:
List of Symbols and Abbreviations S. III
1 Introduction S. 1
2 Process based Rainfall-Runoff Modelling S. 5
2.1 Basics of runoff processes S. 5
2.2 Physically based rainfall-runoff and hydrodynamic river models S. 15
3 Portraying Rainfall-Runoff Processes with Neural Networks S. 21
3.1 The Challenge in General S. 22
3.2 State-of-the-art Approaches S. 24
3.3 Architectures of neural networks for time series prediction S. 26
4 Requirements specification S. 33
5 The PAI-OFF approach as the base of the system S. 35
5.1 Pre-Processing of the Input Data S. 37
5.2 Operating and training the PoNN S. 47
5.3 The PAI-OFF approach - an Intelligent System S. 52
6 Design and Implementation S. 55
6.1 Design S. 55
6.2 Implementation S. 58
6.3 Exported interface definition S. 62
6.4 Displaying output data with involvement of uncertainty S. 64
7 Results and Discussion S. 69
7.1 Evaluation of the Results S. 69
7.2 Discussion of the achieved state S. 75
8 Conclusion and Future Work S. 77
8.1 Access to real-time meteorological input data S. 77
8.2 Using further developed prediction methods S. 79
8.3 Development of a graphical user interface S. 80
Bibliography S. 83
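Neural-network flood forecasting of the kind described here predicts runoff at some lead time from windows of past precipitation and runoff. A minimal sketch of that sliding-window feature construction follows; it is illustrative only (the PAI-OFF pre-processing is far more elaborate, and the function name is hypothetical).

```python
def make_windows(rain, runoff, lags=3, lead=1):
    """Build (features, target) pairs: the last `lags` rainfall and runoff
    values predict the runoff `lead` steps ahead. Input lists are aligned
    time series of equal sampling interval."""
    samples = []
    for t in range(lags, len(runoff) - lead + 1):
        features = rain[t - lags:t] + runoff[t - lags:t]
        samples.append((features, runoff[t + lead - 1]))
    return samples
```

Longer lead times (the >48 h horizon mentioned above) simply raise `lead`, at the cost of a harder learning problem.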
APA, Harvard, Vancouver, ISO und andere Zitierweisen
25

Al, Nasseri Alya Ali Mansoor. „The predictive power of stock micro-blogging sentiment in forecasting stock market behaviour“. Thesis, Brunel University, 2016. http://bura.brunel.ac.uk/handle/2438/13575.

Der volle Inhalt der Quelle
Annotation:
Online stock forums have become a vital investing platform on which to publish relevant and valuable user-generated content (UGC) data such as investment recommendations and other stock-related information that allow investors to view the opinions of a large number of users and share-trading ideas. This thesis applies methods from computational linguistics and text-mining techniques to analyse and extract, on a daily basis, sentiments from stock-related micro-blogging messages called “StockTwits”. The primary aim of this research is to provide an understanding of the predictive ability of stock micro-blogging sentiments to forecast future stock price behavioural movements by investigating the various roles played by investor sentiments in determining asset pricing on the stock market. The empirical analysis in this thesis consists of four main parts based on the predictive power and the role of investor sentiment in the stock market. The first part discusses the findings of the text-mining procedure for extracting and predicting sentiments from stock-related micro-blogging data. The purpose is to provide a comparative textual analysis of different machine learning algorithms for the purpose of selecting the most accurate text-mining techniques for predicting sentiment analysis on StockTwits through the provision of two different applications of feature selection, namely filter and wrapper approaches. The second part of the analysis focuses on investigating the predictive correlations between StockTwits features and the stock market indicators. It aims to examine the explanatory power of StockTwits variables in explaining the dynamic nature of different financial market indicators. The third part of the analysis investigates the role played by noise traders in determining asset prices. 
The aim is to show that stock returns, volatility and trading volumes are affected by investor sentiment; it also seeks to investigate whether changes in sentiment (bullish or bearish) will have different effects on stock market prices. The fourth part offers an in-depth analysis of some tweet-market relationships which represent an open problem in the empirical literature (e.g. sentiment-return relations and volume-disagreement relations). The results suggest that StockTwits sentiments exhibit explanatory power in explaining the dynamics of stock prices in the U.S. market. Taking different approaches by combining text-mining techniques with feature selection methods has proved successful in predicting StockTwits sentiments. The applications of the approach presented in this thesis offer real-time investment ideas that may provide investors and their peers with a decision support mechanism. Investor sentiment plays a critical role in determining asset prices in capital markets. Overall, the findings suggest that investor sentiment among noise traders is a priced factor. The findings confirm the existence of asymmetric spillover effects of bullish and bearish sentiments on the stock market. They also suggest that sentiment is a significant factor in explaining stock price behaviour in the capital market and imply the positive role of the stock market in the formation of investor sentiment in stock markets. Furthermore, the research findings demonstrate that disagreement is not only an important factor in determining trading volumes but it is also considered a very significant factor in influencing asset prices and returns in capital markets. Overall, the findings of the thesis provide empirical evidence that failure to consider the role of investor sentiment in traditional finance theory could lead to an imperfect picture when explaining the behaviour of stock prices in stock markets.
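The sentiment-extraction step described above can be illustrated with a toy lexicon-based scorer. This is only a stand-in to make the bullish/bearish labelling concrete; the thesis compares machine-learning classifiers with filter and wrapper feature selection, not this rule, and the word lists below are invented for the example.

```python
# Hypothetical mini-lexicons; a real system would learn features from data.
BULLISH = {"buy", "long", "bullish", "up", "breakout", "moon"}
BEARISH = {"sell", "short", "bearish", "down", "crash", "dump"}

def message_sentiment(text):
    """Score a StockTwits-style message: 1 bullish, -1 bearish, 0 neutral."""
    words = {w.strip(".,!?$").lower() for w in text.split()}
    score = len(words & BULLISH) - len(words & BEARISH)
    return (score > 0) - (score < 0)  # sign of the score
```

Daily aggregation of such per-message labels would yield the sentiment series whose predictive correlations with returns, volatility and volume the thesis then studies.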
APA, Harvard, Vancouver, ISO und andere Zitierweisen
26

Wang, Jian. „From local to global: Complex behavior of spatiotemporal systems with fluctuating delay times“. Doctoral thesis, Universitätsverlag der Technischen Universität Chemnitz, 2013. https://monarch.qucosa.de/id/qucosa%3A20006.

Der volle Inhalt der Quelle
Annotation:
The aim of this thesis is to investigate the dynamical behaviors of spatially extended systems with fluctuating time delays. In recent years, the study of spatially extended systems and systems with fluctuating delays has experienced fast growth. In ubiquitous natural and laboratory situations, understanding the action of time-delayed signals is crucial for understanding the dynamical behavior of these systems. Frequently, the length of the delay is found to change with time. Spatially extended systems are widely studied in many fields, such as chemistry, ecology, and biology. Self-organization, turbulence, and related nonlinear dynamic phenomena in spatially extended systems have developed into one of the most exciting topics in modern science. The first part of this thesis considers the discrete system. Diffusively coupled map lattices with a fluctuating delay are used in the study. The uncoupled local dynamics of the considered system are represented by the delayed logistic map. In particular, the influences of diffusive coupling and fluctuating delay are studied. To observe and understand these influences, the results for the considered system are compared with coupled map lattices without delay and with a constant delay, as well as with the uncoupled logistic map with fluctuating delays. Identifying different patterns, determining the existence of traveling wave solutions, and specifying the fully synchronized stable state are the focus of this part of the study. The Lyapunov exponent, the master stability function, spectrum analysis, and the structure factor are used to characterize the different states and the transitions between them. The second part examines the continuous system. The delay is introduced into the reaction term of the Fisher-KPP equation. The focus of this part of the study is the time-delay-induced Turing instability in one-component reaction-diffusion systems.
Turing instability has previously only been found in multiple-component reaction-diffusion systems. However, this work demonstrates with the help of the stability exponent that fluctuating delay can result in Turing instability in one-component reaction-diffusion systems as well.
The aim of the present work is to investigate the influence of temporally fluctuating delays in spatially extended diffusive systems. Comparison with systems with constant delay and with systems without spatial coupling yields a deeper understanding and a better description of the dynamics of the spatially extended diffusive system with fluctuating delays. In the first part, discrete systems in the form of diffusive coupled map lattices are investigated. The logistic map with delay is chosen as the local iterated map of the considered system. This part focuses on pattern formation, the existence of multiple attractors and traveling waves, and the possibility of full synchronization. The master stability function, the Lyapunov exponent, and spectrum analysis are used to understand the dynamical behavior. In the second part we consider continuous systems. Here the Fisher-KPP equation with delays in the reaction term is investigated. This part focuses on the existence of the Turing instability. With the help of analytical and numerical calculations it is shown that, for fluctuating delays, a Turing instability can also be found in one-component reaction-diffusion equations.
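The model class studied here, a diffusively coupled map lattice whose local dynamics is a delayed logistic map with a delay redrawn at each step, can be sketched directly. All parameter values below are illustrative, not taken from the thesis; with `r` below 1 the lattice simply decays to zero, while the interesting regimes the thesis explores occur at larger `r`, where the delayed logistic map bifurcates.

```python
import random

def delayed_logistic_lattice(n=50, steps=200, r=0.8, eps=0.3, max_delay=3, seed=0):
    """Iterate a diffusively coupled map lattice with fluctuating delay.
    Local map: f(x_t, x_{t-tau}) = r * x_t * (1 - x_{t-tau}), tau random.
    Returns the final lattice state (periodic boundaries)."""
    rng = random.Random(seed)
    # history[t][i]: state of site i at time t; random initial history
    history = [[rng.uniform(0.1, 0.9) for _ in range(n)]
               for _ in range(max_delay + 1)]
    for _ in range(steps):
        x = history[-1]
        tau = rng.randint(1, max_delay)            # fluctuating delay
        xd = history[-1 - tau]                     # delayed lattice state
        local = [r * x[i] * (1.0 - xd[i]) for i in range(n)]
        # diffusive coupling: convex combination with nearest neighbours
        new = [(1 - eps) * local[i] + 0.5 * eps * (local[i - 1] + local[(i + 1) % n])
               for i in range(n)]
        history.append(new)
        history = history[-(max_delay + 1):]       # keep only needed history
    return history[-1]
```

The thesis's analyses (Lyapunov exponents, master stability function, structure factor) would then be computed on trajectories of exactly this kind of iteration.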
APA, Harvard, Vancouver, ISO und andere Zitierweisen
27

Diez, Franziska [Verfasser], und Ralf [Akademischer Betreuer] Korn. „Yield Curves and Chance-Risk Classification: Modeling, Forecasting, and Pension Product Portfolios / Franziska Diez ; Betreuer: Ralf Korn“. Kaiserslautern : Technische Universität Kaiserslautern, 2021. http://d-nb.info/1238074472/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
28

Nguyen, Van O. „Analysis of the U.S. Marine Corps' steady state Markov model for forecasting annual first-term enlisted classification requirements“. Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/25685.

Der volle Inhalt der Quelle
Annotation:
Approved for public release; distribution is unlimited.
The Marine Corps accesses approximately 29,000 to 36,000 new recruits annually. Determining how to classify these new enlistees into more than 200 Military Occupational Specialties is a critical task. These classification estimates must be precise so that the units within the Fleet Marine Force will have the necessary personnel to accomplish their mission. At the same time, manpower planners must also balance the force structure to minimize personnel overages, which could lead to excessive labor and training costs as well as promotion delays. The purpose of this research is to validate and, if necessary, improve the steady state Markov model currently being utilized by the manpower planners at Headquarters, U.S. Marine Corps (Code MPP-23) to forecast the annual personnel classification requirements of new recruits. From a mathematical perspective, all the essential elements of their model were present; however, some of the components, such as the year 1 continuation rate, were not computed according to standard practice, and their estimates of the classification stocks are imprecise due to rounding errors inherent in their forecasting procedure. As a result, a revised model was developed to improve the accuracy and timeliness of the personnel classification forecasts. The recommendations were to implement the revised model and to review the computation of the continuation rates.
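A steady state Markov model of the kind analyzed here rests on finding the long-run distribution of a personnel-flow transition matrix. The following generic sketch computes that steady state by power iteration; it illustrates the model class only, not the Marine Corps model itself, and the example matrix is invented.

```python
def steady_state(P, iters=500):
    """Long-run distribution of a row-stochastic transition matrix P:
    repeatedly apply the chain to an initial distribution until it settles."""
    n = len(P)
    pi = [1.0 / n] * n                # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi
```

In a personnel model, the states would be grades or occupational categories, the entries continuation/promotion rates, and the steady state the force structure the flows imply.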
APA, Harvard, Vancouver, ISO und andere Zitierweisen
29

Nepali, Anjeev. „County Level Population Estimation Using Knowledge-Based Image Classification and Regression Models“. Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30498/.

Der volle Inhalt der Quelle
Annotation:
This paper presents methods and results of county-level population estimation using Landsat Thematic Mapper (TM) images of Denton County and Collin County in Texas. Landsat TM images acquired in March 2000 were classified into residential and non-residential classes using maximum likelihood classification and knowledge-based classification methods. Accuracy assessment results from the classified image produced using knowledge-based classification and traditional supervised classification (maximum likelihood classification) methods suggest that knowledge-based classification is more effective than traditional supervised classification methods. Furthermore, using randomly selected samples of census block groups, ordinary least squares (OLS) and geographically weighted regression (GWR) models were created for total population estimation. The overall accuracy of the models is over 96% at the county level. The results also suggest that underestimation normally occurs in block groups with high population density, whereas overestimation occurs in block groups with low population density.
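The regression step described above, estimating population from classified imagery, reduces in its simplest form to ordinary least squares on a pixel-derived predictor. A one-predictor OLS sketch follows (illustrative only; the thesis fits OLS and geographically weighted models on census block groups, and the variable names here are assumptions).

```python
def ols_fit(x, y):
    """Closed-form one-predictor OLS: population ~ a + b * residential_pixels.
    Returns the intercept a and slope b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b
```

The geographically weighted variant differs by re-estimating `a` and `b` at each location with distance-based weights, which is what lets it capture the density-dependent over- and underestimation the abstract reports.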
APA, Harvard, Vancouver, ISO und andere Zitierweisen
30

Ricci, Lorenzo. „Essays on tail risk in macroeconomics and finance: measurement and forecasting“. Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/242122.

Der volle Inhalt der Quelle
Annotation:
This thesis is composed of three chapters that propose novel approaches to tail risk in financial markets and to forecasting in finance and macroeconomics. The first part of this dissertation focuses on financial market correlations and introduces a simple measure of tail correlation, TailCoR, while the second contribution addresses the issue of identification of non-normal structural shocks in Vector Autoregressions, which are common in finance. The third part belongs to the vast literature on predictions of economic growth; the problem is tackled using a Bayesian Dynamic Factor model to predict Norwegian GDP.
Chapter I: TailCoR. The first chapter introduces a simple measure of tail correlation, TailCoR, which disentangles linear and non-linear correlation. The aim is to capture all features of financial market co-movement when extreme events (i.e. financial crises) occur. Indeed, tail correlations may arise because asset prices are either linearly correlated (i.e. the Pearson correlations are different from zero) or non-linearly correlated, meaning that asset prices are dependent at the tail of the distribution. Since it is based on quantiles, TailCoR has three main advantages: i) it is not based on asymptotic arguments, ii) it is very general as it applies with no specific distributional assumption, and iii) it is simple to use. We show that TailCoR also disentangles easily between linear and non-linear correlations. The measure has been successfully tested on simulated data. Several extensions, useful for practitioners, are presented, like downside and upside tail correlations. In our empirical analysis, we apply this measure to eight major US banks for the period 2003-2012. For comparison purposes, we compute the upper and lower exceedance correlations and the parametric and non-parametric tail dependence coefficients. On the overall sample, results show that both the linear and non-linear contributions are relevant. The results suggest that co-movement increases during the financial crisis because of both the linear and non-linear correlations. Furthermore, the increase of TailCoR at the end of 2012 is mostly driven by the non-linearity, reflecting the risks of tail events and their spillovers associated with the European sovereign debt crisis.
Chapter II: On the identification of non-normal shocks in structural VAR. The second chapter deals with the structural interpretation of the VAR using the statistical properties of the innovation terms. In general, financial markets are characterized by non-normal shocks. Under non-Gaussianity, we introduce a methodology based on the reduction of tail dependency to identify the non-normal structural shocks. Borrowing from statistics, the methodology can be summarized in two main steps: i) decorrelate the estimated residuals and ii) rotate the uncorrelated residuals in order to get a vector of independent shocks using a tail dependency matrix. We do not label the shocks a priori, but post-estimate on the basis of economic judgement. Furthermore, we show how our approach allows us to identify all the shocks using a Monte Carlo study. In some cases, the method can turn out to be more significant when the amount of tail events is relevant. Therefore, the frequency of the series and the degree of non-normality are relevant to achieve accurate identification. Finally, we apply our method to two different VARs, all estimated on US data: i) a monthly trivariate model which studies the effects of oil market shocks, and ii) a VAR that focuses on the interaction between monetary policy and the stock market. In the first case, we validate the results obtained in the economic literature. In the second case, we cannot confirm the validity of an identification scheme based on a combination of short- and long-run restrictions which is used in part of the empirical literature.
Chapter III: Nowcasting Norway. The third chapter consists of predictions of Norwegian Mainland GDP. Policy institutions have to set their policies without knowledge of the current economic conditions. We estimate a Bayesian dynamic factor model (BDFM) on a panel of macroeconomic variables (all followed by market operators) from 1990 until 2011. First, the BDFM is an extension to the Bayesian framework of the dynamic factor model (DFM). The difference is that, compared with a DFM, there is more dynamics in the BDFM, introduced in order to accommodate the dynamic heterogeneity of different variables. However, in order to introduce more dynamics, the BDFM requires the estimation of a large number of parameters, which can easily lead to volatile predictions due to estimation uncertainty. This is why the model is estimated with Bayesian methods, which, by shrinking the factor model toward a simple naive prior model, are able to limit estimation uncertainty. The second aspect is the use of a small dataset. A common feature of the literature on DFMs is the use of large datasets. However, there is a literature that has shown how, for the purpose of forecasting, DFMs can be estimated on a small number of appropriately selected variables. Finally, through a pseudo real-time exercise, we show that the BDFM performs well both in terms of point forecasts and in terms of density forecasts. Results indicate that our model outperforms standard univariate benchmark models, that it performs as well as the Bloomberg Survey, and that it outperforms the predictions published by the Norges Bank in its monetary policy report.
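The quantile-based construction behind a measure like TailCoR can be sketched as follows: standardize each series robustly, project the pair onto the 45-degree line, and measure the tail quantile range of the projection, which widens when the two series move together in the tails. This is an illustrative sketch in the spirit of the measure, not its exact formula; all function names are my own.

```python
def quantile(xs, q):
    """Linear-interpolation quantile of a list, for 0 <= q <= 1."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    return s[lo] if lo + 1 == len(s) else s[lo] * (1 - frac) + s[lo + 1] * frac

def tail_comovement(x, y, q=0.95):
    """Quantile-based tail co-movement sketch: robustly standardize both
    series, project on the diagonal, and return the q / (1-q) quantile range
    of the projection. Larger values indicate stronger tail co-movement."""
    def standardize(v):
        med = quantile(v, 0.5)
        iqr = quantile(v, 0.75) - quantile(v, 0.25)
        return [(vi - med) / iqr for vi in v]
    sx, sy = standardize(x), standardize(y)
    z = [(a + b) / 2 ** 0.5 for a, b in zip(sx, sy)]   # 45-degree projection
    return quantile(z, q) - quantile(z, 1 - q)
```

Because everything is built from quantiles, no distributional assumption or asymptotic argument is needed, which is the advantage the abstract highlights.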
Doctorat en Sciences économiques et de gestion
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO und andere Zitierweisen
31

Zhao, Tao. „A new method for detection and classification of out-of-control signals in autocorrelated multivariate processes“. Morgantown, W. Va. : [West Virginia University Libraries], 2008. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=5615.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--West Virginia University, 2008.
Title from document title page. Document formatted into pages; contains x, 111 p. : ill. Includes abstract. Includes bibliographical references (p. 102-106).
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

BOZIC, Maja. „Impact of the retail environment drivers on sales and demand forecasting“. Doctoral thesis, Università degli studi di Cassino, 2021. http://hdl.handle.net/11580/84146.

Der volle Inhalt der Quelle
Annotation:
The Ph.D. dissertation, Impact of the retail environment drivers on sales and demand forecasting, explores the influence of retail-environment patterns on sales and demand forecasting. The uncertainty of the retail environment, driven by the complexity of supply-chain and demand management, raises both the range of drivers that can influence the dynamics of demand and sales and the need for more functional tools to capture them. The first chapter, The concept of the environment and external impact on the forecasting in retailing, outlines the main research on the environment in the literature, focusing on the concepts of uncertainty and complexity, especially in the domain of retailing. A network analysis of citation data supports the identification of research trends in the literature on environmental patterns in forecasting activities. The results are summarized in a theoretical framework of research trends and a concept of the retail environment for the decision-making process in forecasting. The second chapter, Influence of the internal and external drivers on consumers' visits and sales productivity, presents an explorative analysis of the effect of the internal retail environment (retail format and assortment level) on consumers' visits, and tests whether the external drivers (competition distance, public holidays) and internal drivers (promotions, assortment level, customer visits) affect sales productivity in different store formats. Results show that both promotions and competition distance positively influence sales productivity, with effects that vary by store format. The third chapter, The causal effect of the environmental and promotional variables on the sales forecasting, studies the relationships between the environmental and internal drivers and sales forecasts, looking for causal and functional links.
A comparison of linear and additive models shows that additive models can combine nonlinear and linear functions of the macro variables (CPI, fuel price, unemployment and temperature) and the internal micro variable (promotions), capturing the dynamics of their effects on sales better and performing better in terms of prediction. While the earlier explorative analysis looks for possible causal links between the external drivers, internal drivers and sales, the fourth chapter, The added value of the competition information in demand forecasting, moves closer to business reality by studying the effect of external variables in short-term demand forecasting. Using demand and sales data from DIY (Do-It-Yourself) stores, it analyzes the influence of competitors' promotional discounts on weekly demand at the SKU (Stock Keeping Unit) level. Results show that for SKUs with high sold quantities and many promotion weeks per year, including competitors' promotional discounts may improve the demand-forecasting model, while for SKUs with stable demand that are rarely discounted, a simple linear model without external competition variables works better. The final chapter, Conclusions, limitations and future research, discusses the main conclusions, academic and practical implications, and future research. The research contributes to both the literature and retail practice. Studies of external influences on demand and sales forecasting are still lacking. Retail managers may use these insights to extract important information from the environment and to apply different solutions depending on the complexity of the business problem. Focal promotions and competitors' promotional actions have a significant impact on consumer demand.
A complex nonlinear model with external variables may work better for a growing retail business whose strong impact on the market creates dynamic interactions, while simple linear models can provide efficient solutions in an already stable and easily predicted competitive market. A limitation of this research is the missing variety of external and internal data that could help to find the optimal model. Future research will aim at an optimal model with external and internal variables that offers efficient solutions for large numbers of SKUs and a useful managerial tool for scanning the environment.
APA, Harvard, Vancouver, ISO and other citation styles
33

Sarmadi, Soheil. „On the Feasibility of Profiling, Forecasting and Authenticating Internet Usage Based on Privacy Preserving NetFlow Logs“. Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7568.

Full text of the source
Annotation:
Understanding Internet user behavior and Internet usage patterns is fundamental to developing future access networks and services that meet technical as well as user needs. User behavior is routinely studied and measured, but with different methods depending on the investigator's research discipline, and these disciplines rarely cross. We tackle this challenge by developing frameworks that use Internet usage statistics as the main features for understanding Internet user behavior, with the aim of forming a complete picture of the user and working towards a unified analysis methodology. In this dissertation, Internet usage statistics of 66 student subjects on a college campus were collected via privacy-preserving NetFlow logs over a month-long period. After cleaning the data and splitting it into groups based on different time windows, statistical analysis showed that each user's Internet usage exhibits a statistically strong correlation with the same user's usage for the same day over multiple weeks, while being statistically different from that of other Internet users. We also applied time-series forecasting to predict future Internet usage from past statistics. Subsequently, using state-of-the-art machine learning algorithms, we demonstrate the feasibility of profiling Internet users from their Internet traffic: when profiled over a 227-second time window, subjects can be classified with 93.21% precision. We conclude that understanding Internet usage behavior is valuable and can help in developing future access networks and services.
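The core correlation finding, a user's usage correlating with the same user across weeks but not with other users, can be sketched with plain Pearson correlation (the per-interval byte counts below are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length usage vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-interval byte counts: one user on the same weekday of two
# consecutive weeks, and a second user on the first of those days.
user_a_week1 = [5, 7, 30, 80, 60, 20]
user_a_week2 = [6, 8, 28, 75, 65, 18]
user_b_week1 = [50, 40, 10, 5, 8, 45]

same_user = pearson(user_a_week1, user_a_week2)    # strong positive
other_user = pearson(user_a_week1, user_b_week1)   # weak or negative
```

The same-user correlation comes out close to 1 while the cross-user correlation does not, which is the statistical signature the dissertation exploits for profiling.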
APA, Harvard, Vancouver, ISO and other citation styles
34

Reinoso, Nicholas L. „Forecasting Harmful Algal Blooms for Western Lake Erie using Data Driven Machine Learning Techniques“. Cleveland State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=csu1494343783463819.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
35

Kotriwala, Arzam Muzaffar. „Load Forecasting for Temporary Power Installations : A Machine Learning Approach“. Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211554.

Full text of the source
Annotation:
Sports events, festivals, construction sites, and film sites are examples of cases where power is required temporarily and often away from the power grid. Temporary Power Installations refer to systems set up for a limited amount of time with power typically generated on-site. Most load forecasting research has centered around settings with a permanent supply of power (such as in residential buildings). On the contrary, this work proposes machine learning approaches to accurately forecast load for Temporary Power Installations. In practice, these systems are typically powered by diesel generators that are over-sized and consequently, operate at low inefficient load levels. In this thesis, a ‘Pre-Event Forecasting’ approach is proposed to address this inefficiency by classifying a new Temporary Power Installation to a cluster of installations with similar load patterns. By doing so, the sizing of generators and power generation planning can be optimized thereby improving system efficiency. Load forecasting for Temporary Power Installations is also useful whilst a Temporary Power Installation is operational. A ‘Real-Time Forecasting’ approach is proposed to use monitored load data streamed to a server to forecast load two hours or more ahead in time. By doing so, practical measures can be taken in real-time to meet unexpected high and low power demands thereby improving system reliability.
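The 'Pre-Event Forecasting' idea of assigning a new installation to a cluster of installations with similar load patterns can be sketched as a nearest-centroid classification (the cluster names, centroid profiles, and measurements below are invented for illustration):

```python
import math

# Hypothetical cluster centroids: average load shapes (kW over the day) of
# past installations with similar patterns.
centroids = {
    "festival":     [20, 15, 10, 60, 90, 95],
    "construction": [80, 85, 90, 70, 30, 10],
    "film_set":     [40, 45, 40, 45, 40, 45],
}

def nearest_cluster(profile):
    """Assign a new installation to the cluster whose centroid is closest
    (Euclidean distance); generator sizing can then reuse that cluster's
    historical load statistics."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile, c)))
    return min(centroids, key=lambda name: dist(centroids[name]))

new_site = [75, 80, 85, 65, 35, 15]   # early measurements from a new site
```

Once the new site is matched to its cluster, the generator can be sized for that cluster's typical peak instead of being over-sized by default.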
APA, Harvard, Vancouver, ISO and other citation styles
36

Li, Mao Li. „Spatial-temporal classification enhancement via 3-D iterative filtering for multi-temporal Very-High-Resolution satellite images“. The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1514939565470669.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
37

Chhajed, Tejashree Rakumar. „Deploying contrail forecasting service to reduce the impact of aviation on Environment“. Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-202587.

Full text of the source
Annotation:
The principal objective of this thesis is to propose a Contrail forecasting Service for Aviation (ConSA) and a ConSA client to demonstrate the service. The work is motivated by the fact that contrails have a very harmful effect on the environment and have been researched thoroughly by scientists, yet there is still no infrastructure for bringing this research into the aviation industry. The thesis was conducted at Airbus Defence and Space, Friedrichshafen, Germany. The first part investigates the formation of contrails, which are caused by the passage of aeroplanes through ice-saturated areas. The algorithm used by the service is part of research conducted by Prof. Dr. Ulrich Schumann. The input dataset, provided by Meteo France, is a 4D weather cube. The algorithm computes a threshold temperature and an ice-supersaturation condition; when both conditions are satisfied, contrail formation is certain. The second part explains the architecture and solutions used, namely OGC (Open Geospatial Consortium) web services and LuciadLightspeed, to develop and deploy contrail forecasting methods. Such an architecture speeds up and eases the developer's task through built-in methods and interfaces. The OGC web services infrastructure defines a web feature service and client interface that ease the development of geoinformatics solutions. We also use standards such as WXXM for the exchange of contrail information. The third part covers the detailed implementation of the ConSA service and client, explained with UML diagrams to clarify the development concepts. The final part presents the results of the ConSA service and client and the operational benefits of ConSA, and concludes the thesis.
APA, Harvard, Vancouver, ISO and other citation styles
38

Utterberg, Oscar, and Martin Rand. „Klassificering av reservdelar för effektivare reservdelshantering“. Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH, Industriell organisation och produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-41665.

Full text of the source
Annotation:
Purpose – The purpose of this study was to find a classification tool that can ease the decision-making process of spare-parts planning and forecasting. To accomplish this, three research questions were formulated: Which analysis tools can be used for a systematic classification of spare parts into different groups? How can the classified groups be used when planning and forecasting spare parts? How can forecasting be done for the different classified groups considering the customer service level? Method – The study was conducted deductively through theory building, combining an empirical case study with an analytical conceptual approach. The methods used were literature review, interviews, and collection of secondary data. The literature review covered the areas of spare-parts classification and forecasting. Findings – The finding of this study was that a multi-criteria method is needed for a systematic classification of spare parts, because of the complex nature of spare-parts handling. The classification model can then be used for multiple tasks. The tasks identified by this study were: helping to decide the customer service level, helping to choose a forecasting method for the different spare-part groups, and finding the spare parts whose demand trend has shifted. Implications – The classification model is intended to ease companies' spare-parts planning and forecasting process. With the help of the model, the case company should be able to choose more easily which customer service level and forecasting method to use for their spare parts. Limitations – This study's limitation is that only one case company was studied, because of time constraints. This makes the adapted model very company-specific, and it needs to be further validated at other companies. Keywords – Classification, Spare parts classification, Decision support, Forecasting, Spare parts forecasting, Customer service level
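One common building block of such classification schemes is a single-criterion ABC classification by annual demand value; the thesis argues for a multi-criteria model, so the sketch below (with invented part names and conventional cut-offs) is only one ingredient, not the proposed model:

```python
def abc_classify(parts, a_cut=0.8, b_cut=0.95):
    """Rank parts by annual demand value; assign class A up to a_cut of the
    cumulative value share, B up to b_cut, and C for the remainder."""
    ranked = sorted(parts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(parts.values())
    classes, cum = {}, 0.0
    for name, value in ranked:
        cum += value / total
        classes[name] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes

# Hypothetical annual demand values per spare part
demand_value = {"bearing": 5000, "filter": 2500, "gasket": 1500,
                "seal": 600, "bolt": 400}
classes = abc_classify(demand_value)
```

A multi-criteria extension would combine several such rankings (e.g. criticality, lead time, demand variability) before assigning service levels and forecasting methods per group.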
APA, Harvard, Vancouver, ISO and other citation styles
39

Goehry, Benjamin. „Prévision multi-échelle par agrégation de forêts aléatoires. Application à la consommation électrique“. Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS461/document.

Full text of the source
Annotation:
This thesis has two objectives. The first concerns the forecasting of a total load in the context of Smart Grids using approaches based on the bottom-up forecasting method. The second is the study of random forests when observations are dependent, more precisely for time series. In this context, we extend the consistency results for Breiman's original random forests, as well as the convergence rates for a simplified random forest, both of which had hitherto only been established for independent and identically distributed observations. The last contribution on random forests describes a new methodology that incorporates the time-dependent structure of the data into the construction of the forests and thus achieves a gain in performance in the case of time series, illustrated with an application to load forecasting for a building.
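One standard way to incorporate temporal dependence into the resampling step of an ensemble is a moving-block bootstrap in place of the usual i.i.d. bootstrap draw; the sketch below illustrates that idea and is not claimed to be the thesis's exact methodology:

```python
import random

def moving_block_bootstrap(series, block_len, rng):
    """Resample a series by concatenating randomly chosen contiguous blocks,
    preserving the short-range dependence that an i.i.d. bootstrap draw
    (as in standard random forests) would destroy."""
    n = len(series)
    sample = []
    while len(sample) < n:
        start = rng.randrange(0, n - block_len + 1)
        sample.extend(series[start:start + block_len])
    return sample[:n]

rng = random.Random(0)
# Each tree of the forest would then be grown on one block-resampled series.
boot = moving_block_bootstrap(list(range(100)), block_len=10, rng=rng)
```

Because whole windows are resampled together, lagged-feature/target pairs inside a block stay intact, which is what lets the trees see the dependence structure of the time series.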
APA, Harvard, Vancouver, ISO and other citation styles
40

Tran, Thai Thanh, Quang Xuan Ngo, Hieu Hoang Ha and Nhan Phan Nguyen. „Short-term forecasting of salinity intrusion in Ham Luong river, Ben Tre province using Simple Exponential Smoothing method“. Technische Universität Dresden, 2019. https://tud.qucosa.de/id/qucosa%3A70822.

Full text of the source
Annotation:
Salinity intrusion in a river may have an adverse effect on the quality of life and can be perceived as a modern-day curse. It is therefore important to find technical ways to monitor and forecast it. In this paper, we designed a forecasting model using the Simple Exponential Smoothing (SES) method that produces weekly salinity-intrusion forecasts for the Ham Luong river (HLR), Ben Tre province, based on historical data obtained from the Center for Hydro-meteorological Forecasting of Ben Tre province. The results show that the SES method provides an adequate predictive model for forecasting salinity intrusion at An Thuan, Son Doc, and Phu Khanh, whereas the forecasts at My Hoa, An Hiep, and Vam Mon could be improved by other forecasting techniques. This study suggests that the SES model is an easy-to-use modeling tool for water resource managers to obtain a quick preliminary assessment of salinity intrusion.
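The SES update the paper relies on is a one-line recursion; a minimal sketch (the smoothing factor and toy readings below are illustrative, not the paper's data):

```python
def ses_forecast(series, alpha):
    """Simple Exponential Smoothing: the level is a weighted average of the
    newest observation and the previous level; the forecast for every future
    period is the final level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical weekly salinity readings at one station
salinity = [10, 12, 11, 13]
next_week = ses_forecast(salinity, alpha=0.5)
```

SES produces a flat forecast, which matches the paper's use case of a quick preliminary one-step-ahead assessment rather than long-horizon prediction.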
APA, Harvard, Vancouver, ISO and other citation styles
41

Köhler, Thomas, Norbert Pengel, Jana Riedel and Werner Wollersheim. „Forecasting EduTech for the next decade. Scenario development teaching patterns in general versus academic education“. TUDpress, 2019. https://tud.qucosa.de/id/qucosa%3A36572.

Full text of the source
Annotation:
Learning while studying is an individual process of actively acquiring knowledge through the co-construction of knowledge resources under supervision by teaching mentors. Mentoring activity typically consists of the interaction of two areas, namely the personal relationship between mentor and mentee, as well as individualized guidance on performance at the factual level, i.e. the partial result-based evaluation of previous work and advice on the future learning process. This in-process feedback is considered to be a key impact factor in learning success in international educational research, provided that it is as direct and as accurate as possible (Hattie & Yates, 2014). [... from the Introduction]
APA, Harvard, Vancouver, ISO and other citation styles
42

Dinh, Thi Lan Anh. „Crop yield simulation using statistical and machine learning models. From the monitoring to the seasonal and climate forecasting“. Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS425.

Full text of the source
Annotation:
Weather and climate strongly impact crop yields. Many studies based on different techniques have been carried out to measure this impact. This thesis focuses on statistical models that measure the sensitivity of crops to weather conditions based on historical records. When using a statistical model, a critical difficulty arises when data is scarce, which is often the case in statistical crop modelling: there is a high risk of overfitting if model development is not done carefully. Careful validation and selection of statistical models are therefore major concerns of this thesis. Two statistical approaches are developed. The first uses linear regression with regularization and leave-one-out cross-validation (LOO), applied to Robusta coffee in the main coffee-producing area of Vietnam (the Central Highlands). Coffee is a valuable commodity crop, sensitive to weather, and has a very complex phenology due to its perennial nature. Results suggest that precipitation and temperature information can be used to forecast the yield anomaly 3–6 months in advance, depending on the location. Estimates of Robusta yield at the end of the season show that weather explains up to 36% of historical yield anomalies. The LOO approach is widely used in the literature; however, it can be misused for many reasons: it is technical, easily misinterpreted, and requires experience. As an alternative, the "leave-two-out nested cross-validation" (LTO) approach is proposed to choose a suitable model and assess its true generalization ability. This method is sophisticated but straightforward; its benefits are demonstrated for Robusta coffee in Vietnam and grain maize in France. In both cases, a simpler model with fewer potential predictors and inputs is more appropriate. Using only the LOO method, without any regularization, can be highly misleading, as it indirectly encourages choosing a model that overfits the data.
The LTO approach is also useful in seasonal forecasting applications. The end-of-season grain maize yield estimates suggest that weather can account for more than 40% of the variability in yield anomaly. The impacts of climate change on coffee production in Brazil and Vietnam are also studied using climate simulations and suitability models. Climate data are, however, biased compared to the real-world climate, and many "bias correction" methods (called here "calibration") have been introduced to correct these biases. An up-to-date review of the available methods is provided to better understand each method's assumptions, properties, and applicative purposes. The climate simulations are then calibrated by a quantile-based method before being used in the suitability models. The suitability models are developed from census data of coffee areas, and the candidate climate variables are based on a review of previous studies using impact models for coffee and on expert recommendations. Results show that suitable Arabica areas in Brazil could decrease by about 26% by mid-century in the high-emissions scenario, while the decrease is strikingly high for Vietnamese Robusta coffee (about 60%). Impacts are significant at low elevations for both coffee types, suggesting potential shifts of production to higher locations. The statistical approaches used, especially the LTO technique, can contribute to the development of crop modelling. They can be applied to a complex perennial crop like coffee or to more industrialized annual crops like grain maize, and can be used in seasonal forecasts or end-of-season estimations, which are helpful for crop management and monitoring. Estimating future crop suitability helps to anticipate the consequences of climate change on the agricultural system and to define adaptation or mitigation strategies. The methodologies used in this thesis can easily be generalized to other crops and regions worldwide.
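The nesting idea behind LTO, never letting the outer held-out sample influence the inner model selection, can be sketched with ridge regression (a hedged illustration: the model family, the lambda grid, and the synthetic data are invented, and the thesis's actual procedure is more elaborate):

```python
import numpy as np

def ridge_fit_predict(Xtr, ytr, Xte, lam):
    """Ridge regression: fit on (Xtr, ytr), predict on Xte."""
    d = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)
    return Xte @ w

def lto_cv_error(X, y, lambdas):
    """Nested 'leave-two-out' CV: the outer held-out sample takes no part in
    the inner leave-one-out choice of lambda, so the outer errors estimate
    true generalization instead of rewarding indirect overfitting."""
    n = len(y)
    outer_errs = []
    for i in range(n):                          # outer test sample
        idx = [j for j in range(n) if j != i]
        best_lam, best_inner = None, float("inf")
        for lam in lambdas:                     # inner LOO to select lambda
            inner = 0.0
            for k in idx:
                tr = [j for j in idx if j != k]
                pred = ridge_fit_predict(X[tr], y[tr], X[[k]], lam)
                inner += (pred[0] - y[k]) ** 2
            if inner < best_inner:
                best_inner, best_lam = inner, lam
        pred = ridge_fit_predict(X[idx], y[idx], X[[i]], best_lam)
        outer_errs.append((pred[0] - y[i]) ** 2)
    return float(np.mean(outer_errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))                    # e.g. 12 seasons, 3 predictors
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=12)
err = lto_cv_error(X, y, lambdas=[0.01, 0.1, 1.0])
```

With only a dozen samples, as in yield modelling, this double loop is cheap, and the outer error is an honest score for comparing candidate model complexities.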
APA, Harvard, Vancouver, ISO and other citation styles
43

Cullmann, Johannes. „Online flood forecasting in fast responding catchments on the basis of a synthesis of artificial neural networks and process models“. Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A24948.

Full text of the source
Annotation:
A detailed and comprehensive description of the state of the art in the field of flood forecasting opens this work. Advantages and shortcomings of currently available methods are identified and discussed. One important aspect concerns the most pressing weak point of today's forecasting systems: the representation of all the fundamentally different event-specific patterns of flood formation with a single set of model parameters. The study proposes an alternative for overcoming this restriction by taking into account the different process characteristics of flood events via a dynamic parameterisation strategy. Other fundamental shortcomings of current approaches particularly restrict the potential for real-time flash-flood forecasting, namely the considerable computational requirements together with the rather cumbersome operation of reliable physically based hydrologic models. The new PAI-OFF methodology (Process Modelling and Artificial Intelligence for Online Flood Forecasting) addresses these problems and offers a way out of the general dilemma. It combines the reliability and predictive power of physically based hydrologic models with the operational advantages of artificial intelligence: extremely low computation times, absolute robustness and straightforward operation. These qualities allow flash floods in small catchments to be predicted with precipitation forecasts taken into account, while the very modest computational requirements open the way for online Monte Carlo analysis of the forecast uncertainty. The study encompasses a detailed analysis of hydrological modelling and a problem-specific artificial intelligence approach in the form of artificial neural networks, which together build the PAI-OFF methodology. Here, the synthesis of process modelling and artificial neural networks is achieved by a special training procedure.
It optimizes the network according to the patterns of possible catchment reactions to rainstorms. This information is provided by a physically based catchment model, thus freeing the artificial neural network from its restriction to the range of observed data, the classical reason for the unsatisfactory predictive power of net-based approaches. Instead, the PAI-OFF net learns to portray the dominant process controls of flood formation in the considered catchment, allowing for reliable predictive performance. The work ends with an exemplary forecast of the 2002 flood in a 1700 km² East German watershed.
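The core synthesis, training a fast data-driven surrogate on reaction patterns generated by a process model rather than on scarce observations, can be sketched with a toy linear-reservoir model and a linear surrogate (PAI-OFF itself uses a physically based catchment model and neural networks; everything below is a simplified illustration):

```python
import numpy as np

def linear_reservoir(rain, k=0.3):
    """Toy process model: storage fills with rain and drains at rate k."""
    storage, runoff = 0.0, []
    for r in rain:
        storage += r
        q = k * storage
        storage -= q
        runoff.append(q)
    return runoff

rng = np.random.default_rng(1)
# 1) Generate many storm scenarios with the process model ...
X, y = [], []
for _ in range(300):
    rain = rng.exponential(2.0, 8)        # an 8-step synthetic rainstorm
    X.append(rain)                        # input pattern: the rain sequence
    y.append(linear_reservoir(rain)[-1])  # target: resulting runoff
X, y = np.array(X), np.array(y)

# 2) ... then fit a cheap surrogate on the simulated patterns; online, only
# the surrogate is evaluated, which is what makes real-time use feasible.
design = np.column_stack([np.ones(len(y)), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
pred = design @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the training set is simulated, it can cover catchment reactions far outside the observed record, which is the point of the PAI-OFF training procedure.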
APA, Harvard, Vancouver, ISO and other citation styles
44

Pohlmann, Tobias, and Friedrich Bernhard. „A combined method to forecast and estimate traffic demand in urban networks“. Elsevier, 2013. https://publish.fid-move.qucosa.de/id/qucosa%3A33932.

Full text of the source
Annotation:
This paper presents a combined method for short-term forecasting of detector counts in urban networks and subsequent traffic demand estimation using the forecasted counts as constraints to estimate origin-destination (OD) flows, route and link volumes. The method is intended to be used in the framework of an adaptive traffic control strategy with consecutive optimization intervals of 15 min. The method continuously estimates the forthcoming traffic demand that can be used as input data for the optimization. The forecasting uses current and reference space-time-patterns of detector counts. The reference patterns are derived from data collected in the past. The current pattern comprises all detector counts of the last four time intervals. A simple but effective pattern matching is used for forecasting. The subsequent demand estimation is based on the information minimization model that has been integrated into an iterative procedure with repeated traffic assignment and matrix estimation until a stable solution is found. Some enhancements including the improvement of constraints, redundancy elimination of these constraints and a travel time estimation based on a macroscopic simulation using the Cell Transmission Model have been implemented. The overall method, its modules and its performance, which has been assessed using artificially created data for a real sub-network in Hannover, Germany, by means of a microsimulation with Aimsun NG, are presented in this paper.
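The pattern-matching forecast step can be sketched in one dimension (the real reference patterns span many detectors and days; the counts below are invented):

```python
def pattern_match_forecast(history, current, horizon=1):
    """Find the historical window most similar to the current detector-count
    pattern (squared-difference distance) and return what followed it."""
    w = len(current)
    best_i, best_d = None, float("inf")
    for i in range(len(history) - w - horizon + 1):
        d = sum((history[i + j] - current[j]) ** 2 for j in range(w))
        if d < best_d:
            best_d, best_i = d, i
    return history[best_i + w: best_i + w + horizon]

# Hypothetical 15-min detector counts with a roughly repeating shape, and
# the counts of the last four intervals as the current pattern.
history = [10, 20, 40, 80, 60, 30, 12, 22, 44, 76, 58, 28, 14]
current = [11, 21, 41, 79]
forecast = pattern_match_forecast(history, current)
```

The forecasted counts would then serve as constraints for the subsequent OD-flow estimation step.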
APA, Harvard, Vancouver, ISO and other citation styles
45

Engström, Olof. „Deep Learning for Anomaly Detection in Microwave Links : Challenges and Impact on Weather Classification“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-276676.

Full text of the source
Annotation:
Artificial intelligence is receiving a great deal of attention in various fields of science and engineering due to its promising applications. In today's society, weather classification models with high accuracy are of utmost importance. An alternative to using conventional weather radars is to use measured attenuation data in microwave links as the input to deep learning-based weather classification models. Detecting anomalies in the measured attenuation data is of great importance, as the output of a classification model cannot be trusted if its input contains anomalies. Designing an accurate classification model poses some challenges due to the absence of predefined features to discriminate among the various weather conditions, and due to specific domain requirements in terms of execution time and detection sensitivity. In this thesis we investigate the relationship between anomalies in signal attenuation data, which is the input to a weather classification model, and the model's misclassifications. To this end, we propose and evaluate two deep learning models based on long short-term memory networks (LSTM) and convolutional neural networks (CNN) for anomaly detection in a weather classification problem. We evaluate the feasibility and possible generalizations of the proposed methodology in an industrial case study at Ericsson AB, Sweden. The results show that both proposed methods can detect anomalies that correlate with misclassifications made by the weather classifier. Although the LSTM performed better than the CNN with regard to top performance on one link and average performance across all 5 tested links, the CNN's performance is shown to be more consistent.
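The prediction-error idea behind such anomaly detectors can be sketched in a few lines. This is only an illustration, not the thesis code: a moving-average forecaster stands in for the learned LSTM/CNN, and the function name, window size, and threshold are assumptions.

```python
def detect_anomalies(signal, window=5, threshold=3.0):
    """Flag time steps whose prediction error exceeds `threshold` times the
    running mean absolute error (a crude stand-in for a learned predictor)."""
    anomalies = []
    errors = []
    for t in range(window, len(signal)):
        prediction = sum(signal[t - window:t]) / window  # stand-in forecaster
        error = abs(signal[t] - prediction)
        errors.append(error)
        scale = sum(errors) / len(errors)  # running mean absolute error
        if scale > 0 and error > threshold * scale:
            anomalies.append(t)
    return anomalies

# A sudden attenuation spike in an otherwise flat link signal is flagged:
spikes = detect_anomalies([10.0] * 20 + [25.0] + [10.0] * 10)  # -> [20]
```

A real deployment would replace the moving average with the trained LSTM or CNN forecaster and calibrate the threshold on held-out link data.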
APA, Harvard, Vancouver, ISO and other citation styles
46

Treiber, Martin, and Arne Kesting. „Evidence of Convective Instability in Congested Traffic Flow: A Systematic Empirical and Theoretical Investigation“. Elsevier, 2011. https://publish.fid-move.qucosa.de/id/qucosa%3A33815.

Full text of the source
Annotation:
An extended open system such as traffic flow is said to be convectively unstable if perturbations of the stationary state grow but propagate in only one direction, so they eventually leave the system. By means of data analysis, simulations, and analytical calculations, we give evidence that this concept is relevant for instabilities of congested traffic flow. We analyze detector data from several hundred traffic jams and propose estimates for the linear growth rate, the wavelength, the propagation velocity, and the severity of the associated bottleneck that can be evaluated semi-automatically. Scatter plots of these quantities reveal systematic dependencies. On the theoretical side, we derive, for a wide class of microscopic and macroscopic traffic models, analytical criteria for convective and absolute linear instabilities. Based on the relative positions of the stability limits in the fundamental diagram, we divide these models into five stability classes which uniquely determine the set of possible elementary spatiotemporal patterns in open systems with a bottleneck. Only two classes, both dominated by convective instabilities, are compatible with observations. By means of approximate solutions of convectively unstable systems with sustained localized noise, we show that the observed spatiotemporal phenomena can also be described analytically. The parameters of the analytical expressions can be inferred from observations, and also (analytically) derived from the model equations.
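The convective/absolute distinction invoked in this abstract can be stated in standard linear-stability notation. This is a generic textbook-style sketch; the symbols are conventional and not necessarily those used in the paper.

```latex
% Small perturbations of the stationary state are decomposed into modes
\delta u(x,t) \propto e^{\mathrm{i}kx - \mathrm{i}\omega(k)t},
\qquad \text{linear instability: } \exists\, k \in \mathbb{R}:\
\operatorname{Im}\omega(k) > 0 .
% Absolute instability: a localized perturbation grows at any fixed position,
\lim_{t\to\infty} \bigl|\delta u(x_0,t)\bigr| = \infty
\quad (x_0 \text{ fixed}),
% whereas under convective instability it grows only in a co-moving frame
% and decays at every fixed position,
\lim_{t\to\infty} \bigl|\delta u(x_0,t)\bigr| = 0 ,
% so the growing wave packet is eventually advected out of an open system.
```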
APA, Harvard, Vancouver, ISO and other citation styles
47

Hahmann, Martin, Claudio Hartmann, Lars Kegel, Dirk Habich and Wolfgang Lehner. „Big by blocks: Modular Analytics“. De Gruyter, 2016. https://tud.qucosa.de/id/qucosa%3A72848.

Full text of the source
Annotation:
Big Data and Big Data analytics have attracted major interest in research and industry and continue to do so. The high demand for capable and scalable analytics, in combination with the ever-increasing number and volume of application scenarios and data, has led to a large and opaque landscape full of versions, variants and individual algorithms. Because this zoo of methods lacks a systematic description, it is almost impossible to understand, which severely hinders effective application and efficient development of analytic algorithms. To solve this issue we propose our concept of modular analytics, which abstracts the essentials of an analytic domain and turns them into a set of universal building blocks. As arbitrary algorithms can be created from the same set of blocks, understanding is eased and development benefits from reusability.
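The "universal building blocks" idea can be illustrated with plain function composition. All names and blocks here are invented for illustration and are not taken from the paper.

```python
def compose(*blocks):
    """Chain analytic building blocks left to right into one pipeline."""
    def pipeline(data):
        for block in blocks:
            data = block(data)
        return data
    return pipeline

# Two toy blocks; arbitrary algorithms arise from recombining such pieces.
normalize = lambda xs: [x / max(xs) for x in xs]          # scale to [0, 1]
smooth = lambda xs: [(a + b) / 2 for a, b in zip(xs, xs[1:])]  # pairwise mean

prepare = compose(normalize, smooth)
result = prepare([2.0, 4.0, 4.0])  # -> [0.75, 1.0]
```

Reordering or swapping blocks yields a different algorithm from the same vocabulary, which is the reusability argument the abstract makes.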
APA, Harvard, Vancouver, ISO and other citation styles
48

Abdoun, Oussama. „Analyse spatiotemporelle de données MEA pour l'étude de la dynamique de l'activité de la moelle épinière et du tronc cérébral immatures chez la souris“. Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR15266/document.

Full text of the source
Annotation:
Immature neural networks generate a peculiar type of activity that persists even in the absence of electrical inputs and was for this reason termed "endogenous" or "spontaneous". This activity is ubiquitous and is involved in a wide range of developmental events. In vitro, it can be observed as calcium or electrical waves propagating over great distances, often invading the whole preparation, but its dynamics remain poorly described. In order to fill this gap somewhat, we used multielectrode arrays (MEAs) to characterise the spontaneous rhythmic activity in the developing mouse spinal cord, in both acute and cultured isolated hindbrain-spinal cord preparations. To extract relevant information from the massive amounts of data yielded by MEA recordings, adapted analysis tools are needed. We have therefore developed methods for the detection, classification and mapping of spatiotemporal patterns of activity in multichannel data. Our mapping approach is based on thin-plate spline interpolation and includes the possibility of combining maps of activity with anatomical or staining data for multimodal imaging. These methods allowed us to analyse in great detail the evolution of spontaneous activity at early stages (E12.5–E15.5). In addition, we have localised the initiation site of E14.5 activity in the medulla and shown that it matches a dense midline population of serotoninergic neurons, suggesting a new role for 5-HT pathways in the maturation of spinal networks. Finally, we recorded and tracked spontaneous limb movements of E14.5 embryos and found that features of motility were consistent with patterns of spinal activity.
APA, Harvard, Vancouver, ISO and other citation styles
49

Ghibellini, Alessandro. „Trend prediction in financial time series: a model and a software framework“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24708/.

Full text of the source
Annotation:
The research aims to build an autonomous support tool for traders which could, in the future, be turned into an active ETF. My thesis work is characterized by a strong focus on problem formulation and an accurate analysis of how the input and the length of the forecast horizon affect the results. I demonstrate that, by using financial indicators already employed daily by professional traders and choosing an appropriate forecast horizon, it is possible to reach interesting scores in forecasting future market states, considering both accuracy, which is around 90% in all the experiments, and confusion matrices, which confirm the good accuracy scores, without an expensive deep learning approach; in particular, I used a 1D CNN. I also emphasize that classification appears to be the best approach for this type of prediction, in combination with proper management of unbalanced class weights. Class imbalance is in fact the norm in this problem; without correcting for it, the model would overreact to inconsistent trend movements. Finally, I propose a framework, also applicable to other fields, that exploits the knowledge of sector experts and combines it with ML/DL approaches.
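The class-weight management mentioned above is commonly done with inverse-frequency weights. The abstract does not state the exact scheme used, so this is a hedged sketch of one standard heuristic (the same formula as scikit-learn's "balanced" class weights); the function name is an assumption.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, so rare trend states
    contribute as much to the training loss as common ones."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# With 8 "up" and 2 "down" samples, the rare class gets the larger weight:
weights = inverse_frequency_weights(["up"] * 8 + ["down"] * 2)
# -> {"up": 0.625, "down": 2.5}
```

Such a dictionary would typically be passed to the training routine of the classifier (e.g. as per-class loss weights).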
APA, Harvard, Vancouver, ISO and other citation styles
50

Shaif, Ayad. „Predictive Maintenance in Smart Agriculture Using Machine Learning : A Novel Algorithm for Drift Fault Detection in Hydroponic Sensors“. Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42270.

Full text of the source
Annotation:
The success of Internet of Things solutions has allowed the establishment of new applications such as smart hydroponic agriculture. One typical problem in such an application is the rapid degradation of the deployed sensors. Traditionally, this problem is resolved by frequent manual maintenance, which is considered ineffective and may harm the crops in the long run. The main purpose of this thesis was to propose a machine learning approach for automating the detection of sensor drift faults. In addition, the solution's operability was investigated in a cloud computing environment in terms of response time. This thesis proposes a detection algorithm, named Predictive Sliding Detection Window (PSDW), that utilizes RNNs to predict sensor drifts from time-series data streams and consists of both forecasting and classification models. Three different RNN algorithms, i.e., LSTM, CNN-LSTM, and GRU, were designed to predict sensor drifts using forecasting and classification techniques. The algorithms were compared against each other in terms of relevant accuracy metrics for forecasting and classification. The operability of the solution was investigated by developing a web server that hosted the PSDW algorithm on an AWS computing instance. The resulting forecasting and classification algorithms were able to make reasonably accurate predictions for this particular scenario. More specifically, the forecasting algorithms achieved relatively low RMSE values of ~0.6, while the classification algorithms obtained an average F1-score and accuracy of ~80%, but with a high standard deviation. However, the response time was ~5700% slower during the simulation of HTTP requests. The obtained results suggest the need for future investigations to improve the accuracy of the models and to experiment with other computing paradigms for more reliable deployments.
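The sliding forecast-then-classify loop described in this abstract can be sketched as follows. This is not the thesis implementation: a window mean replaces the RNN forecaster, drift "classification" is reduced to a cumulative-error threshold, and all names and parameter values are assumptions.

```python
def psdw_sketch(stream, window=10, drift_threshold=2.0):
    """Slide over the stream, forecast the next value from the window
    (a plain mean here, standing in for an RNN), and report a drift
    when the cumulative signed forecast error exceeds the threshold."""
    cumulative_error = 0.0
    drifts = []
    for t in range(window, len(stream)):
        forecast = sum(stream[t - window:t]) / window  # RNN stand-in
        cumulative_error += stream[t] - forecast       # signed: drift is directional
        if abs(cumulative_error) > drift_threshold:
            drifts.append(t)
            cumulative_error = 0.0                     # reset after a detection
    return drifts

# A sensor that starts climbing linearly after 20 flat readings
# triggers a first drift detection a few steps into the ramp:
readings = [1.0] * 20 + [1.0 + 0.2 * i for i in range(1, 21)]
first_drift = psdw_sketch(readings)[0]  # -> 24
```

Using the signed error (rather than its absolute value per step) is what distinguishes a slow one-directional drift from ordinary symmetric noise, which largely cancels out in the sum.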
APA, Harvard, Vancouver, ISO and other citation styles