To see the other types of publications on this topic, follow the link: PREDICTION MODELS APPLICATIONS.

Dissertations / Theses on the topic 'PREDICTION MODELS APPLICATIONS'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'PREDICTION MODELS APPLICATIONS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Foley, Kristen Madsen. "Multivariate Spatial Temporal Statistical Models for Applications in Coastal Ocean Prediction." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-07042006-110351/.

Full text
Abstract:
Estimating the spatial and temporal variation of surface wind fields plays an important role in modeling atmospheric and oceanic processes. This is particularly true for hurricane forecasting, where numerical ocean models are used to predict the height of the storm surge and the degree of coastal flooding. We use multivariate spatial-temporal statistical methods to improve coastal storm surge prediction using disparate sources of observation data. An Ensemble Kalman Filter is used to assimilate water elevation into a three-dimensional primitive-equation ocean model. We find that data assimilation is able to improve the estimates for water elevation for a case study of Hurricane Charley of 2004. In addition, we investigate the impact of inaccuracies in the wind field inputs, which are the main forcing of the numerical model in storm surge applications. A new multivariate spatial statistical framework is developed to improve the estimation of these wind inputs. A spatial linear model of coregionalization (LMC) is used to account for the cross-dependency between the two orthogonal wind components. A Bayesian approach is used for estimation of the parameters of the multivariate spatial model and a physically based wind model while accounting for potential additive and multiplicative bias in the observed wind data. This spatial model consistently improves parameter estimation and prediction for surface wind data for the Hurricane Charley case study when compared to the original physical wind model. These methods are also shown to improve storm surge estimates when used as the forcing fields for the coastal ocean model. Finally, we describe a new framework for estimating multivariate nonstationary spatial-temporal processes based on an extension of the LMC model. We compare this approach to other multivariate spatial models and describe an application to surface wind fields from Hurricane Floyd of 1999.
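To make the assimilation step concrete, here is a minimal ensemble Kalman filter analysis sketch in Python. The toy state, observation operator and error values are hypothetical placeholders, not the thesis's ocean model or its actual gauge data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: state = water elevation at 5 grid points, 20 ensemble members,
# one gauge observing the elevation at grid point 2 (all values hypothetical).
n_state, n_ens = 5, 20
X = rng.normal(0.0, 0.5, size=(n_state, n_ens))       # forecast ensemble
H = np.zeros((1, n_state)); H[0, 2] = 1.0             # observation operator
R = np.array([[0.05**2]])                             # observation error covariance
y = np.array([0.8])                                   # observed elevation (m)

# Ensemble Kalman filter analysis step with perturbed observations.
Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                            # ensemble anomalies
P = A @ A.T / (n_ens - 1)                             # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(1, n_ens))
X_analysis = X + K @ (Y - H @ X)                      # updated ensemble

print("prior mean:", X.mean(axis=1).round(3))
print("posterior mean:", X_analysis.mean(axis=1).round(3))
```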
APA, Harvard, Vancouver, ISO, and other styles
2

Dolan, David M. "Spatial statistics using quasi-likelihood methods with applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0029/NQ66201.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bean, Brennan L. "Interval-Valued Kriging Models with Applications in Design Ground Snow Load Prediction." DigitalCommons@USU, 2019. https://digitalcommons.usu.edu/etd/7579.

Full text
Abstract:
One critical consideration in the design of buildings constructed in the western United States is the weight of settled snow on the roof of the structure. Engineers are tasked with selecting a design snow load that ensures that the building is safe and reliable, without making the construction overly expensive. Western states use historical snow records at weather stations scattered throughout the region to estimate appropriate design snow loads. Various mapping techniques are then used to predict design snow loads between the weather stations. Each state uses different mapping techniques to create their snow load requirements, yet these different techniques have never been compared. In addition, none of the current mapping techniques can account for the uncertainty in the design snow load estimates. We address both issues by formally comparing the existing mapping techniques, as well as creating a new mapping technique that allows the estimated design snow loads to be represented as an interval of values, rather than a single value. In the process, we have improved upon existing methods for creating design snow load requirements and have produced a new tool capable of handling uncertain climate data.
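For readers unfamiliar with the underlying interpolation, the following is a minimal ordinary kriging sketch with an assumed exponential covariance and invented station data; the interval-valued kriging developed in the dissertation extends this idea but is not reproduced here.

```python
import numpy as np

# Minimal ordinary kriging sketch (assumed exponential covariance; toy data,
# not the interval-valued kriging model developed in the thesis).
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # station coords
loads = np.array([2.1, 2.4, 1.8, 2.0])                                 # observed snow loads
target = np.array([0.4, 0.6])                                          # prediction location

def cov(d, sill=1.0, corr_range=1.5):
    return sill * np.exp(-d / corr_range)

d_ss = np.linalg.norm(stations[:, None] - stations[None, :], axis=2)
d_st = np.linalg.norm(stations - target, axis=1)

n = len(loads)
A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d_ss); A[n, n] = 0.0  # kriging system
b = np.append(cov(d_st), 1.0)                                      # unbiasedness constraint
w = np.linalg.solve(A, b)[:n]                                      # kriging weights

print("weights:", w.round(3), "prediction:", float(w @ loads))
```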
APA, Harvard, Vancouver, ISO, and other styles
4

Khondaker, Bidoura. "Transferability of community-based macro-level collision prediction models for use in road safety planning applications." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2867.

Full text
Abstract:
This thesis proposes a methodology and guidelines for transferring community-based macro-level collision prediction models (CPMs) for use in road safety planning applications, so that models developed in one spatial-temporal region can be used in a different spatial-temporal region. To do this, the macro-level CPMs developed for the Greater Vancouver Regional District (GVRD) by Lovegrove and Sayed (2006, 2007) were used in a model transferability study. Using those models from the GVRD and data from the Central Okanagan Regional District (CORD), in the Province of British Columbia, Canada, a transferability test was conducted that involved recalibration of the 1996 GVRD models to Kelowna in a 2003 context. The case study was carried out in three parts. First, macro-level CPMs for the City of Kelowna were developed using 2003 data, following the approach used in the GVRD CPM development and use research. Next, the 1996 GVRD models were recalibrated to see whether they could yield reliable safety estimates for Kelowna in a 2003 context. Finally, a comparison between the results of Kelowna's own developed models and the transferred models was conducted to determine which models yielded better results. The results of the transferability study revealed that macro-level CPM transferability was possible and no more complicated than micro-level CPM transferability. To facilitate the development of reliable community-based, macro-level collision prediction models, it was recommended that CPMs be transferred rather than developed from scratch whenever and wherever communities lack sufficient data of adequate quality. Therefore, the transferability guidelines in this research, together with their application in the case studies, are offered as a contribution towards model transferability for road safety planning applications, with models developed in one spatial-temporal region being usable in a different spatial-temporal region.
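A common way to recalibrate a transferred prediction model is to scale it so that predicted and observed totals match in the new region. The sketch below illustrates that generic idea with a hypothetical model form and made-up zone data; it is not the GVRD/Kelowna procedure itself.

```python
import numpy as np

# Illustrative macro-level collision prediction model of the common form
# collisions = a * VKT^b (coefficients are hypothetical, not the GVRD models).
def cpm(vkt, a=0.02, b=0.85):
    return a * vkt ** b

# Observed collisions and vehicle-kilometres travelled for zones in the new region.
vkt_new = np.array([1200.0, 800.0, 2500.0, 600.0])
obs_new = np.array([35, 20, 70, 14])

# Recalibration: scale the transferred model so predicted totals match observed totals.
calibration_factor = obs_new.sum() / cpm(vkt_new).sum()
recalibrated = calibration_factor * cpm(vkt_new)

print("calibration factor:", round(calibration_factor, 3))
print("recalibrated predictions:", recalibrated.round(1))
```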
APA, Harvard, Vancouver, ISO, and other styles
5

MacLellan, Christopher J. "Computational Models of Human Learning: Applications for Tutor Development, Behavior Prediction, and Theory Testing." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1054.

Full text
Abstract:
Intelligent tutoring systems are effective for improving students’ learning outcomes (Bowen et al., 2013; Koedinger & Anderson, 1997; Pane et al., 2013). However, constructing tutoring systems that are pedagogically effective has been widely recognized as a challenging problem (Murray, 1999, 2003). In this thesis, I explore the use of computational models of apprentice learning, or computer models that learn interactively from examples and feedback, to support tutor development. In particular, I investigate their use for authoring expert-models via demonstrations and feedback (Matsuda et al., 2014), predicting student behavior within tutors (VanLehn et al., 1994), and for testing alternative learning theories (MacLellan, Harpstead, Patel, & Koedinger, 2016). To support these investigations, I present the Apprentice Learner Architecture, which posits the types of knowledge, performance, and learning components needed for apprentice learning and enables the generation and testing of alternative models. I use this architecture to create two models: the DECISION TREE model, which non-incrementally learns when to apply its skills, and the TRESTLE model, which instead learns incrementally. Both models draw on the same small set of prior knowledge for all simulations (six operators and three types of relational knowledge). Despite their limited prior knowledge, I demonstrate their use for efficiently authoring a novel experimental design tutor and show that they are capable of achieving human-level performance in seven additional tutoring systems that teach a wide range of knowledge types (associations, categories, and skills) across multiple domains (language, math, engineering, and science). I show that the models are capable of predicting which versions of a fraction arithmetic and box-and-arrows tutor are more effective for human students’ learning. Further, I use a mixed-effects regression analysis to evaluate the fit of the models to the available human data and show that across all seven domains the TRESTLE model better fits the human data than the DECISION TREE model, supporting the theory that humans learn the conditions under which skills apply incrementally, rather than non-incrementally as prior work has suggested (Li, 2013; Matsuda et al., 2009). This work lays the foundation for the development of a Model Human Learner, similar to Card, Moran, and Newell’s (1986) Model Human Processor, that encapsulates psychological and learning science findings in a format that researchers and instructional designers can use to create effective tutoring systems.
APA, Harvard, Vancouver, ISO, and other styles
6

Al-Shammari, Dhahi Turki Jadah. "Remote sensing applications for crop type mapping and crop yield prediction for digital agriculture." Thesis, The University of Sydney, 2022. https://hdl.handle.net/2123/29771.

Full text
Abstract:
This thesis addresses important topics in agricultural modelling research. Chapter 1 describes the importance of land productivity and the pressure on the agricultural sector to provide food. In Chapter 2, a summer crop type mapping model is developed to map major cotton fields in-season in the Murray Darling Basin (MDB) in Australia. In Chapter 3, a robust crop classification model is designed to classify two major crops (cereals and canola) in the MDB in Australia. Chapter 4 focuses on exploring changes in prediction quality with changes in the spatial resolution of predictors and the predictions. More specifically, this study investigated whether inputs should be resampled prior to modelling, or the modelling implemented first with the aggregation of predictions happening as a final step. In Chapter 5, a new vegetation index is proposed that exploits the three red-edge bands provided by the Sentinel-2 satellite to capture changes in the transition region between the photosynthetically affected region (red region) and the Near-Infrared region (NIR region) affected by cell structure and leaf layers. Chapter 6 tests the potential of integrating two mechanistic-type model products (biomass and soil moisture) into the data-driven models (DDMs). Chapter 7 is dedicated to discussing each technique used in this thesis, the outcomes of each technique, and the relationships between these outcomes. This thesis addressed the topics and questions asked at the beginning of this research, and the outcomes are listed in each chapter.
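The abstract does not give the formula of the proposed index, so the sketch below only shows how Sentinel-2 red-edge reflectances can be combined in a generic normalized-difference index; band choices and values are illustrative assumptions.

```python
import numpy as np

# Generic normalized-difference index over Sentinel-2 red-edge reflectances.
# This is NOT the index proposed in the thesis (its formula is not given in the
# abstract); it only illustrates how red-edge bands can be combined.
b5 = np.array([0.12, 0.15, 0.18])   # band 5 (~705 nm) reflectance, hypothetical pixels
b7 = np.array([0.35, 0.40, 0.30])   # band 7 (~783 nm) reflectance

red_edge_index = (b7 - b5) / (b7 + b5)
print(red_edge_index.round(3))
```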
APA, Harvard, Vancouver, ISO, and other styles
7

Sobhani, Negin. "Applications, performance analysis, and optimization of weather and air quality models." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/5996.

Full text
Abstract:
Atmospheric particulate matter (PM) is linked to various adverse environmental and health impacts. PM in the atmosphere reduces visibility, alters precipitation patterns by acting as cloud condensation nuclei (CCN), and changes the Earth’s radiative balance by absorbing or scattering solar radiation in the atmosphere. The long-range transport of pollutants leads to increased PM concentrations even in remote locations such as polar regions and mountain ranges. One significant effect of PM on the Earth’s climate occurs when light-absorbing PM, such as Black Carbon (BC), deposits on snow. In the Arctic, BC deposition on highly reflective surfaces (e.g. glaciers and sea ice) has very intense effects, causing snow to melt more quickly. Thus, characterizing PM sources, identifying long-range transport pathways, and quantifying the climate impacts of PM are crucial in order to inform emission abatement policies for reducing both health and environmental impacts of PM. Chemical transport models provide mathematical tools for better understanding of the atmospheric system, including chemical and particle transport, pollution diffusion, and deposition. The technological and computational advances in the past decades allow higher resolution air quality and weather forecast simulations with more accurate representations of physical and chemical mechanisms of the atmosphere. Due to the significant impact of air pollutants on public health and the environment, several countries and cities perform air quality forecasts to warn the population about future air pollution events and to take local preventive measures, such as traffic regulations, to minimize the impacts of the forecasted episode. However, the costs associated with complex air quality forecast models, especially for higher resolution simulations, make “forecasting” a challenge. This dissertation therefore focuses on applications, performance analysis, and optimization of meteorology and air quality forecast models. This dissertation presents several modeling studies at various scales to better understand the transport of aerosols from different geographical sources and economic sectors (i.e. transportation, residential, industry, biomass burning, and power) and to quantify their climate impacts. The simulations are evaluated using various observations, including ground site measurements, field campaigns, and satellite data. The sector-based modeling studies elucidated the influence of various economic sectors and geographical regions on global air quality and the climatic impacts associated with BC. This dissertation provides policy makers with implications for emission mitigation policies targeting the source sectors and regions with the highest impacts. Furthermore, advances were made to better understand the impacts of light-absorbing particles on climate and surface albedo. Finally, to improve modeling speed, the performance of the models was analyzed and optimizations were proposed to improve their computational efficiency. These optimizations show a significant improvement in the performance of the Weather Research and Forecasting (WRF) and WRF-Chem models. The modified codes were validated and incorporated back into the WRF source code to benefit all WRF users.
Although weather and air quality models are shown to be an excellent means for forecasting applications at both local and hemispheric scales, further studies are needed to optimize the models and improve the performance of the simulations.
APA, Harvard, Vancouver, ISO, and other styles
8

Rose, Peter. "Prediction of Fish Assemblages in Eastern Australian Streams Using Species Distribution Models: Linking Ecological Theory, Statistical Advances and Management Applications." Thesis, Griffith University, 2018. http://hdl.handle.net/10072/384279.

Full text
Abstract:
Rivers and streams are among the most imperilled ecosystems on earth owing to overexploitation, water quality impacts, altered flow regimes, habitat destruction, proliferation of alien species and climate change. There is a pressing need to address these threats through stream bioassessment, stream rehabilitation and species conservation actions. Species distribution models (SDMs) can offer a practical, spatially explicit means to assess the impact of these threats, prioritise stream rehabilitation and direct conservation decisions. However, application of SDMs to stream bioassessment and real-world conservation outcomes in freshwater ecosystems is still in its infancy. This thesis set out to link conceptual advances in fish ecology with emerging statistical methods applied to stream bioassessment and species conservation issues facing eastern Australian freshwater fish species. One of the primary uses of SDMs in freshwater environments is bioassessment, or assessment of “river health”. A network of reference sites underpins most stream bioassessment programs; however, there is an ongoing challenge of objectively selecting high quality reference sites, particularly in highly modified assessment regions. To address subjectivity associated with ‘best professional judgement’ and similar methods, I developed a novel, data-driven approach using species turnover modelling (generalised dissimilarity modelling) to increase objectivity and transparency in reference site selection. I also tested whether biogeographic legacies of fish assemblages among discrete coastal catchments limited the use of reference sites in southeast Queensland and northeast New South Wales. The data-driven approach was then used to select reference sites and sample fish assemblages to develop freshwater fish SDMs for subsequent data chapters. Another factor potentially limiting the accuracy of SDMs for bioassessment and conservation is the modelling strategy employed. In particular, site-specific models for stream bioassessment usually still use ‘shortcut’ methods such as community classification and discriminant function analysis, despite growing evidence that machine learning algorithms provide greater predictive performance. I tested how reference coastal fish assemblages are structured in relation to different species assembly theories (e.g., species arrangement in discrete communities, species sorting independently across environmental gradients, or elements of both) by comparing different modelling approaches reflective of these processes (community level modelling, stacked ‘single species’ models and multi-species response models). Evaluation of the modelling was used to determine which of these modelling paradigms best suit stream bioassessment and other conservation applications such as survey gap analysis, estimating range changes owing to climate or land use change and estimating biodiversity. The taxonomic completeness index is the most commonly used site-specific index for stream bioassessment programs, despite several recognised limitations of this index, including use of an arbitrary threshold; omission of rare taxa that may be responsive to subtle levels of disturbance; and omission of potentially useful information on taxa gained at disturbed sites. I developed and tested an index that incorporated both native species losses and gains of tolerant and alien species into a unified index of assemblage change for stream bioassessment.
This study used a single species ensemble modelling approach to predict species occurrence and combined predictions into an index akin to Bray-Curtis dissimilarity. The resultant index, ‘BCA’, markedly outperformed the widely used taxonomic completeness index derived from community classification (discriminant function analysis) models and has considerable potential for improving stream bioassessment index sensitivities for a range of freshwater indicators (e.g. diatoms, macroinvertebrates, macrophytes). It is recognised that there are very few peer-reviewed SDM studies that have ‘real world’ conservation applications; most are instead academic exercises concerned with addressing methodological challenges, or hypothetical examples of how one might apply a SDM for a conservation problem. To address this gap between modelling and management, I used SDMs to inform a conservation plan for declining southern pygmy perch (Murray-Darling Basin lineage) (Nannoperca australis) in northern Victoria. This study incorporated alien species abundance models as predictors into an ensemble SDM to identify remnant habitats of this declining species. The models indicated that ~ 70% of N. australis habitat has become unsuitable since European settlement owing to anthropogenic pressures and interaction with alien fish species, particularly brown trout (Salmo trutta). Model outputs were used for survey gap analysis and to identify stream segments suitable for targeted management and reintroduction of the species. This study formed the basis for a captive breeding and translocation plan for southern pygmy perch in northern Victoria. The thesis concludes with practical learnings from these modelling studies for freshwater bioassessment and conservation practitioners; namely: (1) that machine learning multispecies response and ensemble models offer improved predictive performance compared with traditional approaches and that model choice depends on the intended use of the model; (2) that a newly developed index, “BCA”, provides a more conceptually sound and sensitive index than the traditionally used taxonomic completeness index for stream bioassessment; and, (3) that SDMs developed using readily available and high quality stream bioassessment datasets provide an excellent foundation for applied freshwater fish species conservation and management. The thesis concludes with future challenges and directions for freshwater fish SDM research.
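The idea of an index "akin to Bray-Curtis dissimilarity" built from per-species occurrence probabilities can be sketched as follows; the probabilities and the exact formulation are illustrative assumptions, not the thesis's BCA definition.

```python
import numpy as np

# Sketch of an observed/expected comparison akin to Bray-Curtis dissimilarity,
# built from per-species predicted probabilities of occurrence at a test site.
# This illustrates the general idea only; it is not necessarily the exact BCA index.
predicted = np.array([0.9, 0.7, 0.4, 0.1, 0.2])  # reference-condition probabilities
observed = np.array([1, 0, 1, 0, 1])             # presence/absence at the test site

bray_curtis = np.abs(observed - predicted).sum() / (observed + predicted).sum()
print("dissimilarity from reference condition:", round(bray_curtis, 3))
```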
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Environment and Science
Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
9

García, Durán Alberto. "Learning representations in multi-relational graphs : algorithms and applications." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2271/document.

Full text
Abstract:
The Internet puts an enormous amount of information at our fingertips, on such a variety of topics that everyone is able to access a huge range of knowledge. Such a large quantity of information could bring a leap forward in many areas (search engines, question answering, related NLP tasks) if used well. Accordingly, a crucial challenge for the artificial intelligence community has been to gather, organize and make intelligent use of this growing amount of available knowledge. Fortunately, significant efforts have been made for some time now in collecting and organizing knowledge, and a lot of structured information can be found in repositories called Knowledge Bases (KBs). Freebase, Facebook's Entity Graph and Google's Knowledge Graph are good examples of KBs. A major problem with KBs is that they are far from complete. For example, in Freebase only about 30% of people have information about their nationality. This thesis presents several methods for adding new links between the existing entities of a KB, based on learning representations that optimize a defined energy function. These models can also be used to assign probabilities to triples extracted from the Web. We also propose a novel application that makes use of this structured information to generate unstructured information (specifically, natural language questions). We frame this problem as a machine translation model, where the input is not a natural language but a structured one. We adapt the RNN encoder-decoder to this setting to make this translation possible
Internet provides a huge amount of information at hand in such a variety of topics, that now everyone is able to access to any kind of knowledge. Such a big quantity of information could bring a leap forward in many areas if used properly. This way, a crucial challenge of the Artificial Intelligence community has been to gather, organize and make intelligent use of this growing amount of available knowledge. Fortunately, important efforts have been made in gathering and organizing knowledge for some time now, and a lot of structured information can be found in repositories called Knowledge Bases (KBs). A main issue with KBs is that they are far from being complete. This thesis proposes several methods to add new links between the existing entities of the KB based on the learning of representations that optimize some defined energy function. We also propose a novel application to make use of this structured information to generate questions in natural language
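A well-known example of the kind of energy function mentioned above is a translation-based score over entity and relation embeddings. The sketch below uses random toy embeddings and a TransE-style energy purely for illustration; the thesis's actual models are not reproduced.

```python
import numpy as np

# Toy translation-based energy for knowledge-base triples (a TransE-style score).
# Embeddings here are random toy vectors; after training on known triples,
# low energy would indicate a plausible triple.
rng = np.random.default_rng(1)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def energy(head, rel, tail):
    # Low energy means the triple (head, rel, tail) is considered plausible.
    return np.linalg.norm(entities[head] + relations[rel] - entities[tail])

print("E(Paris, capital_of, France):", round(energy("Paris", "capital_of", "France"), 3))
print("E(Paris, capital_of, Berlin):", round(energy("Paris", "capital_of", "Berlin"), 3))
```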
APA, Harvard, Vancouver, ISO, and other styles
10

Asiri, Aisha. "Applications of Game Theory, Tableau, Analytics, and R to Fashion Design." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2018. http://digitalcommons.auctr.edu/cauetds/146.

Full text
Abstract:
This thesis presents various models for the fashion industry to predict the profits of selected products. To determine the expected performance of each product in 2016, we used tools of game theory to identify the expected value. We then performed a simple linear regression and used scatter plots to further predict the performance of Prada's products. We used tools of game theory, analytics, and statistics to predict the performance of some of Prada's products. We also used the Tableau platform to visualize an overview of the products' performances. All of these tools were used to aid in finding better predictions of Prada's product performance.
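The two basic calculations described, an expected value over uncertain outcomes and a simple linear regression for trend prediction, can be sketched as follows with invented numbers (not Prada data).

```python
import numpy as np

# Hypothetical payoff table for one product: profit under two market states.
profits = np.array([120.0, -30.0])       # profit (in $1000s) if demand is high / low
probabilities = np.array([0.6, 0.4])     # assumed probabilities of each state
expected_value = profits @ probabilities
print("expected profit:", expected_value)

# Simple linear regression of yearly profit on year (toy numbers).
years = np.array([2011, 2012, 2013, 2014, 2015], dtype=float)
profit = np.array([80.0, 85.0, 90.0, 88.0, 95.0])
slope, intercept = np.polyfit(years, profit, 1)
print("predicted 2016 profit:", round(slope * 2016 + intercept, 1))
```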
APA, Harvard, Vancouver, ISO, and other styles
11

Asterios, Geroukis. "Prediction of Linear Models: Application of Jackknife Model Averaging." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297671.

Full text
Abstract:
When using linear models, a common practice is to find the single best model fit and use it for predictions. This, on the other hand, can cause potential problems such as misspecification and sometimes even wrong models due to spurious regression. Another prediction method, introduced in this study, is Jackknife Model Averaging, developed by Hansen & Racine (2012). This assigns weights to all possible candidate models and allows the data to have heteroscedastic errors. This model averaging estimator is compared to Mallows's Model Averaging (Hansen, 2007) and to model selection by the Bayesian Information Criterion and Mallows's Cp. The results show that the Jackknife Model Averaging technique gives smaller prediction errors than the other methods of model prediction. This study concludes that the Jackknife Model Averaging technique might be a useful choice when predicting data.
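A simplified sketch of the jackknife model averaging idea: compute leave-one-out residuals for each candidate model and choose non-negative weights summing to one that minimize the squared averaged residuals. The data and candidate models below are toy assumptions, not the study's setup.

```python
import numpy as np
from scipy.optimize import minimize

# Jackknife (leave-one-out) model averaging sketch for two nested linear models.
# Follows the general idea of Hansen & Racine (2012) in simplified form.
rng = np.random.default_rng(0)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(scale=1.0, size=n)

designs = [np.column_stack([np.ones(n), x1]),        # model 1: intercept + x1
           np.column_stack([np.ones(n), x1, x2])]    # model 2: intercept + x1 + x2

def loo_residuals(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    h = np.einsum("ij,jk,ik->i", X, np.linalg.inv(X.T @ X), X)  # leverages
    return (y - X @ beta) / (1.0 - h)                           # leave-one-out residuals

E = np.column_stack([loo_residuals(X, y) for X in designs])     # n x M residual matrix

objective = lambda w: float(np.sum((E @ w) ** 2))               # jackknife criterion
M = E.shape[1]
res = minimize(objective, np.full(M, 1.0 / M), bounds=[(0, 1)] * M,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("JMA weights:", res.x.round(3))
```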
APA, Harvard, Vancouver, ISO, and other styles
12

Santos, Garcés Eva. "Applications of computed tomography in dry-cured meat products." Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/83672.

Full text
Abstract:
Computed Tomography and Microcomputed Tomography are X-ray based technologies. They were tested in this thesis as potential tools for the optimization of the processing of dry-cured meat products. On the one hand, several prediction models and Computed Tomography analytical tools were developed for the non-destructive analysis of water activity, salt content and water content distribution within dry-cured hams during processing, and were successfully applied to three case studies. On the other hand, Microcomputed Tomography was used to characterize, evaluate and correlate the changes in the microstructure with the texture of non-acid pork lean fermented sausages. Some Microcomputed Tomography parameters could be correlated with the instrumental texture, although the Microcomputed Tomography was not accurate enough to distinguish between lean and fat when these constituents were emulsified. In conclusion, Computed Tomography and Microcomputed Tomography can be considered suitable technologies for the non-destructive characterization and for the optimization of dry-cured meat processing.
Computed Tomography and Microcomputed Tomography are X-ray based technologies. Both were tested in this thesis as potential tools for optimizing the processing of dry-cured meat products. On the one hand, several prediction models and analytical tools derived from Computed Tomography were developed for the non-destructive analysis of the distribution of water activity, salt content and water content during the processing of dry-cured hams, and were subsequently applied successfully to three case studies. On the other hand, Microcomputed Tomography was used to characterize, evaluate and correlate changes in the microstructure and texture of raw cured sausages made with a low fat content. Some Microcomputed Tomography parameters could be correlated with the instrumental texture, although it was observed that Microcomputed Tomography could not accurately distinguish between pork lean and fat when these constituents were emulsified.
APA, Harvard, Vancouver, ISO, and other styles
13

Chen, Yutao. "Algorithms and Applications for Nonlinear Model Predictive Control with Long Prediction Horizon." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3421957.

Full text
Abstract:
Fast implementations of NMPC are important when addressing real-time control of systems exhibiting features like fast dynamics, large dimension, and long prediction horizon, as in such situations the computational burden of the NMPC may limit the achievable control bandwidth. For that purpose, this thesis addresses both algorithms and applications. First, fast NMPC algorithms for controlling continuous-time dynamic systems using a long prediction horizon have been developed. A bridge between linear and nonlinear MPC is built using partial linearizations or sensitivity updates. In order to update the sensitivities only when necessary, a Curvature-like measure of nonlinearity (CMoN) for dynamic systems has been introduced and applied to existing NMPC algorithms. Based on CMoN, intuitive and advanced updating logics have been developed for different numerical and control performance. Thus, the CMoN, together with the updating logic, formulates a partial sensitivity updating scheme for fast NMPC, named CMoN-RTI. Simulation examples are used to demonstrate the effectiveness and efficiency of CMoN-RTI. In addition, a rigorous analysis of the optimality and local convergence of CMoN-RTI is given and illustrated using numerical examples. Partial condensing algorithms have been developed for use with the proposed partial sensitivity update scheme. The computational complexity has been reduced since part of the condensing information is exploited from previous sampling instants. A sensitivity updating logic together with partial condensing is proposed with a complexity linear in the prediction length, leading to a speed-up by a factor of ten. Partial matrix factorization algorithms are also proposed to exploit the partial sensitivity update. By applying splitting methods to multi-stage problems, only part of the resulting KKT system needs to be updated, which is computationally dominant in on-line optimization. Significant improvement has been demonstrated in terms of floating point operations (flops). Second, efficient implementations of NMPC have been achieved by developing a Matlab-based package named MATMPC. MATMPC has two working modes: one relies completely on Matlab, and the other employs the MATLAB C language API. The advantages of MATMPC are that algorithms are easy to develop and debug thanks to Matlab, and that libraries and toolboxes from Matlab can be used directly. When working in the second mode, the computational efficiency of MATMPC is comparable with that of software using optimized code generation. Real-time implementations are achieved for a nine-degree-of-freedom dynamic driving simulator and for multi-sensory motion cueing with an active seat.
Fast implementations of NMPC are important when addressing real-time control of systems exhibiting features such as fast dynamics, large dimension and long prediction horizons, since in such situations the computational burden of the NMPC may limit the achievable control bandwidth. To this end, this thesis concerns both algorithms and applications. First, fast NMPC algorithms have been developed for the control of continuous-time dynamic systems that use a long prediction horizon. A bridge between linear and nonlinear MPC is built using partial linearizations or sensitivity updates. In order to update the sensitivities only when necessary, a curvature-like measure of nonlinearity (CMoN) for dynamic systems has been introduced and applied to existing NMPC algorithms. Based on CMoN, intuitive and advanced updating logics have been developed for different numerical and control performance. Therefore, the CMoN, together with the updating logic, formulates a partial sensitivity updating scheme for fast NMPC, named CMoN-RTI. Simulation examples are used to demonstrate the effectiveness and efficiency of CMoN-RTI. Moreover, a rigorous analysis of the optimality and local convergence of CMoN-RTI is provided and illustrated using numerical examples. Partial condensing algorithms have been developed for use with the proposed partial sensitivity updating scheme. The computational complexity has been reduced because part of the condensing information is exploited from previous sampling instants. A sensitivity updating logic together with partial condensing is proposed with a complexity that is linear in the prediction length, leading to a speed-up by a factor of ten. Partial matrix factorization algorithms are also proposed to exploit the partial sensitivity update. By applying splitting methods to multi-stage problems, only part of the resulting KKT system needs to be updated, which is computationally dominant in online optimization. A significant improvement has been demonstrated in terms of floating point operations (flops). Second, efficient implementations of NMPC have been achieved by developing a Matlab-based package called MATMPC. MATMPC has two operating modes: one relies entirely on Matlab, while the other uses the MATLAB C language API. The advantages of MATMPC are that algorithms are easy to develop and debug thanks to Matlab, and that Matlab libraries and toolboxes can be used directly. When working in the second mode, the computational efficiency of MATMPC is comparable to that of software using optimized code generation. Real-time implementations are achieved for a dynamic driving simulator with nine degrees of freedom and for multi-sensory motion cueing with an active seat.
APA, Harvard, Vancouver, ISO, and other styles
14

Makhtar, Mokhairi. "Contributions to Ensembles of Models for Predictive Toxicology Applications. On the Representation, Comparison and Combination of Models in Ensembles." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5478.

Full text
Abstract:
The increasing variety of data mining tools offers a large palette of types and representation formats for predictive models. Managing the models then becomes a big challenge, as well as reusing the models and keeping the consistency of model and data repositories. Sustainable access to and quality assessment of these models become limited for researchers. The Data and Model Governance (DMG) approach makes it easier to process and support complex solutions. In this thesis, contributions are proposed towards ensembles of models with a focus on model representation, comparison and usage. Predictive Toxicology was chosen as an application field to demonstrate the proposed approach to represent predictive models linked to data for DMG. Further analysis methods, such as predictive model comparison and predictive model combination for reusing the models from a collection, were studied. Thus, in this thesis, an original structure for the pool of models, called Predictive Toxicology Markup Language (PTML), was proposed to represent predictive toxicology models. PTML offers a representation scheme for predictive toxicology data and models generated by data mining tools. In this research, the proposed representation offers possibilities to compare models and select the relevant models based on different performance measures using proposed similarity measuring techniques. The relevant models were selected using a proposed cost function which is a composite of performance measures such as Accuracy (Acc), False Negative Rate (FNR) and False Positive Rate (FPR). The cost function ensures that only quality models are selected as candidate models for an ensemble. The proposed algorithm for optimisation and combination of Acc, FNR and FPR of ensemble models, using the double fault measure as the diversity measure, improves Acc by between 0.01 and 0.30 for all toxicology data sets compared to other ensemble methods such as Bagging, Stacking, Bayes and Boosting. The highest improvements for Acc were for the Bee (0.30), Oral Quail (0.13) and Daphnia (0.10) data sets. A small improvement (of about 0.01) in Acc was achieved for Dietary Quail and Trout. Important results from combining all three performance measures also relate to reducing the distance between FNR and FPR for the Bee, Daphnia, Oral Quail and Trout data sets by about 0.17 to 0.28. For the Dietary Quail data set the improvement was only about 0.01, but this data set is well known as a difficult learning exercise. For the five UCI data sets tested, similar results were achieved, with Acc improvements between 0.10 and 0.11 and a further narrowing of the gap between FNR and FPR. In conclusion, the results show that by combining performance measures (Acc, FNR and FPR), as proposed within this thesis, Acc increased and the distance between FNR and FPR decreased.
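The abstract does not specify how Acc, FNR and FPR are weighted, so the sketch below uses an illustrative linear composite simply to show how such a cost function can rank candidate models.

```python
# Sketch of a composite cost function over Accuracy, False Negative Rate and
# False Positive Rate for ranking candidate models. The weights are illustrative;
# the thesis's actual cost function may combine the measures differently.
def cost(acc, fnr, fpr, w_acc=1.0, w_fnr=1.0, w_fpr=1.0):
    # Lower cost is better: reward accuracy, penalise both error rates.
    return w_fnr * fnr + w_fpr * fpr - w_acc * acc

candidates = {
    "model_A": {"acc": 0.82, "fnr": 0.10, "fpr": 0.25},
    "model_B": {"acc": 0.80, "fnr": 0.18, "fpr": 0.12},
    "model_C": {"acc": 0.75, "fnr": 0.30, "fpr": 0.08},
}
ranked = sorted(candidates, key=lambda m: cost(**candidates[m]))
print("candidate ranking (best first):", ranked)
```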
APA, Harvard, Vancouver, ISO, and other styles
15

Palczewska, Anna Maria. "Interpretation, Identification and Reuse of Models. Theory and algorithms with applications in predictive toxicology." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/7349.

Full text
Abstract:
This thesis is concerned with developing methodologies that enable existing models to be effectively reused. Results of this thesis are presented in the framework of Quantitative Structure-Activity Relationship (QSAR) models, but their application is much more general. QSAR models relate chemical structures with their biological, chemical or environmental activity. There are many applications that offer an environment to build and store predictive models. Unfortunately, they do not provide advanced functionalities that allow for efficient model selection and for interpretation of model predictions for new data. This thesis aims to address these issues and proposes methodologies for dealing with three research problems: model governance (management), model identification (selection), and interpretation of model predictions. The combination of these methodologies can be employed to build more efficient systems for model reuse in QSAR modelling and other areas. The first part of this study investigates toxicity data and model formats and reviews some of the existing toxicity systems in the context of model development and reuse. Based on the findings of this review and the principles of data governance, a novel concept of model governance is defined. Model governance comprises model representation and model governance processes. These processes are designed and presented in the context of model management. As an application, minimum information requirements and an XML representation for QSAR models are proposed. Once a collection of validated, accepted and well annotated models is available within a model governance framework, they can be applied to new data. It may happen that there is more than one model available for the same endpoint. Which one to choose? The second part of this thesis proposes a theoretical framework and algorithms that enable automated identification of the most reliable model for new data from the collection of existing models. The main idea is based on partitioning of the search space into groups and assigning a single model to each group. The construction of this partitioning is difficult because it is a bi-criteria problem. The main contribution in this part is the application of Pareto points for the search space partition. The proposed methodology is applied to three endpoints in chemoinformatics and predictive toxicology. After having identified a model for the new data, we would like to know how the model obtained its prediction and how trustworthy it is. An interpretation of model predictions is straightforward for linear models thanks to the availability of model parameters and their statistical significance. For nonlinear models this information can be hidden inside the model structure. This thesis proposes an approach for interpretation of a random forest classification model. This approach allows for the determination of the influence (called feature contribution) of each variable on the model prediction for an individual data point. In this part, three methods are proposed that allow analysis of feature contributions. Such analysis might lead to the discovery of new patterns that represent a standard behaviour of the model and allow additional assessment of the model reliability for new data. The application of these methods to two standard benchmark datasets from the UCI machine learning repository shows the great potential of this methodology.
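The use of Pareto points for model identification can be illustrated with a toy two-criteria example; the criteria, scores and model names below are invented, and the thesis's actual partitioning algorithm is not reproduced.

```python
# Toy Pareto-front computation for model identification on two criteria
# (both to be maximised). Illustrates the idea of using Pareto points only.
models = {
    "rf":  (0.86, 0.60),   # (accuracy, coverage of the new data) - hypothetical
    "svm": (0.84, 0.75),
    "knn": (0.78, 0.90),
    "nb":  (0.70, 0.65),
}

def dominated(a, b):
    # True if model a is dominated by b: b is at least as good everywhere, better somewhere.
    return all(bi >= ai for ai, bi in zip(a, b)) and any(bi > ai for ai, bi in zip(a, b))

pareto = [m for m, score in models.items()
          if not any(dominated(score, other) for name, other in models.items() if name != m)]
print("Pareto-optimal models:", pareto)
```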
The algorithm for calculating feature contributions has been implemented and is available as an R package called rfFC.
APA, Harvard, Vancouver, ISO, and other styles
16

Caetano, João Manuel Nunes. "Predictive models of probability of default : an empirical application." Master's thesis, Instituto Superior de Economia e Gestão, 2014. http://hdl.handle.net/10400.5/7704.

Full text
Abstract:
Master's in Finance
This study aims to survey probability of default models for listed companies. The methodologies of the Merton (1974) model, the Accounting model and the Hybrid model were addressed. A sample of 172 companies in the American market from the Consumer, Distribution, Manufacturing and Telecommunications sectors, of which 82 entered into default, was tested. For each methodology, the predictive ability was tested using Type I and II errors. The results suggest that the Hybrid model, i.e. the combination of market models and accounting analysis, provides greater accuracy in default classification than each model individually.
This study intends to conduct a survey of Probability of Default models for listed companies. The methodologies of the Merton (1974) model, the Accounting model and the Hybrid model were addressed. We tested a sample of 172 American companies in the sectors of Consumer Products, Distribution, Manufacturing and Telecommunications, of which 82 entered into default. For each methodology, the predictive ability was tested with Type I and II errors. The results suggest that the Hybrid model, i.e. a combination of market models and accounting analysis, has better performance in the classification of credit default than each model individually.
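For reference, the core Merton (1974) calculation maps a firm's distance to default into a probability of default. The sketch below assumes the asset value and volatility are already known, whereas in practice they are backed out iteratively from equity data; all numbers are hypothetical.

```python
from math import log, sqrt
from scipy.stats import norm

# Minimal Merton (1974) probability-of-default sketch with assumed inputs.
V = 120.0      # market value of assets
D = 100.0      # face value of debt (default point)
sigma = 0.25   # asset volatility (annual)
mu = 0.05      # expected asset return (annual)
T = 1.0        # horizon in years

distance_to_default = (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))
probability_of_default = norm.cdf(-distance_to_default)
print(f"DD = {distance_to_default:.2f}, PD = {probability_of_default:.2%}")
```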
APA, Harvard, Vancouver, ISO, and other styles
17

Ghose, Susmita. "Analysis of errors in software reliability prediction systems and application of model uncertainty theory to provide better predictions." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3781.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Hao. "Real-time Traffic State Prediction: Modeling and Applications." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64292.

Full text
Abstract:
Travel-time information is essential in Advanced Traveler Information Systems (ATISs) and Advanced Traffic Management Systems (ATMSs). A key component of these systems is the prediction of the spatiotemporal evolution of roadway traffic state and travel time. From the perspective of travelers, such information can result in better traveler route choice and departure time decisions. From the transportation agency perspective, such data provide enhanced information with which to better manage and control the transportation system to reduce congestion, enhance safety, and reduce the carbon footprint of the transportation system. The objective of the research presented in this dissertation is to develop a framework that includes three major categories of methodologies to predict the spatiotemporal evolution of the traffic state. The proposed methodologies include macroscopic traffic modeling, computer vision and recursive probabilistic algorithms. Each developed method attempts to predict traffic state, including roadway travel times, for different prediction horizons. In total, the developed multi-tool framework produces traffic state prediction algorithms ranging from short-term (0~5 minutes) to medium-term (1~4 hours), considering departure times up to an hour into the future. The dissertation first develops a particle filter approach for use in short-term traffic state prediction. The flow continuity equation is combined with the Van Aerde fundamental diagram to derive a time series model that can accurately describe the spatiotemporal evolution of traffic state. The developed model is applied within a particle filter approach to provide multi-step traffic state prediction. The testing of the algorithm on a simulated section of I-66 demonstrates that the proposed algorithm can accurately predict the propagation of shockwaves up to five minutes into the future. The developed algorithm is further improved by incorporating on- and off-ramp effects and more realistic boundary conditions. Furthermore, the case study demonstrates that the improved algorithm produces a 50 percent reduction in the prediction error compared to the classic LWR density formulation. Considering the fact that the prediction accuracy deteriorates significantly for longer prediction horizons, historical data are integrated and considered in the measurement update in the developed particle filter approach to extend the prediction horizon up to half an hour into the future. The dissertation then develops a travel time prediction framework using pattern recognition techniques to match historical data with real-time traffic conditions. The Euclidean distance is initially used as the measure of similarity between current and historical traffic patterns. This method is further improved using a dynamic template matching technique developed as part of this research effort. Unlike previous approaches, which use fixed template sizes, the proposed method uses a dynamic template size that is updated each time interval based on the spatiotemporal shape of the congestion upstream of a bottleneck. In addition, the computational cost is reduced using a Fast Fourier Transform instead of a Euclidean distance measure. Subsequently, the historical candidates that are similar to the current conditions are used to predict the experienced travel times.
Test results demonstrate that the proposed dynamic template matching method produces significantly better and more stable prediction results for prediction horizons up to 30 minutes into the future for a two hour trip (prediction horizon of two and a half hours) compared to other state-of-the-practice and state-of-the-art methods. Finally, the dissertation develops recursive probabilistic approaches including particle filtering and agent-based modeling methods to predict travel times further into the future. Given the challenges in defining the particle filter time update process, the proposed particle filtering algorithm selects particles from a historical dataset and propagates particles using data trends of past experiences as opposed to using a state-transition model. A partial resampling strategy is then developed to address the degeneracy problem in the particle filtering process. INRIX probe data along I-64 and I-264 from Richmond to Virginia Beach are used to test the proposed algorithm. The results demonstrate that the particle filtering approach produces less than a 10 percent prediction error for trip departures up to one hour into the future for a two hour trip. Furthermore, the dissertation develops an agent-based modeling approach to predict travel times using real-time and historical spatiotemporal traffic data. At the microscopic level, each agent represents an expert in the decision making system, which predicts the travel time for each time interval according to past experiences from a historical dataset. A set of agent interactions are developed to preserve agents that correspond to traffic patterns similar to the real-time measurements and replace invalid agents or agents with negligible weights with new agents. Consequently, the aggregation of each agent's recommendation (predicted travel time with associated weight) provides a macroscopic level of output – predicted travel time distribution. The case study demonstrated that the agent-based model produces less than a 9 percent prediction error for prediction horizons up to one hour into the future.
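The pattern-matching idea, comparing current conditions with historical spatiotemporal data and averaging the outcomes of the closest matches, can be sketched as below with synthetic data; the dynamic template sizing and FFT acceleration described above are not reproduced.

```python
import numpy as np

# Minimal historical pattern matching sketch: find the k historical days whose
# recent speed profile is closest (Euclidean distance) to current conditions and
# average their subsequent travel times. All data here are synthetic.
rng = np.random.default_rng(0)
n_days, n_intervals = 60, 48
historical_speeds = rng.uniform(40, 70, size=(n_days, n_intervals))   # km/h
historical_tt = rng.uniform(95, 140, size=n_days)                     # later trip times (min)

current_window = rng.uniform(40, 70, size=12)      # last 12 intervals observed today
k = 5
dist = np.linalg.norm(historical_speeds[:, :12] - current_window, axis=1)
nearest = np.argsort(dist)[:k]
print("predicted travel time (min):", round(historical_tt[nearest].mean(), 1))
```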
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Heer, Phillipp. "Decentralized Model Predictive Control for smart grid applications." Thesis, KTH, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-133580.

Full text
Abstract:
This thesis focuses on a model predictive control scheme which allows organizing the power production of power plants in a decentralized fashion. That way, a less computationally demanding control scheme can be applied to control the frequency at each power plant without a governing, high-level controller for the whole power grid. The main contribution was to develop a communication scheme between power plants that can be applied in a competitive environment like the energy market. Most established schemes for decentralized power production require giving away a mathematical representation of a plant to all its neighbors. However, the developed control scheme only requires informing the neighboring power plants of the expected future evolution of the power network's voltage angle. This is information which is more efficiently communicated by power plant operators. Additionally, this simplification does not only yield a notable reduction in communicated data but also reduces the computational complexity of the control problem for a single power plant. The aforementioned control scheme was applied to a network consisting of several different plants, for each of which a model was developed. The modeled plants range from conventional generation plants such as hydro, gas or wind power plants to more modern converter-coupled plants such as photovoltaic installations. The plants were modeled such that energy buffers - in the form of aggregated Electric Vehicle Batteries - can be taken into account. For the power plants and the energy transfer between them, linear time-invariant models were augmented with linear matrix inequalities. These represent physical bounds which the model has to respect in order to have a realistic system evolution, i.e., maximal power line capacities or limits on power plant production capabilities. Proofs are given which indicate necessary properties of the developed algorithm to ensure nominal or robust stability. Simulations were carried out which verified the conditions obtained from the proofs. Also by simulation, the obtained control scheme was compared with a centralized approach, amongst others. Considering the developments towards a Smart Grid, one can say that a power production which is organized in a decentralized way reduces the computational effort greatly with a tolerable loss of performance. This statement is backed up with results from the simulations. The findings also indicate that the addition of buffers is very beneficial regarding disturbance rejection in the power grid.
APA, Harvard, Vancouver, ISO, and other styles
20

Balbis, Luisella. "Nonlinear model predictive control for industrial applications." Thesis, University of Strathclyde, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.501892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Roucou, Mickaël. "Prediction of the aeroelastic behavior: An application to wind-tunnel models." Thesis, KTH, Flygdynamik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180470.

Full text
Abstract:
The work of this paper was done during a Master's thesis at ONERA and deals with the establishment of an aeroelastic state-space model and its application to two wind-tunnel models studied at ONERA. The established model takes into account a control surface input and a gust perturbation. The generalized aerodynamic forces are approximated using Roger's and Karpel's methods and the inertia of the aileron is computed using a finite element model in Nastran. The software used during this work was Capri, developed by ONERA, and the validity of the results was checked using Nastran. Comparisons between frequency response functions obtained with the aeroelastic state-space model and experimental ones show that the model gives good results in no-wind conditions for an aileron deflection input and up to transonic speeds. Differences between model and experiments could be attributable to structural non-linearities.
APA, Harvard, Vancouver, ISO, and other styles
22

Sale, Thomas Clay. "Model for prediction of seepage from small unlined water impoundments." Thesis, The University of Arizona, 1985. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1985_36_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Florenz-Esnal, Julian. "Temperature prediction models and their application to the control of heating systems." Thesis, University of Manchester, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Lord, Dominique. "The prediction of accidents on digital networks, characteristics and issues related to the application of accident prediction models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0020/NQ53687.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Sevik, Ozlem. "Application Of Sleuth Model In Antalya." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607190/index.pdf.

Full text
Abstract:
In this study, an urban growth model is used to simulate the urban growth in 2025 in the Antalya Metropolitan Area. It is the fastest growing metropolis in Turkey, with a population growth of 41.79‰ over the last decade, compared to Turkey's growth of 18.28‰. An Urban Growth Model (SLEUTH, Version 3.0) is calibrated with cartographic data. The prediction is based on the archived data trends of the 1987, 1996, and 2002 images, which are extracted from Landsat Thematic Mapper and Enhanced Thematic Mapper satellite images, and the aerial photographs acquired in 1992; the data are prepared as input into the model. The urban extent is obtained through supervised classification of the satellite images and visual interpretation of aerial photographs. The model calibration, in which a predetermined order of stepping through the coefficient space is used, is performed in order to determine the best-fit values for the five growth control parameters, including the coefficients of diffusion, breed, spread, slope and road gravity, against the historical urban extent data. The development trend in Antalya is simulated by slowing down growth, taking into consideration road development and environmental protection. After the simulation for a period of 23 years, an increase of 9824 ha in urban areas is obtained for 2025.
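SLEUTH calibration essentially steps through combinations of growth coefficients and scores each against the historical urban extents. The miniature sketch below shows such a brute-force scoring loop with a stand-in growth function and only three of the five coefficients; it is not the SLEUTH cellular automaton and all numbers are hypothetical.

```python
import itertools
import numpy as np

# Miniature brute-force calibration loop: step through coefficient combinations
# and score each against historical urban extents. The growth model below is a
# stand-in toy function, not the SLEUTH cellular automaton.
observed_urban_ha = {1987: 4200, 1996: 6100, 2002: 8300}   # hypothetical extents

def toy_growth(start_ha, years, diffusion, breed, spread):
    area = start_ha
    for _ in range(years):
        area *= 1.0 + 0.004 * diffusion + 0.002 * breed + 0.003 * spread
    return area

best = None
for diffusion, breed, spread in itertools.product([1, 25, 50, 75, 100], repeat=3):
    sim = {yr: toy_growth(observed_urban_ha[1987], yr - 1987, diffusion, breed, spread)
           for yr in observed_urban_ha}
    score = np.sqrt(np.mean([(sim[yr] - observed_urban_ha[yr]) ** 2 for yr in observed_urban_ha]))
    if best is None or score < best[0]:
        best = (score, (diffusion, breed, spread))
print("best-fit coefficients (diffusion, breed, spread):", best[1])
```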
APA, Harvard, Vancouver, ISO, and other styles
26

Srinivasan, Sriram. "Development of a Cost Oriented Grinding Strategy and Prediction of Post Grind Roughness using Improved Grinder Models." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78298.

Full text
Abstract:
Irregularities in pavement profiles that exceed standard thresholds are usually rectified using a diamond grinding process. Diamond grinding is a method of concrete pavement rehabilitation that uses grinding wheels mounted on a machine that scrapes off the top surface of the pavement to smooth irregularities. Profile analysis software such as ProVAL© offers simulation modules that allow users to investigate various grinding strategies and prepare a corrective action plan for the pavement. The major drawback of the current Smoothness Assurance Module© (SAM) in ProVAL© is that it provides numerous grind locations which are both redundant and not feasible in the field. This problem can be overcome with a constrained grinding model in which a cost function is minimized; the resulting grinding strategy satisfies requirements at the least possible cost. Another drawback of SAM lies in the built-in grinder models, which do not factor in the effect of speed and depth of cut on the grinding head. High speeds or deep cuts will result in the grinding head riding out of the cut and likely worsening the roughness. This work develops a constrained grinding strategy algorithm, with grinder models that account for speed and depth of cut, that yields cost-effective grinding and better prediction of post-grind surfaces through simulation. The outcome of the developed algorithm is compared to ProVAL's© SAM results.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
27

Blue, Julie Elena. "Predicting tracer and contaminant transport with the stratified aquifer approach." Diss., The University of Arizona, 1999. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1999_426_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Shamekh, Awad Rasheed. "Model predictive control applications in continuous and batch processes." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492831.

Full text
Abstract:
Data-driven modelling has become an important aspect of modern process control and condition monitoring systems. The topic has been extensively studied in academia for several decades and applications in industry are continually increasing. In the past 20 years there has been an increased interest in the use of multivariate statistical techniques in the process industries. This interest has arisen from the need to identify techniques that are able to cope with the highly correlated data sets that are encountered in these industries as levels of instrumentation increase.
APA, Harvard, Vancouver, ISO, and other styles
29

Nappi, Angela. "Development and Application of a Discontinuous Galerkin-based Wave Prediction Model." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1385998191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kasderidis, Stathis P. "A compartmental model neuron, its networks and application to time series." Thesis, King's College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313657.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Han, Mei. "Studies of Dynamic Bandwidth Allocation for Real-Time VBR Video Applications." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32027.

Full text
Abstract:
Variable bit rate (VBR) compressed video traffic, such as live video news, is expected to account for a large portion of traffic in future integrated networks. This real-time video traffic has strict delay and loss requirements, and exhibits burstiness over multiple time scales, thus imposing a challenge on network resource allocation and management. The renegotiated VBR (R-VBR) scheme, which dynamically allocates resources to capture the burstiness of VBR traffic, substantially increases network utilization while satisfying any desired quality of service (QoS) requirements. This thesis focuses on the performance evaluation of R-VBR in the context of different R-VBR approaches. The renegotiated deterministic VBR (RED-VBR) scheme, proposed by Dr. H. Zhang et al., is thoroughly investigated in this research using a variety of real-world videos, of both high and low quality. A new Virtual-Queue-Based RED-VBR is then developed to reduce the implementation complexity of RED-VBR. Simulation results show that this approach achieves network performance comparable to RED-VBR: relatively high network utilization and a very low drop rate. A Prediction-Based R-VBR built on a multiresolution learning neural network traffic predictor, developed by Dr. Y. Liang, is studied, and a binary exponential backoff (BEB) algorithm is introduced to efficiently decrease the renegotiation frequency. Compared with RED-VBR, Prediction-Based R-VBR obtains significantly improved network utilization at a small cost in drop rate. This work evaluates the advantages and disadvantages of several R-VBR approaches and thus provides a clearer overall picture of their performance, which can be used as the basis for choosing an appropriate R-VBR scheme to optimize network utilization while enabling QoS for application tasks.
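The role of binary exponential backoff in spacing out renegotiations can be illustrated with a small sketch. The decision rule, headroom factor, and predicted-rate sequence below are illustrative assumptions only; the thesis pairs BEB with a neural-network traffic predictor and its own renegotiation policy.

```python
# Minimal sketch of using binary exponential backoff (BEB) to throttle bandwidth
# renegotiations: upward renegotiations happen immediately (to protect QoS),
# downward renegotiations are followed by an exponentially growing hold-off.
def renegotiate(predicted_rates, allocated, headroom=1.2, max_backoff=64):
    backoff, hold = 1, 0
    allocations = []
    for rate in predicted_rates:
        if rate > allocated:                         # under-allocated: renegotiate up now
            allocated = rate * headroom
            backoff, hold = 1, 0
        elif rate < allocated / headroom and hold == 0:
            allocated = rate * headroom              # renegotiate down, then back off
            hold, backoff = backoff, min(backoff * 2, max_backoff)
        elif hold > 0:                               # still backing off: keep allocation
            hold -= 1
        allocations.append(allocated)
    return allocations

# Example with a hypothetical sequence of predicted rates (Mbps)
print(renegotiate([3.0, 2.8, 1.0, 1.1, 0.9, 4.2], allocated=3.5))
```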
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
32

Waldron, David John. "The application of advanced coal combustion models to the prediction of furnace performance." Thesis, University of Leeds, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Tartakovsky, Daniel. "Prediction of transient flow in random porous media by conditional moments." Diss., The University of Arizona, 1996. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1996_263_sip1_w.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wood, Anthony Paul. "The performance of insolvency prediction and credit risk models in the UK : a comparative study, development and wider application." Thesis, University of Exeter, 2012. http://hdl.handle.net/10036/4211.

Full text
Abstract:
Contingent claims models have recently been applied to the field of corporate insolvency prediction in an attempt to provide the art with a theoretical methodology that has been lacking in the past. Limited studies have been carried out to empirically compare the performance of these “market” models with that of their accounting-number-based counterparts. This thesis contributes to the literature in several ways. It traces the evolution of the art of corporate insolvency prediction from its inception through to the present day, combining key developments and methodologies into a single document of reference. I use receiver operating characteristic curves and tests of economic value to assess the efficacy of sixteen models, carefully selected to represent key moments in the evolution of the art and tested, for the first time, on post-IFRS UK data. The variability of model efficacy is also measured for the first time, using Monte Carlo simulation on 10,000 randomly generated training and validation samples drawn from a dataset consisting of over 12,000 firm-year observations. The results provide insights into the distribution of model accuracy as a result of sample selection, which has not appeared in the literature prior to this study. I find overall that the efficacy of the models is generally lower than that reported in the prior literature, but that the theoretically driven, market-based models outperform models which use accounting numbers, with the latter showing a relatively wider efficacy distribution. Furthermore, I obtain the counter-intuitive finding that predictions based on a single ratio can be as efficient as those based on models which are far more complicated in terms of variable variety and mathematical construction. Finally, I develop and test a naïve version of the down-and-out-call barrier option model for insolvency prediction and find that, despite its simple formulation, it performs favourably compared with other market-based models.
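For readers unfamiliar with the market-based family of models discussed here, the sketch below computes a naïve Merton-style probability of default in the spirit of Bharath and Shumway's specification. It is an illustrative assumption, not the thesis's down-and-out-call barrier formulation, and the firm inputs are invented.

```python
import math
from scipy.stats import norm

def naive_merton_pd(equity, debt, sigma_equity, r, horizon=1.0):
    """Naive Merton-style probability of default (a sketch only; the thesis's
    barrier-option model adds a down-and-out feature and differs in detail)."""
    v = equity + debt                                  # proxy for firm value
    sigma_v = (equity / v) * sigma_equity + (debt / v) * (0.05 + 0.25 * sigma_equity)
    dd = (math.log(v / debt) + (r - 0.5 * sigma_v**2) * horizon) / (sigma_v * math.sqrt(horizon))
    return norm.cdf(-dd)                               # P(firm value < debt at horizon)

# Hypothetical firm: market equity 120, face value of debt 80, equity vol 60%
print(round(naive_merton_pd(equity=120.0, debt=80.0, sigma_equity=0.6, r=0.03), 4))
```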
APA, Harvard, Vancouver, ISO, and other styles
35

Chhabra, Nishchey. "Application of Numerical Model CGWave for Wave Prediction at Ponce de Leon Inlet, Florida, USA." Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/ChhabraN2004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Larsson, Christian A. "Application-oriented experiment design for industrial model predictive control." Doctoral thesis, KTH, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154032.

Full text
Abstract:
Advanced process control and its prevalent enabling technology, model predictive control (MPC), can today be regarded as the industry best practice for optimizing production. The strength of MPC comes from the ability to predict the impact of disturbances and counteract their effects with control actions, and from the ability to account for constraints. These capabilities come from the use of models of the controlled process. However, relying on a model is also a weakness of MPC. The model used by the controller needs to be kept up to date with changing process conditions for good MPC performance. In this thesis, the problem of closed-loop system identification of models intended to be used in MPC is considered. The design of the identification experiment influences the quality and properties of the estimated model. In the thesis, an application-oriented framework for designing the identification experiment is used. The specifics of experiment design for identification of models for MPC are discussed. In particular, including constraints in the controller results in a nonlinear control law, which complicates the experiment design. The application-oriented experiment design problem with time-domain constraints is formulated as an optimal control problem, which in general is difficult to solve. Using Markov decision theory, the experiment design problem is formulated for finite state and action spaces and solved using an extension of existing linear programming techniques for constrained Markov decision processes. The method applies to general noise and disturbance structures but is computationally intensive. Two extensions of MPC with dual control properties, which implement the application-oriented experiment design idea, are developed. These controllers are limited to output error systems but require less computation. Furthermore, since the controllers are based on a common MPC technique, they can be used as extensions of already available MPC implementations. One of the developed controllers is tested in an extensive experimental validation campaign, which is the first time that MPC with dual properties is applied to a full-scale industrial process during regular operation of the plant. Existing experiment design procedures are most often formulated in the frequency domain and the spectrum of the input is used as the design variable. Therefore, a realization of the signal with the right spectrum has to be generated. This is not straightforward for systems operating under constraints. In the thesis, a framework for generating signals with prespecified spectral properties that respect system constraints is developed. The framework uses ideas from stochastic MPC and scenario optimization. Convergence to the desired autocorrelation is proved for a special case and the merits of the algorithm are illustrated in a series of simulation examples.

APA, Harvard, Vancouver, ISO, and other styles
37

CHALAKKAL, VARGHESE KISHORE. "Application of Model Predictive Control in Supply Chain Processes." Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/273217.

Full text
Abstract:
This thesis analyses Material Requirements Planning (MRP) from the uncommon perspective of matrices. The whole process is developed using a set of matrices evolving over time, in a time-variant system approach. Instead of iterating along Bill of Materials (BOM) levels, we simultaneously calculate the material requirements for all products at any given instant of time. The main advantage of this approach is speed: we can calculate the MPS and MRP in seconds. In developing this idea we follow a model predictive control approach, moving within the framework of SIOP (Sales, Inventory and Operations Planning) and starting with a detailed analysis of demand planning concepts and techniques. We then develop in detail the core concepts of the matrix approach to material requirements calculation, starting with the Master Production Schedule (MPS). We extend this approach to the next step, Material Requirements Planning (MRP), where we show how the demands for individual items are further exploded down to the components of those items. In a multi-product industry with complex products, and with components that may be part of more than one product, this calculation, though conceptually simple, becomes a highly complex job. Changes in product structure (that is, in the bill of materials), obsolescence and new product introductions further complicate this calculation. Instead of the iterative approach widely used in the literature and in current software applications, we use a matrix approach here as well. Instead of calculating the requirements item by item and then summing them up, the proposed matrix structure performs the calculations for all items of a specific time period all at once; a minimal numerical sketch of this idea is given below. With the Master Production Schedule and Material Requirements Planning calculated, we also extend this matrix approach to calculate inventory levels and capacity requirements. While calculating inventory levels we also show an important and direct application of this method in calculating stockouts. In the calculation of capacity requirements we focus specifically on how direct labour requirements are calculated using the matrices. A last and important application of system modelling is in financial planning, especially cash-flow forecasting.
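As a minimal sketch of the matrix view described above, assume a bill-of-materials matrix B whose entry B[i, j] gives the units of item i consumed per unit of item j; total requirements for an external demand vector then follow from a Leontief-style inverse rather than from level-by-level BOM iteration. The three-item BOM and demand figures are hypothetical.

```python
import numpy as np

# Total (direct + indirect) requirements x satisfy x = d + B x, i.e. x = (I - B)^-1 d.
B = np.array([[0, 0, 0],    # item 0: finished product (needs nothing above it)
              [2, 0, 0],    # item 1: 2 units per finished product
              [1, 3, 0]])   # item 2: 1 per product + 3 per unit of item 1
d = np.array([100, 0, 0])   # master schedule for one period: 100 finished products

total_requirements = np.linalg.solve(np.eye(3) - B, d)
print(total_requirements)   # -> [100. 200. 700.]
```

Stacking one such demand vector per period as columns of a matrix gives the "all items, all periods at once" computation the abstract describes.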
APA, Harvard, Vancouver, ISO, and other styles
38

Bade, Shrestha Shiva Om. "A predictive model for gas fueled spark ignition engine applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0018/NQ47885.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Huzmezan, Mihai. "Theory and aerospace applications of constrained model based predictive control." Thesis, University of Cambridge, 1998. https://www.repository.cam.ac.uk/handle/1810/272419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Guiggiani, Alberto. "Embedded model predictive control: finite precision arithmetic and aerospace applications." Thesis, IMT Alti Studi Lucca, 2015. http://e-theses.imtlucca.it/168/1/thesis_GUIGGIANI.pdf.

Full text
Abstract:
Model Predictive Control (MPC) is a multivariable advanced control technique widely popular in many industrial applications due to its ability to explicitly optimize performance while straightforwardly handling constraints on system variables. However, MPC requires solving a Quadratic Programming (QP) optimization problem at each sampling step. This has slowed down its diffusion in embedded applications, in which fast sampling rates are paired with scarce computational capabilities, as in the automotive and aerospace industries. This thesis proposes optimization techniques and controller formulations specifically tailored to embedded applications. First, fixed-point implementations of Dual Gradient Projection (DGP) and Proximal Newton methods are introduced. A detailed convergence analysis in the presence of round-off errors and algorithm optimizations are presented, and concrete guidelines for selecting the minimum number of fractional and integer bits that guarantee convergence are provided. Moreover, extensive simulations and experimental tests on embedded devices, supported by general-purpose processing units and FPGAs, are reported to demonstrate the feasibility of the proposed solvers and to expose the benefits of fixed-point arithmetic in terms of computation speed and memory requirements. Finally, an embedded MPC application to spacecraft attitude control with reaction wheel actuators is presented. A lightweight controller with specific optimizations is developed, and its good performance is evaluated in simulations. Moreover, special MPC formulations that address the problem of reaction wheel desaturation are discussed, where the constraint-handling property of MPC is exploited to achieve desaturation without the need for fuel-consuming devices such as thrusters.
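To make the solver family concrete, here is a floating-point sketch of a dual gradient projection iteration for a small inequality-constrained QP. The fixed-point analysis, bit-width selection and embedded implementation studied in the thesis are not reproduced, and the toy problem data are assumptions.

```python
import numpy as np

# Dual Gradient Projection for:  min 0.5 z'Hz + f'z  subject to  Gz <= c
H = np.diag([2.0, 2.0])
f = np.array([-2.0, -5.0])
G = np.eye(2)
c = np.array([0.5, 1.0])                       # constraints: z1 <= 0.5, z2 <= 1.0

Hinv = np.linalg.inv(H)
L = np.linalg.norm(G @ Hinv @ G.T, 2)          # Lipschitz constant of the dual gradient
y = np.zeros(len(c))                           # dual variables (multipliers)
for _ in range(200):
    z = -Hinv @ (f + G.T @ y)                  # primal minimiser for the current duals
    y = np.maximum(0.0, y + (G @ z - c) / L)   # projected dual gradient step
print("primal solution:", -Hinv @ (f + G.T @ y))   # -> approximately [0.5, 1.0]
```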
APA, Harvard, Vancouver, ISO, and other styles
41

Mudalige, Gihan Ravideva. "Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3773/.

Full text
Abstract:
Pipelined wavefront computations are a ubiquitous class of high-performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement platforms are chosen best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication and synchronisation patterns, and as a result there exist a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high-performance computing system, which often introduces a previously unexplored architectural trait, requiring previous performance models to be rewritten and re-evaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies, in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high-performance computing systems. The performance model is based on the LogGP parameterisation and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations of each application and insights into alternative wavefront application designs. The models are applied to three industry-strength wavefront codes and are validated on several systems including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high-performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for the performance engineering of high-performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configuration and resulting performance; (2) evaluating hardware platform issues, including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
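To give a flavour of what a LogGP-parameterised wavefront model looks like, the sketch below estimates sweep time as pipeline fill plus per-processor work, with one neighbour message per tile costed in LogGP terms. The formula and parameter values are simplified assumptions, not the thesis's reusable model equations.

```python
# Simplified LogGP-style runtime estimate for a pipelined wavefront sweep on a
# Px-by-Py logical processor grid. Wg = per-tile compute time; L, o, G are
# assumed LogGP latency, overhead and per-byte gap; all values are illustrative.
def wavefront_time(Px, Py, tiles_per_proc, Wg, L, o, G, msg_bytes):
    comm = L + 2 * o + G * msg_bytes          # cost of one neighbour message
    fill = (Px + Py - 2) * (Wg + comm)        # pipeline fill along the grid diagonal
    steady = tiles_per_proc * (Wg + comm)     # each processor's own tiles
    return fill + steady

print(wavefront_time(Px=16, Py=16, tiles_per_proc=64,
                     Wg=1.2e-4, L=5e-6, o=2e-6, G=5e-10, msg_bytes=4096))
```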
APA, Harvard, Vancouver, ISO, and other styles
42

Taylor, Pamela J. "The biosocial model of personality : application to the prediction of alcohol consumption /." St. Lucia, Qld, 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe17960.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lusby, Fiona. "Statistical models for exon and intron content, with an application to genetic structure prediction." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0006/MQ59836.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Qazi, Imtnan-Ul-Haque. "Luminance-Chrominance linear prediction models for color textures: An application to satellite image segmentation." Phd thesis, Université de Poitiers, 2010. http://tel.archives-ouvertes.fr/tel-00574090.

Full text
Abstract:
This thesis details the design, development and analysis of a new texture characterization tool based on complex linear prediction models in perceptual colour spaces that separate luminous intensity from the chromatic component. Causal and non-causal 2-D multichannel models are used for the simultaneous estimation of the power spectral densities of a two-channel image, the first channel containing the real-valued intensity and the second the complex-valued chromatic component. The good bias and variance properties of these estimates, together with an appropriate distance between two spectra, ensure the robustness and relevance of the approach for texture classification. A measure of the interference between the intensity and the chromatic component, derived from the spectral analysis, is introduced in order to compare the transformations associated with different colour spaces. Experimental texture classification results on different test sets and in different colour spaces (RGB, IHLS and L*a*b*) are presented and discussed. These results show that the spatial structure associated with the chromatic component of a colour texture is better characterized in the L*a*b* space, which therefore yields the best results when classifying textures from their spatial structure with linear prediction models. A Bayesian method for segmenting colour texture images has also been developed from the multichannel linear prediction error. The main contribution of the method lies in the proposal of robust parametric approximations of the distribution of the multichannel linear prediction error: the Wishart distribution and a multimodal approximation based on multivariate Gaussian mixture models. Another original aspect of the approach is the fusion of an energy term on region size with the Potts model energy in order to model the class label field with a random field having a Gibbs distribution. This random field model is used to spatially regularize an initial label field obtained from the different approximations of the prediction error distribution. Experimental segmentation results on synthetic colour texture images and on high-resolution QuickBird and IKONOS satellite images validate the application of the method to strongly textured images. The results also show the benefit of using the proposed approximations of the prediction error distribution, together with the label field model improved by the energy term penalizing small regions. Segmentations performed in the L*a*b* space are better than those obtained in the other colour spaces (RGB and IHLS), again demonstrating the relevance of characterizing colour textures by complex multichannel linear prediction in this colour space.
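The two-channel representation described above is easy to illustrate: a real-valued luminance channel and a complex-valued chrominance channel built from the a* and b* components. The snippet below sketches this construction with scikit-image, using a bundled sample image as a stand-in for the satellite data; the subsequent 2-D complex linear prediction modelling is not reproduced.

```python
import numpy as np
from skimage import color, data

# Build the luminance/chrominance pair used by the complex multichannel
# linear-prediction features: L* (real) and a* + i b* (complex) in CIE L*a*b*.
rgb = data.astronaut() / 255.0                # sample RGB image, floats in [0, 1]
lab = color.rgb2lab(rgb)
luminance = lab[..., 0]                       # real-valued intensity channel
chrominance = lab[..., 1] + 1j * lab[..., 2]  # complex-valued chromatic channel
print(luminance.shape, chrominance.dtype)
```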
APA, Harvard, Vancouver, ISO, and other styles
45

Howarth, M. J. "The application of advanced computer models to the prediction of sound in enclosed spaces." Thesis, University of Salford, 1998. http://usir.salford.ac.uk/14677/.

Full text
Abstract:
Computer modelling of acoustics in enclosures has developed into various forms, none of which has yet demonstrated 100% accuracy. This thesis therefore details a study of room acoustic computer modelling. It highlights weaknesses in existing modelling techniques and describes the development and subsequent verification of an improved modelling technique. The study finds that, for accurate prediction of many common room acoustic parameters, diffuse reflections should be accounted for in the modelling of all reflection orders. However, many of the problems encountered in existing techniques are found to be caused by the way these diffuse reflections are modelled. An improved modelling technique, referred to as a 'Hybrid-Markov' method, is proposed and developed that combines a conventional hybrid method with a radiant-exchange process to model diffuse reflections. Initial verification of the new modelling technique results in similar overall accuracy to existing modelling techniques but solves many of the specific problems discovered. It therefore provides a flexible and robust framework for the future development of computer prediction of sound in enclosed spaces.
APA, Harvard, Vancouver, ISO, and other styles
46

Wells, James Z. "Application of Path Prediction Techniques for Unmanned Aerial System Operations in the National Airspace." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin161710909594714.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

González, Marcos Tulio Amarís. "Performance prediction of application executed on GPUs using a simple analytical model and machine learning techniques." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-06092018-213258/.

Full text
Abstract:
The parallel and distributed platforms of High Performance Computing available today have become more and more heterogeneous (CPUs, GPUs, FPGAs, etc.). Graphics Processing Units (GPUs) are specialized co-processors that accelerate and improve the performance of parallel vector operations. GPUs have a high degree of parallelism, can execute thousands or millions of threads concurrently and hide the latency of the scheduler. GPUs have a deep memory hierarchy with different memory types as well as different configurations of these memories. Performance prediction of applications executed on these devices is a great challenge and is essential for the efficient use of resources in machines with these co-processors. There are different approaches for these predictions, such as analytical modeling and machine learning techniques. In this thesis, we present an analysis and characterization of the performance of applications executed on GPUs. We propose a simple and intuitive BSP-based model for predicting CUDA application execution times on different GPUs. The model is based on the number of computations and memory accesses of the GPU, with additional information on cache usage obtained from profiling. We also compare three different Machine Learning (ML) approaches, Linear Regression, Support Vector Machines and Random Forests, with the BSP-based analytical model. This comparison is made in two contexts: first, the data inputs or features for the ML techniques are the same as in the analytical model, and, second, a feature extraction process based on correlation analysis and hierarchical clustering is used. We show that GPU applications that scale regularly can be predicted with simple analytical models and an adjusting parameter. This parameter can be used to predict these applications on other GPUs. We also demonstrate that ML approaches provide reasonable predictions for different cases, and that ML techniques require no detailed knowledge of application code, hardware characteristics or explicit modeling. Consequently, whenever a large data set with information about similar applications is available or can be created, ML techniques can be useful for deploying automated on-line performance prediction for scheduling applications on heterogeneous architectures with GPUs.
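The machine-learning side of this comparison can be sketched with a few lines of scikit-learn: a Random Forest regressor trained to map kernel features to execution time. The synthetic features below (flop count, memory accesses, cache hit rate) are assumed stand-ins for the profiling metrics used in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-kernel features and measured execution times.
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 4))                      # [flops, gmem, smem, cache_hit]
y = 2.0 * X[:, 0] + 5.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(0, 0.05, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out kernels:", round(model.score(X_te, y_te), 3))
```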
APA, Harvard, Vancouver, ISO, and other styles
48

Feng, Chih-Wei. "Prediction of long-term creep behavior of epoxy adhesives for structural applications." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2560.

Full text
Abstract:
The mechanical properties of polymeric materials change over time, especially when the materials are subjected to long-term loading scenarios. To predict the time-dependent viscoelastic behavior of epoxy-based adhesive materials, it is imperative that reliable accelerated tests be developed to determine their long-term performance under different exposure environments. A neat epoxy resin system and a commercial structural adhesive system for bonding aluminum substrates are investigated. A series of moisture diffusion tests was performed for more than three months in order to understand the influence of absorbed moisture on creep behavior. Material properties such as elastic modulus and glass transition temperature are also studied under different environmental conditions. The time-temperature superposition method produces a master curve allowing the long-term creep compliance to be estimated. The physics-based Coupling model is found to fit the long-term creep master curve well. The equivalence of the temperature and moisture effects on the creep compliance of the epoxy adhesives is also addressed. Finally, a methodology for predicting the long-term creep behavior of epoxy adhesives is proposed.
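Time-temperature superposition, mentioned above, can be illustrated with a short sketch in which compliance curves measured at several temperatures are shifted along the log-time axis by a WLF-type shift factor to build a master curve. The WLF constants and temperatures below are generic textbook assumptions, not the values fitted for these adhesives.

```python
import numpy as np

def wlf_shift(T, T_ref=25.0, C1=17.4, C2=51.6):
    """log10 of the WLF shift factor a_T at temperature T (assumed constants)."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

t = np.logspace(0, 4, 50)                           # test time, seconds
for T in (25.0, 40.0, 55.0):
    a_T = 10 ** wlf_shift(T)
    reduced_time = t / a_T                          # shift onto the master curve
    print(f"T={T:4.0f} C  log10(a_T)={wlf_shift(T):6.2f}  "
          f"reduced-time span: {reduced_time[0]:.2e}..{reduced_time[-1]:.2e} s")
```

Data collected at higher temperatures thus map to longer reduced times, which is what allows short accelerated tests to estimate long-term creep compliance.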
APA, Harvard, Vancouver, ISO, and other styles
49

Roundy, Joshua K. "Uncertainty Analysis for Land Surface Model Predictions: Application to the Simple Biosphere 3 and Noah Models at Tropical and Semiarid Locations." DigitalCommons@USU, 2009. https://digitalcommons.usu.edu/etd/404.

Full text
Abstract:
Uncertainty in model predictions is associated with data, parameters, and model structure. The estimation of these contributions to uncertainty is a critical issue in hydrology. Using a variety of single and multiple criterion methods for sensitivity analysis and inverse modeling, the behaviors of two state-of-the-art land surface models, the Simple Biosphere Model 3 and Noah model, are analyzed. The different algorithms used for sensitivity and inverse modeling are analyzed and compared along with the performance of the land surface models. Generalized sensitivity and variance methods are used for the sensitivity analysis, including the Multi-Objective Generalized Sensitivity Analysis, the Extended Fourier Amplitude Sensitivity Test, and the method of Sobol. The methods used for the parameter uncertainty estimation are based on Markov Chain Monte Carlo simulations with Metropolis type algorithms and include A Multi-algorithm Genetically Adaptive Multi-objective algorithm, Differential Evolution Adaptive Metropolis, the Shuffled Complex Evolution Metropolis, and the Multi-objective Shuffled Complex Evolution Metropolis algorithms. The analysis focuses on the behavior of land surface model predictions for sensible heat, latent heat, and carbon fluxes at the surface. This is done using data from hydrometeorological towers collected at several locations within the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain (Amazon tropical forest) and at locations in Arizona (semiarid grass and shrub-land). The influence that the specific location exerts upon the model simulation is also analyzed. In addition, the Santarém kilometer 67 site located in the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain is further analyzed by using datasets with different levels of quality control for evaluating the resulting effects on the performance of the individual models. The method of Sobol was shown to give the best estimates of sensitivity for the variance-based algorithms and tended to be conservative in terms of assigning parameter sensitivity, while the multi-objective generalized sensitivity algorithm gave a more liberal number of sensitive parameters. For the optimization, the Multi-algorithm Genetically Adaptive Multi-objective algorithm consistently resulted in the smallest overall error; however all other algorithms gave similar results. Furthermore the Simple Biosphere Model 3 provided better estimates of the latent heat and the Noah model gave better estimates of the sensible heat.
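The variance-based sensitivity analysis named above (the method of Sobol) can be sketched with a pure-NumPy first-order estimator on a toy additive function; the land surface models, parameter ranges and flux outputs analysed in the thesis are of course far richer.

```python
import numpy as np

def model(x):                           # toy model: x2 matters most, x3 not at all
    return x[:, 0] + 3.0 * x[:, 1] + 0.0 * x[:, 2]

rng = np.random.default_rng(0)
N, d = 20000, 3
A, B = rng.uniform(size=(N, d)), rng.uniform(size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):                      # Saltelli-style first-order Sobol estimator
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    Si = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S{i+1} ~ {Si:.3f}")         # expect roughly 0.1, 0.9, 0.0
```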
APA, Harvard, Vancouver, ISO, and other styles
50

Sun, Jianchen. "Sustainable road safety : development, transference and application of community-based macro-level collision prediction models." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/9312.

Full text
Abstract:
The enormous social and economic burden imposed on society by road collision injuries is a major global problem. As a result, it is of ongoing interest to governments to discover ways of reducing this burden. The traditional engineering approach has been to address road safety in reaction to existing collision histories. While this approach has proven very successful, road safety authorities are also pursuing more proactive engineering approaches. Rather than working reactively to improve the safety of existing facilities, the proactive engineering approach focuses on improving the safety of planned facilities. Proactive programs rely heavily on reliable empirical techniques, including macro-level collision prediction models (CPMs). The three objectives of this research were to: 1. Develop community-based macro-level CPMs for the Capital Regional District (CRD) in BC, Canada and the City of Ottawa in Ontario, Canada. 2. Perform a road safety evaluation of the Canada Mortgage and Housing Corporation's (CMHC) recently promoted Fused Grid model for sustainable subdivision development. 3. Use these models to conduct a black spot analysis of each region. Results were in line with intuitive expectations for each objective. First, following the recommended development and transferability guidelines, 64 community-based macro-level CPMs were successfully developed for the CRD and the City of Ottawa. These models can be used by community planners and engineers as a decision-support tool in proactive road safety improvement programs. Second, the safety level of five road network patterns was evaluated using these macro-level CPMs. It was concluded that the 3-way offset and Fused Grid road networks were the safest overall, followed by the cul-de-sac and Dutch SRS road networks; the grid network was the least safe road pattern. Finally, black spot studies were also conducted, and four black spots were selected for in-depth analysis, diagnosing safety problems and evaluating possible remedies. The results of this research demonstrate the potential of community-based macro-level CPMs as new empirical tools for road safety planners and engineers to conduct proactive analyses, promote more sustainable development patterns, and reduce the road collision burden on communities worldwide.
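Macro-level CPMs in this literature are commonly fitted as negative binomial generalized linear models relating zone-level collision counts to exposure and network covariates; the sketch below shows that common form with statsmodels on synthetic zone data. The covariates, coefficients and dispersion value are assumptions, not the CRD or Ottawa models.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic zone data standing in for community-level exposure and road covariates.
rng = np.random.default_rng(0)
n = 300
vkt = rng.uniform(1e3, 1e5, n)                   # vehicle-kilometres travelled per zone
intersections = rng.poisson(20, n)               # intersection density proxy
mu = np.exp(-6.0 + 0.7 * np.log(vkt) + 0.02 * intersections)
collisions = rng.negative_binomial(5, 5 / (5 + mu))   # overdispersed counts with mean mu

# Negative binomial GLM: E[collisions] = exp(b0 + b1*ln(VKT) + b2*intersections)
X = sm.add_constant(np.column_stack([np.log(vkt), intersections]))
nb = sm.GLM(collisions, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
print(nb.params)
```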
APA, Harvard, Vancouver, ISO, and other styles
