Dissertations / Theses on the topic 'Predictive modelling'

Consult the top 50 dissertations / theses for your research on the topic 'Predictive modelling.'

1

Welfoot, J. St J. "Predictive modelling of membrane nanofiltration." Thesis, Swansea University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639377.

Abstract:
The main objective of this work was to develop predictive models for nanofiltration (NF) membrane processes. A one-parameter model (pore radius) for uncharged solute rejection has been developed. The good agreement between the proposed model and experimental data confirmed that uncharged solute rejection is well described by continuum models. A two-parameter model (pore radius and membrane charge) for electrolyte rejection has also been developed. Dielectric exclusion was included as an energy barrier to ion partitioning into the pores, the reassessment of which using NaCl rejection at the membrane isoelectric point introduced a third model parameter, the average pore solvent dielectric constant. The predicted membrane charge densities with the three-parameter model were more realistic in magnitude than those from previous models and their variation with concentration for divalent salts was in better agreement with physical models of ion adsorption. Analysis of experimental rejection data with truncated pore size distributions and a variation of viscosity with pore radius resulted in model parameters that represented the average value over all pore sizes. Further, analysis of salt mixtures showed that large experimentally observed negative rejections were very well described with fitted charge densities of similar magnitude to those from single salts. Finite-difference linearisation of the pore concentration gradient greatly simplified the numerical solution of the three-parameter model. The validity of the linearised model was tested both experimentally and theoretically, showing the model to be a powerful tool for characterisation of NF membranes and subsequent prediction of separation performance. Overall, the work presented in this thesis has improved the understanding of the separation mechanisms of NF membranes, especially dielectric exclusion. The developed models are more rigorous than those proposed previously and represent a significant contribution to the field of predictive NF modelling.
2

Kampakis, S. "Predictive modelling of football injuries." Thesis, University College London (University of London), 2016. http://discovery.ucl.ac.uk/1508067/.

Abstract:
The goal of this thesis is to investigate the potential of predictive modelling for football injuries. This work was conducted in close collaboration with Tottenham Hotspur FC (THFC), the PGA European tour and the participation of Wolverhampton Wanderers (WW). Three investigations were conducted:

1. Predicting the recovery time of football injuries using the UEFA injury recordings: The UEFA recordings are a common standard for recording injuries in professional football. For this investigation, three datasets of UEFA injury recordings were available: one from THFC, one from WW and one constructed by merging both. Poisson, negative binomial and ordinal regression were used to model the recovery time after an injury and to assess the significance of various injury-related covariates. Different machine learning algorithms (support vector machines, Gaussian processes, neural networks, random forests, naïve Bayes and k-nearest neighbours) were then used to build a predictive model. The performance of the machine learning models was improved by feature selection, conducted through correlation-based subset feature selection and random forests.

2. Predicting injuries in professional football using exposure records: The relationship between exposure (in training hours and match hours) of professional football athletes and injury incidence was studied. A common problem in football is understanding how the training schedule of an athlete can affect the chance of injury. The task was to predict the number of days a player can train before getting injured. The dataset consisted of the exposure records of professional footballers at Tottenham Hotspur Football Club from the 2012-2013 season. The problem was approached with a Gaussian process model equipped with a dynamic time warping kernel, which allowed the calculation of the similarity of exposure records of different lengths.

3. Predicting intrinsic injury incidence using in-training GPS measurements: A significant percentage of football injuries can be attributed to overtraining and fatigue. GPS data collected during training sessions might provide indicators of fatigue, or might be used to detect very intense training sessions which can lead to overtraining. This investigation used GPS data gathered during training sessions of the first team of THFC to predict whether an injury would take place during a given week. The data consisted of 69 variables in total. Two different binary classification approaches were followed and a variety of algorithms were applied (supervised principal component analysis, random forests, naïve Bayes, support vector machines, Gaussian processes, neural networks, ridge logistic regression and k-nearest neighbours). Supervised principal component analysis showed the best results, while also allowing the extraction of 3 or 4 components that correlate with injury incidence and reduce the total number of variables.

The first investigation contributes the following to the field:
• It provides models based on the UEFA injury recordings, a standard used by many clubs, which makes it easier to replicate and apply the results.
• It investigates which variables seem to be most strongly related to the prediction of recovery after an injury.
• It provides a comparison of models for predicting the time to return to play after injury.
The second investigation contributes the following to the field:
• It provides a model that can be used to predict when the first injury of the season will take place.
• It provides a kernel that can be utilised by a Gaussian process to measure the similarity of training and match schedules, even if the time series involved are of different lengths.
The third investigation contributes the following to the field:
• It provides a model to predict injury in a given week based on GPS data gathered from training sessions.
• It provides components, extracted through supervised principal component analysis, that correlate with injury incidence and can be used to summarise the large number of GPS variables in a parsimonious way.
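As a hedged illustration of the count-regression step in the first investigation above, the following minimal Python sketch compares Poisson and negative binomial fits for recovery time; it is not the author's code, and the dataset and column names are hypothetical.

    # Illustrative only: compare count-regression models for recovery days.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("uefa_injuries.csv")             # hypothetical file
    X = sm.add_constant(df[["age", "injury_grade"]])  # hypothetical covariates
    y = df["recovery_days"]

    poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()

    # A lower AIC suggests the better-fitting count model.
    print(poisson.aic, negbin.aic)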
3

Davis, Luke M. "Predictive modelling of bone ageing." Thesis, University of East Anglia, 2013. https://ueaeprints.uea.ac.uk/45085/.

Abstract:
Bone age assessment (BAA) is a task performed daily by paediatricians in hospitals worldwide. The main reasons for BAA to be performed are: firstly, diagnosis of growth disorders through monitoring skeletal development; secondly, prediction of final adult height; and finally, verification of age claims. Manually predicting bone age from radiographs is a difficult and time consuming task. This thesis investigates bone age assessment and why automating the process will help. A review of previous automated bone age assessment systems is undertaken and we investigate why none of these systems have gained widespread acceptance. We propose a new automated method for bone age assessment, ASMA (Automated Skeletal Maturity Assessment). The basic premise of the approach is to automatically extract descriptive shape features that capture the human expertise in forming bone age estimates. The algorithm consists of the following six modularised stages: hand segmentation; hand segmentation classification; bone segmentation; feature extraction; bone segmentation classification; bone age prediction. We demonstrate that ASMA performs at least as well as other automated systems and that models constructed on just three bones are as accurate at predicting age as expert human assessors using the standard technique. We also investigate the importance of ethnicity and gender in skeletal development. Our conclusion is that the feature based system of separating the image processing from the age modelling is the best approach, since it offers flexibility and transparency, and produces accurate estimates.
4

Kalathenos, Panayiotis. "Predictive modelling of wine spoilage microorganisms." Thesis, University of Reading, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260584.

5

Kougoulos, Eleftherios. "Predictive modelling of organic crystallization processes." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1445645/.

Abstract:
This thesis is concerned with the development of a predictive model for batch cooling suspension pharmaceutical crystallizations, with a focus on product performance. A major challenge involved in the design of industrial pilot plant pharmaceutical crystallizers is to predict the influence of crystallizer geometry, scale and operating conditions on the process behaviour and crystal size distribution (CSD). The design of industrial crystallizers is hindered by the lack of scale-up rules due to the absence of reliable predictive process models. Currently, no reliable predictive or 'dial up a particle size' tool exists for scale-up predictions. The research involves the development of a novel predictive compartmental modelling framework for the scale-up of an organic fine chemical. A new approach of using compartments is developed in order to facilitate scale-up design and process modelling by separating crystallization kinetic and hydrodynamic phenomena. Application of this technique involves determining key process engineering information on a laboratory scale, which is critical for technology transfer, and combining this data with hydrodynamic information on transfer to large scale for predictive scale-up purposes. The key process engineering information required for predictive modelling includes the determination of solubility characteristics, thermodynamic properties and crystallization kinetics of the organic fine chemical. Attenuated Total Reflectance Ultra-Violet (ATR-UV) spectroscopy is used as an 'in-situ' measurement technique to measure solute concentration. A modified continuous Mixed Suspension Mixed Product Removal (MSMPR) crystallizer is designed specifically for innovative drug candidates available in limited quantities to derive steady state crystallization kinetics with minimal influence from hydrodynamic phenomena. Batch attrition experiments were carried out to determine the effects of specific power input on the CSD using Lasentec Focused Beam Reflectance Measurement (FBRM) to monitor the process on-line and to develop an attrition rate model. Computational Fluid Dynamics (CFD) is a simulation tool that is also introduced to provide valuable insight into mixing, heat transfer and hydrodynamic phenomena within agitated batch cooling suspension crystallization vessels, including investigating the effects of scale-up. CFD is used to aid the development of the compartmental modelling framework. The design of the compartmental structure is based on high spatial resolution CFD simulations of internal flow, mixing and heat transfer within crystallizers upon scale-up. The great advantage of using a compartmental modelling framework is that the spatial resolution is reduced and the full population balance with kinetic models can be implemented. The detailed compartmental framework is based on the overall flow pattern, local energy dissipation rate, solids concentration and temperature distribution obtained from CFD. The number, location, cross-sectional area and volume of compartments are determined from CFD results based on the physical crystallizer dimensions. The compartments are selected such that they have approximately uniform temperature, local energy dissipation and solids concentration. Each dynamic compartment has a mass, concentration, enthalpy and population balance combined with MSMPR crystallization kinetic models. The compartments are therefore well mixed and physically connected via interconnecting flows determined from CFD.
A general process modelling tool, gPROMS (Process Systems Enterprise), which supports both steady-state and dynamic simulations, is used to solve the sets of ordinary differential and algebraic equations in each compartment. A single-compartment modelling approach is used initially, as a first approximation, without taking into account local variations in process conditions. Predictions on a laboratory scale for an MSMPR and a batch cooling crystallizer were satisfactory, but upon scale-up the effects of mixing and hydrodynamics were not taken into account and the predictions therefore became less reliable. A compartmentalisation approach can be introduced into gPROMS whereby the compartments are modelled as individual units with input and output streams using CFD hydrodynamic information.
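The compartmental idea described above can be sketched as a small network of well-mixed units exchanging material through fixed interconnecting flows (in the thesis these flows come from CFD). A minimal Python illustration, with invented volumes and flow rates:

    import numpy as np
    from scipy.integrate import solve_ivp

    V = np.array([1.0, 0.5, 0.5])          # compartment volumes, m^3 (assumed)
    Q = np.array([[0.0, 0.2, 0.1],         # Q[i, j]: flow i -> j, m^3/s (assumed)
                  [0.2, 0.0, 0.1],
                  [0.1, 0.1, 0.0]])

    def balance(t, c):
        # Solute balance per compartment: inflows minus outflows.
        inflow = Q.T @ c                   # sum_j Q[j, i] * c[j]
        outflow = Q.sum(axis=1) * c        # c[i] * sum_j Q[i, j]
        return (inflow - outflow) / V

    c0 = np.array([1.0, 0.0, 0.0])         # initial concentrations
    sol = solve_ivp(balance, (0.0, 60.0), c0)
    print(sol.y[:, -1])                    # concentrations approach uniformity

In the full framework each compartment would also carry enthalpy and population balances with MSMPR-derived kinetics; the sketch shows only the interconnecting-flow mass balance.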
6

Valvatne, Per Henrik. "Predictive pore-scale modelling of multiphase flow." Thesis, Imperial College London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405885.

7

Robertson, Mark Peter. "Predictive modelling of species' potential geographical distributions." Thesis, Rhodes University, 2003. http://hdl.handle.net/10962/d1007189.

Abstract:
Models that are used for predicting species' potential distributions are important tools that have found applications in a number of areas of applied ecology. The majority of these models can be classified as correlative, as they rely on strong, often indirect, links between species distribution records and environmental predictor variables to make predictions. Correlative models are an alternative to more complex mechanistic models that attempt to simulate the mechanisms considered to underlie the observed correlations with environmental attributes. This study explores the influence of the type and quality of the data used to calibrate correlative models. In terms of data type, the most popular techniques in use are group discrimination techniques, those that use both presence and absence locality data to make predictions. However, for many organisms absence data are either not available or are considered to be unreliable. As the available range of profile techniques (those using presence-only data) appeared to be limited, new profile techniques were investigated and evaluated. A new profile modelling technique based on fuzzy classification (the Fuzzy Envelope Model) was developed and implemented. A second profile technique based on Principal Components Analysis was implemented and evaluated. Based on quantitative model evaluation tests, both of these techniques performed well and show considerable promise. In terms of data quality, the effects on model performance of false absence records, the number of locality records (sample size) and the proportion of localities representing species presence (prevalence) in samples were investigated for logistic regression distribution models. Sample size and prevalence both had a significant effect on model performance. False absence records had a significant influence on model performance, which was affected by sample size. A quantitative comparison of the performance of selected profile models and group discrimination modelling techniques suggests that different techniques may be more successful for predicting distributions for particular species or types of organism than others. The results also suggest that several different model design/sample size combinations are capable of making predictions that will on average not differ significantly in performance for a particular species. A further quantitative comparison among modelling techniques suggests that correlative techniques can perform as well as simple mechanistic techniques for predicting potential distributions.
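The envelope idea behind a presence-only profile model can be sketched briefly: build a fuzzy membership per environmental variable from the values observed at presence localities, and combine memberships with a fuzzy AND. This is an illustrative reconstruction, not the thesis's Fuzzy Envelope Model implementation; the percentile breakpoints are assumptions.

    import numpy as np

    def trapezoid(x, p5, p25, p75, p95):
        # 1 inside [p25, p75], 0 outside [p5, p95], linear ramps between.
        rise = (x - p5) / (p25 - p5 + 1e-9)
        fall = (p95 - x) / (p95 - p75 + 1e-9)
        return np.clip(np.minimum(rise, fall), 0.0, 1.0)

    def suitability(env_at_presences, env_grid):
        # One column per environmental variable in both arrays.
        memberships = []
        for k in range(env_grid.shape[1]):
            p5, p25, p75, p95 = np.percentile(env_at_presences[:, k],
                                              [5, 25, 75, 95])
            memberships.append(trapezoid(env_grid[:, k], p5, p25, p75, p95))
        return np.min(memberships, axis=0)   # fuzzy AND across variables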
8

Lindström, Martin. "Predictive Modelling of Heavy Metals in Urban Lakes." Doctoral thesis, Uppsala University, Department of Earth Sciences, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-530.

Abstract:
Heavy metals are well-known environmental pollutants. In this thesis predictive models for heavy metals in urban lakes are discussed and new models presented. The base of predictive modelling is empirical data from field investigations of many ecosystems covering a wide range of ecosystem characteristics. Predictive models focus on the variability among lakes and the processes controlling the major metal fluxes.

Sediment and water data for this study were collected from ten small lakes in the Stockholm area, the Eastern parts of Lake Mälaren, the innermost areas of the Stockholm archipelago and from literature studies. By correlating calculated metal loads to the land use of the catchment areas (describing urban and natural land use), the influences of the local urban status on the metal load could be evaluated. Copper was most influenced by the urban status and less by the regional background. The opposite pattern was shown for cadmium, nickel and zinc (and mercury). Lead and chromium were in-between these groups.

It was shown that the metal load from the City of Stockholm is considerable. There is a 5-fold increase in sediment deposition of cadmium, copper, mercury and lead in the central areas of Stockholm compared to surrounding areas.

The results also include a model for the lake characteristic concentration of suspended particulate matter (SPM), and new methods for empirical model testing. The results indicate that the traditional distribution (or partition) coefficient Kd (L kg⁻¹) is unsuitable to use in modelling of the particle association of metals. Instead, the particulate fraction, PF (dimensionless), defined as the ratio of the particulate-associated concentration to the total concentration, is recommended. Kd is affected by spurious correlations due to the definition of Kd as a ratio including SPM and also secondary spurious correlations with many variables correlated to SPM. It was also shown that Kd has a larger inherent within-system variability than PF. This is important in modelling.
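In symbols (our notation, not necessarily the thesis's), with Cp the particulate-bound and Cd the dissolved concentration:

    K_d = \frac{C_p / \mathrm{SPM}}{C_d}, \qquad PF = \frac{C_p}{C_p + C_d},
    \quad\text{so}\quad K_d = \frac{PF}{(1 - PF)\,\mathrm{SPM}}

Because SPM enters the definition of Kd directly, any error or natural variation in SPM propagates into Kd, which is the source of the spurious correlations noted above; PF avoids this.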

9

Lindström, Martin. "Predictive modelling of heavy metals in urban lakes /." Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2000. http://publications.uu.se/theses/91-554-4854-2/.

10

Yeh, Der-Ming. "Manipulation and predictive modelling of flowering in cineraria." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.309594.

11

Gilholm, Patricia. "Methods for personalised predictive modelling of developmental milestones for children with disabilities." Thesis, Queensland University of Technology, 2021. https://eprints.qut.edu.au/212038/1/Patricia%20Gilholm%20Thesis.pdf.

Abstract:
This thesis developed methods for personalised modelling of developmental milestones for children with disabilities. Using data containing 348 milestone measurements from a small sample of children with a diverse range of disabilities, methods were developed to create a comprehensive personalised developmental profile for each child. These profiles incorporate multiple developmental domains and are designed to be updated in real time so that parents can be provided with feedback as their child develops. The outputs of the methods developed in this thesis will be used to help inform decision-making and assist with personalised intervention planning at the Developing Foundation.
12

Till, Robert John. "Predictive behavioural models in credit scoring and retail banking." Thesis, Imperial College London, 2002. http://hdl.handle.net/10044/1/7984.

13

Lonsdale, Jack Henry. "Predictive modelling and uncertainty quantification of UK forest growth." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/16202.

Abstract:
Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single-age monoculture stands. Forest strategies for Scotland, England and Wales all include efforts to achieve further afforestation, with the aim of providing a multi-functional forest with a broad range of benefits. Due to the time scale involved in forestry, accurate forecasts of stand productivity (along with clearly defined uncertainties) are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project, model comparison, Bayesian calibration and data assimilation methods were all used to improve forecasts, and the understanding of the uncertainty therein, for the two most important conifers in UK forestry.

Three different forest growth models were compared in simulating the growth of Scots pine: a yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model. Predictions were compared graphically over the typical productivity range for Scots pine in the UK, and the strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands, where the yield table predicted a much larger range than the other two models. Future advances in data availability and computing power should allow greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions which extend to non-standard management prescriptions and estimates of early growth and yield.

A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. Bayesian calibration allows both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison, and a complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future.

Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but that combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty in all of the state variables was reduced following the second data assimilation. However, errors in stand delineation and estimated stand yield class may have made the observational uncertainty greater, reducing the efficacy of the method for lowering overall uncertainty.
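The Ensemble Kalman Filter step used in the final part can be sketched compactly: the forecast ensemble supplies a sample covariance, which sets how strongly the LiDAR observation corrects each state variable. A minimal Python illustration with assumed dimensions and observation operator:

    import numpy as np

    def enkf_update(ensemble, H, obs, obs_var, rng):
        # ensemble: (n_state, n_members) forecast ensemble.
        n_state, n_members = ensemble.shape
        A = ensemble - ensemble.mean(axis=1, keepdims=True)   # anomalies
        P = A @ A.T / (n_members - 1)                         # sample covariance
        R = np.eye(len(obs)) * obs_var
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
        # Perturbed observations keep the analysis ensemble spread consistent.
        pert = obs[:, None] + rng.normal(0.0, obs_var**0.5, (len(obs), n_members))
        return ensemble + K @ (pert - H @ ensemble)

    rng = np.random.default_rng(1)
    ens = rng.normal(20.0, 2.0, (2, 50))    # e.g. top height and basal area (toy)
    H = np.array([[1.0, 0.0]])              # LiDAR observes height only (assumed)
    print(enkf_update(ens, H, np.array([22.0]), 1.0, rng).mean(axis=1))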
14

Wang, Jinhuo. "Predictive modelling and experimental measurement of composite forming behaviour." Thesis, University of Nottingham, 2008. http://eprints.nottingham.ac.uk/10602/.

Abstract:
Optimised design of textile composite structures based on computer simulation techniques requires an understanding of the deformation behaviour during forming of 3-dimensional double-curvature components. Purely predictive material models are highly desirable, as they facilitate an optimised design scheme and significantly reduce the time and cost of design-stage activities such as experimental characterisation. In-plane shear and out-of-plane bending are usually thought to be the key forming mechanisms; this thesis is therefore concerned with studies of shear and bending behaviour through experimental characterisation and theoretical modelling. Micromechanical interaction between fibre and matrix offers a fundamental understanding of deformation mechanisms at the micro-scale, leading to the development of composite viscosity models as input to the shear and bending models. The composite viscosity models were developed based on rheological behaviour during the movement of fibres, and validation was performed using experimental results collected from the literature. A novel characterisation method for measuring the bending behaviour, by means of a large-displacement buckling test, was attempted due to some significant advantages over other methods. A bending model was also developed for unidirectional composites, but experimental validation suggests further study may be required for woven composites. The shear behaviour was characterised using a picture frame test for viscous polymer composites. To obtain reliable experimental data, several improvements to the characterisation method were made. The experimental results were then used to validate a shear model, suggesting that further improvement is required in terms of weave patterns, and rate and temperature dependence.
15

Jayakumar, Jayanthi. "Evaluation of predictive computational modelling in biologic formulation development." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112488.

Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, 2017.
Computational modelling has completely redefined the experimentation process in many industries, allowing large sets of design concepts to be tested quickly and cheaply very early in the innovation process. Harnessing the power of computational modelling for protein drug formulation has numerous, currently unrealized, benefits. This project aims to be the first step in the development of a high throughput predictive computational model to screen for excipients that would decrease protein aggregation in solution and thus increase its stability and enable clinical effectiveness. Protein drug formulation currently relies heavily on empirical evidence from wet-lab experiments and personal experience. During the biologic drug development process, proteins that target specific disease pathways are identified, developed, isolated, and purified. Scientists then conduct a series of wet-lab experiments to identify the optimal formulation that will allow the protein to be used as a drug therapy. A critical part of formulation development is the identification of inactive ingredients called excipients that perform various important functions including prevention of protein aggregation. Despite their critical role in enabling proteins to be effective therapies, very little is understood about excipient-protein interaction. Furthermore, often a limited set of compounds are tested for their use as excipients since wet-lab experiments are expensive and time consuming. This project accomplishes the following goals:
• Identification of databases of compounds that could be used as excipients in biologic formulation.
• Development of a high throughput method to computationally model a target protein and 247 potential excipients.
• Evaluation of the potential relationship between computational output and wet-lab results, based on experimentation with 32 of the 247 excipients.
• Recommendations on next steps, including feedback on the types of proteins and excipients to be tested to validate the method developed in this project.
16

Abdelhamid, S. "Respiratory motion modelling and predictive tracking for adaptive radiotherapy." Thesis, Coventry University, 2010. http://curve.coventry.ac.uk/open/items/f135cb12-e9f9-1e4f-9c57-6de2fc378069/1.

Abstract:
External beam radiation therapy (EBRT) is the most common form of radiation therapy (RT); it uses controlled energy sources to eradicate a predefined tumour volume, known as the planning target volume (PTV), whilst at the same time attempting to minimise the dose delivered to the surrounding healthy tissues. Tumours in the thoracic and abdominal regions are susceptible to motion caused mainly by patient respiration and movement that may occur during treatment preparation and delivery. Usually, an adaptive approach termed adaptive radiation therapy (ART) is considered, which involves feedback from imaging devices to detect organ/surrogate motion. The feasibility of such techniques is subject to two main problems: first, the exact position of the tumour has to be estimated/detected in real time; and second, the delay that arises from tumour position acquisition and motion-tracking compensation has to be accounted for. The research work described in this thesis is part of the European project entitled 'Methods and advanced equipment for simulation and treatment in radiation oncology' (MAESTRO), see Appendix A. The thesis presents both theoretical and experimental work to model and predict respiratory surrogate motion. Based on widely investigated clinical internal and external respiratory surrogate motion data, two new approaches to modelling respiratory surrogate motion were developed. The first considers the lung as a bilinear model that replicates the motion in response to a virtual input signal, which can be seen as a signal generated by the nervous system. This model, together with a statistical model of the respiratory period and duty cycle, was used to generate a set of realistic respiratory data of varying difficulty; the aim was to overcome the lack of test data available to researchers for evaluating their algorithms. The second approach was based on an online polynomial function that was found to adequately replicate the breathing cycles of regular and irregular data, using the same number of parameters as a benchmark sinusoidal model.
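The online polynomial approach lends itself to a very short sketch: fit a low-order polynomial over a sliding window of recent samples and extrapolate ahead by the system latency. Window length, order and horizon below are illustrative choices, not the thesis's values.

    import numpy as np

    def predict_ahead(t, y, horizon, window=20, order=3):
        # Fit on the most recent `window` samples only (online use).
        coeffs = np.polyfit(t[-window:], y[-window:], order)
        return np.polyval(coeffs, t[-1] + horizon)

    t = np.linspace(0.0, 10.0, 200)                # seconds
    y = np.sin(2 * np.pi * 0.25 * t)               # ~4 s breathing cycle (toy)
    print(predict_ahead(t, y, horizon=0.2))        # 200 ms ahead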
17

Greenstein, Stanley. "Our Humanity Exposed : Predictive Modelling in a Legal Context." Doctoral thesis, Stockholms universitet, Juridiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-141657.

Abstract:
This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections, and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour. Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly due mainly to the fact that individuals are spending an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits. Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. Predictive models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content people see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy. This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). Considering the shortcomings of traditional legal instruments, a strategy entitled 'empowerment' is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
18

Ruci, Xhesika. "Capacity Management in Hyper-Scale Datacenters using Predictive Modelling." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-72487.

Abstract:
Big data applications have become increasingly popular with the emergence of cloud computing and the explosion of artificial intelligence. The increasing adoption of data-hungry machines and services is driving the need for more power to keep the datacenters of the world running. It has become crucial for large IT companies such as Google, Facebook and Amazon to monitor the energy efficiency of their datacenter facilities and take action to optimise these heavy consumers of electricity. This master thesis proposes several predictive models to forecast PUE (Power Usage Effectiveness), regarded as the de facto industry metric for measuring a datacenter's power efficiency. This approach is a novel capacity management technique to predict and monitor the environment in order to prevent future disastrous events, which are strictly unacceptable in the datacenter business.
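For reference, PUE is defined as total facility energy divided by IT equipment energy, so values approach 1.0 as overhead (cooling, distribution losses) shrinks. One plausible baseline of the kind such work compares against, sketched with made-up numbers rather than the thesis's models, is autoregression on lagged PUE values:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    pue = np.array([1.12, 1.13, 1.11, 1.15, 1.14, 1.16,
                    1.13, 1.12, 1.14, 1.15, 1.17, 1.16])  # hourly PUE (invented)
    lags = 3
    X = np.array([pue[i:i + lags] for i in range(len(pue) - lags)])
    y = pue[lags:]

    model = LinearRegression().fit(X, y)
    print(model.predict(pue[-lags:][None]))        # next-step PUE forecast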
19

Ossai, Chinedu Ishiodu. "Predictive modelling and reliability analysis of aged, corroded pipelines." Thesis, Curtin University, 2016. http://hdl.handle.net/20.500.11937/2561.

Abstract:
Pipeline corrosion is a major challenge facing many oil and gas industries today because of the enormous downtime associated with corrosion-related failures. This research utilized field data to model the reliability of aged, corroded pipelines via probabilistic modelling techniques that included a Markov decision process. The models developed in this work have been used to estimate the remaining useful life of corroded pipelines and can potentially enhance the integrity of future operational pipelines.
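A Markov view of corrosion growth can be sketched in a few lines: defect depth is discretised into states, the deepest state is absorbing ('failure'), and the expected number of steps to absorption gives a remaining-useful-life estimate. The transition matrix below is invented for illustration; the thesis's models were fitted to field data.

    import numpy as np

    P = np.array([[0.90, 0.10, 0.00, 0.00],   # shallow defect
                  [0.00, 0.85, 0.15, 0.00],   # moderate
                  [0.00, 0.00, 0.80, 0.20],   # severe
                  [0.00, 0.00, 0.00, 1.00]])  # failure (absorbing)

    # Expected hitting times t = (I - Q)^-1 * 1 over the transient states Q.
    Q = P[:3, :3]
    t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
    print(t)   # expected inspection intervals to failure from each state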
20

Rocks-Macqueen, James Douglas. "Agent based predictive models in archaeology." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/21013.

Abstract:
For over 40 years archaeologists have been using predictive modelling to locate archaeological sites. While great strides have been made in the theory and methods of site predictive modelling, there are still unresolved issues such as a lack of theory, poor data, biased datasets, and poor accuracy and precision in the models. This thesis attempts to address the problems of poor model performance and the lack of theory-driven models through the development of a new method for predictive modelling: agent-based modelling. Applying GIS and agent-based modelling tools to a project area in southeastern New Mexico, this new methodology explored possible behaviours that could result in site formation, such as access to water resources, travel routes and resource exploitation. The results regarding improved accuracy over traditional methods were inconclusive, as a data error was found in the previously created predictive models for the area that were to be used as a comparison. However, the project was more successful in providing explanatory reasons for site placement based on the models created. This work has the potential to open up predictive modelling to wider archaeology audiences, such as those based at universities. Additional findings also impacted areas of archaeological investigation outside of predictive modelling, such as least cost path analyses and resource gathering analyses.
21

Lindgren, Cory John. "Addressing the risks of invasive plants through spatial predictive modelling." Canadian Journal of Plant Science, 2010. http://hdl.handle.net/1993/18344.

Abstract:
The objective of this dissertation is to extend the use of spatial predictive modelling by biosecurity agencies to help prevent the introduction of new and emerging invasive plants (i.e., pests). A critical review of international and national policy instruments found that they do not effectively articulate how spatial predictive modelling could be incorporated into the biosecurity toolbox. To determine how spatial predictive modelling could be extended, I modelled the potential distribution of Tamarix and Lythrum salicaria in Prairie Canada using a genetic algorithm. New seasonal growth data were used to interpolate a growing degree-day risk surface for L. salicaria. Models were developed using suites of predictive variables as well as different data partitioning methods, and were evaluated using different performance measures. Expert evaluation was found to be important in final model selection. The results indicated that both invasive plants have yet to reach their potential distribution in Prairie Canada. The spatial models can be used to direct risk-based surveillance efforts and to support biosecurity policy decisions. This dissertation concludes that spatial predictive modelling is an informative tool that needs to be incorporated into the biosecurity toolbox, and a phytosanitary standard is proposed to guide toolbox development.
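The growing degree-day quantity interpolated here accumulates daily mean temperature above a developmental base threshold; a one-function sketch (the base temperature is an assumed illustrative value):

    import numpy as np

    def growing_degree_days(t_max, t_min, t_base=5.0):
        return np.sum(np.maximum((t_max + t_min) / 2.0 - t_base, 0.0))

    print(growing_degree_days(np.array([18.0, 21.0, 16.0]),
                              np.array([7.0, 9.0, 5.0])))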
22

Wheatley, David. "The application of geographic information systems to archaeology : with case studies from Neolithic Wessex." Thesis, University of Southampton, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295576.

23

Yuan, Qiang. "Adaptive, statistical, context modelling for predictive coding of medical images." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq21056.pdf.

24

Vera, Epiphany. "Fractal modelling of residual in linear predictive coding of speech." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0006/MQ41642.pdf.

25

De, Jongh Cornel. "Critical evaluation of predictive modelling of a cervical disc design." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/601.

26

Bellamy, Chloe Charlotte. "Predictive modelling of bat-habitat relationships on different spatial scales." Thesis, University of Leeds, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.550267.

Abstract:
To develop effective conservation strategies we need to know the ecological drivers of species' distributions at different spatial scales. Geographic Information Systems (GIS) and habitat suitability modelling technology provide us with the tools to examine multiscale species-habitat relationships. We collected data on the presence of foraging bats from thirty 1 km² field sites in the south of the Lake District National Park and used the presence-only modelling software MaxEnt to predict foraging habitat suitability across the entire Park (~3,300 km²) for seven species/species groups. Museum records of species' roosts were also used to investigate roost-habitat relationships. Using a moving window analysis we were able to assess the impact of habitat composition and structure over a range of extents (100 m - 6 km). This revealed species- and scale-specific habitat effects. The presence of foraging bats was best predicted by small-scale habitat variables, which may reflect the high mobility of these mammals. The strength of roost-habitat associations tended to be more consistent across the range of scales, suggesting that bats may be sensitive to considerable landscape modifications made at great distances from the roost. Each species' presence was best predicted by a unique set of enviro-geographic variables. Of the sibling Pipistrellus species, P. pygmaeus had a narrower niche breadth because of its dependence on high water cover at large spatial scales (1 - 1.5 km) for foraging. Negative impacts of large-scale urban cover were detected for P. euntos, Myotis nattereri and M. brandtii/mystacinus, whereas Pipistrellus avoided roosting within large coniferous plantations. Predictions were mapped at a fine resolution across the Park and validated with independent data, revealing that the high density of deciduous and ancient woodland across the southern Low Fells provides good foraging and roosting habitat for all species. This bat 'hotspot' should be a high conservation priority.
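The moving-window analysis can be sketched as focal statistics on a land-cover raster: the proportion of a habitat class within windows of increasing size around each cell becomes a predictor at that extent. A toy illustration (square windows standing in for the study's extents):

    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    woodland = (rng.random((200, 200)) > 0.8).astype(float)  # toy 0/1 cover

    # With 100 m cells, half-widths of 1-60 cells span roughly 100 m - 6 km.
    for half_width in (1, 5, 15, 30, 60):
        size = 2 * half_width + 1
        cover = uniform_filter(woodland, size=size)  # local proportion of cover
        print(size, cover.mean())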
27

Alshammari, Dhahi. "Evaluation of cloud computing modelling tools : simulators and predictive models." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/41050/.

Abstract:
Experimenting with novel algorithms and configurations for the automatic management of Cloud Computing infrastructures is expensive and time consuming on real systems. Cloud computing delivers the benefits of virtualisation techniques to customers through data centres rather than dedicated physical servers. However, it remains complex for researchers to test and run their experiments on a data centre because of the cost of repeating experiments. To address this, various tools are available, including simulators, emulators, mathematical models, statistical models and benchmarking, and researchers use these different methods to avoid the difficulty of conducting Cloud Computing research on actual large data centre infrastructure. Even so, it is still difficult to choose the best tool with which to evaluate proposed research. This research focuses on investigating the accuracy of existing, well-known simulators in the field of cloud computing. Simulation tools are generally developed for particular experiments, so there is little assurance that using them with different workloads will be reliable. Moreover, a predictive model based on a data set from a realistic data centre is delivered as an alternative to the simulators, given their insufficient accuracy. This work therefore addresses the problem of investigating the accuracy of different modelling tools by developing and validating a procedure based on the performance of a target micro data centre. Key insights and contributions are: the evaluation of three alternative models of real Cloud Computing infrastructure, showing the level of accuracy of selected simulation tools; the development and validation of a predictive model based on a Raspberry Pi small-scale data centre; and the finding that predictive models based on Linear Regression and Artificial Neural Networks, trained on data drawn from a Raspberry Pi Cloud infrastructure, provide better accuracy.
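The comparison between a linear model and a small neural network can be sketched on synthetic data standing in for the Raspberry Pi measurements (feature names and data are invented; the thesis used its own training set):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.random((500, 3))               # e.g. load, request rate, VM count
    y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    lin = LinearRegression().fit(X_tr, y_tr)
    ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    print(lin.score(X_te, y_te), ann.score(X_te, y_te))   # held-out R^2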
28

Zhu, Ping. "Predictive modelling and simulations of internal transport barriers in tokamaks /." Full text (PDF) from UMI/Dissertation Abstracts International, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3008483.

29

Tan, Stefanie Hun Yen. "Predictive modelling of orthodontic treatment outcomes in the adolescent patients." Thesis, University of Bristol, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.738314.

30

Mesgarpour, Mohsen. "Predictive risk modelling of hospital emergency readmission, and temporal comorbidity index modelling using machine learning methods." Thesis, University of Westminster, 2017. https://westminsterresearch.westminster.ac.uk/item/q3031/predictive-risk-modelling-of-hospital-emergency-readmission-and-temporal-comorbidity-index-modelling-using-machine-learning-methods.

Abstract:
This thesis considers applications of machine learning techniques in hospital emergency readmission and comorbidity risk problems, using healthcare administrative data. The aim is to introduce generic and robust solution approaches that can be applied to different healthcare settings. Existing solution methods and techniques of predictive risk modelling of hospital emergency readmission and comorbidity risk modelling are reviewed. Several modelling approaches, including Logistic Regression, Bayes Point Machine, Random Forest and Deep Neural Network, are considered. Firstly, a framework is proposed for pre-processing hospital administrative data, including data preparation, feature generation and feature selection. Then, the Ensemble Risk Modelling of Hospital Readmission (ERMER) is presented, which is a generative ensemble risk model of hospital readmission. After that, the Temporal-Comorbidity Adjusted Risk of Emergency Readmission (T-CARER) is presented for identifying very sick comorbid patients. A Random Forest and a Deep Neural Network are used to model risks of temporal comorbidity, operations and complications of patients using the T-CARER. The computational results and benchmarking are presented using real data from Hospital Episode Statistics (HES) with several samples across a ten-year period. The models select features from a large pool of generated features, add temporal dimensions into the models and provide highly accurate and precise models of problems with complex structures. The performances of all the models have been evaluated across different timeframes, sub-populations and samples, as well as against previous models.
31

Chhatwal, Harprit Singh. "Spectral modelling techniques for speech signals based on linear predictive analysis." Thesis, Imperial College London, 1988. http://hdl.handle.net/10044/1/46996.

32

Wilcox, Bill. "Archaeological predictive modelling of late Anglo-Saxon settlement in East Anglia." Thesis, University of East Anglia, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554292.

Abstract:
This thesis is primarily about archaeological predictive modelling and its possible application as a tool for cultural heritage management in the UK. The research is focused on four main questions using East Anglia during the late Anglo-Saxon period as a case study: what would be the best way of developing an archaeological predictive model; how would such a model compare with other models from around the world; how would archaeological predictive models compare with the existing system of cultural heritage management in the UK; and what would be the implications of employing the technique in the UK? Published reports of other archaeological predictive models are examined in order to determine what lessons can be learnt, and what techniques might be appropriate for predictive modelling in East Anglia, given the terrain and the character of the archaeological data available. The thesis argues that inductive techniques, using binary logistic regression analysis, are the optimum choice for the modelling process. However, due to statistical problems of modelling across modern administration boundaries, it was decided to concentrate on the county of Norfolk alone. The predictive models produced for this county correctly predicted between 22% and 32% above chance. These results have been verified and are roughly comparable with other archaeological predictive models, taking into account issues such as terrain and available data. The cultural heritage management systems of countries that incorporate archaeological predictive modelling were investigated to determine how well those systems worked, what the legal aspects were, how these models were regulated, how much they cost and how they compared with the current system of heritage management in the UK. Archaeological predictive modelling works well in certain circumstances and can save time and money because, as it predicts archaeological data, archaeological investigation can be targeted to specific areas. For example, within the UK it is envisaged that archaeological predictive modelling could be a viable technique in advance of mineral extraction, new pipelines, and road and rail schemes.
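The inductive core of such a model can be sketched briefly: binary logistic regression from environmental covariates at known site and non-site locations, then a probability surface over raster cells. Covariates and data below are invented stand-ins:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Columns: elevation, distance to water, soil score (all synthetic).
    X = rng.random((300, 3))
    y = (X[:, 1] + 0.3 * rng.normal(size=300) < 0.5).astype(int)

    model = LogisticRegression().fit(X, y)
    cells = rng.random((5, 3))               # stand-in raster cells
    print(model.predict_proba(cells)[:, 1])  # P(site present) per cell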
33

Reynolds, Rachael Amy. "Predictive modelling of climate change impacts on disease dynamics in Tanzania." Thesis, Manchester Metropolitan University, 2018. http://e-space.mmu.ac.uk/621437/.

Abstract:
Climate and the environment are key determinants impacting various aspects of disease transmission, including lifecycle, survivability and prevalence. Recent changes in both long-term climatology and short-term El Niño events are altering the spatial distribution of disease, placing more people at higher risk of contracting fatal diseases. These changes are particularly detrimental in developing countries, where socioeconomic conditions hinder access to disease prevention and treatment. This thesis explores climate, environment and disease interactions using multiple epidemiological modelling methodologies to develop an informative framework within which disease risk can be assessed, to aid decision-making. Statistical analysis of the impact of extreme events indicates that El Niño has a significant impact on the Tanzanian climate, which differs by location. Spatial modelling results demonstrate that under RCP 8.5 mean malaria risk will initially fall by 4.7% by 2050, before reversing to an increase of 8.9% by 2070; overall, the analysis indicates increases in mean malaria risk. Biological modelling indicates that the predicted increases in malaria risk are likely a result of the reduction in time taken to complete the sporogonic and gonotrophic cycles under increasingly optimal environmental conditions. The novel approach applied here contributes to the development of a new model in environmental epidemiology. This thesis concludes that epidemiological modelling results could be beneficial in helping decision makers prepare for the impact of climate and environmental change, with a recommendation to continue research in this area with a particular focus on understudied and developing countries.
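The temperature dependence of the sporogonic cycle mentioned here is often captured with a degree-day relation of the classical Detinova form (quoted as a standard illustration, not necessarily the thesis's exact model):

    n = \frac{DD}{T - T_{\min}}

where n is the cycle length in days, DD the thermal sum required by the parasite (about 111 degree-days for Plasmodium falciparum) and T_min the developmental threshold (about 16 °C); warming therefore shortens the cycle sharply, consistent with the predicted risk increases.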
34

Degereji, Mohammed Usman. "Predictive modelling of ash particle deposition in a PF combustion furnace." Thesis, University of Leeds, 2011. http://etheses.whiterose.ac.uk/21122/.

Abstract:
Slagging and fouling during the combustion of pulverised coal in boilers are a major problem as power generators strive to improve the efficiency of plants. The coal type has a major influence on the slagging propensity in furnaces. The correlation between results predicted using some of the existing slagging indices and the actual observations made in most conventional boilers has been poor, especially when their use is extended to different coals. In this thesis, a numerical model to predict the coal ash particle deposition rate in pulverised coal boilers has been developed. The overall sticking probability of a particle is determined by its viscosity and its tendency to rebound after impaction. The deposition model has been implemented in the Fluent 12.1 software, and the effects of swirling motion and ash particle viscosity on deposition rates have been investigated. The predicted results are in good agreement with the reported experimental measurements on the Australian bituminous coals. Also, a novel numerical slagging index (NSI), based on ash fusibility, ash viscosity and the ash content of the coals, has been developed. The incoming ash shows a significant influence on slag accumulation in boilers. The results of assessing the slagging potential with the NSI on a wide range of coals and some coal blends correlate very well with the reported field performance of the coals. The NSI has been modified to predict the slagging potential of coal and biomass blends with < 20% biomass ratio. The results of predictions using the modified index on coals blended with sewage sludge and sawdust are in good agreement with the experimental data.
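The viscosity-based sticking step can be sketched in a few lines: particles at or below a reference melt viscosity always stick, while stiffer (cooler) particles stick with a probability that falls off as the viscosity ratio. The reference value is illustrative, not the thesis's calibration:

    import numpy as np

    def sticking_probability(mu, mu_ref=1.0e5):    # Pa*s, assumed reference
        mu = np.asarray(mu, dtype=float)
        return np.minimum(mu_ref / mu, 1.0)

    print(sticking_probability([1e4, 1e5, 1e7]))   # -> [1.  1.  0.01]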
35

Villavicencio, Rojas Maria Daniela. "Predictive modelling of the tribological behaviour of self-lubricating composite materials." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI040.

Abstract:
In self-lubricating composite materials, the generation of a stable third-body layer of wear particles is necessary to ensure contact lubrication, both in the contact in which these materials are directly involved and in other contacts involving their counterface. In self-lubricating ball bearings, this type of lubrication is provided by the cage, made of a self-lubricating composite, while the rest of the bearing is made of steel (AISI 440C). For space applications, RT/Duroid 5813 is a recognised self-lubricating composite cage material for such bearings. It was widely used not only because of its space heritage but also because it satisfied the needs of dry lubrication in space. However, its production was discontinued in the 1990s, which led to the search for an "equivalent material" meeting both the needs of the space market and the tribological requirements. Today, the main drawback of these materials is the lack of predictability of their tribological behaviour. In this work, a coupled experimental-numerical approach is proposed to model the tribological behaviour of self-lubricating composite materials in the double-transfer mechanism. The numerical model is informed by experimental characterisation, such as X-ray tomography to construct the numerical morphology of the material and atomic force microscopy to quantify the adhesion between its components. The aim of this numerical approach is to overcome the limitations of a purely experimental approach, since the confined nature of the contact does not allow in-situ observation. Among all self-lubricating materials, PGM-HT was selected for this study because its coarse morphology allowed a numerical version of the material to be built at the resolution of the X-ray tomograph used in this work; nevertheless, the approach proposed here can be extended to other self-lubricating composites. The numerical model developed in this work opens new perspectives for material design, as it makes it possible to study directly the damage and wear scenarios of self-lubricating composite materials, and thereby contributes to the development of new tribological materials meeting the needs of space applications. More generally, this work shows that numerical tribology is a tool offering multiple possibilities for understanding self-lubricating materials and for helping to predict their tribological behaviour.
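Constructing the numerical morphology from X-ray tomography typically amounts to segmenting the grey-level volume into phases and labelling the resulting inclusions. Below is a minimal Python sketch of that step on a synthetic volume; the volume, threshold and phase interpretation are assumptions for illustration, not the thesis workflow.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    # Synthetic stand-in for a reconstructed tomography volume: grey levels
    # in [0, 1], with bright voxels representing one composite phase.
    volume = ndimage.gaussian_filter(rng.random((64, 64, 64)), sigma=2)

    phase = volume > np.percentile(volume, 80)   # threshold into a binary phase map
    labels, n = ndimage.label(phase)             # connected components = discrete inclusions
    sizes = np.bincount(labels.ravel())[1:]      # voxel count per inclusion

    print(f"{n} inclusions; phase fraction = {phase.mean():.3f}")
    print(f"largest inclusion = {sizes.max()} voxels")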
APA, Harvard, Vancouver, ISO, and other styles
36

Mamma-Graham, Adamantia S. "An intermittent predictive control approach to modelling sustained human motor control." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5425/.

Full text
Abstract:
Although human sustained control movements are continuous in nature, there is still controversy over the mechanisms underlying such physiological systems. A popular topic of debate is whether human motor control mechanisms can be modelled as engineering control systems and, if so, which control algorithm is most appropriate. Since the early years of modelling sustained control tasks in human motor control, the servomechanism has been an adequate model for describing human tracking tasks. Another continuous-time system model often used for sustained control tasks is the predictive controller, which is based on internal models and includes prediction and optimisation. On the other hand, studies have suggested intermittent behaviour of the "human controller" in sustained motor control tasks. This thesis investigated whether intermittent control is a suitable approach for describing sustained human motor control. It examined how well an intermittent control system model could approximate both the deterministic and non-deterministic parts of experimental data from a visual-manual compensatory tracking task. Finally, a preliminary study explored issues associated with the practical implementation of the intermittent control model. To fit the deterministic part of the experimental data, a frequency-domain identification method was used, and identification results obtained with an intermittent controller were compared against those obtained using continuous-time non-predictive and predictive controllers. The results show that the identified frequency response functions of the intermittent control model not only fit the frequency response functions derived from the experimental data well but, most importantly, yield identified controller parameters that are similar to those identified using a predictive controller and whose values appear physiologically meaningful. A novel way to explain human variability, as represented by the non-deterministic part of the experimental data (the remnant), was developed, based on an intermittent control model with a variable intermittent interval. This model was compared against the established paradigm, in which variability is explained by a predictive controller with added noise, either signal-dependent control signal noise or observation noise. The study showed that the intermittent controller with a variable intermittent interval could model the non-deterministic experimental data as well as the predictive controller with added noise. This provides a new explanation for the source of the remnant in human control as inherent to the controller structure rather than as a noise signal, and enables a new interpretation of the physiological basis of human variability. Finally, the theoretical intermittent control model was implemented in real time in the context of the physiological control mechanism of human standing balance. An experimental method was developed to apply automatic artificial balancing of an inverted pendulum in the context of human standing, via functional electrical stimulation control of the lower-leg muscles of a healthy subject. The significance of this study is, firstly, that frequency-domain identification was applied for the first time with intermittent control, showing that both intermittent and predictive control models can fit deterministic experimental data from manual tracking tasks equally well.
Secondly, for the first time the inherent variability in human motor control tasks, represented by the remnant signal, could be modelled as part of the structure of the intermittent controller rather than as an added noise model. Although the experimental method for automatic artificial balancing of an inverted pendulum in the context of human standing was not successful, the intermittent controller was implemented for the first time in real time and combined with electrical muscle stimulation to control a physiological mechanism.
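The core intermittent-control idea, resampling a feedback law at discrete event times and holding it over a variable intermittent interval, can be sketched in a few lines of Python. The plant, gain, hold type (a plain zero-order hold rather than the system-matched hold used in this literature) and interval distribution are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, k = 1.0, 1.0, 3.0   # unstable plant dx/dt = a*x + b*u; feedback gain (illustrative)
    dt, T = 0.001, 5.0
    x, u, t, next_update = 1.0, 0.0, 0.0, 0.0
    while t < T:
        if t >= next_update:   # intermittent event: resample the control law...
            u = -k * x
            # ...then hold it over a variable intermittent interval
            next_update = t + rng.uniform(0.1, 0.4)
        x += (a * x + b * u) * dt  # Euler step of the continuous-time plant
        t += dt
    print(f"|x(T)| = {abs(x):.6f} (decays despite the open-loop instability)")

Drawing the interval at random, as here, is the mechanism the thesis exploits to generate remnant-like variability from the controller structure itself rather than from an added noise signal.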
APA, Harvard, Vancouver, ISO, and other styles
37

Williams, Daniel George. "The mind as a predictive modelling engine : generative models, structural similarity, and mental representation." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/286067.

Full text
Abstract:
I outline and defend a theory of mental representation based on three ideas that I extract from the work of the mid-twentieth century philosopher, psychologist, and cybernetician Kenneth Craik: first, an account of mental representation in terms of idealised models that capitalize on structural similarity to their targets; second, an appreciation of prediction as the core function of such models; and third, a regulatory understanding of brain function. I clarify and elaborate on each of these ideas, relate them to contemporary advances in neuroscience and machine learning, and favourably contrast a predictive model-based theory of mental representation with other prominent accounts of the nature, importance, and functions of mental representations in cognitive science and philosophy.
APA, Harvard, Vancouver, ISO, and other styles
38

Woodman, Patricia E. "Archaeological predictive modelling using GIS : a case study from the Scottish Mesolithic." Thesis, University of Reading, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363659.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Obajemu, Olusayo. "Predictive dynamic risk mapping and modelling of patients diagnosed with bladder cancer." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/14368/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Alaya, Mili Nourhene. "Managing the empirical hardness of the ontology reasoning using the predictive modelling." Thesis, Paris 8, 2016. http://www.theses.fr/2016PA080062/document.

Full text
Abstract:
Multiple optimisation techniques have been implemented to overcome the trade-off between the complexity of reasoning algorithms and the expressiveness of ontology languages such as OWL (DL). Nevertheless, reasoner evaluation campaigns keep confirming that reasoner performance on real-world ontologies remains unpredictable. Given these observations, the main purpose of this thesis is to gain a better understanding of the empirical behaviour of reasoners by examining the content of ontologies more closely, using supervised learning to anticipate reasoner behaviour. The main claim is that the output quality of a reasoner depends closely on the quality of the ontology. The proposals are gathered in a user-assistance system called "ADSOR", with four main components. The first is an ontology profiler, built on a novel collection of features characterising the design quality of an OWL ontology. The second is a learning module that builds predictive models of reasoner robustness and of the empirical hardness of an ontology. The third is a learning-to-rank module for selecting the most robust reasoner for a given ontology, with correctness and efficiency as the main ranking criteria; two ranking approaches were proposed, one based on single-label prediction and one on multi-label prediction. The last component extracts the potentially most computationally demanding parts of an ontology, relying on atomic decomposition and locality-based module extraction techniques guided by the predictive model of ontology hardness. Extensive experiments were carried out to demonstrate the effectiveness of each of these approaches.
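A minimal sketch of the single-label selection idea: train a classifier on ontology profile features to predict which reasoner to use. The features, the synthetic labelling rule and the reasoner names below are placeholders, not the ADSOR feature set or its benchmark data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    # Placeholder ontology profile: [axiom count, class count, expressivity, ...]
    X = rng.random((200, 6))
    # Placeholder label: index of the most robust reasoner on each ontology,
    # here a synthetic rule standing in for real benchmark outcomes.
    y = (X[:, 0] + 2 * X[:, 3] > 1.4).astype(int)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
    clf.fit(X, y)
    print("predicted reasoner for a new ontology:", ["HermiT", "Pellet"][clf.predict(X[:1])[0]])

The multi-label variant would instead predict a full ranking over reasoners per ontology.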
APA, Harvard, Vancouver, ISO, and other styles
41

Krieger, Alexandra. "Modelling, optimisation and explicit model predictive control of anaesthesia drug delivery systems." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/23908.

Full text
Abstract:
The contributions of this thesis are organised in two parts. Part I presents a mathematical model of drug distribution and drug effect for volatile anaesthesia. Part II presents model predictive control strategies for depth-of-anaesthesia control based on the derived model. Closed-loop model predictive control strategies for anaesthesia aim to improve patient safety and to fine-tune the drug delivery routinely performed by the anaesthetist. The framework presented in this thesis highlights the advantages of extensive modelling and model analysis, which contribute to a detailed understanding of the system when aiming for its optimal control. As part of this framework, the model uncertainty originating from patient variability is analysed and the designed control strategy is tested against the identified uncertainty. An individualised, physiologically based model of drug distribution and uptake (pharmacokinetics) and drug effect (pharmacodynamics) for volatile anaesthesia is presented, where the pharmacokinetic model is adjusted to the weight, height, gender and age of the patient. The pharmacodynamic model links the hypnotic depth, measured by the Bispectral index (BIS), to the arterial concentration through an artificial effect-site compartment and the Hill equation. The individualised pharmacokinetic and pharmacodynamic variables and parameters are analysed with respect to their influence on the measurable outputs, the end-tidal concentration and the BIS. Validation of the model with clinical data for isoflurane- and desflurane-based anaesthesia shows good prediction of drug uptake, while the pharmacodynamic parameters are estimated individually for each patient. The derived control design consists of a linear multi-parametric model predictive controller and a state estimator. The non-measurable tissue and blood concentrations are estimated from the end-tidal concentration of the volatile anaesthetic. The designed controller adapts to the individual patient's dynamics based on measured data. In an alternative approach, the individual patient's sensitivity is estimated on-line by solving a least-squares parameter estimation problem.
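The receding-horizon logic behind such a controller can be sketched with a toy one-compartment effect-site model: at each step a finite-horizon dose sequence is optimised and only the first dose is applied. The model parameters, horizon, regularisation and target below are invented for illustration and bear no relation to the thesis's identified PK/PD models.

    import numpy as np

    # Toy effect-site model e[k+1] = a*e[k] + b*u[k]; target e_ref; horizon N.
    a, b, e_ref, N, lam = 0.9, 0.1, 0.5, 10, 0.01

    def mpc_step(e0):
        # Batch prediction over the horizon: e = F*e0 + G*u
        F = np.array([a ** (i + 1) for i in range(N)])
        G = np.zeros((N, N))
        for i in range(N):
            for j in range(i + 1):
                G[i, j] = a ** (i - j) * b
        # Minimise ||G u - (e_ref - F e0)||^2 + lam ||u||^2 (unconstrained)
        A_ls = np.vstack([G, np.sqrt(lam) * np.eye(N)])
        b_ls = np.concatenate([e_ref - F * e0, np.zeros(N)])
        u = np.linalg.lstsq(A_ls, b_ls, rcond=None)[0]
        return max(u[0], 0.0)   # apply only the first move; no negative dosing

    e = 0.0
    for k in range(30):
        e = a * e + b * mpc_step(e)
    print(f"effect after 30 steps: {e:.3f} (target {e_ref})")

A multi-parametric (explicit) MPC, as in the thesis, precomputes this optimisation offline as a piecewise-affine control law so that the online step reduces to a table lookup.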
APA, Harvard, Vancouver, ISO, and other styles
42

Kobor, Hans P. "Closed loop supply chain waste reduction through predictive modelling and process analysis." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122573.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2019; in conjunction with the Leaders for Global Operations Program at MIT.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019; in conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 59-60).
Verizon distributes Customer Premises Equipment (CPE) such as set top boxes, broadband routers, and WiFi extenders to Fios customers via a variety of paths; for example: direct ship to customer (either for self-install or for later installation by a field technician), delivery via field technicians, or retail store pickup (primarily for self-install). Each method has its own benefits and shortcomings due to impacts on metrics such as inventory levels, shipping costs, on-time delivery, and system complexity. Although the majority of shipments are successfully activated in the customer's home, a non-trivial percentage results in unused returns or inventory shrinkage. These undesirable results represent a significant amount of wasted resources. This thesis is focused on identifying and realizing cost savings in the Fios supply chain through reduction in waste associated with unsuccessful shipments.
In order to analyse the closed-loop supply chain effectively, accurate and reliable process mapping is critical. Interviews with key stakeholders, together with analysis of order and shipment data, yielded a complete picture of the ecosystem's processes and infrastructure. Process mining techniques augmented this understanding, using event-log data to identify and map equipment and information flows across the supply chain. Altogether, this analysis identified order cancellations as a key source of waste. To limit waste, analysis is needed both internal to Verizon's processes and externally, to determine whether there are customer trends leading to order termination. Process mining was used for the internal analysis and, while it helped identify singular cases in which process abnormalities were associated with undesirable outcomes, in its current form it proved unsuited to root cause analysis.
The internal analysis did, however, illuminate opportunities for improvement in radio-frequency identification (RFID) usage and protocols across the supply chain. Current systems can result in poor visibility of equipment as it moves within some segments of the supply chain; the actual monetary impact is difficult to determine but is likely to grow as the importance of RFID increases. The external analysis was conducted through predictive modelling. Using a variety of data sources, a model with over 80% sensitivity and a low false-positive rate was achieved. Operationalising this model through real-time integration with the sales process was explored but found to be overly complex. Instead, the random forest model yielded policy changes guided by the features with the highest importance. A pilot is currently in development to test the efficacy of the suggested changes, as the model implies a significant savings opportunity.
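A minimal sketch of the modelling step: a random forest trained on imbalanced order data, scored on sensitivity, with feature importances read off to guide policy. The synthetic features, cancellation rule and class balance are stand-ins, not Verizon's data or feature set.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(3)
    # Placeholder order features (e.g. order type, region, install window) and
    # an imbalanced cancellation label -- all synthetic.
    X = rng.random((5000, 8))
    y = (rng.random(5000) < 0.1 * (1 + X[:, 2])).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
    clf.fit(X_tr, y_tr)
    print("sensitivity (recall on cancellations):", round(recall_score(y_te, clf.predict(X_te)), 3))
    print("top three features by importance:", np.argsort(clf.feature_importances_)[::-1][:3])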
APA, Harvard, Vancouver, ISO, and other styles
43

Ceran, Murat. "Parametric human spine modelling." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/7958.

Full text
Abstract:
3-D computational modelling of the human spine provides a sophisticated and cost-effective medium for bioengineers, researchers and ergonomics designers to study the biomechanical behaviour of the human spine under different loading conditions. Developing a generic parametric computational human spine model for biomechanical modelling offers considerable potential to reduce the complexity of implementing and amending the intricate spinal geometry. The main objective of this research is to develop a 3-D parametric human spine model generation framework based on a command-file system, by which the parameters of each vertebra are read from a database system and then modelled within commercial 3-D CAD software. A novel data acquisition and generation system was developed as part of the framework for determining unknown vertebral dimensions, based on correlations between parameters estimated from existing anthropometrical studies in the literature. The data acquisition system embodies a predictive methodology that captures the relations between the features of the vertebrae using statistical and geometrical techniques. Relations amongst vertebral parameters, such as the golden ratio, were investigated and successfully implemented in the algorithms. The framework was validated by comparing the generated 3-D computational human spine models against a variety of real-life human spine data, with good agreement. The resulting versatile framework can serve as a basis for quickly and effectively developing biomechanical models of the human spine, such as finite element models.
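The predictive step, inferring an unknown vertebral dimension from correlations with known ones, can be sketched with a simple stand-in: fitting a log-linear trend through known widths at adjacent levels and filling the gap. The numbers and the trend model are illustrative assumptions; the thesis's actual correlations, including the golden-ratio relations, are richer than this.

    import numpy as np

    def estimate_missing(widths):
        # Fill unknown vertebral widths (None) by fitting a log-linear trend
        # through the known levels -- a toy stand-in for the statistical and
        # geometrical correlations used in the thesis.
        idx = np.array([i for i, w in enumerate(widths) if w is not None], dtype=float)
        logw = np.log([w for w in widths if w is not None])
        slope, intercept = np.polyfit(idx, logw, 1)
        return [w if w is not None else float(np.exp(intercept + slope * i))
                for i, w in enumerate(widths)]

    # Widths (mm) for five lumbar vertebrae with the middle one missing (illustrative)
    print([round(w, 1) for w in estimate_missing([42.0, 44.5, None, 49.8, 52.6])])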
APA, Harvard, Vancouver, ISO, and other styles
44

Atkinson, Ian Andrew. "Advanced linear predictive speech compression at 3.0 kbits/sec and below." Thesis, University of Surrey, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McNeilly, Gordon. "Coordinated control of hot strip tandem rolling mill." Thesis, University of Strathclyde, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Gyllenhammar, Andreas. "Predictive Modelling of Aquatic Ecosystems at Different Scales using Mass Balances and GIS." Doctoral thesis, Uppsala University, Department of Earth Sciences, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4143.

Full text
Abstract:

This thesis presents models applicable to aquatic ecosystems, with Geographical Information Systems (GIS) forming an important part of the work. The dynamic mass balance models focus on nutrient fluxes and biotic/abiotic interactions, and operate on different temporal and spatial scales (site, local, regional and international). The relevance and role of scale in mass balance modelling is a focal point of the thesis.

A mesocosm experiment was used to construct a model estimating the nutrient load of phosphorus and nitrogen from net-cage fish farming (i.e., the site scale). The model was used to estimate what feeding conditions are required for a sustainable aquaculture scenario, i.e., a zero-nutrient-load situation (linking the site scale to the regional scale).

A dynamic model was constructed for suspended particulate matter (SPM) and sedimentation in coastal areas (i.e., the local scale) with different morphometric characteristics and distances to the Sea. The results demonstrate that conditions in the Sea (the regional and international scales) are of fundamental importance, even for the most enclosed coastal areas.

A foodweb model for lakes was transformed and recalibrated for Baltic Sea conditions (i.e., the international scale). The model also includes a mass balance model for phosphorus and accounts for key environmental factors that regulate the preconditions for production and the biomasses of key functional groups of organisms. The potential use of the new model for setting cod fishing quotas was examined.

For the intermediate (i.e., regional) scale, topographically complex areas can be difficult to define and model. Therefore, a waterscape subbasin identification program (WASUBI) was constructed. The method was tested for the Finnish Archipelago Sea and the Okavango Delta in Botswana; a comparison with results from a semi-random delineation method showed that more enclosed basins were created with the WASUBI method.
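In their simplest form, the dynamic mass balance models at the heart of the thesis reduce to an input-output budget for a substance in a basin. A minimal sketch for phosphorus in a single coastal basin follows; the load, flushing and sedimentation parameters are invented for illustration.

    # Minimal dynamic mass balance: dM/dt = load - (Q/V)*M - s*M, where Q/V is
    # the flushing rate and s the sedimentation rate (all values illustrative).
    load, Q, V, s = 120.0, 2.0e7, 4.0e8, 0.05   # kg/month, m3/month, m3, 1/month
    dt, months = 1.0, 240
    M = 0.0
    for _ in range(months):
        M += (load - (Q / V) * M - s * M) * dt
    print(f"steady-state P mass ~ {M:.0f} kg (analytic: {load / (Q / V + s):.0f} kg)")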

APA, Harvard, Vancouver, ISO, and other styles
47

Shaikhina, Torgyn. "Machine learning with limited information : risk stratification and predictive modelling for clinical applications." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/99640/.

Full text
Abstract:
The high cost, complexity and multimodality of clinical data collection restrict the datasets available for predictive modelling using machine learning (ML), necessitating new data-efficient approaches designed specifically for limited datasets. This interdisciplinary thesis focuses on clinical outcome modelling using a range of ML techniques, including artificial neural networks (NNs) and their ensembles, decision trees (DTs) and random forests (RFs), as well as classical logistic regression (LR) and Cox proportional hazards (Cox PH) models. The utility of ML for data-efficient regression, classification and survival analyses was investigated in three clinical applications, exposing the limitations common in patient data, such as class imbalance, incomplete samples and, in particular, limited dataset size. The last of these was addressed by developing a methodological framework for learning from datasets with fewer than 10 observations per predictor variable. A novel method of multiple runs overcame the volatility of NN and DT models trained on limited samples, while a surrogate data test allowed regression models to be evaluated in the presence of noise due to limited dataset size. Applied to hard tissue engineering for predicting femoral fracture risk, the framework resulted in a 98.3% accurate regression NN. The framework was also used to detect early rejection in antibody-incompatible kidney transplantation, achieving an 85% accurate classification DT. The third clinical task, predicting the 10-year incidence of type 2 diabetes in the UK population, resulted in 70-85% accurate classification and survival models, while highlighting the challenges of learning with the limited information characteristic of routinely collected data. By discovering unintuitive patterns, supporting existing hypotheses and generating novel insight, the ML models developed in this research contributed meaningfully to clinical research and paved the way for data-efficient applications of ML in engineering and clinical practice.
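The multiple-runs idea, repeating training from many random initialisations and aggregating to damp the run-to-run volatility that small datasets induce, can be sketched as below. The synthetic dataset, network size and run count are assumptions for illustration, not the thesis's exact protocol.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    # Tiny dataset: fewer than 10 observations per predictor, as in the thesis setting
    X = rng.random((60, 8))
    y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=60)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # "Multiple runs": retrain the same small NN from many random initialisations
    # and average the predictions.
    preds = []
    for seed in range(25):
        nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed)
        preds.append(nn.fit(X_tr, y_tr).predict(X_te))
    ensemble = np.mean(preds, axis=0)
    print("per-run RMSE (first 3 runs):",
          np.round([np.sqrt(np.mean((p - y_te) ** 2)) for p in preds[:3]], 3))
    print("ensemble RMSE:", round(float(np.sqrt(np.mean((ensemble - y_te) ** 2))), 3))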
APA, Harvard, Vancouver, ISO, and other styles
48

Rampartab, Chanel. "Facilitating golden mole conservation in South African highland grasslands : a predictive modelling approach." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20967.

Full text
Abstract:
Golden moles are subterranean mammals endemic to sub-Saharan Africa and threatened by anthropogenic habitat loss. At present, little is known about the biology, taxonomy, distribution and severity of threats faced by many of these taxa. In an attempt to raise awareness of these elusive grassland flagship taxa, the Endangered Wildlife Trust's Threatened Grassland Species Programme (EWT-TGSP) identified the need for more information on the distributions and conservation status of four poorly known golden mole taxa (Amblysomus hottentotus longiceps, A. h. meesteri, A. robustus, A. septentrionalis) that are endemic to the Grassland Biome and may be heavily impacted by anthropogenic habitat alteration in the Highveld regions of Mpumalanga Province. This study employed species distribution modelling to predict the distributional ranges of these taxa and involved four main processes: (i) creating initial models trained on sparse museum data records; (ii) ground-truthing field surveys during the austral spring/summer to gather additional specimens at additional localities; (iii) genetic analyses (using cytochrome-b) to determine the species identities of the newly acquired specimens, as these taxa are morphologically indistinguishable; and (iv) refining the models and determining the conservation status of these Highveld golden moles. Initial species distribution models were developed using occurrence records for 38 specimens, based on interpolated data for 19 bioclimatic variables, continuous altitude data, and categorical spatial data for landtypes, WWF ecoregions and vegetation types. These initial models helped to focus survey effort effectively within a vast study area: surveying during the austral spring-summer of 2013-2014 yielded 25 specimens from across Mpumalanga, nine of which (A. h. meesteri n = 2; A. septentrionalis n = 5; unknown n = 2) were captured in five new quarter-degree squares (QDSs) where no golden moles had previously been recorded. Golden mole activity was also observed in nine new QDSs (see Appendix 3), showing that the model refinement methods used (variable selection, autocorrelation, non-repeated versus cross-validated models, jackknife of variable importance and localities, independent data testing) were effective in locating golden mole populations. Predictive distribution models, calibrated in maximum entropy (MaxEnt) software using genetically identified historical golden mole records, were thus able to focus ground-truthing efforts.
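MaxEnt fits a presence-background model over environmental layers; a common minimal stand-in, shown below, is a weighted logistic regression separating presence points from background points and then scoring a grid for relative habitat suitability. The covariates, the background sample and the grid are synthetic assumptions; only the presence count of 38 echoes the thesis.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    # 38 scarce presence records vs. a large background sample of the study area;
    # the two covariates are synthetic placeholders for bioclimatic/altitude layers.
    presence = rng.normal(loc=[0.7, 0.3], scale=0.1, size=(38, 2))
    background = rng.random((1000, 2))

    X = np.vstack([presence, background])
    y = np.r_[np.ones(len(presence)), np.zeros(len(background))]
    sdm = LogisticRegression(class_weight="balanced").fit(X, y)

    # Relative suitability over a coarse grid of the "study area"
    gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    grid = np.c_[gx.ravel(), gy.ravel()]
    print(np.round(sdm.predict_proba(grid)[:, 1].reshape(5, 5), 2))

High-suitability cells are the ones a survey team would then ground-truth, mirroring how the initial models focused the field work described above.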
APA, Harvard, Vancouver, ISO, and other styles
49

Cisneros, Pablo S. G. [Verfasser]. "Quasi-Linear Model Predictive Control: Stability, Modelling and Implementation / Pablo Sebastian Gonzalez Cisneros." Hamburg : Universitätsbibliothek der Technischen Universität Hamburg-Harburg, 2021. http://d-nb.info/1234550792/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gonzalez, Cisneros Pablo Sebastian [Verfasser]. "Quasi-Linear Model Predictive Control: Stability, Modelling and Implementation / Pablo Sebastian Gonzalez Cisneros." München : Verlag Dr. Hut, 2021. http://d-nb.info/1238422772/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles