Dissertations / Theses on the topic 'Predicitons'

Consult the top 39 dissertations / theses for your research on the topic 'Predicitons.'

1

PAGLIARINI, ELENA. "Predictive Timing in Developmental Dyslexia: a New Hypothesis. Anticipatory skills across language and motor domains." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2016. http://hdl.handle.net/10281/100447.

Abstract:
Developmental dyslexia (DD) is a learning disorder characterized by specific difficulty in learning to read accurately and fluently. It has been argued that the source of the disorder in DD is phonological in nature (Snowling 2000; Ramus et al. 2003). However, this theory does not account for the fine and gross motor problems frequently attested in DD (Lam et al., 2011; Nicolson & Fawcett, 1990). In addition, dyslexics often suffer from subtle deficits in the processing of morphosyntactic features and of complex syntactic structures (e.g., Cantiani et al. 2013; Robertson & Joanisse 2010). These facts could be accounted for in terms of comorbidity, but the frequency of co-occurrence of these disorders can also hint at something deeper. In this thesis, I take up the following questions: what is the nature of the impairment in individuals with DD? What do language and motor activities have in common? To answer these questions, I propose a framework according to which dyslexics struggle to exploit temporal regularities to efficiently anticipate linguistic and motor events. I provide evidence for this hypothesis with five studies on Italian children and adults with DD. Study 1 shows that handwriting (a motor activity) and reading (which is based on language) follow a similar pathway in pupils in the first years of school. Study 2 shows that children with DD are less able to comply with the rhythmic principles of handwriting (RPH) than controls. Moreover, the presence of correlations among handwriting and reading/language measures suggests that the language and motor systems are potentially linked in the brain. Study 3 shows that the RPH are at work from the age of 6 and that all groups of children (range 6-10 years) comply with them to the same extent, thus disconfirming the possibility that the RPH emerge only after some amount of handwriting and reading experience. Study 4 shows that adults with DD display greater error and are more variable than controls with highly predictable rhythmic stimuli. They also display poorer performance in the test for reception of grammar and inserted fewer pegs in the fine motor skill task. Study 5 is an extension of Study 4 to children with DD. It shows that children with DD over-anticipate the occurrence of the beat and are very variable in their response. They are also impaired in morphosyntax as compared to controls. In Studies 4 and 5, participants with good predictive skills were also faster in reading and performed better in the language tasks. Overall, the results show that the language and motor systems are more closely linked than has been previously suggested. Language and motor acts have a rhythmic structure that allows humans to generate timing predictions about an upcoming sensory input. The ability to efficiently predict future events reduces memory load through the pre-activation of the sensory system. In the light of the present results, it seems that dyslexics are unable to exploit temporal regularities to efficiently anticipate the next sensory event. The predictive timing system of dyslexics therefore appears impaired, affecting both the reading/language and the motor domain.
2

Høst, Jan. "In silico predicition of intestinal transport /." Cph. : The Danish University of Pharmaceutical Sciences, 2006. http://www.dfuni.dk/index.php/Jan_Hoest/3066/0/.

3

Jahanbakhsh, Alireza. "Predicition of air flow in diesel combustion chambers." Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38049.

4

Giannakou, Antri. "Prediciting the progression of cognitive impairment in memory clinics." Thesis, University of Bristol, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.600726.

Abstract:
Neuropsychological tests are routinely undertaken at memory clinics to help diagnose Mild Cognitive Impairment (MCI) and dementia. For those patients found to have MCI, it would be clinically valuable to be able to distinguish patients at high risk of progressing to dementia. The aim of this thesis is to develop a prognostic algorithm using neuropsychological test performance, in order to estimate the probability of progressing to dementia in patients with MCI. The cohort providing data attended the BRACE memory clinic in Bristol, UK and comprises 643 MCI patients, 268 of whom were observed to progress to dementia. To inform our analysis, a systematic search of the published literature and a meta-analysis were conducted for an indication of which neuropsychological tests can distinguish those MCI patients at risk of progressing to dementia. This resulted in 38 articles fulfilling the inclusion criteria. A simulation experiment determined optimal methods for transforming the different types of estimate presented to a Diagnostic Odds Ratio (DOR) scale. Subsequently, the meta-analysis revealed that the domains most strongly associated with progression to dementia were Memory, Learning, and Language. Using the BRACE data, the best subset of neuropsychological tests to predict dementia was found to comprise the Mini-Mental State Examination (MMSE), the Immediate Recall of the Story Test, and the total score of the Hopkins Verbal Learning Test (HVLT).
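For reference, the Diagnostic Odds Ratio onto which the different published estimates were transformed is conventionally defined as follows (a standard definition, stated here for convenience rather than quoted from the thesis):

\mathrm{DOR} = \frac{TP \cdot TN}{FP \cdot FN} = \frac{\mathrm{sensitivity}/(1-\mathrm{sensitivity})}{(1-\mathrm{specificity})/\mathrm{specificity}}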
5

Wilson, David J. "A comparison of high-latitude ionosphere propagation predicitions from AMBCOM with measured data." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/43734.

Abstract:
Approved for public release; distribution is unlimited.
This thesis examines the performance of SRI's Ambient Communications (AMBCOM) model for high-latitude propagation prediction. It is one in a series of studies, conducted at the Naval Postgraduate School, to establish the relative merits of several computer-based propagation prediction models using a standard set of measured data. AMBCOM modeled the propagation path between a transmitter located in the polar cap region and several midlatitude receiver sites. Model predictions were matched to measured data obtained during two high-latitude communication experiments (campaigns). The absolute difference between model signal-to-noise ratio (SNR) and measured SNR was considered as error. Error statistics were accumulated to show the distribution of the error by campaign and frequency. The percentage, by frequency, of matched AMBCOM predictions relative to total predictions for a given frequency was considered a measure of AMBCOM performance. AMBCOM exhibited small absolute values of average error, i.e., 7-11 dB, and high percentages of matched records. The average error was typically distributed between -20 and +20 dB. Unfortunately, these are only relative measures of model performance. The site antenna and environmental data used to model the high-latitude campaigns were estimated, not measured, and some variation in AMBCOM results may be attributable to poor estimates. The measured data were not designed specifically for model validation, and further comparisons are needed with new measured data.
6

Du, Toit Jacques Emile. "Prediciting functionals of brownian motion through local time-space calculus." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.506582.

7

黃小華 and Siu-wah Wong. "Predicition of fatigue crack propagation using strain energy density method." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1989. http://hub.hku.hk/bib/B31209506.

8

Wong, Siu-wah. "Predicition of fatigue crack propagation using strain energy density method /." [Hong Kong : University of Hong Kong], 1989. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12751601.

9

Zhang, Y. "A Molecular approach for charcterization and property predicitions of petroleum mixtures with applications to refinery modelling." Thesis, University of Manchester, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515183.

Abstract:
A new consistent characterisation method has been developed to describe the complex composition of petroleum mixtures in terms of molecular type and homologous series. The petroleum mixture is conceived as a matrix in which the rows represent carbon numbers, while each column represents a homologous series. The concentration of each individual component in the matrix can be measured using modern analytical tools such as gas chromatography (GC), high-performance liquid chromatography (HPLC), mass spectrometry (MS), field ionisation mass spectrometry (FIMS), sulphur chemiluminescence detection (SCD), etc. To evaluate the impacts of crude composition and refining chemistry on the composition and quality of refinery products, a novel method is proposed to predict the properties of petroleum mixtures based on the compositional information contained in the matrix. In this method, molecular structure-property correlations are first developed to predict the boiling point and density of the molecular-type homologous series in the matrix with high accuracy. The ASTM distillation curve and bulk density of the petroleum mixtures can then be calculated with an assumed mixing rule. To predict other properties such as critical constants, freezing point, cetane number, pour point, cloud point, etc., well-tested correlations based on the distillation curve and bulk density are used along with the compositional information in the matrix. In addition, gasoline octane number can be predicted from molecular composition-based correlations. A simple but accurate method is also proposed to predict the molecular composition of a new feed through blending of fully characterised petroleum mixtures, so that expensive and time-consuming experimental analyses can be spared. The consistent molecular-level characterisation of petroleum mixtures has enabled the development of refinery reaction and separation models based on the underlying process chemistry and thermodynamic principles. In addition, with the molecular information provided by the new characterisation, more efficient optimisation and integration can be conducted in the context of the overall refinery.
10

Albajar, Viñas Ferran. "Radiation Transport Modelling in a Tokomak Plasma: Application to Performance Prediciton and Design of Future Machines." Doctoral thesis, Universitat Politècnica de Catalunya, 2001. http://hdl.handle.net/10803/6599.

Abstract:
The understanding and modelling of heat and radiation transport in tokamak plasmas is essential in order to progress in the development of thermonuclear fusion towards a practical energy source which meets all the future needs of environment, safety, and fuel inexhaustibility. This activity enables prospective studies and design to be carried out for next-step tokamaks. Due to the complexity of the exact calculation, synchrotron losses are usually estimated in such studies with expressions derived from a plasma description using simplifying assumptions on the geometry, radiation absorption, and density and temperature profiles. In this thesis, a complete formulation of the transport of synchrotron radiation is performed for realistic conditions of toroidal plasma geometry with elongated cross-section, using a precise method for the calculation of the absorption coefficients, and for arbitrary shapes of density and temperature profiles. In particular, this formulation is able to describe plasmas with arbitrary aspect ratios and with temperature profiles obtained in internal transport barrier regimes, which cannot be described accurately with the present expressions. As an illustration, we show that in the case of an advanced high-temperature plasma envisaged for a steady-state D-T commercial reactor, synchrotron losses represent approximately 20% of the total losses. Considering the quantitative importance of the above effects and the significant magnitude of synchrotron losses in the thermal power balance of a D-T tokamak reactor plasma, a new fit for the fast calculation of the synchrotron radiation loss is proposed. Using this improved model in the thermal balance, prospective and sensitivity studies are performed for future tokamak projects, and the key issues which limit the performance are isolated. It is shown that the most restrictive constraint for achieving higher plasma performance is the peak heat flux on the divertor plates. In non-inductive steady-state operation, advanced tokamak regimes are required to achieve relevant thermonuclear plasma performance for next-step tokamaks and for a commercial reactor. In the framework of a multi-step strategy towards a commercial reactor, a superconducting next-step tokamak compatible with the possibilities of the European budget is optimized. Considering both the plasma physics and the magnetic system technology, and for a given aspect ratio, the smallest machine meeting the physical and technological requirements is determined. For a steady-state tokamak commercial reactor, we show that there is an optimal value of the confinement enhancement factor which maximizes the plasma performance, both for a given electrical power into the network and for the highest one. This highest electrical power meeting the stability requirements decreases steadily with the confinement enhancement factor. This effect is crucial because both a high plasma performance and a high enough electrical power into the network are required to minimize the cost of electricity, and consequently to make fusion energy more competitive.
11

Rideg, Johan, and Max Markensten. "Are we there yet? : Prediciting bus arrival times with an artificial neural network." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-386548.

Abstract:
Public transport authority UL (Upplands Lokaltrafik) aims to reduce emissions, air pollution, and traffic congestion by providing bus journeys as an alternative to using a car. In order to incentivise bus travel, accurate predictions are critical to potential passengers. Accurate arrival time predictions enable the passengers to spend less time waiting for the bus and revise their plan for connections when their bus runs late. According to literature, Artificial Neural Networks (ANN) has the ability to capture nonlinear relationships between time of day and position of the bus and its arrival time at upcoming bus stops. Using arrival times of buses on one line from July 2018 to February 2019, a data-set for supervised learning was curated and used to train an ANN. The ANN was implemented on data from the city buses and compared to one of the models currently in use. Analysis showed that the ANN was better able to handle the fluctuations in travel time during the day, only being outperformed at night. Before the ANN can be implemented, real time data processing must be added. To cement its practicality, whether its robustness can be improved upon should be explored as the current model is highly dependent on static bus routes.
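As an illustration of the supervised-learning setup described above, a minimal sketch in Python might look like the following; the file name, column names and network size are hypothetical and not taken from the thesis.

# Minimal sketch: regress remaining travel time on time-of-day and position features.
# Data file and column names are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

df = pd.read_csv("bus_observations.csv")                      # hypothetical data set
X = df[["seconds_since_midnight", "distance_along_route_m", "stops_remaining"]]
y = df["seconds_to_arrival"]                                   # target: remaining travel time

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out trips:", model.score(X_test, y_test))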
12

Asgaryan, Mohammad. "Prediciton of the remaining service life of superheater and reheater tubes in coal-biomass fired power plants." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8278.

Abstract:
As a result of concern about the effects of CO2 emissions on global warming, there is increasing pressure to reduce such emissions from power generation systems. The use of biomass co-firing with coal in conventional pulverised fuel power plants has provided the most immediate route to introduce a class of fuel that is regarded as both sustainable and carbon neutral, as it produces lower net CO2 emissions. In the future it is anticipated that increased levels of biomass will be required in such systems to accomplish the desired CO2 emissions targets. The use of biomass, however, is believed to result in severe fireside corrosion of superheater and reheater tubing and cause unexpected early failures of tubes, which can lead to significant economic penalties. Moreover, future pulverised fuel power systems will need to use much higher steam temperatures and pressures to increase the boiler efficiency. Higher operating temperatures and pressures will also increase the risk of fireside corrosion damage to the boiler tubing and lead to shorter component life. Predicting the remaining service life of superheater and reheater tubes in coal-biomass fired power plants is therefore an important aspect of managing such power plants. The path to this type of failure of heat exchangers involves five processes: combustion, deposition, fireside corrosion, steam-side oxidation, and creep. Various models or partial models for each of these processes are available from existing research, but to fully understand the impact of new fuel mixtures (i.e. biomass and coal) and changing operating conditions on such failures, an integrated model of all of these processes is required. This work has produced an integrated set of models and so predicted the remaining service life of superheater/reheater tubes based on three frameworks developed by analysing the models used to describe the five processes: one was conceptual and the other two were based on mathematical models. In addition, the outputs of the integrated mathematical models were compared with laboratory-generated data from Cranfield University as well as historical data from the Central Electricity Research Laboratories. Furthermore, alternative models for each process were applied in the model and the results were compared with other models' results as well as with the experimental data. Based on these comparisons and the availability of model constants, the best models were chosen for the integrated model. Finally, a sensitivity analysis was performed to assess the effect of different model input values on the residual life of superheater and reheater tubing. The mid-wall metal temperature of the tubes was found to be the most important factor affecting the remaining service life of boiler tubing. Tubing wall thickness and outer diameter were other critical inputs in the model. Significant differences were observed between the residual life of thin-walled and thick-walled tubes.
13

Tate, Geoffrey W. "Machine learning for predicitng the risk of osteoporosis from patient attributes, health and lifestyle history." Thesis, Cranfield University, 2004. http://dspace.lib.cranfield.ac.uk/handle/1826/11346.

Abstract:
The most widely used method for the diagnosis of osteoporosis is to determine bone mineral density (BMD) by bone densitometry. At present, mass screening is not, on the basis of resource constraints, considered an option. This project investigates whether artificial neural networks (ANNs) or Bayesian networks (BNs), using the health and lifestyle history of a patient (risk factors, used as a generic term for inputs), may be used to develop a preliminary screening system to determine if a patient is at particular risk from osteoporosis and hence in need of a scan. Two databases have been used: one containing 486 records (29 risk factors) of patients examined with a GE Lunar peripheral densitometer (PIXI) and the other with 4,980 records (33 risk factors) of patients examined with dual-energy X-ray absorptiometry (DEXA). BNs tend to outperform ANNs, particularly where smaller learning sets are involved. The best result was 84% accuracy (sensitivity 0.89 and specificity 0.80) with PIXI and a BN. In general, however, with ANNs the sensitivity achieved with PIXI and DEXA was 0.65 and 0.80 respectively, and the corresponding values with BNs were 0.72 and 0.81. The diagnostic performance with ANNs could be achieved with fewer risk factors (PIXI from 29 to 4 and DEXA from 33 to 5), but with BNs a reduction in performance accompanied a reduction in the number of risk factors. The results also indicate that: for positive patients, the more severely affected by the disease, the more accurately they are diagnosed; the lack of continuous values in the DEXA data results in poor diagnosis of negative patients; classifications based on BMD predictions and on pattern recognition give similar results; and reasoning with BNs can provide an indication of how a particular risk-factor state contributes to a patient's risk from osteoporosis.
14

Rauch, Alan F. "EPOLLS: An Empirical Method for Prediciting Surface Displacements Due to Liquefaction-Induced Lateral Spreading in Earthquakes." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30346.

Abstract:
In historical, large-magnitude earthquakes, lateral spreading has been a very damaging type of ground failure. When a subsurface soil deposit liquefies, intact blocks of surficial soil can move downslope, or toward a vertical free face, even when the ground surface is nearly level. A lateral spread is defined as the mostly horizontal movement of gently sloping ground (less than 5% surface slope) due to elevated pore pressures or liquefaction in underlying, saturated soils. Here, lateral spreading is defined specifically to exclude liquefaction failures of steeper embankments and retaining walls, which can also produce lateral surface deformations. Lateral spreads commonly occur at waterfront sites underlain by saturated, recent sediments and are particularly threatening to buried utilities and transportation networks. While the occurrence of soil liquefaction and lateral spreading can be predicted at a given site, methods are needed to estimate the magnitude of the resulting deformations. In this research effort, an empirical model was developed for predicting horizontal and vertical surface displacements due to liquefaction-induced lateral spreading. The resulting model is called "EPOLLS" for Empirical Prediction Of Liquefaction-induced Lateral Spreading. Multiple linear regression analyses were used to develop model equations from a compiled database of historical lateral spreads. The complete EPOLLS model is comprised of four components: (1) Regional-EPOLLS for predicting horizontal displacements based on the seismic source and local severity of shaking, (2) Site-EPOLLS for improved predictions with the addition of data on the site topography, (3) Geotechnical-EPOLLS using additional data from soil borings at the site, and (4) Vertical-EPOLLS for predicting vertical displacements. The EPOLLS model is useful in phased liquefaction risk studies: starting with regional risk assessments and minimal site information, more precise predictions of displacements can be made with the addition of detailed site-specific data. In each component of the EPOLLS model, equations are given for predicting the average and standard deviation of displacements. Maximum displacements can be estimated using probabilities and the gamma distribution for horizontal displacements or the normal distribution for vertical displacements.
Ph.D.
15

Perry, Robert Theodore. "The efficacy of attribution theory for prediciting [sic] MSW's orientations towards treating children with attention deficit disorders." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/2027.

Abstract:
An overview of Attention Deficit Disorders is given, along with a description of attribution theory and the issues facing MSWs in CPS-type settings. A questionnaire was administered to Masters of Social Work (MSWs) employed by the Department of Children's Services, San Bernardino, California, to test the hypothesis that MSW workers' attitudes towards children with Attention Deficit Disorders (ADD/ADHD) are affected by the perceived cause of the disorders.
16

Rutaganda, Remmy. "Automated Model-Based Reliability Prediction and Fault Tree Analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67240.

Abstract:
This work was undertaken as a final year project in Computer Engineering, within the Department of Computer and Information Science at Linköping University. At the Department of Computer and Information Science, work oriented towards testing and analyzing applications is carried out to provide solution approaches to problems that arise in system product development. One of the applications currently being developed is the ‘Systemics Analyst’. The purpose of the application is to provide system developers with an analysis tool offering insight into system reliability, critical system components, how to improve the system, and the consequences and risks of a system failure. The purpose of the present thesis was to enhance the ‘Systemics Analyst’ application by incorporating ‘automated model-based reliability prediction’ and ‘fault tree analysis’ modules. This enables reliability prediction and fault tree analysis diagrams to be generated automatically from the data files and relieves the system developer from manually creating the diagrams. The enhanced Systemics Analyst application presents the results in the respective models using the newly incorporated functionality. To accomplish the above tasks, the ‘Systemics Analyst’ application was integrated with a library that handles automated model-based reliability prediction and fault tree analysis, which is described in this thesis. The reader is guided through the steps performed to accomplish the tasks, with illustrative figures, methods and code examples, in order to provide a closer view of the work performed.
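As background for readers unfamiliar with fault tree analysis, the following minimal sketch shows the kind of computation such a module automates: the top-event probability of a small tree of AND/OR gates over independent basic events. The gate structure and probabilities are invented for illustration and are unrelated to the Systemics Analyst's actual data model.

# Minimal fault-tree sketch: top-event probability for independent basic events.

def or_gate(probs):
    # P(at least one input fails) = 1 - prod(1 - p_i)
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def and_gate(probs):
    # P(all inputs fail) = prod(p_i)
    result = 1.0
    for p in probs:
        result *= p
    return result

# Basic-event failure probabilities (illustrative values only)
pump_a, pump_b, controller = 0.01, 0.01, 0.001

# Top event: "no coolant flow" = both pumps fail OR the controller fails
p_top = or_gate([and_gate([pump_a, pump_b]), controller])
print(f"Top-event probability: {p_top:.6f}")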
17

Danielsson, Jakob, and Anton Forsberg. "Crowd-based Network Prediction : a Comparison of Data-exchange Policies." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119685.

Abstract:
Network performance maps can be used as a tool to predict network conditions at a given location, based on previous measurements at that location. By using measurement data from other users in similar locations, these predictions can be significantly improved. This thesis looks into the accuracy of predictions when using different approaches to distribute this data between users: we compare the accuracy of predictions achieved by using a central server containing all known measurements to the accuracy achieved when using a crowd-based approach with opportunistic exchanges between users. Using data-driven simulations, this thesis also compares and evaluates the impact of using different exchange policies. Based on these simulations, we conclude which of the exchange policies provides the most accurate predictions.
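To make the two policies concrete, the following toy simulation contrasts a central map with opportunistic peer exchanges. The environment, encounter model and error metric are invented for illustration and are not the thesis's simulation setup.

# Toy comparison of a central-server policy vs. an opportunistic exchange policy.
import random
random.seed(0)

CELLS = 20
true_signal = [random.uniform(0, 1) for _ in range(CELLS)]     # "ground truth" per location

def measure(cell):
    return true_signal[cell] + random.gauss(0, 0.1)            # noisy measurement

# Each of 10 users samples a few random cells.
users = [{c: measure(c) for c in random.sample(range(CELLS), 5)} for _ in range(10)]

def mae(known):
    # Mean absolute error over the cells this map can predict.
    return sum(abs(known[c] - true_signal[c]) for c in known) / len(known)

# Central policy: one map containing every user's measurements.
central = {}
for u in users:
    central.update(u)

# Opportunistic policy: each user merges maps with 3 random peers.
opportunistic = []
for i, u in enumerate(users):
    merged = dict(u)
    for j in random.sample([k for k in range(len(users)) if k != i], 3):
        merged.update(users[j])
    opportunistic.append(merged)

print("central:        covers", len(central), "cells, MAE", round(mae(central), 3))
print("opportunistic:  avg coverage", sum(len(m) for m in opportunistic) / len(opportunistic),
      "cells, avg MAE", round(sum(mae(m) for m in opportunistic) / len(opportunistic), 3))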
18

Tom, Tracey Hiroto Alena. "Development of Wave Prediction and Virtual Buoy Systems." 京都大学 (Kyoto University), 2010. http://hdl.handle.net/2433/120845.

19

Smith, Marcus Edward Brockbank. "A Parametric Physics Based Creep Life Prediction Approach to Gas Turbine Blade Conceptual Design." Diss., Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22637.

Abstract:
The required useful service lives of gas turbine components and parts are naturally one of the major design constraints limiting the gas turbine design space. For example, the required service life of a turbine blade limits the firing temperature in the combustor, which in turn limits the performance of the gas turbine. For a cooled turbine blade, it also determines the necessary cooling flow, which has a strong impact on the turbine efficiency. In most gas turbine design practices, the life prediction is only emphasized during or after the detailed design has been completed. Limited life prediction efforts have been made in the early design stages, but these efforts capture only a few of the necessary key factors, such as centrifugal stress. Furthermore, the early stage prediction methods are usually hard-coded in the gas turbine system design tools and hidden from the system designer's view. The common failure mechanisms affecting the service life, such as creep, fatigue and oxidation, are highly sensitive to the material temperatures and/or stresses. Calculation of these temperatures and stresses requires that the geometry, material properties, and operating conditions be known; information not typically available in early stages of design. Even without awareness of the errors, the resulting inaccuracy in the life prediction may mislead the system designers when examining a design space which is bounded indirectly by the inaccurate required life constraints. Furthermore, because intensive creep lifing analysis is possible only towards the end of the design process, any errors or changes will cost the engine manufacturer significant money; money that could be saved if more comprehensive creep lifing predictions were possible in the early stages of design. A rapid, physics-based life prediction method could address this problem by enabling the system designer to investigate the design space more thoroughly and accurately. Although not meant as a final decision method, the realistic trends will help to reduce risk by providing greater insight into the bounded space at an earlier stage of the design. The method proposed by this thesis was developed by first identifying the missing pieces in the system design tools. Then, by bringing some key features from later stages of design and analysis forward through 0/1/2-dimensional modeling and simulation, the method allows estimation of the geometry, material selection, and the loading stemming from the operating conditions. Finally, after integration with a system design platform, the method provides a rapid and more complete way to allow system designers to better investigate the required life constraints. It also extracts the creep life as a system-level metric to allow the designers to see the impact of their design decisions on life. The method is first applied to a cooled gas turbine blade and could be further developed for other critical parts. These new developments are integrated to allow the system designers to better capture the blade creep life as well as its impact on the overall design.
20

Olsson, Kevin, and Valeriy Ivinskiy. "Predicting runners’ oxygen consumption on flat terrain using accelerometer data." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252749.

Abstract:
This project aimed to use accelerometer data and KPIs to predict the oxygen consumption of runners during exercise on flat terrain. Based on a number of studies researching the relationship between oxygen consumption and running economy, and a small set of data, a model was constructed which had a prediction accuracy of 81.1% for one individual. Problems encountered during the research include issues with comparing data from different systems, model nonlinearity and data noise. These problems were solved by transforming the data in the R software, re-specifying the model and identifying outlying observations that could be viewed as noise. The results from this project should be seen as a proof of concept for further studies, showing that it is possible to predict oxygen consumption using a set of accelerometer data and KPIs. With a larger sample set this model can be validated and furthermore implemented in Racefox's current service as a calibration method for individual results and an early warning system to avoid running economy deficiency.
21

Isacson, Jonas. "Network Interconnectivity Prediction from SCADA System Data : A Case Study in the Wastewater Industry." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-255812.

Abstract:
Increased strain on incumbent wastewater distribution networks, originating from population increases as well as climate change, calls for enhanced resource utilization. Accurately being able to predict network interconnectivity is vital within the wastewater industry, as it enables operational management strategies that optimize the performance of the wastewater system. In this thesis, an evaluation of the network interconnectivity prediction performance of two machine learning models, the multilayer perceptron (MLP) and the support vector machine (SVM), using supervisory control and data acquisition (SCADA) system data for a wastewater system is presented. The results of the thesis imply that the MLP achieves the best predictions of the network interconnectivity. The thesis concludes that the MLP is the superior model and that the highest achievable network interconnectivity accuracy is 56%, which is attained by the MLP model.
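A minimal sketch of the kind of model comparison described above is given below; the feature matrix and labels are synthetic stand-ins, since the SCADA feature engineering is not reproduced here.

# Illustrative sketch: train an MLP and an SVM on the same features and compare held-out accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # stand-in for pump-station features
y = rng.integers(0, 3, size=500)         # stand-in for interconnectivity labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", model.score(X_te, y_te))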
22

PRADHAN, Bharat. "Out of Plane response of Unreinforced Masonry infills: Comparative analysis of experimental tests for the definition of strategies of macro modelling and fragility prediction." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/578468.

Abstract:
During an earthquake, an interaction between the in-plane and out-of-plane seismic forces occurs and infilled frames suffer damage in both the in-plane and out-of-plane directions simultaneously. In particular, the out-of-plane collapse of unreinforced masonry infill walls is critical even for new buildings complying with modern seismic codes, resulting in high casualties and huge economic losses. However, the out-of-plane behaviour of infill walls is not yet fully understood. This study is therefore aimed at characterizing the out-of-plane seismic capacity of unreinforced masonry infill walls. First of all, available out-of-plane experimental tests performed on unreinforced masonry infill walls are reviewed with a detailed comparison of the experimental results. The influence of parameters such as slenderness ratio, aspect ratio, boundary conditions, openings, vertical load, in-plane damage level, the strength of masonry and plaster, and frame stiffness is evaluated, and research gaps are identified. Based on the collected experiments, all available analytical capacity models are checked for their accuracy in predicting the out-of-plane capacity of unreinforced masonry infill walls. In doing so, both types of capacity model are evaluated: Type I for the estimation of the out-of-plane strength in the in-plane undamaged state, and Type II for the estimation of the out-of-plane strength reduction factor in the in-plane damaged state. Afterwards, the best pairs of models from the two groups, i.e. Type I and Type II, are coupled and checked against the tested specimens for which the reference infill specimen (a specimen tested out-of-plane without prior in-plane damage) is not available. In addition, the influence of the orthotropy of the infill masonry on the out-of-plane capacity predicted by the capacity models is analysed. The possibility of using the capacity models in the cases of an infill-beam gap and of infills with openings is also checked. Different available macro-modelling techniques are investigated and a simple macro-element model which can simulate the behaviour of unreinforced masonry infill walls under in-plane and out-of-plane loads is developed. The model is validated with different sets of experiments. The model takes into account the decrease in out-of-plane capacity due to prior in-plane damage and is capable of capturing the in-plane/out-of-plane interaction effects of the seismic forces. From the correlation between the experimental and macro-model results, empirical equations are developed that can be used to calculate the stress-strain parameters required for defining the compressive behaviour of the struts. With the provided strategy, the geometrical and mechanical parameters required for the struts can be easily identified for numerical modelling of the infill wall. Using the model, the in-plane and out-of-plane responses of the infill wall under lateral loads can be checked. To enrich the information obtained from the experiments regarding the out-of-plane behaviour of infill walls, numerical experimentation is performed using the developed macro-model, covering the range of the infill's geometrical and mechanical properties. From the detailed parametric analysis, the out-of-plane strength of the infill wall is found to be largely influenced by compressive strength, slenderness ratio, aspect ratio and, more importantly, by the level of in-plane damage.
The decay of strength and stiffness due to prior in-plane damage is also largely governed by the strength and the slenderness ratio of the unreinforced masonry infill. Based on the numerical results, empirical equations are proposed for the evaluation of the infilled frame's out-of-plane capacity under in-plane damaged or undamaged conditions. The reliability of the proposed equations is proved by comparisons with experimental results. Finally, a procedure for developing out-of-plane fragility functions is proposed using the developed macro-model. The fragility is calculated assuming uncertainty in the geometric and mechanical properties of infill walls instead of uncertainty in the seismic input. The fragility is defined with respect to the position of the infill wall in a low-rise RC building. Experimental data available in the literature are used for the validation of the output. Overall, the results indicate lower vulnerability in the out-of-plane direction for infill walls without prior in-plane damage and high vulnerability when the infill wall has prior in-plane damage. The proposed procedure can be extended to other types of infill walls depending on the construction technique of the site of interest, obtaining different and specific fragility curves for performing a large-scale risk analysis.
23

Royo, Aznar Ana. "Factores predictivos de la reconstrucción instestinal tras la intervención de Hartmann." Doctoral thesis, Universitat Autònoma de Barcelona, 2017. http://hdl.handle.net/10803/458134.

Abstract:
Introduction. Hartmann's procedure is still a valid alternative in the treatment of pathologies of the left colon or rectum in patients with an ASA score of IV, feculent peritonitis, malnutrition, immunosuppression or hemodynamic instability. Hartmann's procedure is mainly indicated in cases of high risk of anastomotic leakage, local tumor recurrence or anal incontinence. However, the factors related to the decision to restore intestinal continuity are not well established. The main objective of this study was to determine predictive factors of Hartmann's reversal. The analysis of the morbidity of those interventions and the identification of predictive factors of intestinal transit reconstruction could ease an accurate selection of patients and allow individualized preoperative counselling, providing information on the most likely outcomes of the intervention. Material and methods. A retrospective observational study in which all consecutive patients who underwent Hartmann's procedure from January 1999 to December 2014 in a tertiary university hospital were included. No patient with a possible intestinal continuity restoration was excluded. The data collected were classified into 1) patient-specific: age, sex, body mass index, ASA score, Charlson index, anal incontinence; 2) disease-specific: type of disorder (benign vs malignant), main diagnosis, tumor stage, degree of peritoneal contamination; 3) treatment-specific: period of years of surgery, indication of Hartmann's procedure, perioperative and postoperative transfusion, main surgical procedure, type of surgery (elective vs urgent), type of surgeon (general vs colorectal), length of the rectal stump, Clavien-Dindo classification, readmission rate, and causes of non-reversal of Hartmann's procedure. A descriptive analysis was performed. The χ² test or Fisher's exact test was used for categorical variables. Comparisons between groups were made using the Mann-Whitney U test or the Kruskal-Wallis test for continuous variables, where appropriate. Univariate and multivariate binary logistic regression models were used. Further, a classification and regression tree was constructed. Finally, ROC curves for each model were elaborated and compared with the DeLong test. Results. A total of 533 consecutive patients underwent Hartmann's procedure; 110 (20.6%) patients underwent Hartmann's reversal. Mean age was 71.7 years. Multivariate analysis showed that the independent predictors of a higher probability of intestinal transit reversal were age lower than 69 years, ASA grade I or II, indication of Hartmann's procedure for anastomotic leak, and a rectal stump above or at the sacral promontory. However, the independent factors related to a reduced probability of intestinal reconstruction following Hartmann's procedure were anal incontinence, stage IV disease, postoperative transfusion and elective Hartmann's intervention. From the classification tree it is deduced that a patient below 69 years of age who presents low comorbidity, with a rectal stump at or above the promontory, and who did not require perioperative transfusion would have an 85% probability of intestinal transit reconstruction. Discussion. Identification of predictive factors of intestinal continuity restoration may help surgeons to inform the patient and to choose the better option, both before performing a Hartmann's procedure and at the time of indicating reconstruction of intestinal continuity. Conclusion.
Age, ASA score, indication of Hartmann's procedure, length of rectal stump, anal incontinence, tumor stage, postoperative transfusion and elective surgery can predict Hartmann's reversal.
24

Engelmann, James E. "An Information Management and Decision Support tool for Predictive Alerting of Energy for Aircraft." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1595779161412401.

25

Tsai, Meng-Ta, and 蔡孟達. "Chaotic Pig-Divined Prediciton of New Era." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/bk6qx6.

Abstract:
Master's thesis
Taipei National University of the Arts
Graduate Institute of Plastic Arts (造形研究所)
Academic year 96
As a creator in today's complex new era, the artist's observation of a fused, metropolitan mix of relationships and ideologies within our vast contemporary world culture is presented through the creation of the art series "Chaotic Pig". This thesis aims to dissect the author's emotional and physical state through careful self-diagnosis, as an approach to visualizing the icons and symbols that social incidents may bring out through the author's artistic interpretation. Art presents a way of life; for the young generation of Taiwan, fashion trends, media over-exposure, and the World Wide Web have knocked down the boundaries between countries and languages. As part of this new generation, we are accustomed to receiving outside cultural nourishment, or trash, and turning it into information, as something the young must do on a daily basis. The author was blown into this fusion of information and, like a person without a controlled direction or focus, the mind is forced to receive multicultural elements. This recalls a Taiwanese local belief: like the "pig of god" that is force-fed, this kind of local belief is what started the "Chaotic Pig" art series. This thesis seeks to establish a process for creating the icons and symbols of the new era through the fusion of absorbed cultural elements with emotional embellishments. The author takes what Taiwanese youth perceive, as well as the author's personal lifestyle, as the ground rules to further explain the qualities and integrity of today's new generation, juxtaposed with how the author creates through modern means and how the author takes in information and transfers it into the art series. In the process, icons and symbols already established in the world are copied in order to enhance and mock their integrity. Next, decomposing the core elements helps to set the main frame, while other recognizable icons or symbols are slowly added to negate or recreate a brand new image, or yet another recognizable icon. This new icon is then copied or borrowed by other, unknown users to extend its life and purpose in a never-ending cycle. Through the observation and summary of what the "Chaotic Pig" art series brings to the research, the author hopes to be truthful as a new-generation artist and to be able to extend this creative process toward new possibilities in art.
APA, Harvard, Vancouver, ISO, and other styles
26

Liau, Yue-Der, and 廖育德. "A New Approach for Dynamic Brach Prediciton." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/40245513583246514083.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Electrical Engineering
ROC year 87 (1998)
With the wide adoption of deeply-pipelined and widely-issued processors, branch prediction has become more important than ever. The design of an excellent branch predictor is vital to delivering the potential performance of a widely-issued, deeply-pipelined microarchitecture. The performance of processors (especially superscalar pipelined and super-pipelined processors) is degraded when branch instructions occur in the control flow. Because changes in the control flow cannot be predicted, memory cycles are wasted fetching and decoding instructions that will never be used, and pipelines are stalled. If we can decide which instructions to fetch and decode before the control flow changes, some memory cycles can be saved, and overall performance improves accordingly. Our research focuses on improving the accuracy of branch prediction, based on the two-level branch predictor, in order to raise processor performance. We make changes to both the dispatch part and the prediction part and observe the resulting improvement in prediction accuracy. Two prediction schemes, the 2-bit counter scheme and the Markov scheme, are generally used in branch prediction but cannot provide enough prediction accuracy for a high-performance processor, while the PPM algorithm costs too much. Therefore, we replace the prediction part with our own scheme, which can read and compare traces of variable length to make predictions. In the dispatch part, we make changes so that the predictor becomes more suitable for our prediction scheme and the overall prediction accuracy improves.
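For readers unfamiliar with the baseline scheme this abstract compares against, the sketch below implements a plain 2-bit saturating-counter predictor indexed by branch address. It is a textbook illustration under assumed parameters (table size, toy trace), not the thesis's variable-length trace scheme.

```python
# Minimal sketch of the classic 2-bit saturating-counter branch predictor.
# Counter states 0-1 predict not-taken, 2-3 predict taken; counters saturate.
class TwoBitPredictor:
    def __init__(self, table_size=1024):
        self.table = [2] * table_size            # start weakly taken
        self.size = table_size

    def predict(self, pc):
        return self.table[pc % self.size] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = pc % self.size
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)

# Toy trace of (branch address, actual outcome) pairs.
trace = [(0x400, True), (0x400, True), (0x400, False), (0x400, True)]
p = TwoBitPredictor()
correct = 0
for pc, taken in trace:
    correct += (p.predict(pc) == taken)
    p.update(pc, taken)
print(f"accuracy: {correct / len(trace):.2f}")
```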
APA, Harvard, Vancouver, ISO, and other styles
27

Pereira, Pedro Miguel Piedade Mota. "Failure Prediciton - An Application in the Railway Industry." Master's thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/77318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Pereira, Pedro Miguel Piedade Mota. "Failure Prediciton - An Application in the Railway Industry." Dissertation, 2014. https://repositorio-aberto.up.pt/handle/10216/77318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Chih-Hao, and 陳志豪. "Prediciting cox model with time-dependent covaroates." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/47531847645189328084.

Full text
Abstract:
Master's thesis
National Chengchi University
Graduate Institute of Statistics
ROC year 93 (2004)
Covariates whose values change over time are called "time-dependent covariates". Time-dependent covariates are measured repeatedly and often appear in longitudinal data; they can be measured regularly or irregularly. In the regular case, we can ignore the TEL (time elapsed since last observation) effect, and the grouped Cox model or the pooled logistic regression model is employed for the analysis. Pooled logistic regression is an analytic method using the "person-period" approach. The grouped Cox model and the pooled logistic regression model can also be used to predict survival probability. D'Agostino et al. (1990) proved that the pooled logistic regression model is asymptotically equivalent to the grouped Cox model. If time-dependent covariates are observed irregularly, the Cox model under a counting-process formulation may be considered. Before making predictions, the original data must be turned into "person-interval" form; this data form is also suitable for prediction with the grouped Cox model under regular measurement. de Bruijne et al. (2001) first considered TEL as a time-dependent covariate and modelled it with a B-spline function in their proposed extended Cox model. We also show in this paper that TEL is a highly significant time-dependent covariate. The extended Cox model provides an alternative for irregularly measured time-dependent covariates. In addition, we use exponential smoothing with trend to predict future values of the time-dependent covariates; using these predicted values with the extended Cox model, we can then predict survival probability.
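The "person-period" expansion underlying the pooled logistic regression described above can be illustrated as follows. The data frame, column names, and follow-up structure are hypothetical placeholders, not the thesis's data.

```python
# Minimal sketch of the "person-period" expansion for pooled logistic
# regression: each subject contributes one row per interval at risk, with the
# event indicator set only in the interval where the event occurs.
import pandas as pd

subjects = pd.DataFrame({
    "id": [1, 2, 3],
    "followup_periods": [3, 2, 4],   # number of intervals observed
    "event": [1, 0, 1],              # 1 = event occurred in the last interval
})

rows = []
for _, s in subjects.iterrows():
    for t in range(1, int(s.followup_periods) + 1):
        rows.append({
            "id": s.id,
            "period": t,
            # event flagged only in the final interval, and only if it occurred
            "y": int(s.event == 1 and t == s.followup_periods),
        })
person_period = pd.DataFrame(rows)
# person_period can now be fed to an ordinary logistic regression with
# `period` (and any time-dependent covariates) as predictors.
print(person_period)
```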
APA, Harvard, Vancouver, ISO, and other styles
30

Bensch, Michael, Dominik Brugger, Wolfgang Rosenstiel, Martin Bogdan, and Wilhelm Spruth. "Self-Learning Prediciton System for Optimisation of Workload Managememt in a Mainframe Operating System." 2007. https://ul.qucosa.de/id/qucosa%3A32106.

Full text
Abstract:
We present a framework for extraction and prediction of online workload data from the workload manager of a mainframe operating system. To boost overall system performance, the prediction will be incorporated into the workload manager so that preventive action can be taken before a bottleneck develops. Model and feature selection automatically create a prediction model based on given training data, thereby keeping the system flexible. We tailor data extraction, preprocessing and training to this specific task, keeping in mind the nonstationarity of business processes. Using error measures suited to our task, we show that our approach is promising. To conclude, we discuss our first results and give an outlook on future work.
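A generic version of the pipeline this abstract outlines — lagged workload values as features, a model chosen by cross-validated error, then a one-step-ahead forecast — might look like the sketch below. The window length, model choice, and data are assumptions for illustration, not the authors' implementation.

```python
# Rough sketch of a windowed workload-prediction pipeline: lagged values as
# features, error estimated with time-ordered cross-validation, then a
# one-step-ahead forecast. All names and data are illustrative placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(1)
workload = np.sin(np.linspace(0, 40, 400)) + 0.1 * rng.standard_normal(400)

def make_lagged(series, window=10):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

X, y = make_lagged(workload)
cv = TimeSeriesSplit(n_splits=5)                    # respects temporal order
score = cross_val_score(Ridge(alpha=1.0), X, y, cv=cv,
                        scoring="neg_mean_absolute_error").mean()
print("CV mean absolute error:", -score)

model = Ridge(alpha=1.0).fit(X, y)
next_value = model.predict(workload[-10:].reshape(1, -1))   # forecast next point
print("forecast:", next_value[0])
```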
APA, Harvard, Vancouver, ISO, and other styles
31

CHING, CHENG TZU, and 鄭慈靜. "Preschool children, elementary children and adults' models for prediciting teleological action." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/36819173174171927361.

Full text
Abstract:
Master's thesis
National Pingtung Teachers College
Graduate Institute of Elementary Education
ROC year 92 (2003)
This study explores how people at three levels — preschool children, elementary school children, and adults — attribute teleological actions to animals, plants, machines and artifacts. It further analyzes the reasoning models each group uses for the concept of animacy. Individual interviews were conducted with twenty-four preschool children, fifth-grade children, and adults. Each participant was examined under two conditions: a benefit-present condition and a benefit-absent condition. Three primary findings emerged. First, preschool children were largely affected by animism: they presume that all entities can perform teleological actions under the benefit-present condition. Fifth-grade children were only slightly affected by animism, and their responses to teleological actions approximated adults' mature concept of entities. Adults could clearly distinguish animate from inanimate entities. Second, preschool children tend to attribute animacy to all entities — animals, plants, machines and artifacts — and infer the teleological behavior of every entity from either biological or psychological egocentrism. Fifth-grade children explained the teleological actions of animals and plants in terms of living things' survival needs, and used mechanical organization or personified biological and psychological motivations to explain the movements of machines and artifacts. Adults, by contrast, explained entities scientifically: they interpreted the differences between animals and plants in terms of attributes, and the differences between machines and artifacts in terms of physical principles. Third, preschool children employ a model of finalism to reason about the teleological actions of entities. Fifth-grade children form more complicated reasoning models than preschool children, with complexity-based teleology and biology-based teleology as the two major models. Adults employ biology-based teleology to interpret an entity's teleological action, which means adults have a fully developed concept of animacy.
APA, Harvard, Vancouver, ISO, and other styles
32

Smith, Mark Preston. "Prediciting fuel models and subsequent fire behavior from vegetation classification maps." 2003. http://www.lib.ncsu.edu/theses/available/etd-08122003-152132/unrestricted/etd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Faruk, Abu N. "Prediciting Size Effects and Determing Length Scales in Small Scale Metaliic Volumes." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7981.

Full text
Abstract:
The purpose of this study is to develop an understanding of the behavior of metallic structures at small scales. Structural materials display strong size dependence when deformed non-uniformly into the inelastic range, a phenomenon widely known as the size effect. The primary focus of this study is on developing analytical models that predict some of the most commonly observed size effects in structural metals and validating them against experimental results. A nonlocal, rate-dependent and gradient-dependent theory of plasticity, formulated within a thermodynamically consistent framework, is adopted for this purpose. The developed gradient plasticity theory is applied to study size effects observed in biaxial and thermal loading of thin films and in indentation tests. One important intrinsic material property associated with this study is the material length scale; the work also presents models for predicting length scales and discusses their physical interpretations. The proposed theory is found to interpret successfully the indentation size effects in micro/nano-hardness obtained with pyramidal or spherical indenters, and gives a sound interpretation of the size effects in thin films under biaxial or thermal loading.
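For context, the best-known phenomenological form of the indentation size effect that gradient-plasticity models of this kind aim to reproduce is the Nix–Gao relation. The expression below is the standard relation from the literature, not necessarily the exact model derived in the thesis.

```latex
% Nix--Gao indentation size effect: hardness H at indentation depth h relative
% to the macroscopic hardness H_0, with h^* a characteristic material length scale.
\[
  \frac{H}{H_0} = \sqrt{1 + \frac{h^{*}}{h}}
\]
```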
APA, Harvard, Vancouver, ISO, and other styles
34

"Radiation Transport Modelling in a Tokomak Plasma: Application to Performance Prediciton and Design of Future Machines." Universitat Politècnica de Catalunya, 2001. http://www.tesisenxarxa.net/TDX-0114104-103202/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Bhaskaran, Ganesh. "Prediciting the corrosion and stress corrosion performance of copper in anaerobic sulfide solution." Thesis, 2010. http://hdl.handle.net/1807/25432.

Full text
Abstract:
The stress corrosion cracking (SCC) susceptibility of phosphorus-deoxidized copper has been evaluated in synthetic seawater polluted with sulfides using the slow strain rate test (SSRT). The effects of sulfide concentration, temperature, and applied cathodic and anodic potentials on the final strain values and maximum stress were also studied. No cracks were found under the tested conditions. The final strain and maximum stress values decreased, though not significantly, with increasing temperature, applied anodic potential and sulfide concentration. The observed effect is due to section reduction by uniform corrosion. Lateral cross-sections and microscopic examination of the fractured specimens ruled out the existence of localized corrosion. Electrochemical measurements showed that the Cu2S film is not protective and also imposes a mass-transfer limitation on the inward diffusion of sulfides. Based on these results, the reasons for the absence of cracking are discussed.
APA, Harvard, Vancouver, ISO, and other styles
36

Pawlak, Daniel T. "Development and evaluation of a shortwave full-spectrum correlated K-distribution radiative transfer algorithm for numerical weather prediciton." 2004. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-645/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Obiosa-Maife, Collins. "Predicition of the molecular structure of ill-defined hydrocarbons using vibrational, 1H, and 13C NMR spectroscopy." Master's thesis, 2009. http://hdl.handle.net/10048/803.

Full text
Abstract:
This represents a proof-of-concept study of the suitability of vibrational and NMR spectroscopy for predicting the molecular structure of large molecules on the basis of a library of small molecules. Density Functional Theory (DFT) at the B3LYP/6-311G level was used to generate all spectra. Twenty model compounds, each comprising two multi-ring polynuclear aromatic hydrocarbons (PAHs) connected by aliphatic chains of varying length, were investigated. A least-squares optimization algorithm was developed to determine the contribution of molecular subunits to the model compounds. 1H and 13C NMR spectroscopy failed to identify subunits unambiguously, even with a constrained library. By contrast, IR and Raman results independently identified 40% and 65%, respectively, and jointly more than 80% of the aromatic groups present; however, the aliphatic chain length was in general poorly defined. IR and Raman spectroscopy are a suitable basis for spectral decomposition and should play a greater role in the identification of ringed subunits present in ill-defined hydrocarbons.
Chemical Engineering
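The least-squares decomposition of a measured spectrum into contributions from library subunits, as described in this abstract, can be sketched with a non-negative least-squares solve. The library and the "measured" spectrum below are synthetic placeholders, not the DFT-generated spectra used in the thesis.

```python
# Minimal sketch: decompose a spectrum into non-negative contributions of
# library subunit spectra via non-negative least squares (NNLS).
import numpy as np
from scipy.optimize import nnls

n_points, n_subunits = 200, 5
rng = np.random.default_rng(0)
library = np.abs(rng.standard_normal((n_points, n_subunits)))  # columns = subunit spectra
true_weights = np.array([0.0, 1.5, 0.0, 0.7, 0.0])             # sparse mixture
measured = library @ true_weights + 0.01 * rng.standard_normal(n_points)

weights, residual = nnls(library, measured)   # solve min ||A x - b|| with x >= 0
print("estimated contributions:", np.round(weights, 2))
print("residual norm:", residual)
```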
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Ye-Nong, and 劉儀農. "Survey of Compliance with Labeling in Commercial Beef Jerky and Effects of Different Packagings on Shelf Life Predicition of Homemade Beef Jerky." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/49j5qs.

Full text
Abstract:
Master's thesis
Taipei University of Marine Technology
Master's Program, Department of Food Technology and Marketing
ROC year 107 (2018)
Beef jerky is a popular traditional Chinese snack. It is enjoyed by people of all ages owing to the many unique flavors available, its convenience, long shelf life, and portability. For this study, 50 beef jerky products from 9 manufacturers were obtained from 6 major retail franchises between January and June 2016. The labeling of the commercially available beef jerkies was evaluated; it should comply with Article 22 of the Act Governing Food Safety and Sanitation in terms of product naming, ingredient declaration, net weight, place of origin, and additive, manufacturer, and nutritional details. As jerky is perishable, food additives are often introduced during manufacturing to extend the shelf life of the product to 120-270 days, a long shelf life required mainly by long transport times and channel distribution needs. A further objective of this study was to report and compare the microbial changes and water activity variation in homemade air-dried beef jerky produced with and without antioxidant additives. The products were separately packaged in sealed transparent zip-lock bags, aluminum foil zip-lock bags, and high-barrier zip-lock bags, and stored for 21 days at a constant temperature of 32°C. The following observations were collected over the 21-day test period. Regardless of packaging type, beef jerkies with added antioxidants had a longer shelf life than jerkies without antioxidants. For samples without antioxidants, the microbial counts at day 21 in the transparent, aluminum foil, and high-barrier zip-lock bags were 6.40×10⁴, 2.80×10⁴, and 3.90×10⁵ CFU/g, respectively. Food products produced at home for commercial distribution may carry higher food safety risks than food products produced in industrial settings, owing to problems such as a producer's lack of proper food-handling knowledge, environmental contamination, cross-contamination of food-processing equipment and cookware, and other indeterminable factors. Further education, especially in relation to safe food handling and processing, should be mandated by the regulatory body and adopted by all manufacturers to ensure the total elimination of such risks to consumers.
APA, Harvard, Vancouver, ISO, and other styles
39

PAPARCONE, RAFFAELLA, Stefano MOROSETTI, Anita SCIPIONI, and SANTIS Pasquale DE. "Superstructural information in DNA sequences: from structural toward functional genomics." Doctoral thesis, 2005. http://hdl.handle.net/11573/391173.

Full text
Abstract:
Although DNA is iconized as a straight double helix, it does not exist in this canonical form in biological systems. Instead, it is characterized by sequence-dependent structural and dynamic deviations from the monotonous regularity of canonical B-DNA. Despite the complexity of the system, we showed that the large-scale structural and dynamic properties of DNA can be predicted from the nucleotide sequence alone by adopting a statistical approach. The paper reports the statistical analysis of large pools of different prokaryotic genes in terms of sequence-dependent curvature and flexibility. Conserved features characterize the regions close to the translation start site and are related to their function in the regulation system. In addition, regular patterns with three-fold periodicity were found in the coding regions; they were reproduced in terms of the nucleotide frequencies expected on the basis of the genetic code and the corresponding occurrence of the amino acid residues.
APA, Harvard, Vancouver, ISO, and other styles