
Theses on the topic "Evaluation of extreme classifiers"


Consult the 49 best theses for your research on the topic "Evaluation of extreme classifiers".


1

Legrand, Juliette. "Simulation and assessment of multivariate extreme models for environmental data". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASJ015.

Full text
Abstract
Accurate estimation of the occurrence probabilities of extreme environmental events is a major issue for risk assessment. For example, in coastal engineering, the design of structures installed at or near the coast must be such that they can withstand the most severe events they may encounter in their lifetime. This thesis focuses on the simulation of multivariate extremes, motivated by applications to significant wave height, and on the evaluation of models predicting the occurrence of extreme events. In the first part of the manuscript, we propose and study a stochastic simulator that, given offshore conditions, jointly produces offshore and coastal extreme significant wave heights (Hs). We rely on the bivariate peaks-over-threshold approach and develop a non-parametric simulation scheme for bivariate generalised Pareto distributions. From this joint simulator, we derive a conditional simulation model. Both simulation algorithms are applied to numerical experiments and to extreme Hs near the French Brittany coast. A further development addresses the marginal modelling of Hs. To take non-stationarities into account, we adapt the extended generalised Pareto model, letting the marginal parameters vary with the peak period and the peak direction. The second part of this thesis provides a more theoretical development. To evaluate different prediction models for extremes, we study the specific case of binary classifiers, which are the simplest type of forecasting and decision-making situation: an extreme event did or did not occur. Risk functions adapted to binary classification of extreme events are developed, answering our second question. Their properties are derived within the framework of multivariate regular variation and hidden regular variation, allowing finer forms of asymptotic independence to be handled. This framework is applied to extreme river discharges.
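Illustrative aside (not code from the thesis): the univariate building block of the peaks-over-threshold approach mentioned above can be sketched by fitting a generalized Pareto distribution to exceedances over a high threshold and combining it with the exceedance rate to estimate a tail probability. The data, threshold and design level below are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hs = rng.gumbel(loc=2.0, scale=0.8, size=5000)      # stand-in for significant wave height, m

threshold = np.quantile(hs, 0.95)                   # high threshold for peaks-over-threshold
exceedances = hs[hs > threshold] - threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)

# Tail probability of a design level: exceedance rate times the fitted GPD survival function
design_level = threshold + 2.0
zeta = np.mean(hs > threshold)
p_design = zeta * stats.genpareto.sf(design_level - threshold, shape, loc, scale)
print(f"threshold = {threshold:.2f} m, GPD shape = {shape:.3f}, "
      f"P(Hs > {design_level:.2f} m) = {p_design:.5f}")
```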
2

Lavesson, Niklas. "Evaluation and Analysis of Supervised Learning Algorithms and Classifiers". Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2006. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/c655a0b1f9f88d16c125714c00355e5d?OpenDocument.

Full text
3

Nygren, Rasmus. "Evaluation of hyperparameter optimization methods for Random Forest classifiers". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301739.

Full text
Abstract
In order to create a machine learning model, one is often tasked with selecting certain hyperparameters which configure the behavior of the model. The performance of the model can vary greatly depending on how these hyperparameters are selected, thus making it relevant to investigate the effects of hyperparameter optimization on the classification accuracy of a machine learning model. In this study, we train and evaluate a Random Forest classifier whose hyperparameters are set to default values and compare its classification accuracy to another classifier whose hyperparameters are obtained through the use of the hyperparameter optimization (HPO) methods Random Search, Bayesian Optimization and Particle Swarm Optimization. This is done on three different datasets, and each HPO method is evaluated based on the classification accuracy change it induces across the datasets. We found that every HPO method yielded a total classification accuracy increase of approximately 2-3% across all datasets compared to the accuracies obtained using the default hyperparameters. However, due to limitations of time, data and computational resources, no assertions can be made as to whether the observed positive effect is generalizable at a larger scale. Instead, we could conclude that the utility of HPO methods is dependent on the dataset at hand.
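Illustrative aside: a minimal sketch of the kind of comparison described above, a Random Forest with default hyperparameters versus one tuned by random search, using scikit-learn. The dataset, search space and budget are arbitrary placeholders, and Bayesian or particle swarm optimization would require additional libraries.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Baseline: Random Forest with default hyperparameters
default_acc = cross_val_score(RandomForestClassifier(random_state=42), X, y, cv=5).mean()

# Tuned: random search over a small, arbitrary hyperparameter space
param_space = {
    "n_estimators": [100, 200, 400],
    "max_depth": [None, 5, 10, 20],
    "min_samples_split": [2, 5, 10],
    "max_features": ["sqrt", "log2"],
}
search = RandomizedSearchCV(RandomForestClassifier(random_state=42), param_space,
                            n_iter=20, cv=5, random_state=42)
search.fit(X, y)

print(f"default accuracy: {default_acc:.3f}")
print(f"tuned accuracy:   {search.best_score_:.3f} with {search.best_params_}")
```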
4

Dang, Robin and Anders Nilsson. "Evaluation of Machine Learning classifiers for Breast Cancer Classification". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280349.

Full text
Abstract
Breast cancer is a common and fatal disease among women globally, where early detection is vital to improve the prognosis of patients. In today's digital society, computers and complex algorithms can evaluate and diagnose diseases more efficiently and with greater certainty than experienced doctors. Several studies have been conducted to automate medical imaging techniques, using machine learning, to predict and detect breast cancer. In this report, the suitability of using machine learning to classify breast tumours as benign or malignant is evaluated. More specifically, five different machine learning methods are examined and compared. Furthermore, we investigate how the efficiency of the methods, with regard to classification accuracy and execution time, is affected by the preprocessing method Principal component analysis and the ensemble method Bootstrap aggregating. In theory, both methods should favor certain machine learning methods and consequently increase the classification accuracy. The study is based on a well-known breast cancer dataset from Wisconsin, which is used to train the algorithms. The results were evaluated with statistical methods covering classification accuracy, sensitivity and execution time, and then compared across the different classifiers. The study showed that neither Principal component analysis nor Bootstrap aggregating resulted in any significant improvement in classification accuracy. However, the results showed that the support vector machine classifiers were the best performers. As the study was limited in the number of datasets and in the choice of evaluation methods and their associated adjustments, it is uncertain whether the obtained results can be generalized to other datasets or populations.
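For illustration only (not the thesis' code): a sketch comparing a plain SVM, a PCA-preprocessed SVM and a bagged SVM by cross-validation on the Wisconsin diagnostic breast cancer data shipped with scikit-learn. The pipeline choices and parameters are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin diagnostic breast cancer data

candidates = {
    "SVC (RBF)  ": make_pipeline(StandardScaler(), SVC()),
    "PCA + SVC  ": make_pipeline(StandardScaler(), PCA(n_components=10), SVC()),
    "Bagged SVC ": make_pipeline(StandardScaler(),
                                 BaggingClassifier(SVC(), n_estimators=10, random_state=0)),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name} mean accuracy = {scores.mean():.3f} (std {scores.std():.3f})")
```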
5

Fischer, Manfred M., Sucharita Gopal, Petra Staufer-Steinnocher and Klaus Steinocher. "Evaluation of Neural Pattern Classifiers for a Remote Sensing Application". WU Vienna University of Economics and Business, 1995. http://epub.wu.ac.at/4184/1/WSG_DP_4695.pdf.

Full text
Abstract
This paper evaluates the classification accuracy of three neural network classifiers on a satellite image-based pattern classification problem. The neural network classifiers used include two types of Multi-Layer Perceptron (MLP) and the Radial Basis Function Network. A normal (conventional) classifier is used as a benchmark to evaluate the performance of the neural network classifiers. The satellite image consists of 2,460 pixels selected from a section (270 x 360) of a Landsat-5 TM scene of the city of Vienna and its northern surroundings. In addition to the evaluation of classification accuracy, the neural classifiers are analysed for generalization capability and stability of results. The best overall results (in terms of accuracy and convergence time) are provided by the MLP-1 classifier with weight elimination. It has a small number of parameters and requires no problem-specific system of initial weight values. Its in-sample classification error is 7.87% and its out-of-sample classification error is 10.24% for the problem at hand. Four classes of simulations serve to illustrate the properties of the classifier in general and the stability of the results with respect to control parameters: the training time, the gradient descent control term, initial parameter conditions, and different training and testing sets. (authors' abstract)
Series: Discussion Papers of the Institute for Economic Geography and GIScience
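Illustrative aside: estimating in-sample versus out-of-sample classification error for a small multi-layer perceptron can be sketched as below. The synthetic data stands in for the Landsat pixels, and the network is not the weight-elimination MLP-1 used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for multispectral pixels belonging to several land-cover classes
X, y = make_classification(n_samples=2460, n_features=6, n_informative=5, n_redundant=0,
                           n_classes=4, n_clusters_per_class=1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1))
mlp.fit(X_train, y_train)

print(f"in-sample error:     {1 - mlp.score(X_train, y_train):.2%}")
print(f"out-of-sample error: {1 - mlp.score(X_test, y_test):.2%}")
```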
6

Alorf, Abdulaziz Abdullah. "Primary/Soft Biometrics: Performance Evaluation and Novel Real-Time Classifiers". Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96942.

Full text
Abstract
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. In this dissertation, we proposed a real-time model for classifying 40 facial attributes, which preprocesses faces and then extracts 7 types of classical and deep features. These features were fused together to train 3 different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. We also developed a real-time model for classifying the states of human eyes and mouth (open/closed), and the presence/absence of eyeglasses in the wild. Our method begins by preprocessing a face by cropping the regions of interest (ROIs), and then describing them using RootSIFT features. These features were used to train a nonlinear support vector machine for each attribute. Our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied with deep learning classifiers as the top performers. We also introduced a new facial attribute related to Middle Eastern headwear (called igal) along with its detector. Our proposed idea was to detect the igal using a linear multiscale SVM classifier with a HOG descriptor. Thereafter, false positives were discarded using dense SIFT filtering, bag-of-visual-words decomposition, and nonlinear SVM classification. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick a robust face detector for their intended applications.
Doctor of Philosophy
The relevance of faces in our daily lives is indisputable. We learn to recognize faces as newborns, and faces play a major role in interpersonal communication. Faces probably represent the most accurate biometric trait in our daily interactions. It is therefore not surprising that so much effort from computer vision researchers has been invested in the analysis of faces. The automatic detection and analysis of faces within images has therefore received much attention in recent years. The spectrum of computer vision research about face analysis includes, but is not limited to, face detection and facial attribute classification, which are the focus of this dissertation. The face is a primary biometric because by itself it reveals the subject's identity, while facial attributes (such as hair color and eye state) are soft biometrics because by themselves they do not reveal the subject's identity. Soft biometrics have many uses in the field of biometrics: (1) they can be utilized in a fusion framework to strengthen the performance of a primary biometric system; for example, fusing a face with voice accent information can boost the performance of face recognition. (2) They can also be used to create qualitative descriptions about a person, such as being an "old bald male wearing a necktie and eyeglasses." Face detection and facial attribute classification are not easy problems because of many factors, such as image orientation, pose variation, clutter, facial expressions, occlusion, and illumination, among others. In this dissertation, we introduced novel techniques to classify more than 40 facial attributes in real time. Our techniques followed the general facial attribute classification pipeline, which begins by detecting a face and ends by classifying facial attributes. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. The new facial attribute was fused with a face detector to improve the detection performance. In addition, we proposed a new method to evaluate the robustness of face detection, which is the first process in the facial attribute classification pipeline. Detecting the states of human facial attributes in real time is highly desired by many applications. For example, the real-time detection of a driver's eye state (open/closed) can prevent severe accidents. These systems are usually called driver drowsiness detection systems. For classifying 40 facial attributes, we proposed a real-time model that preprocesses faces by localizing facial landmarks to normalize the faces, and then crops them based on the intended attribute. The face was cropped only if the intended attribute was inside the face region. After that, 7 types of classical and deep features were extracted from the preprocessed faces. Lastly, these 7 types of feature sets were fused together to train three different classifiers. Our proposed model yielded an average accuracy of 91.93%, outperforming 7 state-of-the-art models. It also achieved state-of-the-art performance in classifying 14 out of 40 attributes. We also developed a real-time model that classifies the states of three human facial attributes: (1) eyes (open/closed), (2) mouth (open/closed), and (3) eyeglasses (present/absent). Our proposed method consisted of six main steps: (1) In the beginning, we detected the human face. (2) Then we extracted the facial landmarks. (3) Thereafter, we normalized the face, based on the eye location, to the full frontal view.
(4) We then extracted the regions of interest (i.e., the regions of the mouth, left eye, right eye, and eyeglasses). (5) We extracted low-level features from each region and then described them. (6) Finally, we learned a binary classifier for each attribute to classify it using the extracted features. Our developed model achieved 30 FPS with a CPU-only implementation, and our eye-state classifier achieved the top performance, while our mouth-state and glasses classifiers were tied with deep learning classifiers as the top performers. We also introduced a new facial attribute related to Middle Eastern headwear along with its detector. After that, we fused it with a face detector to improve the detection performance. The traditional Middle Eastern headwear that men usually wear consists of two parts: (1) the shemagh or keffiyeh, which is a scarf that covers the head and usually has checkered and pure white patterns, and (2) the igal, which is a band or cord worn on top of the shemagh to hold it in place. The shemagh causes many unwanted effects on the face; for example, it usually occludes some parts of the face and adds dark shadows, especially near the eyes. These effects substantially degrade the performance of face detection. To improve the detection of people who wear the traditional Middle Eastern headwear, we developed a model that can be used as a head detector or combined with current face detectors to improve their performance. Our igal detector consists of two main steps: (1) learning a binary classifier to detect the igal and (2) refining the classifier by removing false positives. Due to the similarity in real-life applications, we compared the igal detector with state-of-the-art face detectors, where the igal detector significantly outperformed the face detectors with the lowest false positives. We also fused the igal detector with a face detector to improve the detection performance. Face detection is the first process in any facial attribute classification pipeline. As a result, we reported a novel study that evaluates the robustness of current face detectors based on: (1) diffraction blur, (2) image scale, and (3) the IoU classification threshold. This study would enable users to pick a robust face detector for their intended applications. Biometric systems that use face detection suffer from large performance fluctuations. For example, users of biometric surveillance systems that utilize face detection sometimes notice that state-of-the-art face detectors do not show good performance compared with outdated detectors. Although state-of-the-art face detectors are designed to work in the wild (i.e., with no need to retrain, revalidate, and retest), they still heavily depend on the datasets they were originally trained on. This in turn leads to variation in the detectors' performance when they are applied to a different dataset or environment. To overcome this problem, we developed a novel optics-based blur simulator that automatically introduces diffraction blur at different image scales/magnifications. Then we evaluated different face detectors on the output images using different IoU thresholds. Users first choose their own values for these three settings and then run our model to identify the efficient face detector under the selected settings. This means our proposed model would enable users of biometric systems to pick an efficient face detector based on their system setup.
Our results showed that sometimes outdated face detectors outperform state-of-the-art ones under certain settings and vice versa.
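Illustrative aside: the "HOG descriptor plus linear SVM" step described for the igal detector can be sketched as follows with random stand-in patches. A real detector would need labelled positive and background patches plus a multiscale sliding window, so the accuracy here is only chance level.

```python
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in data: random 64x64 grayscale patches with random labels, so the
# resulting accuracy is only chance level; a real detector needs labelled
# positive (igal) patches and background patches.
patches = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

# Describe each patch with a HOG feature vector
features = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
print("patch classification accuracy:", clf.score(X_test, y_test))
```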
7

Ayhan, Tezer Bahar. "Damage evaluation of civil engineering structures under extreme loadings". PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00975488.

Full text
Abstract
In many industrial and scientific domains, especially in civil and mechanical engineering, the materials used are highly heterogeneous at the microstructure scale, which makes their mechanical behavior difficult to predict. This can make the prediction of the behavior of a structure subjected to various loading types, which is necessary for sustainable design, quite difficult. The construction of civil engineering structures is regulated all over the world: standards are increasingly stringent and must account, up to a limit state, for different loadings, including severe loadings such as impact or earthquake. Behavior models of materials and structures must incorporate these design criteria and thereby become more complex and highly nonlinear. These models, often based on phenomenological approaches, are capable of reproducing the material response up to the ultimate level. The stress-strain response of materials under cyclic loading, which much research in recent years has sought to characterize and model, is defined by different kinds of cyclic plasticity properties such as cyclic hardening, ratcheting and relaxation. Existing constitutive models can simulate these responses reasonably well, but some simulations of structural responses and of local and global deformation still fail. These shortcomings can be addressed by developing robust constitutive models, guided by experiments and by an understanding of how the different inelastic behavior mechanisms work together. This dissertation develops a phenomenological constitutive model capable of coupling two basic inelastic behavior mechanisms, plasticity and damage, by studying their cyclic inelastic features. In both the plasticity and the damage parts, isotropic as well as linear kinematic hardening effects are taken into account. The main advantage of the model is the use of independent plasticity and damage criteria for describing the inelastic mechanisms. Another advantage concerns the numerical implementation of the model in a hybrid-stress variational framework, resulting in much enhanced accuracy and efficient computation of stresses and internal variables in each element. The model is assessed by simulating hysteresis loop shapes, cyclic hardening, cyclic relaxation, and finally a series of ratcheting responses under uniaxial loading. Overall, this dissertation demonstrates a methodical and systematic development of a constitutive model for simulating a broad set of cyclic responses. Several illustrative examples are presented to confirm the accuracy and efficiency of the proposed formulation in application to cyclic loading.
8

Zuzáková, Barbora. "Exchange market pressure: an evaluation using extreme value theory". Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199589.

Full text
Abstract
This thesis discusses the phenomenon of currency crises; in particular, it is devoted to the empirical identification of crisis periods. As a crisis indicator, we utilize an exchange market pressure index, which has proved to be a very powerful tool for quantifying exchange market pressure. Since the computation of the exchange market pressure index is crucial for further analysis, we pay special attention to different approaches to its construction. In the majority of the existing literature on exchange market pressure models, a currency crisis is defined as a period of time when the exchange market pressure index exceeds a predetermined level. In contrast to this, we incorporate a probabilistic approach using extreme value theory. Our goal is to show that stochastic methods are more accurate, in other words, that they are more reliable instruments for crisis identification. We illustrate the application of the proposed method on a selected sample of four central European countries over the period 1993-2012, or 1993-2008 respectively, namely the Czech Republic, Hungary, Poland and Slovakia. The choice of the sample is motivated by the fact that these countries underwent transition reforms to market economies at the beginning of the 1990s and therefore could have been exposed to speculative attacks on their newly arisen currencies. These countries are often assumed to be a relatively homogeneous group at a similar stage of the integration process. Thus, a similar development of exchange market pressure, particularly during the last third of the estimation period, would not be surprising.
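For illustration only (not the thesis' specification): a sketch contrasting the conventional fixed-threshold crisis rule with a GPD-based tail quantile on a toy exchange market pressure index. The index weights, threshold and quantile are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Components of a toy monthly exchange-market-pressure (EMP) index
d_exchange_rate = rng.normal(0, 1.0, 240)   # exchange rate depreciation
d_reserves      = rng.normal(0, 1.0, 240)   # change in foreign reserves
d_interest      = rng.normal(0, 1.0, 240)   # change in the interest rate differential

# Simple variance-weighted index (the weighting scheme is a placeholder)
emp = (d_exchange_rate / d_exchange_rate.std()
       - d_reserves / d_reserves.std()
       + d_interest / d_interest.std())

# Conventional rule: crisis when the index exceeds mean + 2 standard deviations
fixed_flag = emp > emp.mean() + 2 * emp.std()

# EVT-based rule: fit a GPD to exceedances over a high threshold and flag
# observations beyond the estimated 99th percentile of the index
u = np.quantile(emp, 0.90)
c, loc, scale = stats.genpareto.fit(emp[emp > u] - u, floc=0)
zeta = np.mean(emp > u)
q99 = u + stats.genpareto.ppf(1 - (1 - 0.99) / zeta, c, loc, scale)
evt_flag = emp > q99

print("fixed-threshold crises:", int(fixed_flag.sum()), "| EVT-based crises:", int(evt_flag.sum()))
```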
9

Buolamwini, Joy Adowaa. "Gender shades : intersectional phenotypic and demographic evaluation of face datasets and gender classifiers". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/114068.

Full text
Abstract
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 103-116).
This thesis (1) characterizes the gender and skin type distribution of IJB-A, a government facial recognition benchmark, and Adience, a gender classification benchmark, (2) outlines an approach for capturing images with more diverse skin types, which is then applied to develop the Pilot Parliaments Benchmark (PPB), and (3) uses PPB to assess the classification accuracy of Adience, IBM, Microsoft, and Face++ gender classifiers with respect to gender, skin type, and the intersection of skin type and gender. The datasets evaluated are overwhelmingly lighter-skinned: 79.6% - 86.24%. IJB-A includes only 24.6% females and 4.4% darker females, and features 59.4% lighter males. By construction, Adience achieves rough gender parity at 52.0% female but has only 13.76% darker skin. The Parliaments method for creating a more skin-type-balanced benchmark resulted in a dataset that is 44.39% female and 47% darker-skinned. An evaluation of four gender classifiers revealed that a significant gap exists when comparing gender classification accuracies of females vs. males (9 - 20%) and darker skin vs. lighter skin (10 - 21%). Lighter males were in general the best classified group, and darker females were the worst classified group. 37% - 83% of classification errors resulted from the misclassification of darker females. Lighter males contributed the least to overall classification error (.4% - 3%). For the best performing classifier, darker females were 32 times more likely to be misclassified than lighter males. To increase the accuracy of these systems, more phenotypically diverse datasets need to be developed. Benchmark performance metrics need to be disaggregated not just by gender or skin type but by the intersection of gender and skin type. At a minimum, human-focused computer vision models should report accuracy on four subgroups: darker females, lighter females, darker males, and lighter males. The thesis concludes with a discussion of the implications of misclassification and the importance of building inclusive training sets and benchmarks.
by Joy Adowaa Buolamwini.
S.M.
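Illustrative aside: the disaggregated reporting recommended above reduces to grouped accuracy computations. A toy sketch with fabricated labels:

```python
import pandas as pd

# Toy evaluation table: one row per face image, with subgroup labels and a flag
# for whether the gender classifier was correct (all values are made up).
df = pd.DataFrame({
    "skin":    ["darker", "darker", "lighter", "lighter", "darker", "lighter", "darker", "lighter"],
    "gender":  ["female", "male",   "female",  "male",    "female", "female",  "male",   "male"],
    "correct": [0,        1,        1,         1,         0,        1,         1,        1],
})

# Accuracy disaggregated by gender, by skin type, and by their intersection
print(df.groupby("gender")["correct"].mean(), "\n")
print(df.groupby("skin")["correct"].mean(), "\n")
print(df.groupby(["skin", "gender"])["correct"].mean())
```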
10

Pydipati, Rajesh. "Evaluation of classifiers for automatic disease detection in citrus leaves using machine vision". [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006991.

Full text
11

Lantz, Linnea. "Evaluation of the Robustness of Different Classifiers under Low- and High-Dimensional Settings". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385554.

Full text
Abstract
This thesis compares the performance and robustness of five different varieties of discriminant analysis, namely linear (LDA), quadratic (QDA), generalized quadratic (GQDA), diagonal linear (DLDA) and diagonal quadratic (DQDA) discriminant analysis, under elliptical distributions and small sample sizes. By means of simulations, the performance of the classifiers is compared with respect to separation of mean vectors, sample size, number of variables, degree of non-normality and covariance structure. Results show that QDA is competitive under most settings, but can be outperformed by other classifiers with increasing sample size and when the covariance structures across classes are similar. Other noteworthy results include the sensitivity of DQDA to non-normality and the dependence of GQDA's performance on whether sample sizes are balanced or not.
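Illustrative aside: comparing two of the studied classifiers (LDA and QDA) on simulated data with class-specific covariance structures might look as follows. The dimensions, sample sizes and covariances are arbitrary choices, not those of the thesis.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 50   # small sample size per class

# Two classes with different (diagonal) covariance structures
X0 = rng.multivariate_normal([0, 0, 0], np.diag([1.0, 1.0, 1.0]), size=n)
X1 = rng.multivariate_normal([1, 1, 1], np.diag([3.0, 0.5, 2.0]), size=n)
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis())]:
    print(name, "cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```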
12

Yanko, William Andrew. "Experimental and numerical evaluation of concrete spalling during extreme thermal loading". [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006380.

Full text
13

Wilson, P. C. "Construction and evaluation of probabilistic classifiers to detect acute myocardial infarction from multiple cardiac markers". Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.411812.

Full text
14

Nyaupane, Narayan. "STATISTICAL EVALUATION OF HYDROLOGICAL EXTREMES ON STORMWATER SYSTEM". OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2300.

Full text
Abstract
Climate models anticipate higher extreme precipitation and streamflow in the future for various regions. Urban stormwater facilities are vulnerable to these changes because their design assumes stationarity; however, recent climate change studies have questioned whether the climate is stationary. The distribution that best describes extreme precipitation also varies spatially, so a single distribution may not fit everywhere. In this research, two different natural extremes were analyzed for two separate study areas. First, future design storm depths based on the assumption of climate stationarity and the GEV distribution were compared with a non-stationary, best-fit-distribution approach. Second, future design floods were analyzed and routed along a river to estimate future flooding. Climate model output from the North American Regional Climate Change Assessment Program (NARCCAP) and the Coupled Model Intercomparison Project phase 5 (CMIP5) was fitted to 27 different distributions using Chi-square and Kolmogorov-Smirnov goodness-of-fit tests. The best-fit distribution was used to calculate design storm depths as well as design floods. Climate change scenarios were incorporated through delta change factors, a downscaling approach that transfers historical design values to climate-adjusted future design values. Most of the calculated delta change factors were greater than one, indicating a strong climate change impact on the future. HEC-HMS and HEC-RAS models were used to simulate the stormwater infrastructure and river flow. The results show an adverse effect on stormwater infrastructure in the future. The research highlights the importance of available climate information and suggests a possible approach for climate change adaptation in stormwater design practice.
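For illustration only: the "fit several candidate distributions and select the best by a Kolmogorov-Smirnov test" step can be sketched as below with a handful of scipy distributions (the thesis compares 27) and synthetic annual-maximum precipitation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max_precip = stats.genextreme.rvs(c=-0.1, loc=60, scale=15, size=60, random_state=rng)

# A few candidate distributions (the thesis compares 27)
candidates = [stats.genextreme, stats.gumbel_r, stats.lognorm, stats.gamma, stats.pearson3]

results = []
for dist in candidates:
    params = dist.fit(annual_max_precip)
    ks_stat, p_value = stats.kstest(annual_max_precip, dist.cdf, args=params)
    results.append((ks_stat, p_value, dist, params))

# Best fit = smallest KS statistic; use it for a design storm, e.g. the 100-year depth
ks, p, best_dist, params = min(results, key=lambda r: r[0])
depth_100yr = best_dist.ppf(1 - 1 / 100, *params)
print(f"best fit: {best_dist.name} (KS = {ks:.3f}, p = {p:.2f}); "
      f"100-year depth = {depth_100yr:.1f} mm")
```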
15

Watkins, Bobby Gene II. "Materials selection and evaluation of Cu-W particulate composites for extreme electrical contacts". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39494.

Full text
Abstract
Materials for extreme electrical contacts need to have high electrical conductivity coupled with good structural properties. Potential applications include motor contacts, high-power switches, and the components of electromagnetic launch (EML) systems. In particular, the lack of durability of these materials in rail components limits practical EML implementation. These rails experience significant amounts of Joule heating, due to extreme current densities, and subsequent thermally-assisted wear. New, more durable materials solutions are needed for these components. A systematic materials selection study was executed to identify and compare candidate materials solutions. Several possible non-dominated candidate materials, as well as hybrid materials that could potentially fill the "white spaces" on the Ashby charts, were identified. A couple of potential candidate materials were obtained and evaluated. These included copper-tungsten (W-Cu), "self-lubricating" graphite-impregnated Cu, and Gr-W-Cu composites with different volume fractions of the constituents. The structure-property relations were determined through mechanical and electrical resistivity testing. A unique test protocol for exposing mechanical test specimens to extreme current densities up to 1.2 GA/m2 was developed and used to evaluate these candidate materials. The systematic design of multi-functional materials for these extreme electrical contacts requires more than an empirical approach. Without a good understanding of both the tribological and structural performance, the optimization of the microstructure will not be quickly realized. By using micromechanics modeling and other materials design modeling tools coupled with systematic mechanical and tribological experiments, the design of materials for these applications can potentially be accelerated. In addition, using these tools, more complex functionally-graded materials tailored to the application can be systematically designed. In this study, physics- and micromechanics-based models were used to correlate properties to the volume fraction of the constituents of the evaluated candidate materials. Properties correlated included the density, elastic modulus, hardness, strength, and electrical resistivity of the W-Cu materials.
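As an aside (not the micromechanics models used in the study), a simple rule-of-mixtures estimate of composite density, together with series/parallel bounds on electrical resistivity versus tungsten volume fraction, using nominal handbook values for Cu and W:

```python
import numpy as np

# Nominal room-temperature handbook values (approximate)
rho_cu, rho_w = 8.96e3, 19.3e3        # density, kg/m^3
res_cu, res_w = 1.7e-8, 5.6e-8        # electrical resistivity, ohm*m

for vf_w in np.linspace(0.2, 0.8, 4):                        # tungsten volume fraction
    vf_cu = 1.0 - vf_w
    density = vf_w * rho_w + vf_cu * rho_cu                  # rule of mixtures
    res_lower = 1.0 / (vf_w / res_w + vf_cu / res_cu)        # phases as parallel conductors
    res_upper = vf_w * res_w + vf_cu * res_cu                # phases in series
    print(f"vf_W = {vf_w:.1f}: density = {density:,.0f} kg/m^3, "
          f"resistivity bounds = {res_lower * 1e8:.2f}-{res_upper * 1e8:.2f} x1e-8 ohm*m")
```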
16

Papšys, Kęstutis. "Methodology of development of cartographic information system for evaluation of risk of extreme events". Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130220_160846-94374.

Full text
Abstract
The thesis describes a methodology for the evaluation of extreme events and the development of a cartographic information system for this purpose. Existing complex risk assessment systems around the world are analysed, highlighting their advantages and disadvantages. The author proposes an original integrated risk assessment methodology based on the integration of information from different geographic data sources. A cartographic information system designed by the author allows the assessment of extreme-event threats and risks. The developed methodology covers the development and deployment of the cartographic information system's components. The work describes the extreme-event data required, the methods for collecting them, and the database design principles. The created model enables the user to collect data on extreme hazard events and to aggregate several threats into a single synthetic threat. The concepts of risk and threat and the risk assessment methodology are explained. The author introduces the design of an information system operating within the Lithuanian Geographic Information Infrastructure and integrated into the Lithuanian spatial information portal. The system is tested with several consistent spatial data sets for Lithuania. The thesis presents experimental results that identify areas of increased geological and meteorological risk in Lithuania. Finally, methodological and practical conclusions are presented about the methods' and system's customization, reliability and compliance with standards.
17

Kallio, Rebecca Mae. "Evaluation of Channel Evolution and Extreme Event Routing for Two-Stage Ditches in a Tri-State Region of the USA". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275424336.

Full text
18

Katsara, Maria-Alexandra [Verfasser], Michael [Gutachter] Nothnagel, Juliette [Gutachter] de Meaux and Thomas [Gutachter] Wiehe. "Evaluation of a prior-incorporated statistical model and established classifiers for externally visible characteristics prediction / Maria-Alexandra Katsara ; Gutachter: Michael Nothnagel, Juliette de Meaux, Thomas Wiehe". Köln : Universitäts- und Stadtbibliothek Köln, 2021. http://d-nb.info/1237814405/34.

Full text
19

Burke, Susan Marie. "Striving for Credibility in the Face of Ambiguity: A Grounded Theory Study of Extreme Hardship Immigration Psychological Evaluations". Antioch University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=antioch1570121587640465.

Full text
20

Mugume, Seith Ncwanga. "Modelling and resilience-based evaluation of urban drainage and flood management systems for future cities". Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/18870.

Full text
Abstract
In future cities, urban drainage and flood management systems should be designed not only to be reliable during normal operating conditions but also to be resilient to exceptional threats that lead to catastrophic failure impacts and consequences. Resilience can potentially be built into urban drainage systems by implementing a range of strategies, for example by embedding redundancy and flexibility in system design or rehabilitation to increase their ability to efficiently maintain acceptable customer flood protection service levels during and after the occurrence of failure, or through installation of equipment that enhances customer preparedness for extreme events or service disruptions. However, operationalisation of resilience in urban flood management is still constrained by a lack of suitable quantitative evaluation methods. Existing hydraulic reliability-based approaches tend to focus on quantifying functional failure caused by extreme rainfall or increases in dry weather flows that lead to hydraulic overloading of the system. Such approaches take a narrow view of functional resilience and fail to explore the full system failure scenario space, because they exclude internal system failures such as equipment malfunction, sewer (link) collapse and blockage, which also contribute significantly to urban flooding. In this research, a new analytical approach based on Global Resilience Analysis (GRA) is investigated and applied to systematically evaluate the performance of an urban drainage system (UDS) when subjected to a wide range of both functional and structural failure scenarios resulting from extreme rainfall and pseudo-random cumulative link failure, respectively. Failure envelopes, which represent the resulting loss of system functionality (impacts), are determined by computing the upper and lower limits of the simulation results for total flood volume (failure magnitude) and average flood duration (failure duration) at each considered failure level. A new resilience index is developed and applied to link the magnitude and duration of the resulting loss of functionality to the system's residual functionality (head room) at each considered failure level. With this approach, resilience has been tested and characterized for a synthetic UDS and for an existing UDS in Kampala city, Uganda. In addition, the approach has been applied to quantify the impact of interventions (adaptation strategies) on enhancement of global UDS resilience to flooding. The developed GRA method provides a systematic and computationally efficient approach that enables evaluation of whole-system resilience, where resilience concerns 'beyond failure' magnitude and duration, without prior knowledge of threat occurrence probabilities. The study results obtained by applying the developed method to the case studies suggest that, by embedding the cost of failure in resilience-based evaluation, adaptation strategies which enhance system flexibility properties, such as distributed storage and improved asset management, are more cost-effective over the service life of UDSs.
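Illustrative aside: the global resilience analysis loop described above can be skeletonized as follows. The hydraulic simulator is a placeholder (a real study would call a model such as SWMM), and the network, sampling budget and impact values are invented; only the way failure envelopes are assembled from sampled link-failure combinations is shown.

```python
import random

LINKS = [f"link_{i}" for i in range(20)]   # sewer links of a hypothetical network

def simulate_flooding(failed_links):
    """Placeholder for a hydraulic model run (a real study would call e.g. SWMM).
    Returns a made-up total flood volume for the given set of failed links."""
    return 100.0 * len(failed_links) + random.uniform(0, 50)

random.seed(0)
envelope = {}
for failure_level in range(1, len(LINKS) + 1):       # number of simultaneously failed links
    impacts = []
    for _ in range(50):                              # sample random link-failure combinations
        failed = random.sample(LINKS, failure_level)
        impacts.append(simulate_flooding(failed))
    envelope[failure_level] = (min(impacts), max(impacts))   # lower/upper limit = failure envelope

for level in (1, 5, 10, 20):
    lo, hi = envelope[level]
    print(f"{level:2d} failed links: flood volume between {lo:.0f} and {hi:.0f} (arbitrary units)")
```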
21

Kanbe, Yuichi. "Control of Alloy Composition and Evaluation of Macro Inclusions during Alloy Making". Doctoral thesis, KTH, Tillämpad processmetallurgi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-27773.

Full text
Abstract
In order to obtain good performance and predict the properties of alloys, it is necessary to control the contents of alloying elements and to evaluate the largest inclusion in the product. Improved techniques for both the control of alloying elements and the evaluation of the largest inclusion in products will therefore enable us to provide better quality in the final products. In the case of one Ni alloy (NW2201, >99 mass% Ni), precise control of the Mg content is important to obtain good hot-workability. Laboratory slag/metal reaction experiments were therefore carried out at 1873 K, so that the equilibrium Mg content and the kinetic behavior could be understood. Larger Al additions to the melt, as well as a higher CaO/Al2O3 ratio of the slag, resulted in a higher Mg content in the Ni. For the same conditions of Al content and slag composition, the mass transfer coefficient of Mg in molten Ni was determined to be 0.0175 cm/s. By applying several countermeasures regarding the equilibrium and kinetic process in plant trials, the standard deviation of the Mg content in the alloy was decreased from 0.007 to 0.003 mass%. Size measurements of the largest inclusions in various alloys (an Fe-10mass%Ni alloy, 17CrMo4 low-C steel and 304 stainless steel) were carried out using statistics of extreme values (SEV). In order to improve the prediction accuracy of this method, three-dimensional (3D) observations were applied after electrolytic extraction. In addition, the relationship between the extreme value distributions (EVD) at different stages of the production process was studied, in order to predict the largest inclusion in the products at an early stage of the process. A comparison of EVDs for single Al2O3 inclusion particles obtained by 2D and 3D observations clarified that 3D observations result in a more accurate EVD because of the absence of pores. It was also found that the EVD of clusters was larger than that of single particles. In addition, when applying SEV to sulfide inclusions with various morphologies, especially elongated sulfides, their real maximum sizes could be measured by 3D observations. Geometrical considerations of these particles showed that the probability of the real maximum inclusion size appearing on a cross section is low. The EVDs of deoxidation products in 304 stainless steel showed good agreement between molten steel and slab samples of the same heat. Furthermore, the EVD of fractured inclusion lengths in the rolled steel was estimated from the initial sizes of undeformed inclusions, which were equivalent to the fragmented inclusions. On the other hand, from the viewpoint of inclusion width, the EVD obtained from a perpendicular cross section of strips was found to be useful for predicting the largest inclusion in the final product with less time consumption compared to a slab sample. In summary, it can be concluded that the improvements of techniques made in this study have enabled precise control of alloy compositions as well as evaluation of the largest inclusion size in them more accurately and at an earlier stage of the production process.
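For illustration (not the thesis' data): the statistics-of-extreme-values step can be sketched by fitting a Gumbel distribution to the maximum inclusion size per inspection area and extrapolating to a larger inspected area. The sizes and return period below are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Maximum inclusion size (in micrometres) found in each of 30 inspection areas (synthetic)
maxima = stats.gumbel_r.rvs(loc=20.0, scale=4.0, size=30, random_state=rng)

loc, scale = stats.gumbel_r.fit(maxima)

# Characteristic largest inclusion expected when inspecting an area T times larger
# than a single inspection field (return period T)
T = 1000
predicted_max = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
print(f"Gumbel fit: loc = {loc:.1f} um, scale = {scale:.1f} um")
print(f"largest inclusion expected over {T} inspection areas: {predicted_max:.1f} um")
```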
22

Dogs, Carsten y Timo Klimmer. "An Evaluation of the Usage of Agile Core Practices : How they are used in industry and what we can learn from their usage". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4725.

Full text
Abstract
In this thesis we investigate the usage of several agile software development methods as well as the usage of certain agile core practices. By conducting a web survey, we examine what makes these practices beneficial and what tends to make them less suitable for certain situations. Based on the results, we finally set up some recommendations for practitioners to reflect upon and improve their own software development process. Within these recommendations, as well as in the list of investigated practices, we hope (and are almost sure) that there are some practices or ideas that are worth at least thinking about. The main findings of this thesis are: - Agile software development methods have already entered the professional market but they are still no cure-all. In many cases they also produce only middling-quality software. Nevertheless, there is – even if only little – evidence that at least XP projects meet the requirements of the customer better than traditional, non-agile methods. - For a successful software development project it is important that it has a suitable requirements engineering process, that the produced software is tested sufficiently (using automated regression testing among other types of testing), that there is good communication between the customer and the developer side, that the risks of the project are considered, that the pros and cons of practices are weighed and that processes are improved continuously. - Besides, it is important to consider the whole context when implementing a certain practice. For some contexts, certain practices do not fit their purpose, and this has to be recognized. However, certain shortcomings of a specific practice might be reduced or even eliminated if it is implemented in combination with other practices.
23

Simões, Ana Carolina Quirino. "Planejamento, gerenciamento e análise de dados de microarranjos de DNA para identificação de biomarcadores de diagnóstico e prognóstico de cânceres humanos". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/95/95131/tde-12092013-172649/.

Full text
Abstract
In this PhD thesis, we present our strategies for developing a mathematical and computational environment for the analysis of large-scale microarray datasets. The analyses focused mainly on the identification of molecular markers for diagnosis and prognosis of human cancers. Here we show the results of several analyses implemented using this environment, which led to the development of a computational tool for automatic annotation of DNA microarray platforms and a tool for tracking the analysis within the R environment. We also applied eXtreme Programming (XP) as a tool for planning and management of gene expression analysis projects. All data sets were obtained by our collaborators using two different microarray platforms. The first is enriched in non-coding human sequences, particularly intronic sequences. The second one represents exonic regions of human genes. Using the first platform, we evaluated gene expression profiles of human prostate and kidney tumors. Applying SAM (Significance Analysis of Microarrays) to the prostate tumor data revealed 49 potential molecular markers for prognosis of this disease. Gene expression in samples of sarcomas, epidermoid carcinomas and head and neck epidermoid carcinomas was investigated using the second platform. A set of 12 genes was identified as potential biomarkers for local aggressiveness and metastasis in sarcoma. In addition, the analyses of data obtained from head and neck epidermoid carcinomas allowed the identification of 7 potential biomarkers for lymph node metastases.
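Illustrative aside: a much-simplified stand-in for the differential-expression screening that SAM performs, using a per-gene t-test with Benjamini-Hochberg correction rather than SAM's moderated statistic and permutation-based FDR, on random data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_genes, n_tumor, n_normal = 2000, 12, 12
expr_tumor = rng.normal(0, 1, (n_genes, n_tumor))
expr_normal = rng.normal(0, 1, (n_genes, n_normal))
expr_tumor[:50] += 1.5     # spike in 50 "differentially expressed" genes

# Per-gene two-sample t-test (SAM additionally uses a fudge factor s0 and a
# permutation-based estimate of the false discovery rate)
t_stat, p_val = stats.ttest_ind(expr_tumor, expr_normal, axis=1)

# Benjamini-Hochberg control of the false discovery rate at q = 0.05
order = np.argsort(p_val)
ranks = np.arange(1, n_genes + 1)
passed = p_val[order] <= 0.05 * ranks / n_genes
n_selected = passed.nonzero()[0].max() + 1 if passed.any() else 0
print("candidate marker genes at FDR 0.05:", n_selected)
```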
24

Bernardini, Flávia Cristina. "Combinação de classificadores simbólicos utilizando medidas de regras de conhecimento e algoritmos genéticos". Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-29092006-110806/.

Full text
Abstract
The quality of the hypotheses induced by most of the available supervised machine learning algorithms depends on the quantity and quality of the instances in the training set. However, several well-known learning algorithms are not able to manipulate many instances, making it difficult to induce good classifiers from large databases, as are needed in the data mining process. One approach to overcome this problem is to construct ensembles of classifiers. An ensemble is a set of classifiers whose decisions are combined in some way to classify new cases (instances). Although ensembles improve the predictive power of learning algorithms, they may use an undesirably large set of classifiers. Furthermore, despite classifying new cases better than each individual classifier, ensembles are generally a sort of "black-box" classifier, unable to explain their classification decisions. To this end, in this work we propose an approach that uses symbolic learning algorithms to construct ensembles of symbolic classifiers that can explain their classification decisions, so that the ensemble is as accurate as or more accurate than the individual classifiers. Furthermore, considering that symbolic learning algorithms use local search methods to induce classifiers while genetic algorithms use global search methods, we propose a second approach to learn symbolic concepts from large databases, using genetic algorithms to evolve symbolic classifiers into a single symbolic classifier that is more accurate than the initial ones. Both proposals were implemented in two computational systems. Several experiments using different databases were conducted in order to evaluate both proposals. Results show that although both proposals are promising, the approach using genetic algorithms produces better results.
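Purely as an illustration of the combination idea described in this abstract, the sketch below builds a small majority-vote ensemble of rule-based learners (decision trees standing in for the symbolic classifiers) and prints the rules of one member to show how votes stay explainable; the dataset and every hyperparameter are assumptions, not the systems developed in the thesis.

```python
# Sketch: majority-vote ensemble of symbolic (rule-based) classifiers.
# Decision trees stand in for the symbolic learners; all parameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
trees = []
for _ in range(7):                      # train each symbolic learner on a subsample of the large dataset
    idx = rng.choice(len(X_tr), size=len(X_tr) // 2, replace=False)
    trees.append(DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr[idx], y_tr[idx]))

votes = np.array([t.predict(X_te) for t in trees])       # shape: (n_trees, n_test)
majority = (votes.mean(axis=0) >= 0.5).astype(int)       # simple majority vote
print(f"ensemble accuracy: {(majority == y_te).mean():.3f}")

# Each member remains symbolic: its decision rules can be printed to explain a vote.
print(export_text(trees[0], max_depth=2))
```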
25

Olausson, Katrin. "On Evaluation and Modelling of Human Exposure to Vibration and Shock on Planing High-Speed Craft". Licentiate thesis, KTH, Marina system, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-159168.

Texto completo
Resumen
High speed in waves, necessary in for instance rescue or military operations, often results in severe loading on both the craft and the crew. To maximize the performance of the high-speed craft (HSC) system that the craft and crew constitute, balance between these loads is essential. There should be no overload or underuse of crew, craft or equipment. For small high-speed craft systems, man is often the weakest link. Human exposure to vibration and shock results in injuries and other adverse health effects, which increase the risks of unsafe operations and performance degradation of the crew and craft system. To achieve a system in balance, the human acceleration exposure must be considered early in ship design. It must also be considered in duty planning and in the design and selection of vibration mitigation systems. The thesis presents a simulation-based method for prediction and evaluation of the acceleration exposure of the crew on small HSC. A numerical seat model, validated with experimental full-scale data, is used to determine the crew's acceleration exposure. The input to the model is the boat acceleration expressed in the time domain (simulated or measured), the total mass of the seated human, and seat-specific parameters such as mass, spring stiffness and damping coefficients, and the seat's longitudinal position in the craft. The model generates seat response time series that are evaluated using available methods for evaluation of whole-body vibration (ISO 2631-1 & ISO 2631-5) and statistical methods for calculation of extreme values. The presented simulation scheme enables evaluation of human exposure to vibration and shock at an early stage in the design process. It can also be used as a tool in duty planning, requirements specification, or the design of appropriate vibration mitigation systems. Further studies are proposed within three areas: investigation of the actual operational profiles of HSC, further development of seat models, and investigation of the prevailing injuries and health problems among HSC crews.
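As a rough illustration of the kind of lumped-parameter seat model described above, the sketch below drives a single-degree-of-freedom mass-spring-damper seat with a synthetic boat-acceleration record and reports an unweighted RMS value; the ISO 2631-1 frequency weighting is deliberately omitted, and all masses, stiffnesses and forcing values are assumptions.

```python
# Sketch: one-degree-of-freedom seat model excited by boat acceleration.
# Numerical values are assumptions; ISO 2631 weighting filters are omitted.
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 90.0, 4.0e4, 1.5e3          # seated mass [kg], spring [N/m], damper [Ns/m]
t = np.linspace(0.0, 10.0, 5000)
a_base = 2.0 * np.sin(2 * np.pi * 2.0 * t) + 15.0 * np.exp(-((t - 5.0) / 0.05) ** 2)  # wave motion + slam pulse [m/s^2]
a_of_t = lambda tt: np.interp(tt, t, a_base)

def rhs(tt, s):
    r, rdot = s                        # relative displacement/velocity of seat vs. boat
    rddot = (-c * rdot - k * r) / m - a_of_t(tt)
    return [rdot, rddot]

sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.002)
r, rdot = sol.y
a_seat = (-c * rdot - k * r) / m       # absolute seat acceleration
rms = np.sqrt(np.mean(a_seat ** 2))    # unweighted RMS; ISO 2631-1 would weight by frequency first
print(f"unweighted RMS seat acceleration: {rms:.2f} m/s^2, peak: {np.abs(a_seat).max():.2f} m/s^2")
```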

26

Liu, Qiang. "Microstructure Evaluation and Wear-Resistant Properties of Ti-alloyed Hypereutectic High Chromium Cast Iron". Doctoral thesis, KTH, Tillämpad processmetallurgi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128532.

Texto completo
Resumen
High chromium cast iron (HCCI) is considered one of the most useful wear-resistant materials, and its use is widespread in industry. The mechanical properties of HCCI mainly depend on the type, size, number and morphology of the hard carbides and on the matrix structure (γ or α). Hypereutectic HCCI with a large volume fraction of hard carbides is preferred for wear applications. However, coarse and large primary M7C3 carbides precipitate during the solidification of the hypereutectic alloy, and these have a negative influence on the wear resistance. In this thesis, a Ti-alloyed hypereutectic HCCI with a main composition of Fe-17mass%Cr-4mass%C is studied based on experimental results and calculation results. The type, size distribution, composition and morphology of the hard carbides and martensite units are discussed quantitatively. For the as-cast condition, an 11.2 μm border size is suggested to distinguish primary M7C3 carbides from eutectic M7C3 carbides. Thereafter, the change of the solidification structure, and especially the refinement of the carbide (M7C3 and TiC) size obtained by changing the cooling rate and the Ti addition, is determined and discussed. Furthermore, the mechanical properties of hypereutectic HCCI related to the solidification structure are discussed. The mechanical properties of HCCI can normally be improved by a heat treatment process. The size distribution and the volume fraction of the carbides (M7C3 and TiC) as well as the matrix structure (martensite) were examined by means of scanning electron microscopy (SEM), in-situ observation using confocal laser scanning microscopy (CLSM), transmission electron microscopy (TEM) and electron backscattered diffraction (EBSD). Especially for the matrix structure and the secondary M7C3 carbides, EBSD and CLSM are useful tools to distinguish the fcc (γ) and bcc (α) phases and to study the dynamic behavior of secondary M7C3 carbides. In conclusion, low holding temperatures close to the eutectic temperature and long holding times are the best heat treatment strategy to improve the wear resistance and hardness of Ti-alloyed hypereutectic HCCI. Finally, the maximum carbide size is estimated using the statistics of extreme values (SEV) method in order to complete the size distribution results. The characteristics of the different carbide types are also summarized and classified based on the shape factor.
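A minimal sketch of the statistics-of-extreme-values step mentioned at the end of the abstract, assuming synthetic per-field maximum carbide sizes and an arbitrary target area; it fits a Gumbel distribution to the field maxima and extrapolates a characteristic maximum size.

```python
# Sketch: statistics of extreme values (SEV) for maximum carbide size.
# Synthetic data, number of fields and return period are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
field_maxima = rng.gumbel(loc=18.0, scale=3.0, size=40)   # max carbide size [um] in each of 40 inspection fields

loc, scale = stats.gumbel_r.fit(field_maxima)             # maximum-likelihood Gumbel fit
T = 500.0                                                 # return period = target area / inspection-field area
size_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)    # characteristic maximum over the larger area
print(f"Gumbel fit: loc={loc:.1f} um, scale={scale:.1f} um")
print(f"predicted maximum carbide size over {T:.0f} fields: {size_T:.1f} um")
```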

27

Michaelsson, Ludvig y Sebastian Quiroga. "Design and evaluation of an adaptive dairy cow indoor positioning system : A study of the trade-off between position accuracy and energy consumption in mobile units with extreme battery life". Thesis, KTH, Maskinkonstruktion (Inst.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190203.

Texto completo
Resumen
With growing farm sizes, increasing workload, and increasing social and legislative demands for loose housing, health monitoring of farm animals is playing a bigger role for farmers worldwide. One type of information that can be used to determine the health status of dairy cows is positional data. However, since dairy cows spend a lot of time indoors, in protection from various weather conditions or to perform other activities, GPS solutions are not sufficient. Moreover, the devices that the dairy cows carry must have a long battery life in order to avoid frequent system maintenance. This thesis researches possible system solutions to enable indoor positioning of dairy cows within loose-housed freestall barns. The proposed system configuration is then optimized in terms of energy consumption, and the trade-off between dynamic energy consumption and position accuracy is investigated. Previous research has focused on one or the other, and the development of systems with extreme battery life has not been a priority. The proposed system uses a proprietary 433 MHz radio frequency band to estimate the dairy cows' positions, and accelerometer data to adaptively alter the estimation frequency to minimize energy consumption. After the optimization process, the proposed system is shown to have a battery life of at least two years with an accuracy of approximately 7–8 m and a precision of 11–12 m utilizing four anchor nodes in an experimental barn. The theorized correlation between position accuracy and energy consumption could not be found. Keywords: indoor positioning, dairy cows, weighted non-linear least squares, energy consumption, agriculture, system design, optimization, positioning accuracy, Sub-GHz radio, battery life
Med växande gårdsstorlekar, ökande arbetsbelastning, påtryckningar från samhället och lagstiftning för lösdrift, gör att hälsoövervakning av gårdsdjur spelar en större roll för jordbrukare världen över. En typ av information som kan användas för att bestämma mjölkkors hälsa är positioneringsdata. Eftersom mjölkkor spenderar mycket tid inomhus för att skyddas mot väder, eller för att utföra andra aktiviteter, så lämpar sig inte lösningar baserade på GPS. Utöver det så krävs det att enheterna som korna bär med sig har en lång batteritid för att undvika frekventa systemunderhåll. Den här masteruppsatsen undersöker potentiella systemlösningar för att möjliggöra inomhuspositionering av mjölkkor i lösdriftsladugårdar. Den valda konfigurationen är sedan optimerad med avseende på energikonsumtion. Därefter undersöks förhållandet mellan dynamisk energikonsumtion och lokaliseringsnoggrannhet; tidigare forskning har fokuserat på antingen eller. Utvecklingen av system med lång livslängd har inte heller varit en prioritet. Det föreslagna systemet använder sig utav proprietära radiotekniker på 433 MHz-bandet för att skatta mjölkkornas position. Dessutom används accelerometerdata till att adaptivt justera skattningsfrekvensen för att minimera energikonsumtion. Efter optimeringsprocessen har det föreslagna systemet en batteritid på minst två år, med en noggrannhet på ungefär 7–8 m och en precision på 11–12 m, när endast fyra ankarnoder användes i en experimentladugård. Den teoretiserade korrelationen mellan lokaliseringsnoggrannhet och energikonsumtion kunde ej påvisas. Nyckelord: inomhuspositionering, mjölkkor, viktad icke-linjär minstakvadratmetod, energikonsumtion, jordbruk, systemdesign, optimering, lokaliseringsnoggrannhet, Sub-GHz radio, batteritid
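For illustration only, the sketch below shows a weighted non-linear least-squares position estimate from noisy ranges to four anchor nodes, the estimation principle named in the keywords; the barn geometry, noise levels and weights are assumptions rather than the system's actual parameters.

```python
# Sketch: weighted non-linear least-squares positioning from four anchors.
# Anchor layout, true position, noise and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 25.0], [40.0, 25.0]])  # barn corners [m]
true_pos = np.array([12.0, 9.0])

rng = np.random.default_rng(2)
sigma = np.array([2.0, 3.0, 2.5, 4.0])                    # per-anchor range standard deviation [m]
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, sigma)

def residuals(p):
    # weight each residual by the inverse of its measurement standard deviation
    return (np.linalg.norm(anchors - p, axis=1) - ranges) / sigma

fit = least_squares(residuals, x0=np.array([20.0, 12.0]))
print(f"estimated position: {fit.x.round(2)}, error: {np.linalg.norm(fit.x - true_pos):.2f} m")
```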
28

Hmad, Ouadie. "Evaluation et optimisation des performances de fonctions pour la surveillance de turboréacteurs". Thesis, Troyes, 2013. http://www.theses.fr/2013TROY0029.

Texto completo
Resumen
Cette thèse concerne les systèmes de surveillance des turboréacteurs. Le développement de tels systèmes nécessite une phase d’évaluation et d’optimisation des performances, préalablement à la mise en exploitation. Le travail a porté sur cette phase, et plus précisément sur les performances des fonctions de détection et de pronostic de deux systèmes. Des indicateurs de performances associés à chacune de ces fonctions ainsi que leur estimation ont été définis. Les systèmes surveillés sont d’une part la séquence de démarrage pour la fonction de détection et d’autre part la consommation d’huile pour la fonction de pronostic. Les données utilisées venant de vols en exploitation sans dégradations, des simulations ont été nécessaires pour l’évaluation des performances. L’optimisation des performances de détection a été obtenue par réglage du seuil sur la statistique de décision en tenant compte des exigences des compagnies aériennes exprimées en termes de taux de bonne détection et de taux d’alarme fausse. Deux approches ont été considérées et leurs performances ont été comparées pour leurs meilleures configurations. Les performances de pronostic de surconsommations d’huile, simulées à l’aide de processus Gamma, ont été évaluées en fonction de la pertinence de la décision de maintenance induite par le pronostic. Cette thèse a permis de quantifier et d’améliorer les performances des fonctions considérées pour répondre aux exigences. D’autres améliorations possibles sont proposées comme perspectives pour conclure ce mémoire
This thesis deals with monitoring systems for turbojet engines. The development of such systems requires a performance evaluation and optimization phase prior to their introduction into operation. The work focused on this phase, and more specifically on the performance of the detection and prognostic functions of two systems. Performance metrics related to each of these functions, as well as their estimation, have been defined. The monitored systems are, on the one hand, the start sequence for the detection function and, on the other hand, the oil consumption for the prognostic function. Since the data used come from flights in operation without degradation, simulations of degradations were necessary for the performance assessment. Optimization of the detection performance was obtained by tuning a threshold on the decision statistic, taking into account the airlines' requirements in terms of good detection rate and false alarm rate. Two approaches have been considered and their performances have been compared for their best configurations. The prognostic performance for oil over-consumption, simulated using Gamma processes, was assessed on the basis of the relevance of the maintenance decision induced by the prognostic. This thesis has made it possible to quantify and improve the performance of the two considered functions to meet the airlines' requirements. Other possible improvements are proposed as prospects to conclude this thesis.
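A minimal sketch of the threshold-tuning step described above, assuming synthetic decision-statistic distributions for healthy and degraded starts and an illustrative 1% false-alarm requirement; the threshold is set on the healthy scores and the resulting detection rate is read off the degraded scores.

```python
# Sketch: tuning a detection threshold under a false-alarm constraint.
# Score distributions and the 1% requirement are assumptions.
import numpy as np

rng = np.random.default_rng(3)
scores_healthy = rng.normal(0.0, 1.0, 20000)      # decision statistic, no degradation
scores_degraded = rng.normal(2.5, 1.2, 2000)      # decision statistic, simulated degradation

target_far = 0.01                                  # required false-alarm rate
threshold = np.quantile(scores_healthy, 1.0 - target_far)

far = (scores_healthy > threshold).mean()          # achieved false-alarm rate
pod = (scores_degraded > threshold).mean()         # achieved (good) detection rate
print(f"threshold={threshold:.2f}  false-alarm rate={far:.3f}  detection rate={pod:.3f}")
```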
29

Backman, Emil y David Petersson. "Evaluation of methods for quantifying returns within the premium pension". Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288499.

Texto completo
Resumen
Pensionsmyndigheten's (the Swedish Pensions Agency) current calculation of the internal rate of return for 7.7 million premium pension savers is both time and resource consuming. This rate of return mirrors the overall performance of the funded part of the pension system and is analyzed internally, but also reported to the public monthly and yearly based on differently sized data samples. This thesis aims to investigate the possibility of utilizing other approaches in order to improve the performance of these calculations. Further, the study aims to verify the results stemming from said calculations and investigate their robustness. In order to investigate competitive matrix methods, a sample of approaches is compared to the more classical numerical methods. The approaches are compared in different scenarios designed to mirror real practice. The robustness of the results is then analyzed by a stochastic modeling approach, where a small error term is introduced to mimic possible errors that could arise in data management. It is concluded that a combination of Halley's method and the Jacobi-Davidson algorithm is the most robust and highest-performing method. The proposed method combines the speed of numerical methods with the robustness of matrix methods. The results show a performance improvement of 550% in time, while maintaining the accuracy of the current server computations. The analysis of error propagation suggests that the output error is less than 0.12 percentage points in 99 percent of the cases, considering an introduced error term of large proportions. In this extreme case, the modeled expected number of individuals with an error exceeding 1 percentage point is estimated to be 212 out of the whole population.
Pensionsmyndighetens nuvarande beräkning av internräntan för 7,7 miljoner pensionssparare är både tid- och resurskrävande. Denna avkastning ger en översikt av hur väl den fonderade delen av pensionssystemet fungerar. Detta analyseras internt men rapporteras även till allmänheten varje månad samt årligen baserat på olika urval av data. Denna uppsats avser att undersöka möjligheten att använda andra tillvägagångssätt för att förbättra prestanda för denna typ av beräkningar. Vidare syftar studien till att verifiera resultaten som härrör från dessa beräkningar och undersöka deras stabilitet. För att undersöka om det finns konkurrerande matrismetoder jämförs ett urval av tillvägagångssätt med de mer klassiska numeriska metoderna. Metoderna jämförs i flera olika scenarier som syftar till att spegla verklig praxis. Stabiliteten i resultaten analyseras med en stokastisk modellering där en felterm införs för att efterlikna möjliga fel som kan uppstå i datahantering. Man drar slutsatsen att en kombination av Halleys metod och Jacobi-Davidson-algoritmen är den mest robusta och högpresterande metoden. Den föreslagna metoden kombinerar hastigheten från numeriska metoder och tillförlitlighet från matrismetoder. Resultatet visar en prestandaförbättring på 550 % i tid, samtidigt som samma noggrannhet som ses i de befintliga serverberäkningarna bibehålls. Analysen av felutbredning föreslår att felet i 99 procent av fallen är mindre än 0,12 procentenheter i det fall där införd felterm har stora proportioner. I detta extrema fall uppskattas det förväntade antalet individer med ett fel som överstiger 1 procentenhet vara 212 av hela befolkningen.
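As a small illustration of one ingredient of the selected approach, the sketch below applies Halley's method to the net-present-value equation of a toy cash-flow stream to find its internal rate of return; the cash flows, starting guess and tolerances are assumptions, and the Jacobi-Davidson part of the combined method is not shown.

```python
# Sketch: Halley's method for an internal-rate-of-return equation.
# Cash flows and starting guess are illustrative assumptions.
import numpy as np

def npv_and_derivatives(rate, cashflows):
    t = np.arange(len(cashflows), dtype=float)
    d = (1.0 + rate) ** t
    f = np.sum(cashflows / d)                                         # net present value
    f1 = np.sum(-t * cashflows / (d * (1.0 + rate)))                  # first derivative w.r.t. rate
    f2 = np.sum(t * (t + 1.0) * cashflows / (d * (1.0 + rate) ** 2))  # second derivative
    return f, f1, f2

def irr_halley(cashflows, guess=0.05, tol=1e-10, max_iter=50):
    r = guess
    for _ in range(max_iter):
        f, f1, f2 = npv_and_derivatives(r, cashflows)
        step = 2.0 * f * f1 / (2.0 * f1 * f1 - f * f2)                # Halley update
        r -= step
        if abs(step) < tol:
            break
    return r

cf = np.array([-100.0, 10.0, 10.0, 10.0, 110.0])        # purchase, three coupons, redemption
print(f"IRR = {irr_halley(cf):.6f}")                    # ~0.10 for this bond-like stream
```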
30

Souza, Crisla Serra. "Avaliação da produção de etanol em temperaturas elevadas por uma linhagem de S. cerevisiae". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/87/87131/tde-05082009-171501/.

Texto completo
Resumen
A metodologia de superfície de resposta foi utilizada para otimizar as condições e obter maior produção de etanol e maior viabilidade para a linhagem de S. cerevisiae 63M em processo descontínuo, resultando nas condições: 200 g.L-1 de sacarose, 40 g.L-1 de inóculo a 30 °C. Diferentes tipos de processos foram comparados e o processo que apresentou maiores viabilidade, produtividade e rendimento foi o descontínuo alimentado por pulsos de volumes decrescentes de sacarose a 30 °C. A redução da concentração de sacarose foi uma estratégia que permitiu aumentar a temperatura até 37 °C sem perdas de viabilidade. Uma linhagem utilizada nas destilarias brasileiras foi comparada com a linhagem 63M em temperaturas elevadas e observou-se que a 63M apresentou maiores produtividade e rendimento. Oito ciclos sucessivos de fermentação com reutilização de células da linhagem 63M foram realizados em meio sintético em processo descontínuo alimentado por pulsos de sacarose a 37 °C e uma perda gradual de viabilidade foi observada, mas o etanol final permaneceu constante nos oito ciclos.
Response surface methodology was used to optimize the conditions and obtain higher ethanol production and viability for strain 63M of S. cerevisiae in batch culture, resulting in the following conditions: 200 g.L-1 sucrose and 40 g.L-1 inoculum at 30 °C. Different types of processes were compared, and the process that presented the highest viability, productivity and yield was the pulse fed-batch using five decreasing pulses of sucrose at 30 °C. Reducing the sucrose concentration was a strategy that allowed the temperature to be increased up to 37 °C without losses in viability. An industrial strain used in Brazilian distilleries was compared with strain 63M at high temperatures, and strain 63M achieved higher productivity and yield. Eight successive fermentation cycles with reuse of cells of strain 63M were carried out in synthetic medium in a fed-batch process using sucrose pulses at 37 °C; a gradual loss of viability was observed, but the final ethanol concentration remained constant over the eight fermentation cycles.
31

Сальник, К. О. "Інформаційно-аналітична система адаптації навчального контенту до вимог ринку праці. Функціонування системи в режимі моніторингу". Master's thesis, Сумський державний університет, 2020. https://essuir.sumdu.edu.ua/handle/123456789/79560.

Texto completo
Resumen
Software for an information-analytical system that adapts the educational content of a graduating department to labour-market requirements was developed using the object-oriented language C#. The system's algorithm was implemented according to the principles of information-extreme intelligent technology so as to ensure its operation in monitoring mode. The input mathematical description was formed, a categorical model of machine learning with parameter optimization based on information criteria was considered and implemented, and the system's operating algorithm for the examination stage was executed. The program makes it possible to survey the department's graduates and to assess the quality of the educational content through the training and examination stages. A user interface was developed for working with the system, allowing the user to configure the main training parameters, obtain detailed analytics of all processes, and visualize the results of the training-parameter optimization.
32

Farvacque, Manon. "Evaluation quantitative du risque rocheux : de la formalisation à l'application sur les linéaires et les zones urbanisées. How argest wildfire events in France? A Bayesian assessment based on extreme value theory. How is rockfall risk impacted by land-use and land-cover changes? Insights from the French Alps. Quantitative risk assessment in a rockfall-prone area: the case study of the Crolles municipality (Massif de la Chartreuse, French Alps)". Thesis, Université Grenoble Alpes, 2020. https://tel.archives-ouvertes.fr/tel-02860296.

Texto completo
Resumen
L’aléa chute de blocs est caractérisé par le détachement brutal d’une masse rocheuse, depuis une paroi (sub)verticale, qui se propage rapidement vers l’aval par rebonds successifs. Ces événements, fréquents en zones de montagne, représentent un aléa majeur pour les infrastructures collectives et les habitations, et induisent fréquemment de graves accidents. En France, par exemple, le détachement d’un volume rocheux de 30 m3 en 2014 a provoqué le déraillement du train touristique des Pignes, faisant deux victimes et neuf blessés. En 2015, l’endommagement des voies et la perturbation du trafic ferroviaire suite à un événement rocheux survenu entre Moûtiers et Bourg-Saint-Maurice a induit 1.34M€ de réparations, et 5.4M€ de dommages indirects. Ces différents événements illustrent bien notre vulnérabilité face aux événements rocheux, et soulignent que les collectivités locales et les pouvoirs publics sont encore fréquemment démunis en matière de méthode de diagnostic et d’analyse du risque de chute de blocs. Dans ce contexte, l’évaluation des risques par une approche de type quantitative, appelée QRA (Quantitative Risk Assessment), est devenue incontournable pour l’aménagement des territoires de montagne et le choix des mesures de mitigation. Chaque terme de l’équation du risque, dont les composantes principales sont l’aléa, la vulnérabilité et l’exposition, est alors fidèlement quantifié, offrant des informations sur les dommages potentiels. Malgré le vif intérêt alloué aux approches de type QRA pour la gestion des risques rocheux, de telles applications restent encore inhabituelles. La rareté de ces approches est principalement liée à la difficulté à évaluer précisément chacune des composantes du risque. De plus, les quelques études qui proposent une approche QRA dans le domaine rocheux font généralement l’hypothèse de la stationnarité du processus, alors que l’étalement urbain, ou l’évolution de l’occupation des sols, qui modifient le fonctionnement du processus ne sont pas intégrés. Enfin, le risque rocheux – comme la plupart des autres risques naturels – est exprimé par la moyenne des dommages. Cependant, cette moyenne arithmétique est associée à plusieurs faiblesses, et n’offre qu’une seule valeur du risque, généralement inadaptée aux différentes contraintes auxquelles doivent faire face les gestionnaires. Dans ce contexte, l’objectif de cette thèse est de renforcer les bases formelles du calcul du risque dans le domaine des chutes de blocs, d’évaluer les effets des changements environnementaux sur le risque rocheux, et de proposer une méthode où le risque de chutes de blocs est quantifié à partir de mesures de risque alternatives à la moyenne arithmétique. A cet effet, nous proposons une procédure holistique de QRA où le risque rocheux est quantifié en combinant un modèle de simulation trajectographique avec des courbes de vulnérabilité et un large spectre de volume rocheux et de zones de départ de chutes de blocs. La faisabilité et l’intérêt de cette procédure sont illustrés sur deux cas d’études réels : la commune de Crolles (Alpes Françaises), et la vallée de l’Uspallata (Cordillère des Andes). Par ailleurs, nous mesurons l’effet des changements environnementaux sur le risque de chutes de blocs en appliquant la QRA dans différents contextes d’utilisation et d’occupation des sols. Enfin, nous proposons une approche innovante où deux mesures de risque, dites "quantile-based measures", sont introduites. Ces dernières permettent une meilleure prise en compte des événements extrêmes et permettent d’envisager la gestion du risque à divers horizons temporels.
Rockfalls are a common type of fast-moving landslide, corresponding to the detachment of individual rocks and boulders of different sizes from a vertical or sub-vertical cliff and to their travel down the slope by free falling, bouncing and/or rolling. Every year, in the Alpine environment, rockfalls reach urbanized areas, causing damage to structures and injuring people. Precise rockfall risk analysis has therefore become an essential tool for authorities and stakeholders in land-use planning. To this aim, quantitative risk assessment (QRA) procedures originally developed for landslides have been adapted to rockfall processes. In QRAs, the rockfall risk for exposed elements is estimated by coupling the hazard, exposure and vulnerability components. In practice, however, the estimation of the different components of risk is challenging, and methods for quantifying risk in rockfall-prone regions remain scarce. Similarly, the few studies which have so far performed QRAs for rockfall assume stationarity, precluding reliable anticipation of the risk in a context where environmental and societal conditions are evolving rapidly and substantially. Moreover, rockfall risk remains - as for most natural hazards - defined as the loss expectation. This metric offers a single risk value, usually inconsistent with the short- and long-term constraints or trade-offs faced by decision-makers. On this basis, this PhD thesis therefore aims at (i) reinforcing the basis of QRA, (ii) assessing the effect of environmental changes on rockfall risk, and (iii) proposing a method for quantifying rockfall risk from risk measures alternative to the standard loss expectation. In that respect, we propose a QRA procedure where the rockfall risk is quantified by combining a rockfall simulation model with the physical vulnerability of potentially affected structures and a wide spectrum of rockfall volumes and release areas. The practicability and interest of this procedure is illustrated on two real case studies, i.e. the municipality of Crolles, in the French Alps, and the Uspallata valley, in the central Andes. Similarly, the effect of environmental changes on rockfall risk is considered by comparing rockfall risk values in different land-use and land-cover contexts. Last, we implement in our procedure, on an individual basis, two quantile-based measures, namely the value-at-risk and the expected shortfall, so as to assess rockfall risk for different risk-management horizon periods. All in all, this PhD thesis clearly demonstrates the added value of the QRA procedure in the field of rockfall and reinforces its basis by implementing analytical, statistical and numerical models. The resulting panel of risk maps, also proposed under non-stationary contexts, is of major interest for stakeholders in charge of risk management and constitutes an appropriate basis for land-use planning and for prioritizing mitigation strategies.
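A minimal sketch of the two quantile-based measures named above, value-at-risk and expected shortfall, computed on a simulated annual-damage sample; the Poisson/lognormal damage model and the 95% level are assumptions made only for illustration.

```python
# Sketch: value-at-risk and expected shortfall of simulated annual rockfall damage.
# The damage model and the confidence level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_events = rng.poisson(0.8, size=20000)                        # rockfalls per simulated year
annual_damage = np.array([rng.lognormal(10.0, 1.5, k).sum() for k in n_events])  # cost per year

alpha = 0.95
var = np.quantile(annual_damage, alpha)                        # value-at-risk
es = annual_damage[annual_damage >= var].mean()                # expected shortfall beyond the VaR
print(f"mean annual damage: {annual_damage.mean():,.0f}")
print(f"VaR_{alpha:.0%}: {var:,.0f}   ES_{alpha:.0%}: {es:,.0f}")
```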
33

Hamdi, Haykel. "Théorie des options et fonctions d'utilité : stratégies de couverture en présence des fluctuations non gaussiennes". Thesis, Paris 2, 2011. http://www.theses.fr/2011PA020006/document.

Texto completo
Resumen
L'approche traditionnelle des produits dérivés consiste, sous certaines hypothèses bien définies, à construire des stratégies de couverture à risque strictement nul. Cependant, dans le cas général ces stratégies de couverture "parfaites" n'existent pas, et la théorie doit plutôt s'appuyer sur une idée de minimisation du risque. Dans ce cas, la couverture optimale dépend de la quantité du risque à minimiser. Dans le cadre des options, on considère dans ce travail une nouvelle mesure du risque via l'approche de l'utilité espérée qui tient compte, à la fois, du moment d'ordre quatre, qui est plus sensible aux grandes fluctuations que la variance, et de l'aversion au risque de l'émetteur d'une option. Comparée à la couverture en delta, à l'optimisation de la variance et à l'optimisation du moment d'ordre quatre, la stratégie de couverture via l'approche de l'utilité espérée permet de diminuer la sensibilité de la couverture par rapport au cours du sous-jacent. Ceci est de nature à réduire les coûts des transactions associées.
The traditional approach to derivatives consists, under certain clearly defined hypotheses, in constructing hedging strategies with strictly zero risk. However, in the general case these perfect hedging strategies do not exist, and the theory must instead be based on the idea of risk minimization. In this case, the optimal hedging strategy depends on the quantity of risk to be minimized. In the context of options, we consider here a new measure of risk via the expected utility approach that takes into account both the fourth-order moment, which is more sensitive to large fluctuations than the variance, and the risk aversion of the option writer. Compared to delta hedging, variance optimization and fourth-order moment optimization, the hedging strategy based on the expected utility approach reduces the sensitivity of the hedge with respect to the underlying asset price. This is likely to reduce the associated transaction costs.
34

Dočekal, Martin. "Porovnání klasifikačních metod". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403211.

Texto completo
Resumen
This thesis deals with a comparison of classification methods. First, classification methods based on machine learning are described; then a classifier comparison system is designed and implemented. The thesis also describes the classification tasks and datasets on which the designed system is tested. The evaluation of the classification tasks is done according to standard metrics. The thesis also presents the design and implementation of a classifier based on the principle of evolutionary algorithms.
35

Simmons, Kenneth Rulon. "EXTREME HEAT EVENT RISK MAP CREATION USING A RULE-BASED CLASSIFICATION APPROACH". Thesis, 2012. http://hdl.handle.net/1805/2762.

Texto completo
Resumen
Indiana University-Purdue University Indianapolis (IUPUI)
During a 2011 summer dominated by headlines about an earthquake and a hurricane along the East Coast, extreme heat that silently killed scores of Americans largely went unnoticed by the media and public. However, despite a violent spasm of tornadic activity that claimed over 500 lives during the spring of the same year, heat-related mortality annually ranks as the top cause of death incident to weather. Two major data groups used in researching vulnerability to extreme heat events (EHE) include socioeconomic indicators of risk and factors incident to urban living environments. Socioeconomic determinants such as household income levels, age, race, and others can be analyzed in a geographic information system (GIS) when formatted as vector data, while environmental factors such as land surface temperature are often measured via raster data retrieved from satellite sensors. The current research sought to combine the insights of both types of data in a comprehensive examination of heat susceptibility using knowledge-based classification. The use of knowledge classifiers is a non-parametric approach to research involving the creation of decision trees that seek to classify units of analysis by whether they meet specific rules defining the phenomenon being studied. In this extreme heat vulnerability study, data relevant to the deadly July 1995 heat wave in Chicago’s Cook County was incorporated into decision trees for 13 different experimental conditions. Populations vulnerable to heat were identified in five of the 13 conditions, with predominantly low-income African-American communities being particularly at-risk. Implications for the results of this study are given, along with direction for future research in the area of extreme heat event vulnerability.
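As a toy illustration of the rule-based (knowledge) classification idea described above, the sketch below flags census tracts as heat-vulnerable only when they satisfy explicit rules on surface temperature, elderly share and income; the thresholds and the tract data are invented for the example and are not the study's actual rules.

```python
# Sketch: rule-based (knowledge) classification of heat-vulnerable tracts.
# All thresholds and the toy tract data are assumptions.
import numpy as np

# columns: land-surface temperature [C], share of residents 65+, median income [k$]
tracts = np.array([
    [41.2, 0.18, 28.0],
    [36.5, 0.09, 55.0],
    [43.0, 0.22, 24.5],
    [39.8, 0.05, 61.0],
])

def vulnerable(lst_c, elderly_share, income_k):
    rule_hot = lst_c >= 40.0              # upper-percentile surface temperature
    rule_elderly = elderly_share >= 0.15  # high share of elderly residents
    rule_income = income_k <= 35.0        # low median household income
    return rule_hot and rule_elderly and rule_income

flags = [bool(vulnerable(*row)) for row in tracts]
print(flags)   # -> [True, False, True, False]
```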
36

Tsao, Chin-Chen y 曹晉誠. "An Evaluation of Service Quality and Leisure Benefit for the Extreme Sports Stadium". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/85021493595839478149.

Texto completo
Resumen
Master's thesis
Asia University
Department of Leisure and Recreation Management
104
Low utilization has become a problem for most extreme sports stadiums in Taiwan. In order to enhance the utilization of these stadiums, the government has authorized external units or firms to operate some of them. It is critical to explore how such a firm can enhance the utilization of a sports stadium and keep it from standing empty. If the firm can meet customers' leisure demands by designing and offering leisure sports services that take into account the characteristics of the stadium's equipment and facilities, it can stimulate demand for extreme sports stadiums. This study attempts to offer the firm managerial strategies related to customer needs and service quality for increasing the utilization of the sports stadiums. In order to understand whether the firm's leisure services and service quality can create leisure benefits for customers and thus satisfy their demands, this study explores the relationship between service quality and leisure benefit. Specifically, it examines the leisure benefits expected by various customer segments (of different ages, genders, occupations, etc.) before they experience the services. In addition, it investigates the relationship between service quality and leisure benefit based on customers' experiences in order to identify the important quality items. Based on the findings, the firm operating the sports stadiums can design customer-oriented leisure services to enhance stadium utilization.
37

Vila, Verde Francisca Viçoso. "Peer-to-peer lending: Evaluation of credit risk using Machine Learning". Master's thesis, 2021. http://hdl.handle.net/10362/127084.

Texto completo
Resumen
Dissertation presented as the partial requirement for obtaining a Master's degree in Statistics and Information Management, specialization in Risk Analysis and Management
Peer-to-peer lenders have transformed the credit market by being an alternative to traditional financial services and taking advantage of the most advanced analytics techniques. Credit scoring and accurate assessment of a borrower's creditworthiness are crucial to managing credit risk and having the capacity to adapt to current market conditions. Logistic Regression has long been recognised as the benchmark model for credit scoring, so this dissertation aims to evaluate and compare its capability to predict loan defaults with that of other parametric and non-parametric methods, to assess the improvement in predictive power of the most modern techniques over the traditional models in a peer-to-peer lending context. We compare the performance of four different algorithms, the single classifiers Decision Trees and K-Nearest Neighbours and the ensemble classifiers Random Forest and XGBoost, against a benchmark model, the Logistic Regression, using six performance evaluation measures. This dissertation also includes a review of related work, an explanation of the pre-processing involved, and a description of the models. The research reveals that both XGBoost and Random Forest outperform the benchmark's predictive capacity, and that the KNN and Decision Tree models have weaker performance compared to the benchmark. Hence, it can be concluded that it still makes sense to use this benchmark model; however, the more modern techniques should also be taken into consideration.
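Purely for illustration, the sketch below benchmarks a Logistic Regression scorecard against an ensemble classifier on a synthetic imbalanced default dataset; a scikit-learn random forest stands in for XGBoost, and the data, features and metrics shown are assumptions rather than the dissertation's setup.

```python
# Sketch: benchmarking Logistic Regression against an ensemble classifier.
# Synthetic data; a random forest stands in for XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, f1_score

X, y = make_classification(n_samples=20000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.3, random_state=0)

models = {
    "logistic regression (benchmark)": LogisticRegression(max_iter=1000),
    "random forest (ensemble)": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name:32s} AUC={roc_auc_score(y_te, proba):.3f} "
          f"F1={f1_score(y_te, proba > 0.5):.3f}")
```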
38

Hong, Yu-Ting y 洪郁婷. "Evaluation on the Influence Zone of rainfall induced Debris Flow under Extreme Climate Condition". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/29674439554412281298.

Texto completo
Resumen
Master's thesis
National Taiwan Ocean University
Department of Harbor and River Engineering
98
Taiwan is located on the Circum-Pacific seismic belt, and its many faults and fractured geological conditions, resulting from the interaction of the Philippine Sea and Eurasian plates, must be taken into account. During the typhoon season from June to October, rainfall is intense and concentrated and can trigger various slope hazards. In practice, debris-flow hazards cause wide-ranging damage and impacts within a short duration. In recent years, owing to climate change, the frequency of extreme climate conditions has doubled. In order to find the relationship between the rainfall intensity induced by extreme climate conditions and the influence zone of debris flows, the FLO-2D program is adopted to simulate two-dimensional floods as well as the influence zone and accumulation depth of debris flows. The study areas are five potential debris-flow torrents located along the Chen-Yu-Lan River in Nantou County. Rainfall data from Typhoons Mindulle, Toraji and Morakot are collected; all of these typhoons triggered numerous debris flows and serious disasters. From the analysis of the rainfall properties, the main cause of the damage induced by Typhoon Toraji was a rainfall intensity exceeding 100 mm per hour, whereas the damage induced by Typhoons Mindulle and Morakot was caused by huge accumulated rainfall. The results, combining rainfall properties and watershed area, show that the influence area and accumulation depth of debris flows are strongly correlated with rainfall intensity and watershed area. They also show that the effect of rainfall intensity is indistinct for large watershed areas.
39

Nai-Yu Yang y 楊乃玉. "An Evaluation Study for the Impact of Discretization Methods on the Performance of Naive Bayesian Classifiers with Prior Distributions". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/40262687553999457567.

Texto completo
Resumen
Master's thesis
National Cheng Kung University
Department of Industrial and Information Management (in-service program)
98
Naïve Bayesian classifiers are widely employed for classification tasks because of their computational efficiency and competitive accuracy. Discretization is a major approach for processing continuous attributes for naïve Bayesian classifiers. In addition, the prior distributions of attributes in the naïve Bayesian classifier are implicitly or explicitly assumed to follow either Dirichlet or generalized Dirichlet distributions. Previous studies have found that discretization methods for continuous attributes do not have a significant impact on the performance of the naïve Bayesian classifier with noninformative Dirichlet priors. Since the generalized Dirichlet distribution is a more appropriate prior for the naïve Bayesian classifier, the purpose of this thesis is to investigate the impact of four well-known discretization methods, equal width, equal frequency, proportional, and entropy minimization, on the performance of naïve Bayesian classifiers with either noninformative Dirichlet or noninformative generalized Dirichlet priors. The experimental results on 23 data sets demonstrate that the equal-width, equal-frequency, and proportional discretization methods can achieve a higher classification accuracy when priors follow generalized Dirichlet distributions. However, generalized Dirichlet and Dirichlet priors have similar performance for the entropy-minimization discretization method. The experimental results suggest that noninformative generalized Dirichlet priors can be employed with the entropy-minimization discretization method only when neither the number of classes nor the number of intervals is small.
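A minimal sketch of comparing equal-width and equal-frequency discretization for a naive Bayes classifier; the symmetric Laplace smoothing (alpha=1) plays the role of a noninformative Dirichlet prior, while generalized Dirichlet priors, the focus of the thesis, have no off-the-shelf implementation and are omitted. The dataset and bin count are assumptions.

```python
# Sketch: equal-width vs. equal-frequency discretization for naive Bayes.
# alpha=1 gives symmetric Dirichlet (Laplace) smoothing; generalized
# Dirichlet priors are not available here and are omitted.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.naive_bayes import CategoricalNB

X, y = load_breast_cancer(return_X_y=True)

for strategy in ("uniform", "quantile"):          # equal width vs. equal frequency
    clf = make_pipeline(
        KBinsDiscretizer(n_bins=7, encode="ordinal", strategy=strategy),
        CategoricalNB(alpha=1.0, min_categories=7),
    )
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{strategy:9s} discretization: 10-fold accuracy = {acc:.3f}")
```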
40

Chih, Chen Ping y 陳秉志. "The Evaluation of Value at Risk for Real Estate Investment Trusts with Extreme Value Model". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/88380207208394416745.

Texto completo
Resumen
Master's thesis
Aletheia University
Graduate Institute of Finance and Economics
96
Based on six domestic Real Estate Investment Trusts (REITs), we compare the performance of Value-at-Risk models, including extreme value models. We consider different volatility models within the Variance-Covariance method. In addition, we apply the GEV and GPD extreme value models to estimate Value-at-Risk at the 99%, 97.5% and 95% confidence levels, respectively. We then check the performance of all VaR models through backtesting. The empirical results show that the extreme value models are a substantial improvement over the Variance-Covariance method, and that the GPD performs better than the GEV.
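As an illustration of the peaks-over-threshold flavour of the extreme value models compared above, the sketch below fits a generalized Pareto distribution to threshold exceedances of simulated daily losses and converts it into a 99% Value-at-Risk; the return model, threshold choice and confidence level are assumptions.

```python
# Sketch: GPD (peaks-over-threshold) estimate of Value-at-Risk.
# Simulated heavy-tailed losses; threshold and level are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
losses = stats.t.rvs(df=4, size=2500, random_state=rng) * 0.01    # daily losses (heavy-tailed)

u = np.quantile(losses, 0.95)                     # threshold: 95th percentile of the losses
exceedances = losses[losses > u] - u
xi, _, sigma = stats.genpareto.fit(exceedances, floc=0.0)         # GPD fit to the exceedances
p_u = exceedances.size / losses.size

alpha = 0.99
var_evt = u + (sigma / xi) * (((1 - alpha) / p_u) ** (-xi) - 1.0) # POT VaR formula
var_emp = np.quantile(losses, alpha)
print(f"GPD parameters: xi={xi:.2f}, sigma={sigma:.4f}")
print(f"99% VaR  (EVT/GPD): {var_evt:.4f}   (empirical): {var_emp:.4f}")
```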
41

Kao, Juo-Han y 高若涵. "The Evaluation of Value at Risk for Automobile Physical Damage Insurance with Extreme Value Model". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/06094839656768259872.

Texto completo
Resumen
Master's thesis
Aletheia University
Graduate Institute of Finance and Economics
95
This research applies the extreme value model to evaluate the VaR of automobile physical damage insurance, aiming to identify the loss distribution of this line of insurance and to estimate its VaR through extreme value theory. We first examine the losses via P-P and Q-Q plots, comparing the normal, log-normal, exponential and Student t distributions, and find that none of them fits the observed loss amounts and their probabilities well. Hence, it is worthwhile to apply the extreme value model in practice for estimating VaR. We further use the GEV and GPD models to assess the VaR of automobile physical damage insurance. Empirical results reveal that the two models are quite close in some cases. Finally, we use backtesting to examine the performance of the two models and obtain different outcomes under different significance levels.
42

Huang, Zhao-Kai y 黃照凱. "Extreme Learning Machine for Automatic License Plate Recognition With Imbalance Data Set and Performance Evaluation". Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5441070%22.&searchmode=basic.

Texto completo
Resumen
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
107
This paper uses the Extreme Learning Machine for license plate recognition, and also uses edge statistics, template matching, Radial Basis Function networks, and the Support Vector Machine (SVM) for comparison. The Extreme Learning Machine (ELM) only needs image features to be extracted and fed into the neural network for training in order to obtain the output, which effectively improves computation speed. To cope with the long training time, this study uses principal component analysis (PCA) to reduce the training data from 288 to 192 dimensions; deleting unnecessary feature vectors greatly reduces the training time. With the ELM, the computation time is about 0.2 seconds, which is clearly superior to the other methods in recognition speed. The recognition rate in license plate recognition (EI-ELM) reaches 84.12%. In the experimental results, a confusion matrix is used for performance evaluation, and the performance indicators of the EI-ELM are more advantageous than those of the ELM.
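A minimal sketch of an Extreme Learning Machine with a PCA front end, the combination described above; scikit-learn's digits dataset stands in for license-plate character features, and the component count, hidden-layer size and activation are assumptions, not the thesis's configuration.

```python
# Sketch: Extreme Learning Machine with a PCA front end.
# Digits stand in for license-plate characters; all sizes are assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=40).fit(X_tr)                    # dimensionality reduction before the ELM
scaler = StandardScaler().fit(pca.transform(X_tr))
Z_tr = scaler.transform(pca.transform(X_tr))
Z_te = scaler.transform(pca.transform(X_te))

rng = np.random.default_rng(0)
n_hidden = 500
W = rng.normal(size=(Z_tr.shape[1], n_hidden)) / np.sqrt(Z_tr.shape[1])  # random, fixed input weights
b = rng.normal(size=n_hidden)
hidden = lambda Z: 1.0 / (1.0 + np.exp(-(Z @ W + b)))   # sigmoid hidden layer

T = np.eye(10)[y_tr]                                    # one-hot targets
beta = np.linalg.pinv(hidden(Z_tr)) @ T                 # least-squares output weights: the only training step

acc = (np.argmax(hidden(Z_te) @ beta, axis=1) == y_te).mean()
print(f"ELM test accuracy: {acc:.3f}")
```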
43

Nieh, Chih-Chien y 聶至謙. "Evaluation of accuracy of the Anisotropic Analytical Algorithm (AAA) under extreme inhomogeneities using Monte Carlo (MC) simulations". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/24944445854171147607.

Texto completo
44

Cope, Julia Lee. "Evaluation of Microbial Communities from Extreme Environments as Inocula in a Carboxylate Platform for Biofuel Production from Cellulosic Biomass". Thesis, 2013. http://hdl.handle.net/1969.1/151350.

Texto completo
Resumen
The carboxylate biofuels platform (CBP) involves the conversion of cellulosic biomass into carboxylate salts by a mixed microbial community. Chemical engineering approaches to convert these salts to a variety of fuels (diesel, gasoline, jet fuel) are well established. However, prior to initiation of this project, little was known about the influence of inoculum source on platform performance. The studies in this dissertation test the hypothesis that microbial communities from particular environments in nature (e.g. saline and/or thermal sediments) are pre-adapted to similar industrial process conditions and, therefore, exhibit superior performances. We screened an extensive collection of sediment samples from extreme environments across a wide geographic range to identify and characterize microbial communities with superior performances in the CBP. I sought to identify aspects of soil chemistry associated with superior CBP fermentation performance. We showed that CBP productivity was influenced by both fermentation conditions and inocula, thus is clearly reasonable to expect both can be optimized to target desired outcomes. Also, we learned that fermentation performance is not as simple as finding one soil parameter that leads to increases in all performance parameters. Rather, there are complex multivariate relationships that are likely indicative of trade-offs associated within the microbial communities. An analysis of targeted locus pyrosequence data for communities with superior performances in the fermentations provides clear associations between particular bacterial taxa and particular performance parameters. Further, I compared microbial community compositions across three different process screen technologies employed in research to understand and optimize CBP fermentations. Finally, we assembled and characterized an isolate library generated from a systematic culture approach. Based on partial 16S rRNA gene sequencing, I estimated operational taxonomic units (OTUs), and inferred a phylogeny of the OTUs. This isolate library will serve as a tool for future studies of assembled communities and bacterial adaptations useful within the CBP fermentations. Taken together the tools and results developed in this dissertation provide for refined hypotheses for optimizing inoculum identification, community composition, and process conditions for this important second generation biofuel platform.
45

Newton, Brandi Wreatha. "An evaluation of winter hydroclimatic variables conducive to snowmelt and the generation of extreme hydrologic events in western Canada". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/9965.

Texto completo
Resumen
The frequency, magnitude, and atmospheric drivers of winter hydroclimatic conditions conducive to snowmelt in western Canada were evaluated. These hydroclimatic variables were linked to the mid-winter break-up of river ice that included the creation of a comprehensive database including 46 mid-winter river ice break-up events in western Canada (1950-2008) and six events in Alaska (1950-2014). Widespread increases in above-freezing temperatures and spatially diverse increases in rainfall were detected over the study period (1946-2012), particularly during January and March. Critical elevation zones representing the greatest rate of change were identified for major river basins. Specifically, low-elevation (500-1000 m) temperature changes dominated the Stikine, Nass, Skeena, and Fraser river basins and low to mid-elevation changes (700-1500 m) dominated the Peace, Athabasca, Saskatchewan, and Columbia river basins. The greatest increases in rainfall were seen below 700 m and between 1200-1900 m in the Fraser and at mid- to high-elevations (1500-2200 m) in the Peace, Athabasca, and Saskatchewan river basins. Daily synoptic-scale atmospheric circulation patterns were classified using Self-Organizing Maps (SOM) and corresponding hydroclimatic variables were evaluated. Frequency, persistence, and preferred shifts of identified synoptic types provided additional insight into characteristics of dominant atmospheric circulation patterns. Trend analyses revealed significant (p < 0.05) decreases in two dominant synoptic types: a ridge of high pressure over the Pacific Ocean and adjacent trough of low pressure over western Canada, which directs the movement of cold, dry air over the study region, and zonal flow with westerly flow from the Pacific Ocean over the study region. Conversely, trend analyses revealed an increase in the frequency and persistence of a ridge of high pressure over western Canada over the study period. However, step-change analysis revealed a decrease in zonal flows and an increase in the occurrence of high-pressure ridges over western Canada in 1977, coinciding with a shift to a positive Pacific Decadal Oscillation regime. A ridge of high pressure over western Canada was associated with a high frequency and magnitude of above-freezing temperatures and rainfall in the study region. This pattern is highly persistent and elicits a strong surface climate response. A ridge of high pressure and associated above-freezing temperatures and rainfall was also found to be the primary driver of mid-winter river ice break-up with rainfall being a stronger driver west of the Rocky Mountains and temperature to the east. These results improve our understanding of the drivers of threats to snowpack integrity and the generation of extreme hydrologic events.
Graduate
46

"Evaluation of Flood Mitigation Strategies for the Santa Catarina Watershed using a Multi-model Approach". Master's thesis, 2016. http://hdl.handle.net/2286/R.I.38363.

Texto completo
Resumen
The increasingly recurrent extraordinary flood events in the metropolitan area of Monterrey, Mexico, have led to significant stakeholder interest in understanding the hydrologic response of the Santa Catarina watershed to extreme events. This study analyzes a flood mitigation strategy proposed by stakeholders through a participatory workshop, which is assessed using two hydrological models: the Hydrological Modeling System (HEC-HMS) and the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). The stakeholder-derived flood mitigation strategy consists of placing new hydraulic infrastructure in addition to the current flood controls in the basin. This is done by simulating three scenarios: (1) evaluating the impact of the current structure, (2) implementing a large dam similar to the Rompepicos dam, and (3) including three small detention dams. These mitigation strategies are assessed in the context of a major flood event caused by the landfall of Hurricane Alex in July 2010 through a consistent application of the two modeling tools. To do so, spatial information on topography, soil, land cover and meteorological forcing was assembled, quality-controlled and input into each model. Calibration was performed for each model based on streamflow observations and maximum observed reservoir levels from the National Water Commission in Mexico. The simulation analysis focuses on the differential capability of the two models in capturing the spatial variability in rainfall, topographic conditions and soil hydraulic properties, and its effect on the flood response in the presence of the different flood mitigation structures. The implementation of new hydraulic infrastructure is shown to have a positive impact on mitigating the flood peak, with a more favorable reduction in the peak at the outlet from the larger dam (16.5% in tRIBS and 23% in HEC-HMS) than the collective effect of the small structures (12% in tRIBS and 10% in HEC-HMS). Furthermore, flood peak mitigation depends strongly on the number and locations of the new dam sites in relation to the spatial distribution of rainfall and flood generation. Comparison of the two modeling approaches complements the analysis of available observations for the flood event and provides a framework within which to derive a multi-model approach for stakeholder-driven solutions.
Dissertation/Thesis
Masters Thesis Civil and Environmental Engineering 2016
47

"Athletic Surfaces’ Influence on the Thermal Environment: An Evaluation of Wet Bulb Globe Temperature in the Phoenix Metropolitan Area". Master's thesis, 2020. http://hdl.handle.net/2286/R.I.57303.

Texto completo
Resumen
Exertional heat stroke continues to be one of the leading causes of illness and death in sport in the United States, with an athlete's experienced microclimate varying by venue design and location. A limited number of studies have attempted to determine the relationship between observed wet bulb globe temperature (WBGT) and WBGT derived from regional weather station data. Moreover, only one study has quantified the relationship between regionally modeled and on-site measured WBGT over different athletic surfaces (natural grass, rubber track, and concrete tennis court). The current research expands on previous studies to examine how different athletic surfaces influence the thermal environment in the Phoenix Metropolitan Area using a combination of fieldwork, modeling, and statistical analysis. Meteorological data were collected from 0700–1900hr across 6 days in June and 5 days in August 2019 in Tempe, Arizona at various Sun Devil Athletics facilities. This research also explored the influence of surface temperatures on WBGT and the changes projected under a future warmer climate. Results indicate that, based on American College of Sports Medicine guidelines, practice would not be cancelled in June (WBGT≥32.3°C); however, in August, ~33% of practice time was lost across multiple surfaces. The second-tier recommendations (WBGT≥30.1°C) to limit intense exercise were reached an average of 7 hours each day for all surfaces in August. Further, WBGT was calculated using data from four Arizona Meteorological Network (AZMET) weather stations to provide regional WBGT values for comparison. The on-site (field/court) WBGT values were consistently higher than the regional values and significantly different (p<0.05). Thus, using regionally modeled WBGT data to guide activity or clothing modification for heat safety may lead to misclassification and unsafe conditions. Surface temperature measurements indicate a maximum temperature (170°F) occurring around solar noon, yet WBGT reached its highest level mid-afternoon (2–5PM) and over the artificial turf surface. Climate projections show that WBGT values are expected to rise, further restricting the amount of practice and games that can take place outdoors during the afternoon. The findings from this study can be used to inform athletic trainers and coaches about the thermal environment through on-field WBGT values.
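For reference, the sketch below applies the standard outdoor formula WBGT = 0.7*Tnwb + 0.2*Tg + 0.1*Tdb together with the two activity-modification thresholds quoted in the abstract (30.1 °C and 32.3 °C); the component temperatures used are illustrative assumptions, not measured values from the study.

```python
# Worked example: outdoor wet bulb globe temperature and activity flags.
# Component temperatures are assumptions; thresholds come from the abstract above.
def wbgt_outdoor(t_nwb_c: float, t_globe_c: float, t_db_c: float) -> float:
    """Outdoor WBGT [deg C] from natural wet-bulb, black-globe and dry-bulb temperatures."""
    return 0.7 * t_nwb_c + 0.2 * t_globe_c + 0.1 * t_db_c

readings = [  # (natural wet bulb, black globe, dry bulb) in deg C
    (24.0, 48.0, 41.0),   # mid-afternoon over artificial turf (assumed values)
    (22.0, 38.0, 36.0),   # same hour over natural grass (assumed values)
]
for t_nwb, t_g, t_db in readings:
    wbgt = wbgt_outdoor(t_nwb, t_g, t_db)
    flag = "cancel" if wbgt >= 32.3 else ("limit intense exercise" if wbgt >= 30.1 else "normal")
    print(f"WBGT = {wbgt:.1f} C -> {flag}")
```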
Dissertation/Thesis
Masters Thesis Geography 2020
48

Beerval, Ravichandra Kavya Urs. "Spatiotemporal analysis of extreme heat events in Indianapolis and Philadelphia for the years 2010 and 2011". Thesis, 2014. http://hdl.handle.net/1805/4083.

Texto completo
Resumen
Indiana University-Purdue University Indianapolis (IUPUI)
Over the past two decades, northern parts of the United States have experienced extreme heat conditions. Some of the notable heat wave impacts have occurred in Chicago in 1995, with over 600 reported deaths, and in Philadelphia in 1993, with over 180 reported deaths. The distribution of extreme heat events in Indianapolis has varied since the year 2000. The urban heat island effect has caused temperatures to rise unusually high during the summer months. Although the number of reported deaths in Indianapolis is smaller when compared to Chicago and Philadelphia, the heat wave in the year 2010 primarily affected the vulnerable population comprised of the elderly and the lower socio-economic groups. Studying the spatial distribution of high temperatures in the vulnerable areas helps not only to determine the extent of the heat-affected areas, but also to devise strategies and methods to plan for, mitigate, and tackle extreme heat. In addition, examining spatial patterns of vulnerability can aid in the development of a heat warning system to alert the populations at risk during extreme heat events. This study focuses on the qualitative and quantitative methods used to measure extreme heat events. Land surface temperatures obtained from Landsat TM images provide a useful means by which the spatial distribution of temperatures can be studied in relation to temporal changes and socioeconomic vulnerability. The percentile method used helps to determine the vulnerable areas and their extent. The maximum temperatures measured using LST conversion of the original digital number values of the Landsat TM images are reliable in terms of identifying the heat-affected regions.
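As a small illustration of the percentile method mentioned above, the sketch below flags the hottest pixels of a synthetic land-surface-temperature grid as heat-affected; the grid values and the 95th-percentile cutoff are assumptions for the example only.

```python
# Sketch: percentile-based flagging of heat-affected areas in an LST raster.
# The synthetic grid and the cutoff percentile are assumptions.
import numpy as np

rng = np.random.default_rng(6)
lst = 30.0 + 4.0 * rng.standard_normal((500, 500))        # synthetic LST raster [deg C]
lst[200:300, 200:300] += 6.0                               # a warmer urban core

cutoff = np.percentile(lst, 95)                            # e.g. 95th-percentile threshold
extreme_mask = lst >= cutoff
print(f"cutoff = {cutoff:.1f} C, flagged pixels = {extreme_mask.mean():.1%} of the scene")
```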
49

Ojumu, Adefolake Mayokun. "Transport of nitrogen oxides and nitric acid pollutants over South Africa and air pollution in Cape Town". Diss., 2013. http://hdl.handle.net/10500/11911.

Texto completo
Resumen
The deteriorating air quality in Cape Town (CT) is a threat to the social and economic development of the city. Although previous studies have shown that most of the pollutants are emitted in the city, it is not clear how the transport of pollutants from neighbouring cities may contribute to the pollution. This thesis studies the transport of atmospheric nitrogen oxides (NOx) and nitric acid (HNO3) pollutants over South Africa and examines the role of pollutant transport from the Mpumalanga Highveld on pollution in CT. The study analysed observation data (2001 - 2008) from the CT air quality network and from regional climate model simulation (2001 - 2004) over South Africa. The model simulations account for the influences of complex topography, atmospheric conditions, and atmospheric chemistry on transport of the pollutants over South Africa. Flux budget analysis was used to examine whether the city is a net source or sink for NOx and HNO3. The results show that north-easterly flow transports pollutants (NOx and HNO3) at low level (i.e., surface to 850 hPa) from the Mpumalanga Highveld towards CT. In April, a tongue of high concentration of HNO3 extends from the Mpumalanga Highveld to CT, along the southern coast. The flux budget analysis shows that CT can be a net sink for NOx and HNO3 during extreme pollution events. The study infers that, apart from the local emission of the pollutants in CT, the accumulation of pollutants transported from other areas may contribute to pollution in the city.
Environmental Sciences
M. Sc. (Environmental Science)