
Dissertations / Theses on the topic 'Given data'


Consult the top 27 dissertations / theses for your research on the topic 'Given data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

McLaughlin, N. R. "Robust multimodal person identification given limited training data." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.579747.

Full text
Abstract:
This thesis presents a novel method of audio-visual fusion, known as multimodal optimal feature fusion (MOFF), for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, it is assumed there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Similarity-based optimal feature selection and multi-condition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Low-level feature fusion is performed using optimal feature selection, which automatically changes the weighting given to each modality based on the level of corruption. The framework for robust person identification is also applied to noise robust speaker identification, given very limited training data. Experiments have been carried out on a bimodal data set created from the SPIDRE speaker recognition database and AR face recognition database, with variable noise corruption of speech and occlusion in the face images. Combining both modalities using MOFF leads to significantly improved identification accuracy compared to the component unimodal systems, even with simultaneous corruption of both modalities. A novel piecewise-constant illumination model (PCIM) is then introduced for illumination invariant facial recognition. This method can be used given a single training facial image for each person, and assuming no prior knowledge of the illumination conditions of both the training and testing images. Small areas of the face are represented using magnitude Fourier features, which takes advantage of the shift-invariance of the magnitude Fourier representation, to increase robustness to small misalignment errors and small facial expression changes. Finally, cosine similarity is used as an illumination invariant similarity measure, to compare small facial areas. Experiments have been carried out on the YaleB, extended YaleB and CMU-PIE facial illumination databases. Facial identification accuracy using PCIM is comparable to or exceeds that of the literature.
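The shift-invariant patch comparison mentioned in this abstract (magnitude Fourier features compared with cosine similarity) can be pictured with a minimal NumPy sketch; the 16x16 patch size and the circular shift below are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def magnitude_fourier_feature(patch):
    """Magnitude of the 2-D FFT of an image patch; invariant to circular shifts."""
    return np.abs(np.fft.fft2(patch)).ravel()

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy example: the same 16x16 patch shifted by two pixels still scores (almost) 1.0,
# because the FFT magnitude discards the phase that encodes the shift.
rng = np.random.default_rng(0)
patch = rng.random((16, 16))
shifted = np.roll(patch, shift=2, axis=1)

print(cosine_similarity(magnitude_fourier_feature(patch),
                        magnitude_fourier_feature(shifted)))
```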
APA, Harvard, Vancouver, ISO, and other styles
2

Friesch, Pius. "Generating Training Data for Keyword Spotting given Few Samples." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254960.

Full text
Abstract:
Speech recognition systems generally need a large quantity of speech covering highly variable voice and recording conditions in order to produce robust results. In the specific case of keyword spotting, where only short commands are recognized instead of large vocabularies, the resource-intensive task of data acquisition has to be repeated for each keyword individually. Over the past few years, neural methods in speech synthesis and voice conversion have made tremendous progress and generate samples that are realistic to the human ear. In this work, we explore the feasibility of using such methods to generate training data for keyword spotting. In detail, we want to evaluate whether the generated samples are indeed realistic or only sound so, and whether a model trained on these generated samples can generalize to real samples. We evaluated three neural network speech synthesis and voice conversion techniques: (1) Speaker Adaptive VoiceLoop, (2) Factorized Hierarchical Variational Autoencoder (FHVAE), and (3) Vector Quantised-Variational AutoEncoder (VQVAE). These three methods are used either to generate training data from text (speech synthesis) or to enrich an existing data set by simulating additional speakers through voice conversion, and they are evaluated as data augmentation or data generation techniques on a keyword spotting task. The performance of the models is compared to a baseline of changing the pitch, tempo, and speed of the original samples. The experiments show that the neural network techniques can provide up to a 20% relative accuracy improvement on the validation set, while the signal-processing baseline augmentation performs at least twice as well. This seems to indicate that using multi-speaker speech synthesis or voice conversion naively does not yield sufficiently varied or realistic samples.
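As a concrete picture of the signal-processing baseline described above (pitch, tempo, and speed perturbation of an original sample), here is a minimal sketch using librosa; the shift and stretch amounts, and the synthetic tone standing in for a recorded keyword, are illustrative assumptions rather than the thesis's actual settings.

```python
import numpy as np
import librosa

def augment(y, sr, n_steps=2, rate=1.1):
    """Signal-processing baseline: pitch, tempo and speed variants of one utterance."""
    pitched   = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)  # pitch up by n_steps semitones
    stretched = librosa.effects.time_stretch(y, rate=rate)              # change tempo, keep pitch
    sped_up   = np.interp(np.arange(0, len(y), rate),                   # naive speed change by resampling
                          np.arange(len(y)), y)
    return pitched, stretched, sped_up

# A synthetic 0.5 s tone stands in for a recorded keyword sample
sr = 16000
y = 0.1 * np.sin(2 * np.pi * 440 * np.arange(int(0.5 * sr)) / sr).astype(np.float32)
variants = augment(y, sr)
print([len(v) for v in variants])
```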
APA, Harvard, Vancouver, ISO, and other styles
3

Fan, Hang. "Species Tree Likelihood Computation Given SNP Data Using Ancestral Configurations." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1385995244.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ren, Chunfeng. "LATENT VARIABLE MODELS GIVEN INCOMPLETELY OBSERVED SURROGATE OUTCOMES AND COVARIATES." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3473.

Full text
Abstract:
Latent variable models (LVMs) are commonly used in the scenario where the outcome of main interest is an unobservable measure that is associated with multiple observed surrogate outcomes and affected by potential risk factors. Statistical methodologies and computational software for efficiently analyzing LVMs whose surrogate outcomes and covariates are subject to missingness are, however, lacking. This thesis develops an approach for efficiently handling missing surrogate outcomes and covariates in two- and three-level latent variable models. We analyze two-level LVMs for longitudinal data from the National Growth and Health Study, where surrogate outcomes and covariates are subject to missingness at any of the levels. A conventional method for efficient handling of missing data is to re-express the desired model as a joint distribution of variables, including the surrogate outcomes that are subject to missingness, conditional on all of the covariates that are completely observable, estimate the joint model by maximum likelihood, and then transform it back to the desired model. The joint model, however, generally identifies more parameters than desired. The over-identified joint model produces biased estimates of LVMs, so it is necessary to describe how to impose constraints on the joint model so that it has a one-to-one correspondence with the desired model for unbiased estimation. The constrained joint model handles missing data efficiently under the assumption of ignorable missingness and is estimated by a modified application of the expectation-maximization (EM) algorithm.
APA, Harvard, Vancouver, ISO, and other styles
5

Cao, Haoliang. "Automating Question Generation Given the Correct Answer." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287460.

Full text
Abstract:
In this thesis, we propose an end-to-end deep learning model for a question generation task. Given a Wikipedia article written in English and a segment of text appearing in the article, the model can generate a simple question whose answer is the given text segment. The model is based on an encoder-decoder architecture. Our experiments show that a model with a fine-tuned BERT encoder and a self-attention decoder gives the best performance. We also propose an evaluation metric for the question generation task, which evaluates both the syntactic correctness and the relevance of the generated questions. According to our analysis on sampled data, the new metric gives better evaluations than other popular metrics for sequence-to-sequence tasks.
APA, Harvard, Vancouver, ISO, and other styles
6

GIOIA, PAOLA. "Towards more accurate measures of global sensitivity analysis. Investigation of first and total order indices." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/45695.

Full text
Abstract:
A new technique for estimating variance-based total sensitivity indices from given data is developed. A new approach is also developed for the estimation of first order effects given a specific sample design. This method adopts the RBD approach published by Tarantola et al. (2007) for the computation of first order sensitivity indices in association with quasi-random numbers.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Fenglei. "Detection and estimation of connection splice events in fiber optics given noisy OTDR data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0009/MQ36049.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Stoor, John-Bernhard. "Utveckling av GUI utifrån en given affärsprocess." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205667.

Full text
Abstract:
Preem is a Swedish company that operates in the fuel industry, which is characterized by very high turnover and relatively small margins. A well-functioning stock control is therefore essential. The company is now looking to monitor its stock with higher precision and therefore needs new system support for this. In my role at Preem, I was given the task of taking a closer look at the GUI and the processes behind it. The result is this report, with a proposed user interface for the new system and the business processes linked to it. The work focuses on how the two are produced in relation to each other and which methods are used to construct them. The thesis includes illustrations of as-is and to-be process models according to the BPMN specification, wireframes, and a sitemap. This report shows a proposal for an integrated system and a more intuitive GUI with new underlying processes, specifically for Preem, and how the balance is struck between deriving business processes and a graphical user interface from each other, depending on who is involved in the project.
APA, Harvard, Vancouver, ISO, and other styles
9

Subramaniam, Rajesh. "Exploring Frameworks for Rapid Visualization of Viral Proteins Common for a Given Host." Thesis, North Dakota State University, 2019. https://hdl.handle.net/10365/31716.

Full text
Abstract:
Viruses are unique organisms that lack the protein machinery necessary for their propagation (like polymerase) yet possess other proteins that facilitate their propagation (like host cell anchoring proteins). This study explores seven different frameworks to assist rapid visualization of proteins that are common to viruses residing in a given host. The proposed frameworks rely only on protein sequence information. It was found that the sequence similarity-based framework with an associated profile hidden Markov model was a better tool for assisting visualization of proteins common to a given host than the other proposed frameworks based only on amino acid composition or other amino acid properties. The lack of known profile hidden Markov models for many protein structures limits the utility of the proposed protein sequence similarity-based framework. The study concludes with an attempt to extrapolate the utility of the proposed framework to predict viruses that may pose potential human health risks.
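To make the simpler of the frameworks mentioned above concrete, here is a hypothetical sketch of an amino acid composition comparison between two protein sequences; the sequences are invented, and the profile-HMM pipeline that the thesis found more effective is not reproduced here.

```python
from collections import Counter
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional amino acid composition vector of a protein sequence."""
    counts = Counter(seq.upper())
    total = sum(counts.get(a, 0) for a in AMINO_ACIDS) or 1
    return np.array([counts.get(a, 0) / total for a in AMINO_ACIDS])

def composition_similarity(seq1, seq2):
    """Cosine similarity between the composition vectors of two sequences."""
    v1, v2 = aa_composition(seq1), aa_composition(seq2)
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# Two invented fragments; a profile-HMM comparison would additionally use position information
print(composition_similarity("MKTAYIAKQR", "MKTAYIAKQRQISFVKSHFSRQ"))
```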
APA, Harvard, Vancouver, ISO, and other styles
10

George, Andrew Winston. "A Bayesian analysis for the mapping of a quantitative trait locus given half-sib data." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wengrzik, Joanna [Verfasser], Jürgen [Akademischer Betreuer] Timm, and Werner [Akademischer Betreuer] Brannath. "Parameter Estimation for Mixture Models Given Grouped Data / Joanna Wengrzik. Gutachter: Jürgen Timm ; Werner Brannath. Betreuer: Jürgen Timm." Bremen : Staats- und Universitätsbibliothek Bremen, 2012. http://d-nb.info/1071993518/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Hallström, Richard. "Estimating Loss-Given-Default through Survival Analysis : A quantitative study of Nordea's default portfolio consisting of corporate customers." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122914.

Full text
Abstract:
In Sweden, all banks must report their regulatory capital in their reports to the market, and their models for calculating this capital must be approved by the financial supervisory authority, Finansinspektionen. The regulatory capital is the capital that a bank has to hold as security against credit risk, and it should serve as a buffer if the bank were to lose unexpected amounts of money in its lending business. Loss-Given-Default (LGD) is one of the main drivers of the regulatory capital, and the minimum required capital is highly sensitive to the reported LGD. Workout LGD is based on the discounted future cash flows obtained from defaulted customers. The main issue with workout LGD is incomplete workouts, which in turn creates two problems for banks when they calculate their workout LGD: a bank either has to wait for the workout period to end, which in some cases takes several years, or has to exclude or make rough assumptions about the incomplete workouts in its calculations. In this study, ideas from survival analysis (SA) have been used to address these problems. The most widely used SA model, the Cox proportional hazards model (Cox model), has been applied to investigate the effect of covariates on the length of survival of a monetary unit. The considered covariates are Country of booking, Secured/Unsecured, Collateral code, Loan-To-Value, Industry code, Exposure-At-Default and Multi-collateral. The data sample was first split into an 80% training sample and a 20% test sample. The Cox model was fitted on the training sample and then validated on the test sample through interpretation of the Kaplan-Meier survival curves for risk groups created from the prognostic index (PI). The results show that the model correctly ranks the expected LGD for new customers but is not always able to distinguish between risk groups. With the results presented in the study, Nordea can obtain an expected LGD for newly defaulted customers, given the customers' information on the covariates considered in this study, and can also get a clear picture of which factors drive a low respectively high LGD.
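A minimal sketch of the modelling step described above (a Cox proportional hazards fit plus a Kaplan-Meier curve, here with the lifelines library); the column names, covariates, and toy numbers are assumptions for illustration and are not Nordea's data or model specification.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical workout data: "duration" is months in workout, "closed" marks completed
# workouts (1) versus still-open, censored ones (0); covariate names are invented.
df = pd.DataFrame({
    "duration": [6, 14, 9, 24, 3, 18, 12, 30],
    "closed":   [1, 1, 0, 1, 1, 0, 1, 1],
    "secured":  [1, 0, 1, 1, 0, 0, 1, 0],
    "ltv":      [0.6, 1.2, 0.8, 0.5, 1.5, 1.1, 0.7, 0.9],
})

cph = CoxPHFitter(penalizer=0.1)          # small ridge penalty keeps the toy fit stable
cph.fit(df, duration_col="duration", event_col="closed")
cph.print_summary()

# Prognostic index used to rank newly defaulted customers by expected severity
pi = cph.predict_partial_hazard(df)

# Kaplan-Meier survival curve for one (here: the whole) risk group
km = KaplanMeierFitter()
km.fit(df["duration"], event_observed=df["closed"])
print(km.survival_function_.tail())
```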
APA, Harvard, Vancouver, ISO, and other styles
13

Brown, Iain Leonard Johnston. "Basel II compliant credit risk modelling : model development for imbalanced credit scoring data sets, loss given default (LGD) and exposure at default (EAD)." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/341517/.

Full text
Abstract:
The purpose of this thesis is to determine the most appropriate classification and regression techniques for modelling the three key credit risk components of the Basel II minimum capital requirement, namely probability of default (PD), loss given default (LGD), and exposure at default (EAD), and to better inform industry practitioners about them. The Basel II accord regulates risk and capital management requirements to ensure that a bank holds enough capital proportional to the exposed risk of its lending practices. Under the advanced internal ratings based (IRB) approach, Basel II allows banks to develop their own empirical models based on historical data for each of PD, LGD and EAD. In this thesis, first the issue of imbalanced credit scoring data sets, a special case of PD modelling where the number of defaulting observations in a data set is much lower than the number of observations that do not default, is identified, and the suitability of various classification techniques is analysed and presented. As well as using traditional classification techniques, this thesis also explores the suitability of gradient boosting, least squares support vector machines and random forests as forms of classification. The second part of this thesis focuses on the prediction of LGD, which measures the economic loss, expressed as a percentage of the exposure, in case of default. Various state-of-the-art regression techniques to model LGD are considered. In the final part of this thesis we investigate models for predicting the exposure at default (EAD). For off-balance-sheet items (for example credit cards), calculating the EAD requires the committed but unused loan amount times a credit conversion factor (CCF). Ordinary least squares (OLS), logistic and cumulative logistic regression models are analysed, as well as an OLS with Beta transformation model, with the main aim of finding the most robust and comprehensible model for the prediction of the CCF. A direct estimation of EAD using an OLS model is also analysed. All the models built and presented in this thesis have been applied to real-life data sets from major global banking institutions.
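One of the techniques named above, gradient boosting on an imbalanced PD-style data set, can be sketched as follows; the synthetic data, the 2% default rate, and the simple oversampling remedy are assumptions for illustration and do not reflect the thesis's data sets or its exact treatment of imbalance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic, heavily imbalanced PD-style data (roughly 2% defaults), purely illustrative
rng = np.random.default_rng(42)
n = 10_000
X = rng.normal(size=(n, 5))
p = 1 / (1 + np.exp(-(-4.5 + X[:, 0] + 0.5 * X[:, 1])))   # "true" default probability
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Oversample the rare default class in the training set, one common remedy for imbalance
idx_def = np.where(y_tr == 1)[0]
extra = rng.choice(idx_def, size=5 * len(idx_def), replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
print("AUC on the untouched test set:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```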
APA, Harvard, Vancouver, ISO, and other styles
14

Ankaräng, Fredrik, and Fabian Waldner. "Evaluating Random Forest and a Long Short-Term Memory in Classifying a Given Sentence as a Question or Non-Question." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-262209.

Full text
Abstract:
Natural language processing and text classification are topics of much discussion among machine learning researchers, and contributions in the form of new methods and models are presented on a yearly basis. Less focus, however, is aimed at comparing models, especially comparing simpler models to state-of-the-art ones. This paper compares a Random Forest with a Long Short-Term Memory (LSTM) neural network on the task of classifying sentences as questions or non-questions, without considering punctuation. The models were trained and optimized on chat data from a Swedish insurance company, as well as user comments on articles from a newspaper. The results showed that the LSTM model performed better than the Random Forest, but the difference was small, so the Random Forest could still be a preferable alternative in some use cases because of its simplicity and its ability to handle noisy data. The models' performance was not dramatically improved by hyperparameter optimization. A literature study was also conducted to explore how customer service can be automated using a chatbot and which features and functionality should be prioritized by management during such an implementation. The findings showed that a data-driven design should be used, where features are derived from the specific needs and customers of the organization. Three features, however, were general enough to be presented: the personality of the bot, its trustworthiness, and the stage of the value chain in which the chatbot is implemented.
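For the Random Forest side of the comparison described above, a minimal sketch with scikit-learn is shown below; the sentences, labels, and feature choices are invented for illustration and are not the insurance-company or newspaper data used in the thesis.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Invented sentences with punctuation stripped, so the model cannot rely on "?"
sentences = [
    "what does my home insurance cover",
    "i would like to cancel my policy",
    "how long does a claim take",
    "the payment was made yesterday",
    "can you send me the invoice again",
    "thanks for the quick reply",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = question, 0 = non-question

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),                      # word and bigram features
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(sentences, labels)
print(model.predict(["was the invoice paid", "the claim is closed"]))
```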
APA, Harvard, Vancouver, ISO, and other styles
15

Mainey, Alexander J. "The mechanisms of moisture driven backout of nailplate connections. Solutions for outdoor environments and numerical modelling and predictions of moisture driven backout given climatic data." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/404468.

Full text
Abstract:
Nailplated timber trusses, manufactured from timber members connected by nailplates, are widely used in the domestic and international housing market as part of the roofing and flooring systems. The extensive use of these structural elements is driven primarily by their cost efficiency, resulting from an efficient manufacturing process, and their structural efficiency. The use of such trusses, however, is typically limited to protected (or indoor) environments due to a phenomenon called "nailplate backout". Nailplate backout is where the steel nailplate, used to connect two timber members together, separates from the parent timber. It is primarily caused by the repeated shrinking and swelling of the timber in response to changing environmental conditions. This moisture driven backout presents a significant hurdle to the expansion of the nailplated timber truss market for external use (and potentially application in the emerging mid-rise timber building market where the internal climatic conditions are relatively unknown). As part of a collaborative project between the industry, Griffith University and the Queensland Department of Agriculture and Fisheries (DAF), this thesis aims at investigating solutions to both prevent backout of the nailplates and increase the performance of trussed joints when exposed to large moisture content variations, and to develop the understanding of moisture driven nailplate backout through experimental and numerical modelling. Initially an investigation into redesigning the nailplate tooth to reduce the moisture driven backout was conducted. The proposed tooth redesign considered (i) two mechanical approaches consisting of redesigning the tooth profile and (ii) the application of an adhesive to a redesigned tooth profile. The effectiveness of the new designs was assessed using single nails, representative of a single nailplate tooth, with respect to their ability to resist moisture driven backout and their quasi-static withdrawal resistance after an increasing number of moisture cycles. Results indicated that the mechanical and adhesive approaches could effectively reduce the backout and obtain a higher withdrawal strength than currently used profiles. The re-designed tooth profiles were then adapted and implemented into a full nailplate to investigate if the results from a single tooth would translate to a nailplate joint, particularly after the joints were subjected to severe accelerated moisture cycles. One mechanical-based and one adhesive-based nailplate design were considered and compared to currently commercially available nailplates. The backout of the nailplate was recorded at discrete intervals and the tensile capacity of the joint was also investigated. Findings included a reduction in the rate of backout and a statistically significant increase in tensile capacity in most cases for both proposed designs. Finally, this thesis proposes an analytical model to predict moisture driven backout as a function of the timber properties, tooth profile and climatic conditions that the joint will be exposed to. The model was validated against experimental data in which a single tooth was pressed into a timber piece and its backout monitored in real time using digital image correlation. The application of the model is then demonstrated by predicting the expected range of nailplate backout in two roof spaces.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
APA, Harvard, Vancouver, ISO, and other styles
16

Heredia, Guzman Maria Belen. "Contributions to the calibration and global sensitivity analysis of snow avalanche numerical models." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALU028.

Full text
Abstract:
A snow avalanche is a natural hazard defined as a snow mass in fast motion. Since the 1930s, scientists have been designing snow avalanche models to describe this phenomenon. However, these models depend on poorly known input parameters that cannot be measured. To better understand model input parameters and model outputs, the aims of this thesis are (i) to propose a framework to calibrate input parameters and (ii) to develop methods to rank input parameters according to their importance in the model, taking into account the functional nature of the outputs. For these two purposes, we develop statistical methods based on Bayesian inference and global sensitivity analysis. All the developments are illustrated on test cases and real snow avalanche data. First, we propose a Bayesian inference method to retrieve input parameter distributions from avalanche velocity time series collected on experimental test sites. Our results show that it is important to include the error structure (in our case the autocorrelation) in the statistical modelling in order to avoid bias in the estimation of friction parameters. Second, to identify important input parameters, we develop two methods based on variance-based sensitivity measures. For the first method, we suppose that we have a given data sample and want to estimate sensitivity measures from this sample; for this purpose, we develop a nonparametric estimation procedure based on the Nadaraya-Watson kernel smoother to estimate aggregated Sobol' indices. For the second method, we consider the setting where the sample is obtained from acceptance/rejection rules corresponding to physical constraints. The set of input parameters becomes dependent due to the acceptance-rejection sampling, so we propose to estimate aggregated Shapley effects (an extension of Shapley effects to multivariate or functional outputs), and we also propose an algorithm to construct bootstrap confidence intervals. For the snow avalanche model application, we consider different uncertainty scenarios to model the input parameters; under our scenarios, the avalanche release position and volume are the most important inputs. Our contributions should help avalanche scientists to (i) account for the error structure in model calibration and (ii) rank input parameters according to their importance in the models using statistical methods.
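A simplified, scalar-output version of the given-data Sobol' estimation described above can be sketched as follows; the Gaussian kernel, fixed bandwidth, and toy model are assumptions for illustration, and the aggregated (functional-output) indices and Shapley effects developed in the thesis are not reproduced here.

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, bandwidth):
    """Kernel regression estimate of E[Y | X = x_eval] with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

def first_order_sobol(x, y, bandwidth=0.1):
    """Given-data estimate of S_i = Var(E[Y | X_i]) / Var(Y) for one scalar input."""
    m = nadaraya_watson(x, x, y, bandwidth)   # smooth Y against this input only
    return m.var() / y.var()

# Toy model Y = X1 + 0.2 * X2 with independent uniform inputs and a given sample
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=2000), rng.uniform(size=2000)
y = x1 + 0.2 * x2
# Close to the analytical values 0.96 and 0.04, up to smoothing bias
print(first_order_sobol(x1, y), first_order_sobol(x2, y))
```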
APA, Harvard, Vancouver, ISO, and other styles
17

Aslan, Yasemin. "Which Method Gives The Best Forecast For Longitudinal Binary Response Data?: A Simulation Study." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612582/index.pdf.

Full text
Abstract:
Panel data, also known as longitudinal data, are composed of repeated measurements taken from the same subject at different time points. Although forecasting is generally used in time series applications, it can also be applied to panel data because of their time dimension; however, there is only a limited number of studies in this area in the literature. In this thesis, forecasting is studied for panel data with a binary response because of its increasing importance and fundamental role. A simulation study is carried out to compare the efficiency of different methods and to find the one that gives the optimal forecast values. In this simulation, 21 different methods, including naïve and complex ones, are used with the help of the R software. It is concluded that transition models and random effects models with no lag of the response can be chosen for obtaining the most accurate forecasts, especially for the first two years of forecasting.
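As an illustration of the transition-model idea highlighted in the conclusion above, here is a minimal sketch that fits a logistic regression with the lagged response as a predictor and produces one-step-ahead forecasts; the simulated panel and its coefficients are hypothetical and are not the thesis's simulation design.

```python
import numpy as np
import statsmodels.api as sm

# Simulated panel: 200 subjects, 6 yearly binary responses with first-order dependence;
# all coefficients are invented for illustration.
rng = np.random.default_rng(0)
n_subj, n_time = 200, 6
x = rng.normal(size=(n_subj, n_time))
y = np.zeros((n_subj, n_time), dtype=int)
for t in range(1, n_time):
    eta = -0.5 + 0.8 * x[:, t] + 1.2 * y[:, t - 1]
    y[:, t] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Transition model: logistic regression of y_t on the covariate and the lagged response
endog = y[:, 1:].ravel()
exog = sm.add_constant(np.column_stack([x[:, 1:].ravel(), y[:, :-1].ravel()]))
fit = sm.Logit(endog, exog).fit(disp=0)

# One-step-ahead forecast for the next year, using each subject's last observed response
x_next = rng.normal(size=n_subj)
exog_next = sm.add_constant(np.column_stack([x_next, y[:, -1]]))
forecast = fit.predict(exog_next)
print(forecast[:5])
```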
APA, Harvard, Vancouver, ISO, and other styles
18

Kornfeil, Vojtěch. "Soubor úloh pro kurs Sběr, analýza a zpracování dat." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217707.

Full text
Abstract:
This thesis proposes exercise tasks for the course mentioned above, together with the design and creation of an automated evaluation system for these exercises. It discusses exemplary solutions to the possible tasks of each exercise and describes the automated evaluation system that was created. The evaluation program is tested with specially chosen data sets, which demonstrate its functionality on general data sets.
APA, Harvard, Vancouver, ISO, and other styles
19

Bahman, Abdul-Redha Majeed. "Comparisons of date-palm leaves with barley straw and brackish water with fresh water for dairy cows given a high concentrate diet in Kuwait." Thesis, University of Aberdeen, 1991. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU602309.

Full text
Abstract:
The main objectives of the work described in this thesis were to determine the technical feasibility of utilising date palm leaves as a potential source of roughage, and to study the effects of providing brackish water as a source of drinking water for dairy cows. An additional objective was to investigate the effect of feeding a high concentrate diet on the performance of Friesian cows. Four experiments were performed during the course of three years (November 1988 - June 1991): three were carried out in Kuwait and one in the North of Scotland. Experiment 1 was designed to compare the effects of feeding locally produced date palm leaves (DPL) with imported barley straw (S) as roughages to milking cows given a high concentrate diet. Fifty-six cows were used from the fifth week of lactation for 12 weeks. Experiment 2 studied the performance of thirty-eight non-lactating pregnant cows for about 15 weeks, with the same objective as experiment 1. Each of these experiments included a small trial for more detailed studies. In experiment 3, eight milking cows in the sixth week of lactation were studied for over six months to compare the effects of drinking brackish water (BW) with fresh water (FW) on the performance of cows fed a high concentrate diet in addition to DPL and freshly cut alfalfa. Experiment 4 was conducted in the North of Scotland to investigate the effects of feeding a high concentrate diet, similar to that used in Kuwait, on the productivity and ruminal fermentation of high yielding cows. The general conclusions drawn from these four experiments are: 1. Despite the low quality of DPL, it may be a suitable alternative to straw as a source of roughage for dairy cows in Kuwait. 2. Brackish water is a palatable and harmless source of drinking water, and its mineral content may contribute beneficially to the dairy cows' dietary requirements. 3. Feeding a high level of concentrate in diets based on grass silage increases milk yield and favours body gain at the expense of milk fat content. 4. There is a need for better utilization and processing of local agricultural by-products in Kuwait for the feeding of ruminants. 5. Further research is required to investigate the performance of dairy cows under different environments, especially hot-arid conditions.
APA, Harvard, Vancouver, ISO, and other styles
20

Kang, Lei. "Reduced-Dimension Hierarchical Statistical Models for Spatial and Spatio-Temporal Data." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1259168805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Stensholt, Håkon Meyer. "Sound Meets Type : Exploring the form generating qualities of sound as input for a new typography." Thesis, Konstfack, Grafisk Design & Illustration, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:konstfack:diva-4761.

Full text
Abstract:
How can you create new letterforms using sound as input? In Sound Meets Type, I have studied the form-generating qualities of sound as input for a new typography. Throughout history, technological development has provoked new approaches to type design, which in turn have caused letterforms to evolve. By using generative systems to search for letterforms in a contemporary, technological context, I have created custom software that uses the data inherent in sound as a form generator for possible new letterforms. The software is developed in JavaScript. The thesis consists of a written part and a creative part; the creative part is documented within this thesis.
APA, Harvard, Vancouver, ISO, and other styles
22

Andersson, Emilia. "Using a Serious Game as an Educational Tool about Obligation to Give Notice : A Game Collaboration with Tidaholm Municipality." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-14572.

Full text
Abstract:
The focus of this thesis was to use serious games as a tool to teach about the obligation to give notice. The obligation to give notice means that certain professionals must report to social services if a child is being harmed. This thesis studied whether case-based storytelling could provide a relevant teaching experience, whether storytelling could help participants learn about the obligation to give notice, and how instant and delayed feedback affect learning. Participants played a story-based game with either instant or delayed feedback and answered three questionnaires about the obligation to give notice. The study found that participants considered the storytelling useful for learning and for gaining more knowledge about the obligation to give notice. Regarding feedback, both types made the participants learn significantly more, but there was no significant difference between the two types when they were compared with each other.
APA, Harvard, Vancouver, ISO, and other styles
23

Zhang, Yunlei. "Two new algorithms for nonparametric analysis given incomplete data." 1997. http://catalog.hathitrust.org/api/volumes/oclc/39456668.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

"Indexing methods for multimedia data objects given pair-wise distances." 1997. http://library.cuhk.edu.hk/record=b5889136.

Full text
Abstract:
by Chan Mei Shuen Polly.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 67-70).
Contents:
1. Introduction (Definitions; Thesis Overview)
2. Background and Related Work (Feature-Based Index Structures; Distance Preserving Methods; Distance-Based Index Structures; The Vantage-Point Tree Method)
3. The Problem of Distance Preserving Methods in Querying (Some Experimental Results; Discussion)
4. Nearest Neighbor Search in VP-trees (The sigma-factor Algorithm; The Constant-α Algorithm; The Single-Pass Algorithm; Discussion; Performance Evaluation)
5. Update Operations on VP-trees (Insert; Delete; Performance Evaluation)
6. Minimizing Distance Computations (A Single Vantage Point per Level; Reuse of Vantage Points; Performance Evaluation)
7. Conclusions and Future Work
Bibliography
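Since the contents centre on vantage-point trees, a compact, self-contained sketch of the basic VP-tree idea (indexing objects with only pairwise distances and pruning by the median distance) is shown below; it implements plain best-first nearest neighbour search over a toy metric space, not the sigma-factor, constant-α, or single-pass variants developed in the thesis.

```python
import random

class VPTree:
    """Vantage-point tree: indexes objects using only pairwise distances."""

    def __init__(self, points, dist):
        self.dist = dist
        self.vp = None
        self.mu = 0.0
        self.inside = None     # subtree of points closer to the vantage point than mu
        self.outside = None    # subtree of points at distance mu or more
        if not points:
            return
        idx = random.randrange(len(points))
        self.vp = points[idx]
        rest = points[:idx] + points[idx + 1:]
        if not rest:
            return
        dists = [self.dist(self.vp, p) for p in rest]
        self.mu = sorted(dists)[len(dists) // 2]            # median distance splits the node
        self.inside = VPTree([p for p, d in zip(rest, dists) if d < self.mu], dist)
        self.outside = VPTree([p for p, d in zip(rest, dists) if d >= self.mu], dist)

    def nearest(self, q, best=None):
        """Best-first search for the nearest neighbour of q, pruning with the distance bound."""
        if self.vp is None:
            return best
        d = self.dist(q, self.vp)
        if best is None or d < best[0]:
            best = (d, self.vp)
        near, far = (self.inside, self.outside) if d < self.mu else (self.outside, self.inside)
        if near is not None:
            best = near.nearest(q, best)
        if far is not None and abs(d - self.mu) < best[0]:   # the far side can otherwise be skipped
            best = far.nearest(q, best)
        return best

# Toy metric space: integers with absolute difference as the distance
tree = VPTree(list(range(0, 100, 7)), dist=lambda a, b: abs(a - b))
print(tree.nearest(30))   # -> (2, 28)
```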
APA, Harvard, Vancouver, ISO, and other styles
25

Hung, Tsai Wei, and 蔡韋弘. "A likelihood ratio test for any given difference between 2proportions of pair binary data." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/40957571009485506188.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Statistics
Academic year 95
In clinical trials, we want to compare the positive rates of two medical exams in a paired design. The McNemar (1947) test was suggested for testing the equality of two correlated proportions when the sample size is large, whereas an exact binomial test was suggested when the sample size is small. In this paper, we use a likelihood ratio statistic to test any given difference between the two proportions of paired binary data for sample sizes between 5 and 30. We then calculate, with a MATLAB program, the p-value of every possible paired binary data set when the sample size equals 5, 10, 15, 20, 25 or 30 and the hypothesized difference equals -1, -0.9, ..., -0.1, 0, 0.1, ..., 0.9, 1. Finally, given a significance level, we can use the p-value to decide whether a sample is rejected, characterize the kind of region in which rejected samples fall, and construct a confidence interval for a sample.
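For orientation, a minimal sketch of the two classical paired-proportion tests that this thesis generalizes (the large-sample McNemar chi-square test and the small-sample exact binomial test for equal proportions); the discordant counts are invented, and the thesis's likelihood ratio test for an arbitrary given difference is not reproduced here.

```python
from math import comb
from scipy.stats import chi2

def paired_proportion_tests(b, c):
    """Tests for equal paired proportions from the discordant counts b (yes/no) and c (no/yes).

    Returns the large-sample McNemar chi-square p-value and the exact two-sided
    binomial p-value that is preferred when the sample is small.
    """
    n = b + c
    stat = (b - c) ** 2 / n
    p_chi2 = chi2.sf(stat, df=1)
    k = min(b, c)                                   # under H0 the discordant pairs are Binomial(n, 1/2)
    p_exact = min(1.0, 2 * sum(comb(n, i) * 0.5 ** n for i in range(k + 1)))
    return p_chi2, p_exact

print(paired_proportion_tests(b=8, c=2))            # invented discordant counts
```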
APA, Harvard, Vancouver, ISO, and other styles
26

Hsu, Shih-Kai, and 徐士凱. "Two-Machine Flow Shops Scheduling to Minimize Job Independent Earliness and Tardiness Penalties with a Given Common Due Date." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/86456208992008676701.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Industrial Management
Academic year 93
This study deals with the two-machine flow shop scheduling problem with consideration of earliness and tardiness penalties. Multiple jobs with a given common due date are to be scheduled. All jobs have equal earliness and tardiness weights, and the weight of a job depends only on whether the job is early or late, i.e., the weights are job-independent. The objective is to find a schedule that minimizes the weighted sum of earliness and tardiness penalties. We propose a number of propositions and a revised Bagchi's algorithm as a lower bound, which are implemented in our branch-and-bound algorithm to eliminate nodes efficiently in the branching tree. We also conduct a computational analysis to show the validity and effectiveness of our algorithm compared with complete enumeration.
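To make the objective function concrete, here is a small sketch that evaluates the weighted earliness/tardiness of a given job sequence in a two-machine permutation flow shop with a common due date; the processing times and due date are invented, no idle time is inserted, and the branch-and-bound algorithm itself is not reproduced.

```python
def flowshop_earliness_tardiness(sequence, p1, p2, due_date, alpha=1.0, beta=1.0):
    """Weighted earliness/tardiness of a job sequence in a two-machine flow shop.

    p1[j] and p2[j] are the processing times of job j on machines 1 and 2, all jobs
    share the common due date, and alpha/beta are the job-independent earliness and
    tardiness weights. No idle time is inserted before or between jobs.
    """
    t1 = t2 = 0.0
    cost = 0.0
    for j in sequence:
        t1 += p1[j]                  # completion time on machine 1
        t2 = max(t2, t1) + p2[j]     # completion time on machine 2
        cost += alpha * max(0.0, due_date - t2) + beta * max(0.0, t2 - due_date)
    return cost

# Three invented jobs and a common due date of 10: compare two candidate sequences
p1, p2 = [3, 2, 4], [2, 5, 1]
print(flowshop_earliness_tardiness([0, 1, 2], p1, p2, due_date=10))   # 6.0
print(flowshop_earliness_tardiness([2, 0, 1], p1, p2, due_date=10))   # 10.0
```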
APA, Harvard, Vancouver, ISO, and other styles
27

Ping-Wen, Lin, and 林秉汶. "The Research of Applying Data Mining in the Repair System-To give an example of information product repairing." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/35306488222479634314.

Full text
Abstract:
Master's thesis
Chung Hua University
Graduate Institute of Technology Management
Academic year 90
The remarkable advances made in the IT industry have paved the way for the computerization of the production process. Today, all production-related data are automatically stored in a database, including product deficiencies and their causes. From the information in the database, a systematic analysis of the reasons for the deficiencies can be formulated, which allows technicians to detect deficiencies or errors accurately and more effectively. In view of this, an effective database management mechanism is essential, and in the long run the proposed data mining technique will enhance the company's competitiveness. This thesis combines the 'Knowledge Discovery in Databases' process published by Fayyad in 1996 with the 'Data Mining' process of Berry and Linoff from 1997 to create a data mining structure suited to the repair system. Applying the proposed data mining structure to analyze the database can detect deficiencies faster and improve the yield rate. There are seven sections in this dissertation, namely: Problem Definition, Data Resources and Data Selection, Data Investigation, Data Transformation, The Creation of the Data Mining Model, The Result Assessment and Explanation, and Constructing the Repair Flow. The thesis aims to develop a specific company's new repair flow by analyzing its database. The results imply that following this new repair flow can narrow the investigation scope and shorten the repair time.
APA, Harvard, Vancouver, ISO, and other styles
