Dissertations / Theses on the topic 'Random selection'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Random selection.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Tyrrell, Simon. "Random and rational methods for compound selection." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370002.
Stringer, Harold. "BEHAVIOR OF VARIABLE-LENGTH GENETIC ALGORITHMS UNDER RANDOM SELECTION." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2657.
M.S.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science MS
Choukri, Sam. "Selection of malaria-specific epitopes from random peptide libraries /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9962513.
Frondana, Iara Moreira. "Model selection for discrete Markov random fields on graphs." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-02022018-151123/.
In this thesis we propose a penalized maximum likelihood criterion to estimate the conditional dependency graph of a discrete Markov random field. We prove the almost sure convergence of the graph estimator in the case of a finite or countably infinite set of variables. Our method requires minimal conditions on the probability distribution and, contrary to other approaches in the literature, the usual positivity condition is not needed. We introduce some examples with a finite set of vertices and study the performance of the estimator on data simulated from these examples. We also propose an empirical procedure based on cross-validation to select the best value of the constant in the definition of the estimator, and show the application of this procedure on two real data sets.
Ushan, Wardah. "Portfolio selection using Random Matrix theory and L-Moments." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/16921.
Markowitz's (1952) seminal work on Modern Portfolio Theory (MPT) describes a methodology for constructing an optimal portfolio of risky stocks. The constructed portfolio is based on a trade-off between risk and reward, and depends on the risk-return preferences of the investor. Implementing MPT requires estimates of the expected return and variance of each stock, and of the covariances between them. Historically, the sample mean vector and variance-covariance matrix have been used for this purpose. However, estimation errors cause the optimised portfolios to perform poorly out-of-sample. This dissertation considers two approaches to obtaining a more robust estimate of the variance-covariance matrix. The first is Random Matrix Theory (RMT), which compares the eigenvalues of an empirical correlation matrix to those generated from a correlation matrix of purely random returns. Eigenvalues of the random correlation matrix follow the Marcenko-Pastur density and lie within an upper and lower bound; this range is referred to as the "noise band". Eigenvalues of the empirical correlation matrix falling within the noise band are considered to provide no useful information, so RMT proposes filtering them out to obtain a cleaned, robust estimate of the correlation and covariance matrices. The second approach uses L-moments, rather than conventional sample moments, to estimate the covariance and correlation matrices. L-moment estimates are more robust to outliers than conventional sample moments, particularly when sample sizes are small. We use L-moments in conjunction with Random Matrix Theory to construct the minimum variance portfolio. In particular, we consider four strategies corresponding to four different estimates of the covariance matrix: the L-moment estimate and the sample-moment estimate, each with and without the incorporation of RMT.
We then analyse the performance of each strategy in terms of its risk-return characteristics and diversification.
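The noise-band filtering step that the abstract describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not code from the dissertation; the function names and the trace-preserving replacement of noise eigenvalues are our own choices.

```python
import numpy as np

def marchenko_pastur_bounds(n_obs, n_assets, sigma2=1.0):
    """Upper and lower eigenvalue bounds of a purely random correlation matrix."""
    q = n_assets / n_obs
    lower = sigma2 * (1 - np.sqrt(q)) ** 2
    upper = sigma2 * (1 + np.sqrt(q)) ** 2
    return lower, upper

def rmt_clean_correlation(returns):
    """Filter out eigenvalues inside the Marcenko-Pastur 'noise band'."""
    n_obs, n_assets = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lower, upper = marchenko_pastur_bounds(n_obs, n_assets)
    vals, vecs = np.linalg.eigh(corr)
    noise = (vals >= lower) & (vals <= upper)
    # Replace noise eigenvalues by their average so the trace is preserved
    vals_clean = vals.copy()
    if noise.any():
        vals_clean[noise] = vals[noise].mean()
    cleaned = vecs @ np.diag(vals_clean) @ vecs.T
    # Renormalise the diagonal to 1 to keep a valid correlation matrix
    d = np.sqrt(np.diag(cleaned))
    return cleaned / np.outer(d, d)

rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 20))   # 500 days, 20 stocks of pure noise
cleaned = rmt_clean_correlation(returns)
```

Replacing the noise eigenvalues by their average is one common cleaning convention; other schemes keep only the eigenvalues above the upper Marcenko-Pastur bound.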
Wonkye, Yaa Tawiah. "Innovations of random forests for longitudinal data." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1563054152739397.
Tran, The Truyen. "On conditional random fields: applications, feature selection, parameter estimation and hierarchical modelling." Curtin University of Technology, Dept. of Computing, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18614.
On the theory side, the thesis addresses three important theoretical issues of CRFs: feature selection, parameter estimation and the modelling of recursive sequential data. These issues are all addressed under a general setting of partial supervision, in which training labels are not fully available. For feature selection, we introduce a novel learning algorithm called AdaBoost.CRF that incrementally selects features out of a large feature pool as learning proceeds. AdaBoost.CRF is an extension of the standard boosting methodology to structured and partially observed data. We demonstrate that AdaBoost.CRF is able to eliminate irrelevant features and, as a result, returns a very compact feature set without significant loss of accuracy. Parameter estimation of CRFs is generally intractable for arbitrary network structures. This thesis contributes to this area by proposing a learning method called AdaBoost.MRF (AdaBoosted Markov Random Forests). As learning proceeds, AdaBoost.MRF incrementally builds a tree ensemble (a forest) that covers the original network by selecting the best spanning tree one at a time. As a result, we can approximately learn many rich classes of CRFs in linear time. The third theoretical contribution concerns modelling recursive, sequential data in which each level of resolution is a Markov sequence and each state in the sequence is itself a Markov sequence at a finer grain. One of the key contributions of this thesis is the Hierarchical Conditional Random Field (HCRF), an extension of the currently popular sequential CRF and the recent semi-Markov CRF (Sarawagi and Cohen, 2004). Unlike previous CRF work, the HCRF does not assume any fixed graphical structure.
Rather, it treats structure as an uncertain aspect and can estimate the structure automatically from the data. The HCRF is motivated by the Hierarchical Hidden Markov Model (HHMM) (Fine et al., 1998). Importantly, the thesis shows that the HHMM is a special case of the HCRF with slight modification, and that the semi-Markov CRF is essentially a flat version of the HCRF. Central to our contribution in the HCRF is a polynomial-time algorithm based on the Asymmetric Inside Outside (AIO) family developed by Bui et al. (2004) for learning and inference. Another important contribution is extending the AIO family to address learning with missing data and inference under partially observed labels. We also derive methods to deal with practical concerns associated with the AIO family, including numerical overflow and cubic-time complexity. Finally, we demonstrate good performance of the HCRF against rivals on two applications: indoor video surveillance and noun-phrase chunking.
Linusson, Henrik, Robin Rudenwall, and Andreas Olausson. "Random forest och glesa datarespresentationer." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16672.
Program: Systemarkitekturutbildningen
Patel, Richa. "Random mutagenesis and selection for RubisCO function in the photosynthetic bacterium Rhodobacter capsulatus." Connect to resource, 2008. http://hdl.handle.net/1811/32176.
Peng, Xiaoling. "Methods of variable selection and their applications in quantitative structure-property relationship (QSPR)." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/594.
Du, Ye Ting. "Simultaneous fixed and random effects selection in finite mixtures of linear mixed-effects models." Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110592.
Linear mixed-effects (LME) models are frequently used for modelling longitudinal data. One complicating factor in the analysis of such data is that samples are sometimes drawn from a population with substantial underlying heterogeneity, which would be difficult to capture with a single LME model. Such problems can be addressed with a finite mixture of linear mixed-effects models (FMLME), which segments the population into subpopulations and models each of them with a separate LME model. Often, a large number of covariates are introduced in the initial phase of a study, but their associations with the response variable vary from one component of the FMLME model to another. To improve predictability and obtain a parsimonious model, it is of great practical interest to identify the important effects, both fixed and random. Conventional variable selection techniques such as stepwise deletion and subset selection are computationally expensive even when the number of components and covariates is moderate. This thesis introduces a penalized-likelihood approach and proposes a computationally efficient nested EM algorithm. The estimators are shown to possess properties such as consistency, sparsity and asymptotic normality. The performance of the proposed method is illustrated through simulations and an application to a real data set.
Chen, Juan. "Model selection for IRT equating of Testlet-based tests in the random groups design." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1439.
Strobl, Carolin, Anne-Laure Boulesteix, Achim Zeileis, and Torsten Hothorn. "Bias in Random Forest Variable Importance Measures: Illustrations, Sources and a Solution." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/1274/1/document.pdf.
Series: Research Report Series / Department of Statistics and Mathematics
Hjerpe, Adam. "Computing Random Forests Variable Importance Measures (VIM) on Mixed Numerical and Categorical Data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-185496.
Random Forest (RF) is a popular predictive model that has shown good results across a large set of application studies. The model provides high prediction accuracy, is able to model complex high-dimensional data, and has furthermore shown good results with inter-correlated predictor variables. This project investigates a measure, the variable importance measure (VIM) obtained from the RF model, for computing the degree of association between the predictor variables and the target variable. The project examines the sensitivity of the VIM to qualitative predictor noise and its ability to differentiate predictive variables from variables that, with respect to the target variable, merely describe noise. Differentiating predictive variables in supervised learning can be used to increase classifier robustness, increase prediction accuracy and reduce data dimensionality, and the VIM can serve as a tool for exploring relationships between the predictor variables and the target variable.
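The kind of variable importance measure discussed in this abstract can be illustrated with scikit-learn. This is a minimal sketch on our own synthetic data, not the project's code; note that the Strobl et al. report listed above shows that impurity-based importances can be biased, so permutation importance is a common alternative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: 5 informative predictors followed by 15 pure-noise predictors
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Mean-decrease-in-impurity VIM; larger values suggest stronger association
vim = rf.feature_importances_
ranking = np.argsort(vim)[::-1]
print("Top 5 features by VIM:", ranking[:5])  # informative columns are 0-4
```

Because `shuffle=False`, the informative predictors occupy the first five columns, so their importances should dominate the noise columns.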
Sokolovska, Nataliya. "Contributions to the estimation of probabilistic discriminative models: semi-supervised learning and feature selection." Phd thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00006257.
Hu, Renjie. "Random neural networks for dimensionality reduction and regularized supervised learning." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6960.
Chakravorty, Hirak. "Equilibrium and non-equilibrium analysis of folding and sequence selection in mean field random heteropolymers." Thesis, King's College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399162.
Al Maathidi, M. M. "Optimal feature selection and machine learning for high-level audio classification : a random forests approach." Thesis, University of Salford, 2017. http://usir.salford.ac.uk/44338/.
Eriksson, Viktor. "Bayesian Model Selection with Intrinsic Bayes Factor for Location-Scale Model and Random Effects Model." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-85152.
New, T. M. "Random road analysis and improved gear ratio selection of a front wheel drive drag racing car." Connect to this title online, 2008. http://etd.lib.clemson.edu/documents/1211387456/.
Carter, Kristina A. "A Comparison of Variable Selection Methods for Modeling Human Judgment." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1552494031580848.
Frühwirth-Schnatter, Sylvia, and Regina Tüchler. "Bayesian parsimonious covariance estimation for hierarchical linear mixed models." Institut für Statistik und Mathematik, WU Vienna University of Economics and Business, 2004. http://epub.wu.ac.at/774/1/document.pdf.
Series: Research Report Series / Department of Statistics and Mathematics
Lin, Hui-Fen. "A Comparison of Three Item Selection Methods in Criterion-Referenced Tests." Thesis, University of North Texas, 1988. https://digital.library.unt.edu/ark:/67531/metadc332327/.
Kudella, Patrick [Verfasser], and Dieter [Akademischer Betreuer] Braun. "Sequence self-selection by the network dynamics of random ligating oligomer pools / Patrick Kudella ; Betreuer: Dieter Braun." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2021. http://d-nb.info/123264546X/34.
Blaha, Jeffrey. "Variable Selection Methods for Residential Real Estate Markets: An Exploration of Random Forest Trees in Spatial Economics." University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1503330225924692.
Singh, Vivek. "Contributions to automatic particle identification in electron micrographs: Algorithms, implementation, and applications." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2107.
Ph.D.
School of Computer Science
Engineering and Computer Science
Computer Science
Edgel, Robert John. "Habitat Selection and Response to Disturbance by Pygmy Rabbits in Utah." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3928.
Boone, Edward L. "Bayesian Methodology for Missing Data, Model Selection and Hierarchical Spatial Models with Application to Ecological Data." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26141.
Ph. D.
Kisamore, Jennifer L. "Validity Generalization and Transportability: An Investigation of Distributional Assumptions of Random-Effects Meta-Analytic Methods." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000060.
Rönnegård, Lars. "Selection, maternal effects and inbreeding in reindeer husbandry." Uppsala : Dept. of Animal Breeding and Genetics, Swedish Univ. of Agricultural Sciences, 2003. http://epsilon.slu.se/a370.pdf.
Kamath, Vidya. "Use of Random Subspace Ensembles on Gene Expression Profiles in Survival Prediction for Colon Cancer Patients." Scholar Commons, 2005. https://scholarcommons.usf.edu/etd/715.
Arnroth, Lukas, and Dennis Jonni Fiddler. "Supervised Learning Techniques : A comparison of the Random Forest and the Support Vector Machine." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-274768.
Söderberg, Max Joel, and Axel Meurling. "Feature selection in short-term load forecasting." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259692.
This report investigates the correlation and importance of different features for forecasting energy consumption 24 hours ahead. The features come from three categories: weather, time and past energy consumption. The correlations are obtained using Pearson correlation and mutual information. The most strongly correlated features turned out to be those representing past energy consumption, followed by temperature and month. Two identical feature sets were obtained by ranking the features by correlation. Three further feature sets were created manually. The first contained seven features representing past energy consumption, one for each of the seven days preceding the forecast date. The second consisted of weather and time features. The third contained all features from the first and second sets. These sets were then compared using different machine learning models. The results showed that the set with all features and the set with past energy consumption gave the best results for all models.
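The two correlation measures named in this abstract can be sketched as follows. This is an illustration on synthetic lag features, assuming nothing about the thesis's actual data; the histogram-based mutual information estimator is one simple choice among several.

```python
import numpy as np

def pearson_scores(X, y):
    """Absolute Pearson correlation between each feature column and the target."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    num = Xc.T @ yc
    den = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.abs(num / den)

def mutual_information_scores(X, y, bins=10):
    """Histogram-based mutual information of each feature with the target."""
    scores = []
    for j in range(X.shape[1]):
        pxy, _, _ = np.histogram2d(X[:, j], y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        outer = np.outer(px, py)
        mask = pxy > 0
        scores.append(float((pxy[mask] * np.log(pxy[mask] / outer[mask])).sum()))
    return np.array(scores)

# Toy example: columns play the role of lag-24 consumption, temperature, noise
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(1000)
ranking = np.argsort(pearson_scores(X, y))[::-1]
print(ranking)  # the strongly weighted "lag" column should rank first
```

Ranking the features by either score and keeping the top ones is the simplest filter-style selection; in this toy setup the heavily weighted first column ranks first under both measures.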
Michelfelder, Stefan. "Selection and characterization of targeted vector capsids from random adeno-associated virus type 2 (AAV-2) display peptide libraries." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:352-opus-72406.
Frot, Benjamin. "Graphical model selection for Gaussian conditional random fields in the presence of latent variables : theory and application to genetics." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:0a6799ed-fca1-48b2-89cd-ad6f2c0439af.
Russ, Ricardo. "Service Level Achievements - Test Data for Optimal Service Selection." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-50538.
Körber, Julian [Verfasser], and Susanne [Akademischer Betreuer] Rässler. "Bayesian Analysis of Network Data. Model Selection and Evaluation of the Exponential Random Graph Model / Julian Körber ; Betreuer: Susanne Rässler." Bamberg : Otto-Friedrich-Universität Bamberg, 2018. http://d-nb.info/1160938849/34.
Peck, Riley D. "Seasonal Habitat Selection by Greater Sage Grouse in Strawberry Valley Utah." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/3180.
Zhang, Qing Frankowski Ralph. "An empirical evaluation of the random forests classifier models for variable selection in a large-scale lung cancer case-control study /." See options below, 2006. http://proquest.umi.com/pqdweb?did=1324365481&sid=1&Fmt=2&clientId=68716&RQT=309&VName=PQD.
Hermann, Philipp [Verfasser], and Hajo [Akademischer Betreuer] Holzmann. "High-dimensional, robust, heteroscedastic variable selection with the adaptive LASSO, and applications to random coefficient regression / Philipp Hermann ; Betreuer: Hajo Holzmann." Marburg : Philipps-Universität Marburg, 2021. http://d-nb.info/1236692187/34.
Meyer, Patrick E. "Information-theoretic variable selection and network inference from microarray data." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210396.
Full textdata. In a lot of emerging fields, like bioinformatics, they are confronted with datasets
having thousands of variables, a lot of noise, non-linear dependencies and, only, tens of
samples. The detection of functional relationships, when such uncertainty is contained in
data, constitutes a major challenge.
Our work focuses on variable selection and network inference from datasets having
many variables and few samples (high variable-to-sample ratio), such as microarray data.
Variable selection is the topic of machine learning whose objective is to select, among a
set of input variables, those that lead to the best predictive model. The application of
variable selection methods to gene expression data allows, for example, to improve cancer
diagnosis and prognosis by identifying a new molecular signature of the disease. Network
inference consists in representing the dependencies between the variables of a dataset by
a graph. Hence, when applied to microarray data, network inference can reverse-engineer
the transcriptional regulatory network of cell in view of discovering new drug targets to
cure diseases.
In this work, two original tools are proposed MASSIVE (Matrix of Average Sub-Subset
Information for Variable Elimination) a new method of feature selection and MRNET (Minimum
Redundancy NETwork), a new algorithm of network inference. Both tools rely on
the computation of mutual information, an information-theoretic measure of dependency.
More precisely, MASSIVE and MRNET use approximations of the mutual information
between a subset of variables and a target variable based on combinations of mutual informations
between sub-subsets of variables and the target. The used approximations allow
to estimate a series of low variate densities instead of one large multivariate density. Low
variate densities are well-suited for dealing with high variable-to-sample ratio datasets,
since they are rather cheap in terms of computational cost and they do not require a large
amount of samples in order to be estimated accurately. Numerous experimental results
show the competitiveness of these new approaches. Finally, our thesis has led to a freely
available source code of MASSIVE and an open-source R and Bioconductor package of
network inference.
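The minimum-redundancy idea behind MRNET can be sketched as follows. This is a toy reconstruction from the description above, not the published implementation or the Bioconductor package's API; `mrmr_scores` and the toy mutual-information matrix are our own.

```python
import numpy as np

def mrmr_scores(mim, target):
    """One MRNET step: rank candidate predictors of one target variable by
    maximum relevance, minimum redundancy, given a mutual-information matrix."""
    n = mim.shape[0]
    candidates = [j for j in range(n) if j != target]
    selected, scores = [], {}
    while candidates:
        best, best_score = None, -np.inf
        for j in candidates:
            relevance = mim[j, target]
            redundancy = np.mean([mim[j, k] for k in selected]) if selected else 0.0
            s = relevance - redundancy
            if s > best_score:
                best, best_score = j, s
        scores[best] = best_score
        selected.append(best)
        candidates.remove(best)
    return scores

def mrnet(mim):
    """Symmetric edge-score matrix: each edge keeps the best mRMR score
    obtained from either endpoint's selection step."""
    n = mim.shape[0]
    net = np.zeros((n, n))
    for t in range(n):
        for j, s in mrmr_scores(mim, t).items():
            net[t, j] = net[j, t] = max(net[t, j], net[j, t], s)
    return np.maximum(net, 0.0)   # non-positive scores -> no edge

# Toy mutual-information matrix: variables 0 and 1 strongly dependent
mim = np.array([[0.0, 0.9, 0.1],
                [0.9, 0.0, 0.1],
                [0.1, 0.1, 0.0]])
net = mrnet(mim)
```

For each variable in turn, candidates are ranked by relevance to the target minus their average redundancy with already-selected predictors, so the strong 0-1 edge survives while the weak, redundant edges are damped.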
Doctorate in Sciences, specialisation in Computer Science
Tanyildiz, Zeynep Esra. "The Effects of Networks on Institution Selection by Foreign Doctoral Students in the U.S." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/pmap_diss/25.
Cattoglio, Claudia. "Target site selection of retroviral vectors in the human genome : viral and genomic determinants of non-random integration patterns in hematopoietic cells." Thesis, Open University, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494505.
Jiménez Montero, José Antonio. "Selección genómica en poblaciones reducidas de vacuno de leche." Doctoral thesis, Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/27649.
Jiménez Montero, JA. (2013). Selección genómica en poblaciones reducidas de vacuno de leche [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/27649
TESIS
Wang, Xing. "Time Dependent Kernel Density Estimation: A New Parameter Estimation Algorithm, Applications in Time Series Classification and Clustering." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6425.
Kaze, Joshua Taft. "Habitat Selection by Two K-Selected Species: An Application to Bison and Sage Grouse." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4284.
Tanyildiz, Zeynep Esra. "Effects of networks on U.S. institution selection by foreign doctoral students in science and engineering." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22644.
Committee Chair: Paula E. Stephan; Committee Member: Albert J. Sumell; Committee Member: Erdal Tekin; Committee Member: Gregory B. Lewis; Committee Member: Mary Frank Fox.
Cole, James Jacob. "Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1587.
Duan, Haoyang. "Applying Supervised Learning Algorithms and a New Feature Selection Method to Predict Coronary Artery Disease." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31113.
Khan, Syeduzzaman. "A PROBABILISTIC MACHINE LEARNING FRAMEWORK FOR CLOUD RESOURCE SELECTION ON THE CLOUD." Scholarly Commons, 2020. https://scholarlycommons.pacific.edu/uop_etds/3720.