Theses on the topic "Computational methods"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Computational methods".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Vakili, Mohammadjavad. "Methods in Computational Cosmology". Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10260795.
The state of the inhomogeneous universe and its geometry throughout cosmic history can be studied by measuring the clustering of galaxies and the gravitational lensing of distant faint galaxies. Lensing and clustering measurements from the large datasets provided by modern galaxy surveys will forever shape our understanding of how the universe expands and how its structures grow. Interpretation of these rich datasets requires careful characterization of uncertainties at different stages of data analysis: estimation of the signal, estimation of the signal uncertainties, model predictions, and connecting the model to the signal through probabilistic means. In this thesis, we attempt to address some aspects of these challenges.
The first step in cosmological weak lensing analyses is accurate estimation of the distortion of the light profiles of galaxies by large scale structure. These small distortions, known as the cosmic shear signal, are dominated by extra distortions due to telescope optics and atmosphere (in the case of ground-based imaging). This effect is captured by a kernel known as the Point Spread Function (PSF) that needs to be fully estimated and corrected for. We address two challenges ahead of accurate PSF modeling for weak lensing studies. The first challenge is finding the centers of point sources that are used for empirical estimation of the PSF. We show that the approximate methods for centroiding stars in wide surveys are able to optimally saturate the information content that is retrievable from astronomical images in the presence of noise.
The first step in weak lensing studies is estimating the shear signal by accurately measuring the shapes of galaxies. Galaxy shape measurement involves modeling the light profile of galaxies convolved with the light profile of the PSF. Detectors of many space-based telescopes such as the Hubble Space Telescope (HST) sample the PSF with low resolution. Reliable weak lensing analysis of galaxies observed by the HST camera requires knowledge of the PSF at a resolution higher than the pixel resolution of HST. This PSF is called the super-resolution PSF. In particular, we present a forward model of the point sources imaged through filters of the HST WFC3 IR channel. We show that this forward model can accurately estimate the super-resolution PSF. We also introduce a noise model that permits us to robustly analyze the HST WFC3 IR observations of the crowded fields.
Then we try to address one of the theoretical uncertainties in modeling of galaxy clustering on small scales. Study of small scale clustering requires assuming a halo model. Clustering of halos has been shown to depend on halo properties beyond mass such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies with halo occupation distribution (HOD) assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, assembly bias could cause the modeling of galaxy clustering to face systematic effects if the expected number of galaxies in halos is correlated with other halo properties. Using high resolution N-body simulations and the clustering measurements of Sloan Digital Sky Survey (SDSS) DR7 main galaxy sample, we show that modeling of galaxy clustering can slightly improve if we allow the HOD model to depend on halo properties beyond mass.
One of the key ingredients in precise parameter inference using galaxy clustering is accurate estimation of the error covariance matrix of clustering measurements. This requires generation of many independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies in a wide range of physical scales. We present a fast and accurate method based on low-resolution N-body simulations and an empirical bias model for generating mock catalogs. We use fast particle mesh gravity solvers for generation of the dark matter density field and we use Markov Chain Monte Carlo (MCMC) to estimate the bias model that connects dark matter to galaxies. We show that this approach enables the fast generation of mock catalogs that recover clustering at a percent-level accuracy down to quasi-nonlinear scales.
Cosmological datasets are interpreted by specifying likelihood functions that are often assumed to be multivariate Gaussian. Likelihood free approaches such as Approximate Bayesian Computation (ABC) can bypass this assumption by introducing a generative forward model of the data and a distance metric for quantifying the closeness of the data and the model. We present the first application of ABC in large scale structure for constraining the connections between galaxies and dark matter halos. We present an implementation of ABC equipped with Population Monte Carlo and a generative forward model of the data that incorporates sample variance and systematic uncertainties. (Abstract shortened by ProQuest.)
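The ABC scheme described in this abstract reduces, in its simplest rejection form, to a few lines of code. The sketch below is illustrative only: a hypothetical Gaussian toy model with a sample-mean summary statistic, not the thesis's forward model of large-scale structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=200):
    # Hypothetical generative model: Gaussian data with unknown mean theta.
    return rng.normal(theta, 1.0, size=n)

def distance(x, y):
    # Distance between summary statistics (here: the sample means).
    return abs(x.mean() - y.mean())

def abc_rejection(observed, prior_draws=5000, eps=0.05):
    # Draw parameters from a flat prior; keep those whose simulated data
    # land within eps of the observation under the chosen distance.
    accepted = []
    for _ in range(prior_draws):
        theta = rng.uniform(-5.0, 5.0)
        if distance(forward_model(theta), observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

observed = rng.normal(1.0, 1.0, size=200)  # "data" with true mean 1.0
posterior = abc_rejection(observed)
print(posterior.size, posterior.mean())
```

Population Monte Carlo, as used in the thesis, refines this basic scheme by iteratively shrinking `eps` and reweighting the accepted particles.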
Steklova, Klara. "Computational methods in hydrogeophysics". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/60815.
Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences; Graduate.
af Klinteberg, Ludvig. "Computational methods for microfluidics". Licentiate thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-116384.
Chernyshenko, Dmitri. "Computational methods in micromagnetics". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/398126/.
Argamon, Shlomo. "Computational methods for counterterrorism". Berlin, Heidelberg: Springer, 2009. http://d-nb.info/993136176/04.
Zhu, Tulong. "Meshless methods in computational mechanics". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/11795.
Hugtenburg, Richard P. "Computational methods in radiation oncology". Thesis, University of Canterbury. Physics and Astronomy, 1998. http://hdl.handle.net/10092/6796.
Bertolani, Steve James. "Computational Methods for Modeling Enzymes". Thesis, University of California, Davis, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10928544.
Enzymes play a crucial role in modern biotechnology, industry, food processing and medical applications. Since their first industrial uses were discovered, people have sought out new enzymes from Nature to catalyze different chemical reactions. In modern times, with the advent of computational methods, protein structure solutions, protein sequencing and DNA synthesis methods, we now have the tools to enable new approaches to rational enzyme engineering. With an enzyme structure in hand, a researcher may run an in silico experiment to sample different amino acids in the active site in order to identify new combinations that are likely to stabilize a transition-state-enzyme model. A suggested mutation can then be encoded into the desired enzyme gene, ordered, synthesized and tested. Although this truly astonishing feat of engineering and modern biotechnology allows the redesign of existing enzymes to acquire a new substrate specificity, it still requires a large amount of time, capital and technical capability.
Concurrently with these strides in computational protein design, the cost of sequencing DNA plummeted after the turn of the century. With the reduced cost of sequencing, the number of sequences of naturally occurring proteins in public databases has grown exponentially. This new, large source of information can be utilized to enable rational enzyme design, as long as it can be coupled with accurate modeling of the protein sequences.
This work first describes a novel approach to reengineering enzymes (Genome Enzyme Orthologue Mining; GEO) that utilizes the vast number of protein sequences in modern databases along with extensive computational modeling, and achieves results comparable to state-of-the-art rational enzyme design methods. Then, inspired by the success of this new method and aware of its reliance on the accuracy of the protein models, we created a computational benchmark both to measure the accuracy of our models and to improve it by encoding additional information about the structure, derived from mechanistic studies (Catalytic Geometry constraints; CG). Lastly, we use the improved-accuracy method to automatically model hundreds of putative enzyme sequences and dock substrates into them to extract important features that are then used to inform experiments and design. This is used to reengineer a ribonucleotide reductase to catalyze an aldehyde-deformylating oxygenase reaction.
These chapters advance the field of rational enzyme engineering, by providing a novel technique that may enable efficient routes to rationally design enzymes for reactions of interest. These chapters also advance the field of homology modeling, in the specific domain in which the researcher is modeling an enzyme with a known chemical reaction. Lastly, these chapters and techniques lead to an example which utilizes highly accurate computational models to create features which can help guide the rational design of enzyme catalysts.
Syed, Zeeshan Hassan 1980. "Computational methods for physiological data". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54671.
Author is also affiliated with the MIT Dept. of Electrical Engineering and Computer Science. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-188).
Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care, by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, which are often dismissed as noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics. Furthermore, these results were independent of information in echocardiography, clinical characteristics, and biomarkers.
(cont.) Our symbolic analysis techniques also identified groups of patients exhibiting a varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
by Zeeshan Hassan Syed.
Ph.D.
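The symbolic-analysis idea in the abstract above, abstracting a continuous waveform into a discrete sequence and mining it for patterns, can be illustrated with a minimal sketch. Quantile binning and word counting on a synthetic signal are hypothetical stand-ins; the thesis's actual ECG pipeline is far more elaborate.

```python
import numpy as np
from collections import Counter

def symbolize(signal, n_symbols=4):
    # Map each sample to a symbol via quantile binning, turning a
    # continuous waveform into a discrete symbolic sequence.
    edges = np.quantile(signal, np.linspace(0.0, 1.0, n_symbols + 1)[1:-1])
    return np.digitize(signal, edges)

def pattern_counts(symbols, length=3):
    # Count every length-3 symbolic "word"; unusual word frequencies can
    # flag high-risk patterns or patients who are outliers in a population.
    words = [tuple(symbols[i:i + length]) for i in range(len(symbols) - length + 1)]
    return Counter(words)

t = np.linspace(0.0, 4.0 * np.pi, 400)
signal = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
symbols = symbolize(signal)
top = pattern_counts(symbols).most_common(3)
print(top)
```

Once waveforms are reduced to symbol frequencies, standard clustering or outlier detection can be run over patients, which is the macro-level analysis the abstract describes.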
Fei, Bingxin. "Computational Methods for Option Pricing". Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/381.
Yang, Xin; Barlow, Jesse L.; Zha, Hongyuan. "Computational methods for manifold learning". [University Park, Pa.]: Pennsylvania State University, 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-2256/index.html.
Blount, Steven Michael 1958. "Computational methods for stochastic epidemics". Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288714.
Montanucci, Ludovica <1978>. "Computational methods for genome screening". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/847/1/Tesi_Montanucci_Ludovica.pdf.
Montanucci, Ludovica <1978>. "Computational methods for genome screening". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/847/.
Carnimeo, Ivan. "Computational methods for spectroscopic properties". Doctoral thesis, Scuola Normale Superiore, 2014. http://hdl.handle.net/11384/85802.
Salis, Samuele. "Computational methods for transport properties". Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/248738.
Through antimicrobial resistance, many bacteria can survive an ever larger number of antibiotics. This is true in particular for the category of bacteria classified as gram-negative. These bacteria differ from the others by the presence of an outer membrane, which protects them from the fast access (and consequently the action) of antibiotics. The increasing capability of bacteria to survive many kinds of drugs has given rise to Multiple Drug Resistance (MDR). New antibiotics could help to mitigate the MDR problem, but the poor understanding of permeability through outer membranes has led to an ever smaller number of newly patented antibiotics. This is due to a lack of experimental methods able to explain permeation in sufficient detail and, on the other hand, to the difficulty of reaching the typical time scales (ms or more) of these processes. The category of antibiotics studied in this thesis can permeate the membrane by crossing porins (beta-barrel proteins nestled in the bacterial outer membrane), so permeation happens when we observe transport of the antibiotic through a porin. In this thesis we focus on computational methods suitable for increasing our understanding of transport processes. We start with a post-processing algorithm that can extract, from an electrophysiology time series, transport events apparently shorter than the temporal sensitivity of the experimental device; we continue with another post-processing algorithm that extracts the real transition time from a metadynamics simulation, thereby sidestepping the timescale problem in computer simulations; and we finish with an ultra-coarse-grained model that can be used to study transport properties through a bacterial channel. Finally, we list the results obtained using the three aforementioned methods and summarise the thesis with conclusions.
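The first of the three methods mentioned in this abstract, recovering transport events from an electrophysiology time series, is at its core an event-detection problem on a noisy current trace. The following toy sketch uses synthetic data and hypothetical thresholds, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ion-current trace: noisy baseline with brief blockade events,
# mimicking an antibiotic transiently occluding a porin.
n = 2000
trace = 100.0 + rng.normal(0.0, 1.0, n)
for start in (300, 900, 1500):
    trace[start:start + 20] -= 30.0  # 20-sample current blockades

def detect_events(signal, drop=15.0, min_len=5):
    # Flag contiguous runs where the current falls below baseline - drop;
    # runs shorter than min_len samples are discarded as noise.
    baseline = np.median(signal)
    below = signal < baseline - drop
    events, run = [], None
    for i, b in enumerate(below):
        if b and run is None:
            run = i
        elif not b and run is not None:
            if i - run >= min_len:
                events.append((run, i))
            run = None
    return events

print(detect_events(trace))  # three events of roughly 20 samples each
```

Real traces require more care (baseline drift, events at the noise floor), which is exactly the regime the thesis's post-processing algorithm targets.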
Robertz, Daniel. "Formal computational methods for control theory". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=981070019.
Taher, Leila. "Computational methods for splice site prediction". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=978938631.
Kolev, Tzanio Valentinov. "Least-squares methods for computational electromagnetics". Texas A&M University, 2004. http://hdl.handle.net/1969.1/1115.
Leung, Chi-Yin. "Computational methods for integral equation analysis". Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/8139.
Miller, David J.; Ghosh, Avijit. "New methods in computational systems biology". Philadelphia, Pa.: Drexel University, 2008. http://hdl.handle.net/1860/2810.
Taghi Hajian, Mozafar. "Computational methods for discrete programming problems". Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315490.
Li, Limin, and 李丽敏. "Machine learning methods for computational biology". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44546749.
Meng, Lingling, and 孟玲玲. "Computational electromagnetics methods for IC modeling". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195993.
Texto completopublished_or_final_version
Electrical and Electronic Engineering
Master
Master of Philosophy
Santi, Marcio Rodrigues de. "Computational Methods for Geological Sections Restoration". Pontifícia Universidade Católica do Rio de Janeiro, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2795@1.
This work presents a new approach for the restoration of geological cross-sections that is based on physical modeling and numerical simulation. The main purpose is to introduce Continuum Mechanics concepts into the geological restoration process in order to consider physical properties of the materials during the simulation of the movement of a rock block along a fault. The adopted strategy uses a dynamic relaxation algorithm to solve the equation system that arises from the numerical simulation based on the Finite Element Method, together with specific boundary conditions to represent the movement of the rock block over the fault. As development environment, a cross-section restoration system was adopted, composed of a group of geometric transformations that are usual in the classical approach to the problem. This system adopts a geometric modeling technology based on a data structure that is capable of completely representing the topology of a planar subdivision. The proposed numerical simulation is implemented inside this system and integrates three modules: a pre-processing module, where the required input data can be easily generated; an analysis module, in which the dynamic relaxation method has been implemented; and a post-processing module, where the results of the numerical simulation can be viewed. The palinspastic nature of the restoration problem is taken into account by means of a user-friendly graphics interface that was specifically designed for the system. The graphics interface and the geological attribute classes were completely reorganized with two purposes. First, to implement a graphical interface based on a decision tree to manage the user tasks involved in the restoration process, which includes trial-and-error steps.
Second, to provide support for the implementation of numerical simulation in the restoration process. The ideas proposed herein can be considered a first step towards a complete geological cross-section restoration system in which more consistent deformation measures can be incorporated into the governing equations to better represent the mechanical behavior of the rocks, as well as towards an expansion of the presented system to a three-dimensional environment, currently under investigation.
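The dynamic relaxation method used in this thesis solves a static equilibrium problem by integrating a fictitious damped dynamic system to steady state. A minimal sketch on a hypothetical three-degree-of-freedom spring chain (not the thesis's finite element formulation, whose stiffness matrix comes from the FEM assembly):

```python
import numpy as np

def dynamic_relaxation(K, f, mass=1.0, damping=1.5, dt=0.1, tol=1e-8, max_steps=20000):
    # Solve the static system K u = f by explicitly integrating a damped
    # fictitious dynamic system until the out-of-balance force vanishes.
    n = len(f)
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(max_steps):
        r = f - K @ u                  # residual (out-of-balance) force
        if np.linalg.norm(r) < tol:
            break
        a = (r - damping * v) / mass   # damped pseudo-acceleration
        v += dt * a
        u += dt * v
    return u

# Three-spring chain, fixed at one end, unit load at the free end.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
f = np.array([0.0, 0.0, 1.0])
u = dynamic_relaxation(K, f)
print(u)  # converges to the exact static solution [1, 2, 3]
```

The appeal of the scheme, and the reason it suits restoration problems, is that it never assembles or factorizes a global system: only matrix-vector products and local updates are needed at each pseudo-time step.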
Molaro, Mark Christopher. "Computational statistical methods in chemical engineering". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/111286.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-182).
Recent advances in theory and practice have introduced a wide variety of tools from machine learning that can be applied to data intensive chemical engineering problems. This thesis covers applications of statistical learning spanning a range of relative importance of data versus existing detailed theory. In each application, the quantity and quality of data available from experimental systems are used in conjunction with an understanding of the theoretical physical laws governing system behavior to the extent they are available. A detailed generative parametric model for optical spectra of multicomponent mixtures is introduced. The application of interest is the quantification of uncertainty associated with estimating the relative abundance of mixtures of carbon nanotubes in solution. This work describes a detailed analysis of sources of uncertainty in estimation of relative abundance of chemical species in solution from optical spectroscopy. In particular, the quantification of uncertainty in mixtures with parametric uncertainty in pure component spectra is addressed. Markov Chain Monte Carlo methods are utilized to quantify uncertainty in these situations and the inaccuracy and potential for error in simpler methods is demonstrated. Strategies to improve estimation accuracy and reduce uncertainty in practical experimental situations are developed including when multiple measurements are available and with sequential data. The utilization of computational Bayesian inference in chemometric problems shows great promise in a wide variety of practical experimental applications. A related deconvolution problem is addressed in which a detailed physical model is not available, but the objective of analysis is to map from a measured vector valued signal to a sum of an unknown number of discrete contributions. The data analyzed in this application is electrical signals generated from a free surface electro-spinning apparatus.
In this information-poor system, MAP estimation is used to reduce the variance in estimates of the physical parameters of interest. The formulation of the estimation problem in a probabilistic context allows for the introduction of prior knowledge to compensate for a high-dimensional ill-conditioned inverse problem. The estimates from this work are used to develop a productivity model, expanding on previous work and showing how the uncertainty from estimation impacts system understanding. A new machine learning based method for monitoring for anomalous behavior in production oil wells is reported. The method entails a transformation of the available time series of measurements into a high-dimensional feature-space representation. This transformation yields results which can be treated as static independent measurements. A new method for feature selection in one-class classification problems is developed based on approximate knowledge of the state of the system. An extension of feature-space transformation methods on time series data is introduced to handle multivariate data in large computationally burdensome domains by using sparse feature extraction methods. As a whole these projects demonstrate the application of modern statistical modeling methods to achieve superior results in data-driven chemical engineering challenges.
by Mark Christopher Molaro.
Ph. D.
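The core chemometric task described in this abstract, Bayesian estimation of relative abundances from a spectrum of overlapping components, can be sketched with a basic random-walk Metropolis sampler. The Gaussian "spectra" and all parameter values below are hypothetical, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pure-component "spectra" and a noisy two-component mixture.
x = np.linspace(0.0, 10.0, 100)
s1 = np.exp(-(x - 3.0) ** 2)
s2 = np.exp(-(x - 6.0) ** 2)
true_a = np.array([0.7, 0.3])
y = true_a[0] * s1 + true_a[1] * s2 + 0.01 * rng.normal(size=x.size)

def log_post(a, sigma=0.01):
    # Gaussian likelihood with a flat positivity prior on the abundances.
    if np.any(a < 0):
        return -np.inf
    resid = y - a[0] * s1 - a[1] * s2
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(n_steps=5000, step=0.005):
    # Random-walk Metropolis: the spread of the chain, not just its mean,
    # quantifies the uncertainty in the estimated relative abundances.
    a = np.array([0.5, 0.5])
    lp = log_post(a)
    chain = []
    for _ in range(n_steps):
        prop = a + step * rng.normal(size=2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            a, lp = prop, lp_prop
        chain.append(a.copy())
    return np.array(chain)

chain = metropolis()
print(chain[1000:].mean(axis=0))  # posterior means near [0.7, 0.3]
```

Parametric uncertainty in the pure-component spectra themselves, the case the thesis emphasizes, would add the shapes of `s1` and `s2` to the sampled parameters.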
Kao, Chung-Yao 1972. "Efficient computational methods for robustness analysis". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29258.
Includes bibliographical references (p. 209-215).
Issues of robust stability and performance have dominated the field of systems and control theory because of their practical importance. The recently developed Integral Quadratic Constraint (IQC) based analysis method provides a framework for systematically checking robustness properties of large complex dynamical systems. In IQC analysis, the system to be analyzed is represented as a nominal, Linear Time-Invariant (LTI) subsystem interconnected with a perturbation term. The perturbation is characterized in terms of IQCs. The robustness condition is expressed as a feasibility problem which can be solved using interior-point algorithms. Although the systems to be analyzed have nominal LTI subsystems in many applications, this is not always the case. A typical example is the problem of robustness analysis of the oscillatory behavior of nonlinear systems, where the nominal subsystem is generally Linear Periodically Time-Varying (LPTV). The objective of the first part of this thesis is to develop new techniques for robustness analysis of LPTV systems. Two different approaches are proposed. In the first approach, the harmonic terms of the LPTV nominal model are extracted, and the system is transformed into the standard setup for robustness analysis. Robustness analysis is then performed on the transformed system based on the IQC analysis method. In the second approach, we allow the nominal system to remain periodic, and we extend the IQC analysis method to include the case where the nominal system is periodically time-varying.
(cont.) The robustness condition of this new approach is posed as a semi-infinite convex feasibility problem, which requires a new method to solve. A computational algorithm is developed for checking the robustness condition. In the second part of the thesis, we consider the optimization problems arising from IQC analysis. The conventional way of solving these problems is to transform them into semi-definite programs which are then solved using interior-point algorithms. The disadvantage of this approach is that the transformation introduces additional decision variables. In many situations, these auxiliary decision variables become the main computational burden, and the conventional method then becomes very inefficient and time consuming. In the second part of the thesis, a number of specialized algorithms are developed to solve these problems in a more efficient fashion. The crucial advantage in this development is that it avoids the equivalent transformation. The results of numerical experiments confirm that these algorithms can solve a problem arising from IQC analysis much faster than the conventional approach does.
by Chung-Yao Kao.
Sc.D.
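IQC analysis, as the abstract notes, reduces robustness checking to a semi-definite feasibility problem. The simplest member of that family is the Lyapunov inequality (find P ≻ 0 with AᵀP + PA ≺ 0), which for a fixed right-hand side can be solved directly without an SDP solver. The matrix A below is a hypothetical stable example, not a system from the thesis:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable nominal LTI system.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])

# Solve A.T P + P A = -Q for Q = I; a positive definite P certifies
# asymptotic stability (a special case of an LMI feasibility problem).
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print(eigs > 0)  # both eigenvalues positive: Lyapunov certificate found
```

Full IQC conditions add frequency-dependent multipliers and extra decision variables, which is precisely the growth in problem size that motivates the specialized algorithms of the thesis's second part.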
Elia, Nicola. "Computational methods for multi-objective control". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10679.
Kotta, Anwesh. "Condition Monitoring : Using Computational intelligence methods". Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187100.
Al-Amri, Ibrahim Rasheed. "Computational methods in permutation group theory". Thesis, University of St Andrews, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636485.
Lai, Liang Simon. "Defect correction methods for computational aeroacoustics". Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11452/.
Zhu, Zhaochen. "Computational methods in air quality data". HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/402.
Jin, Yan. "Advanced computational methods in portfolio optimisation". Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/39023/.
Holland, Chase Carlton. "Computational Methods for Estimating Rail Life". Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/41436.
Texto completoMaster of Science
Balsells, Alex T. "Computational Methods for Radiation Therapy Planning". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1557844457085534.
Texto completoTeixeira, Bellina Ribau. "Computational methods for microarray data analysis". Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/3989.
Deoxyribonucleic acid (DNA) microarrays are an important technology for the analysis of gene expression. They allow measuring the expression of genes across several samples in order to, for example, identify genes whose expression varies with the administration of a certain drug. A microarray slide measures the expression level of thousands of genes in a sample at the same time, and an experiment can include various slides, producing a lot of data to be processed and analyzed with the aid of computerized means. This dissertation includes a review of methods and software tools used in the analysis of microarray experimental data. It then describes the development of a new data analysis module that, using methods for identifying differentially expressed genes, identifies genes that are differentially expressed between two or more groups. Finally, the resulting work is presented, describing its graphical interface and structural design.
Bai, Lihui. "Computational methods for toll pricing models". [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006341.
Gibson, Michael Andrew Bruck Jehoshua. "Computational methods for stochastic biological systems /". Diss., Pasadena, Calif. : California Institute of Technology, 2000. http://resolver.caltech.edu/CaltechETD:etd-05132005-154222.
Katz, Aaron Jon. "Meshless methods for computational fluid dynamics /". May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Viana, Simone A. "Meshless methods applied to computational electromagnetics". Thesis, University of Bath, 2006. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428380.
Selega, Alina. "Computational methods for RNA integrative biology". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29630.
Flint, Christopher Robert. "Computational Methods of Lattice Boltzmann MHD". W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1530192360.
Aramini, Riccardo. "Computational inverse scattering via qualitative methods". Doctoral thesis, Università degli studi di Trento, 2011. https://hdl.handle.net/11572/368061.
Texto completoAramini, Riccardo. "Computational inverse scattering via qualitative methods". Doctoral thesis, University of Trento, 2011. http://eprints-phd.biblio.unitn.it/556/1/PhD-Thesis-Aramini.pdf.
Zagordi, Osvaldo. "Statistical physics methods in computational biology". Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/3971.
Raden, Martin [Verfasser] and Rolf [Akademischer Betreuer] Backofen. "Computational Methods for Lattice Protein Models = Rechnerische Methoden für Gitterproteinmodelle". Freiburg : Universität, 2011. http://d-nb.info/1114828939/34.
Huismann, Immo. "Computational fluid dynamics on wildly heterogeneous systems". TUDPress, 2018. https://tud.qucosa.de/id/qucosa%3A74002.
Texto completoHagdahl, Stefan. "Hybrid methods for computational electromagnetics in the frequency domain". Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1655.
Texto completoIn this thesis we study hybrid numerical methods to be usedin computational electromagnetics. We restrict the methods tospectral domain and scattering problems. The hybrids consist ofcombinations of Boundary Element Methods and Geometrical Theoryof Diffraction.
In the thesis three hybrid methods will be presented. Onemethod has been developped from a theoretical idea to anindustrial code. The two other methods will be presented mainlyfrom a theoretical perspective. We will also give shortintroductions to the Boundary Element Method and theGeometrical Theory of Diffraction from a theoretical andimplementational point of view.
Keywords:Maxwells equations, Geometrical Theoryof Diffraction, Boundary Element Method, Hybrid methods,Electromagnetic Scattering
Lin, Zhipeng. "Computational methods in financial mathematics course project". Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050509-115331/.
Yu, Jingyuan. "Discovering Twitter through Computational Social Science Methods". Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671609.
By making people's everyday lives visible, Twitter has become one of the most important information-exchange platforms and has quickly attracted the attention of scientists. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analysis tools and algorithms have been designed over the last decade. This doctoral thesis consists of three studies. First, given the 14 years (up to 2020) of history since Twitter's founding, we have witnessed an explosion of related scientific publications, yet the current research landscape on this social media platform remained unknown. To fill this research gap, we carried out a bibliometric analysis of Twitter-related studies to examine how Twitter research evolved over time and to provide a general description of the Twitter research academic environment at a macro level. Second, since many analytical software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To address this problem, we reviewed some of the integrated systems considered most relevant for social science research. Since junior social science researchers may face financial constraints, we narrowed our scope to focus solely on free and low-cost software. Third, given the current public health crisis, we have observed that social media are among the most accessible sources of information and news for the public.
During a pandemic, the way health issues and diseases are framed in the press influences the public's understanding of the ongoing epidemic outbreak and their attitudes and behaviors. We therefore decided to use Twitter as an easily accessible news source to analyze the evolution of Spanish news frames during the COVID-19 pandemic. Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, network analysis, and data visualization. This doctoral project has shown how people study and use Twitter at three different levels: the academic, the practical, and the empirical.
As Twitter has come to cover people's daily lives, it has become one of the most important information-exchange platforms and has quickly attracted scientists' attention. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analytics tools and algorithms have been designed over the last decade. The present doctoral thesis consists of three studies. First, given the 14 years (until 2020) of history since the foundation of Twitter, an explosion of related scientific publications has been witnessed, but the current research landscape on this social media platform remained unknown. To fill this research gap, we performed a bibliometric analysis of Twitter-related studies to examine how Twitter studies evolved over time and to provide a general description of the Twitter research academic environment from a macro level. Second, since many analytic software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To address this, we reviewed some of the integrated frameworks considered most relevant for social science research; because junior social science researchers may face financial constraints, we narrowed our scope to focus solely on free and low-cost software. Third, given the current public health crisis, we have noticed that social media are among the most accessed information and news sources for the public. During a pandemic, how health issues and diseases are framed in news releases shapes the public's understanding of the current epidemic outbreak and their attitudes and behaviors. Hence, we decided to use Twitter as an easy-access news source to analyze the evolution of Spanish news frames during the COVID-19 pandemic.
Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, network analysis, and data visualization. This doctoral project has explored how people study and use Twitter at three different levels: the academic, the practical, and the empirical.
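A dictionary-based approach to news framing, of the kind this abstract alludes to, can be sketched in a few lines (a hypothetical illustration, not the thesis's actual pipeline; the tweets and frame lexicons below are invented): each frame is defined by a keyword list, and a frame counts as present in a tweet when any of its keywords appears.

```python
from collections import Counter
from datetime import date
import re

# Toy corpus of (date, tweet text) pairs. Synthetic examples, not real data.
tweets = [
    (date(2020, 3, 1), "Hospitals prepare for new COVID-19 cases"),
    (date(2020, 3, 1), "Economic impact of the lockdown worries businesses"),
    (date(2020, 4, 1), "Vaccine research accelerates against COVID-19"),
    (date(2020, 4, 1), "Unemployment rises as the economic crisis deepens"),
]

# Hypothetical frame lexicons: a frame is "present" in a tweet if any of
# its keywords appears (a common, if crude, dictionary approach).
frames = {
    "health": {"hospital", "hospitals", "cases", "vaccine"},
    "economy": {"economic", "lockdown", "unemployment", "crisis"},
}

def frame_counts_by_month(tweets, frames):
    """Count, per (month, frame) pair, how many tweets mention the frame."""
    counts = Counter()
    for day, text in tweets:
        words = set(re.findall(r"[a-z0-9-]+", text.lower()))
        for frame, lexicon in frames.items():
            if words & lexicon:
                counts[(day.strftime("%Y-%m"), frame)] += 1
    return counts

print(frame_counts_by_month(tweets, frames))
```

Plotting these monthly counts over the course of the pandemic would give a simple view of how frame prevalence evolves; real frame analyses typically use richer lexicons or supervised classifiers.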