Dissertations / Theses on the topic "Computational methods"

Follow this link to see other types of publications on the topic: Computational methods.

Create a correct reference in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the 50 best doctoral dissertations on the topic "Computational methods".

Next to every work in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a ".pdf" file and read its abstract online whenever the corresponding parameters are available in the metadata.

Browse doctoral dissertations from many different disciplines and compile an appropriate bibliography.

1

Vakili, Mohammadjavad. "Methods in Computational Cosmology". Thesis, New York University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10260795.

Full text of the source
Abstract:

The state of the inhomogeneous universe and its geometry throughout cosmic history can be studied by measuring the clustering of galaxies and the gravitational lensing of distant faint galaxies. Lensing and clustering measurements from the large datasets provided by modern galaxy surveys will forever shape our understanding of how the universe expands and how structures grow. Interpretation of these rich datasets requires careful characterization of uncertainties at different stages of data analysis: estimation of the signal, estimation of the signal uncertainties, model predictions, and connecting the model to the signal through probabilistic means. In this thesis, we attempt to address some aspects of these challenges.

The first step in cosmological weak lensing analyses is accurate estimation of the distortion of the light profiles of galaxies by large-scale structure. These small distortions, known as the cosmic shear signal, are dominated by extra distortions due to telescope optics and, in the case of ground-based imaging, the atmosphere. This effect is captured by a kernel known as the Point Spread Function (PSF), which needs to be fully estimated and corrected for. We address two challenges ahead of accurate PSF modeling for weak lensing studies. The first challenge is finding the centers of point sources that are used for empirical estimation of the PSF. We show that the approximate methods for centroiding stars in wide surveys are able to optimally saturate the information content that is retrievable from astronomical images in the presence of noise.

The first step in weak lensing studies is estimating the shear signal by accurately measuring the shapes of galaxies. Galaxy shape measurement involves modeling the light profile of galaxies convolved with the light profile of the PSF. The detectors of many space-based telescopes such as the Hubble Space Telescope (HST) sample the PSF with low resolution. Reliable weak lensing analysis of galaxies observed by the HST camera requires knowledge of the PSF at a resolution higher than the pixel resolution of HST; this PSF is called the super-resolution PSF. In particular, we present a forward model of the point sources imaged through filters of the HST WFC3 IR channel. We show that this forward model can accurately estimate the super-resolution PSF. We also introduce a noise model that permits us to robustly analyze HST WFC3 IR observations of crowded fields.

We then address one of the theoretical uncertainties in the modeling of galaxy clustering on small scales. The study of small-scale clustering requires assuming a halo model. The clustering of halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies with the halo occupation distribution (HOD) assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, assembly bias could introduce systematic effects into the modeling of galaxy clustering if the expected number of galaxies in halos is correlated with other halo properties. Using high-resolution N-body simulations and the clustering measurements of the Sloan Digital Sky Survey (SDSS) DR7 main galaxy sample, we show that the modeling of galaxy clustering can slightly improve if we allow the HOD model to depend on halo properties beyond mass.
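For background, a commonly used mass-only HOD parametrisation (the five-parameter form of Zheng et al. 2007, given here for concreteness rather than as the exact model adopted in the thesis) specifies the mean number of central and satellite galaxies in a halo of mass M as

```latex
\langle N_{\mathrm{cen}}\rangle(M) \;=\; \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{\log M - \log M_{\min}}{\sigma_{\log M}}\right)\right],
\qquad
\langle N_{\mathrm{sat}}\rangle(M) \;=\; \langle N_{\mathrm{cen}}\rangle(M)\left(\frac{M - M_0}{M_1'}\right)^{\alpha}.
```

An assembly-bias extension of the kind discussed above would let parameters such as \(M_{\min}\) depend on a secondary halo property (e.g. concentration) in addition to mass.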

One of the key ingredients in precise parameter inference using galaxy clustering is accurate estimation of the error covariance matrix of the clustering measurements. This requires the generation of many independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast and accurate method based on low-resolution N-body simulations and an empirical bias model for generating mock catalogs. We use fast particle-mesh gravity solvers to generate the dark matter density field, and we use Markov Chain Monte Carlo (MCMC) to estimate the bias model that connects dark matter to galaxies. We show that this approach enables the fast generation of mock catalogs that recover clustering at percent-level accuracy down to quasi-nonlinear scales.

Cosmological datasets are interpreted by specifying likelihood functions that are often assumed to be multivariate Gaussian. Likelihood-free approaches such as Approximate Bayesian Computation (ABC) can bypass this assumption by introducing a generative forward model of the data and a distance metric for quantifying the closeness of the data and the model. We present the first application of ABC in large-scale structure for constraining the connections between galaxies and dark matter halos. We present an implementation of ABC equipped with Population Monte Carlo and a generative forward model of the data that incorporates sample variance and systematic uncertainties. (Abstract shortened by ProQuest.)
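To make the ABC idea concrete, here is a minimal rejection-sampling sketch in Python. The Gaussian forward model and mean-based distance are illustrative stand-ins; the thesis builds Population Monte Carlo (an adaptive sequence of shrinking tolerances) on top of this basic loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, n=100):
    # Toy generative model standing in for a galaxy-population simulator.
    return rng.normal(theta, 1.0, size=n)

def distance(sim, obs):
    # Distance between summary statistics of simulation and data.
    return abs(sim.mean() - obs.mean())

def abc_rejection(observed, prior_draw, eps, n_accept=500):
    accepted = []
    while len(accepted) < n_accept:
        theta = prior_draw()                       # draw from the prior
        if distance(forward_model(theta), observed) < eps:
            accepted.append(theta)                 # keep parameters that fit
    return np.array(accepted)

observed = forward_model(3.0)
posterior = abc_rejection(observed, lambda: rng.uniform(0, 10), eps=0.1)
print(posterior.mean(), posterior.std())
```

Tightening eps (or shrinking it adaptively, as Population Monte Carlo does) trades acceptance rate for fidelity to the true posterior.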

APA, Harvard, Vancouver, ISO, and other styles
2

Steklova, Klara. "Computational methods in hydrogeophysics". Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/60815.

Full text of the source
Abstract:
Parameter and state estimation for groundwater models within a coupled hydrogeophysical framework has become common in the last few years, as it has been shown that such estimates are usually better than those from a single-data inversion. Different approaches have been suggested in the literature to combine the two essentially different modalities in order to obtain better estimates for groundwater models and improve the monitoring of processes such as solute transport. However, the coupled approaches usually come at the price of higher computational cost and difficulties in coupling the geophysical and groundwater inverse problems. Unlike other studies, we developed both the groundwater and geophysical models in the same computational environment in order to test different minimization strategies. When solving the coupled inverse problem, the objective function consists of data misfit and regularization terms as well as a coupling term that relates the groundwater and geophysical states. We present a novel approach to solving the inverse problem using an Alternating Direction Method of Multipliers (ADMM) to minimize the coupled objective function. ADMM enables us to treat the groundwater and geophysical parts separately and thus use existing software with minor changes. However, ADMM, like many other coupled approaches, relies on implementing some petrophysical relationship to couple the groundwater and geophysical variables. Such relationships are usually uncertain and hard to parametrize for a large region, and can potentially produce solute mass errors in the final model estimates. Therefore, in this thesis we examine coupled approaches that replace the fixed petrophysical relationship with a looser structure-similarity constraint. In addition, we propose efficient computational methods to minimize the objective function when there is no explicit petrophysical constraint. All approaches were tested on 3D synthetic examples. In the solute tracer test we estimated hydraulic conductivity or solute distribution using a structure-coupled inversion, and were able to reduce the errors compared to a single-data inversion alone. For a more complex example of seawater intrusion we implemented the ADMM method, and obtained better estimates of the solute distribution than by considering each dataset separately or by solving the problem with a simple coupled approach.
Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences; Graduate
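As an illustration of the ADMM splitting described above, the sketch below minimizes two coupled quadratic misfits, toy stand-ins for the groundwater and geophysical objectives, linked by the consensus constraint x = z. The matrices, the penalty rho, and the iteration count are all illustrative.

```python
import numpy as np

# minimize ||A x - b||^2 + ||C z - d||^2   subject to   x - z = 0
A, b = np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, 2.0])  # "groundwater" term
C, d = np.array([[1.0, 0.5], [0.0, 1.5]]), np.array([0.5, 1.0])  # "geophysical" term

rho = 1.0                                  # ADMM penalty parameter
x = z = u = np.zeros(2)                    # u is the scaled dual variable
I = np.eye(2)
for _ in range(100):
    # x-update: argmin_x ||Ax - b||^2 + (rho/2)||x - z + u||^2
    x = np.linalg.solve(2 * A.T @ A + rho * I, 2 * A.T @ b + rho * (z - u))
    # z-update: argmin_z ||Cz - d||^2 + (rho/2)||x - z + u||^2
    z = np.linalg.solve(2 * C.T @ C + rho * I, 2 * C.T @ d + rho * (x + u))
    u = u + x - z                          # dual update on the coupling constraint
print(x, z)                                # x and z agree at convergence
```

The appeal in the coupled-inversion setting is visible even in this toy: each subproblem touches only its own operator, so existing groundwater and geophysical codes can serve as the two update steps.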
APA, Harvard, Vancouver, ISO, and other styles
3

af Klinteberg, Ludvig. "Computational methods for microfluidics". Licentiate thesis, KTH, Numerisk analys, NA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-116384.

Full text of the source
Abstract:
This thesis is concerned with computational methods for fluid flows on the microscale, also known as microfluidics. This is motivated by current research in biological physics and miniaturization technology, where there is a need to understand complex flows involving microscale structures. Numerical simulations are an important tool for doing this. The first paper of the thesis presents a numerical method for simulating multiphase flows involving insoluble surfactants and moving contact lines. The method is based on an explicit interface-tracking method, wherein the interface between two fluids is decomposed into segments, which are represented locally on an Eulerian grid. The framework of this method provides a natural setting for solving the advection-diffusion equation governing the surfactant concentration on the interface. Open interfaces and moving contact lines are also incorporated into the method in a natural way, though we show that care must be taken when regularizing interface forces to the grid near the boundary of the computational domain. In the second paper we present a boundary integral formulation for sedimenting particles in periodic Stokes flow, using the completed double-layer boundary integral formulation. The long-range nature of the particle-particle interactions leads to a formulation containing sums which are not absolutely convergent if computed directly. This is solved by applying the method of Ewald summation, which in turn is computed in a fast manner using the FFT-based spectral Ewald method. The complexity of the resulting method is O(N log N) as the system size is scaled up with the number of discretization points N. We apply the method to systems of sedimenting spheroids, which are discretized using the Nyström method and a basic quadrature rule. The Ewald summation method used in the boundary integral method of the second paper requires a decomposition of the potential being summed. In the introductory chapters of the thesis we present an overview of the available methods for creating Ewald decompositions, and show how the methods and decompositions can be related to each other.
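The prototype of such a decomposition, written here for the scalar 1/r kernel rather than the tensorial Stokeslet actually needed for sedimenting particles, splits the potential with a screening parameter \( \xi \):

```latex
\frac{1}{r} \;=\; \underbrace{\frac{\operatorname{erfc}(\xi r)}{r}}_{\text{short-ranged: real-space sum}}
\;+\; \underbrace{\frac{\operatorname{erf}(\xi r)}{r}}_{\text{smooth: Fourier-space sum}} .
```

The first term decays rapidly and is summed directly over near neighbours; the second is smooth, so its periodic sum converges quickly in Fourier space, which is what the FFT-based spectral Ewald method accelerates.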

APA, Harvard, Vancouver, ISO, and other styles
4

Chernyshenko, Dmitri. "Computational methods in micromagnetics". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/398126/.

Full text of the source
Abstract:
With the continued growth of computational power, computational modelling has become an increasingly important part of modern science. The field of micromagnetism has benefited from this increase, leading in recent decades to the development of many important micromagnetic methods. This thesis aims to address some computational challenges relevant to the field of micromagnetism today. The computation of the demagnetising field is often the most time-consuming part of a micromagnetic simulation. In the finite difference method, this computation is usually done using the Fourier transform method, in which the demagnetising field is computed as the convolution of the magnetisation field with the demagnetising tensor. An analytical formula for the demagnetising tensor is available; however, due to numerical cancellation errors it can only be applied for close distances between the interacting cells. For far distances between the interacting cells, other approaches, such as asymptotic expansion, have to be used. In this thesis, we present a new method to compute the demagnetising tensor by means of numerical integration. The method offers improved accuracy over existing methods for the intermediate range of distances. In the finite element method, the computation of the demagnetising field is commonly done using the hybrid FEM/BEM method. The fast multipole method offers potential theoretical advantages over the hybrid FEM/BEM method, particularly for the current and future generations of computing hardware. In micromagnetics, it has been applied to compute the demagnetising field in the finite difference setting and to compute the magnetostatic interaction between nanoparticles; however, no implementation of the fast multipole method in finite elements is yet available. As one of the steps towards it, in this thesis we develop a new formula for the energy of the magnetostatic interaction between linearly magnetized polyhedra. This formula can be used to compute the direct interaction between finite element cells in the fast multipole method. Ferromagnetic resonance (FMR) is a popular experimental technique for probing the dynamical properties of magnetic systems. We extend the eigenvalue method for the computation of resonance modes to the computation of the FMR spectrum, and apply it to compute ferromagnetic resonance for a proposed FMR standard reference problem.
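The Fourier-transform method mentioned above rests on the convolution theorem: once the demagnetising tensor is tabulated, the field follows from FFTs. The one-dimensional, scalar sketch below shows only that convolution structure; a real code uses the full 3x3 tensor, zero padding to make the convolution aperiodic, and 3-D FFTs, and the kernel here is a toy, not the actual demagnetising tensor.

```python
import numpy as np

n = 64
m = np.zeros(n)
m[24:40] = 1.0                              # magnetisation of a uniformly magnetised block
x = np.arange(n) - n // 2
kernel = -1.0 / (np.abs(x)**3 + 1.0)        # toy dipolar-like 1/r^3 kernel (illustrative only)
kernel = np.roll(kernel, -n // 2)           # put the kernel origin at index 0

# Demagnetising-field analogue: circular convolution of kernel and magnetisation.
H = np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(m)).real
print(H[:8])
```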
APA, Harvard, Vancouver, ISO, and other styles
5

Argamon, Shlomo. "Computational methods for counterterrorism". Berlin Heidelberg Springer, 2009. http://d-nb.info/993136176/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Zhu, Tulong. "Meshless methods in computational mechanics". Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/11795.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Hugtenburg, Richard P. "Computational methods in radiation oncology". Thesis, University of Canterbury. Physics and Astronomy, 1998. http://hdl.handle.net/10092/6796.

Full text of the source
Abstract:
This thesis examines computer technology in radiation oncology and the intimate role that it must now play in the dosimetry practices of this discipline. Aspects of the dosimetry and implementation of five radiation therapies are examined, namely total-body irradiation (TBI), total-skin electron therapy (TSET), electron therapy, superficial therapy and ophthalmic brachytherapy. Computational techniques, in particular Monte Carlo and several other numerical methods, are used. The Monte Carlo platform, EGS4, and the treatment planning system, GRATIS, have been implemented on Sun and Silicon Graphics workstations. Monte Carlo methods are used in the investigation of electron therapy planning and of superficial and brachytherapy dosimetry practices. Monte Carlo techniques are used for radiation protection calculations of linear accelerator bunker design and for the optimisation of an in vivo X-ray fluorescence (XRF) technique used to measure platinum uptake associated with cisplatin chemotherapy. Inverse Monte Carlo methods have been examined and implemented. Inverse methods, applied to in-phantom dose measurements, are used to determine phase-space information, such as spectra, for an incident electron beam. Analogous methods are examined for megavoltage and superficial X-rays, in particular source parameterisation with attenuation and photoactivation techniques. Two linear accelerators, a Varian 2100C and a Varian 600C, provide megavoltage X-rays or electrons. The TBI therapy uses a 6 MV X-ray beam. The TSET technique uses 6 MeV electrons, which are degraded to a lower energy by a screen placed in front of the patient at an extended source distance. The 9 MeV and the 20 MeV electron modalities are also closely examined. Two Philips superficial therapy units, RT100 and RT50, provide 10 through 100 kVp X-rays. 125I seeds are used for the investigation of ophthalmic brachytherapy dosimetry. Methods of dosimetry incorporated in this work include in-phantom, ionisation chamber and diode measurements. Thermoluminescent dosimeters (TLDs) and silver-halide and radiochromic films are used. Measurements have been performed in water, solid water, polymethyl methacrylate (PMMA), and polystyrene phantoms. Fricke ferrous-based gels are investigated as a method of dosimetry in a uniform medium. Three-dimensional dose distributions are examined for several radiation modalities. The concentration of radiation-induced ferric ions, and hence dose, is determined using magnetic resonance imaging (MRI). A high-purity germanium detector and a thallium-doped sodium iodide detector are employed for the measurement of source spectra and of fluorescing and activated materials.
APA, Harvard, Vancouver, ISO, and other styles
8

Bertolani, Steve James. "Computational Methods for Modeling Enzymes". Thesis, University of California, Davis, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10928544.

Full text of the source
Abstract:

Enzymes play a crucial role in modern biotechnology, industry, food processing and medical applications. Ever since their first industrial uses were discovered, people have sought new enzymes from Nature to catalyze different chemical reactions. In modern times, with the advent of computational methods, protein structure solutions, protein sequencing and DNA synthesis methods, we now have the tools to enable new approaches to rational enzyme engineering. With an enzyme structure in hand, a researcher may run an in silico experiment to sample different amino acids in the active site in order to identify new combinations which are likely to stabilize a transition-state-enzyme model. A suggested mutation can then be encoded into the desired enzyme gene, which is ordered, synthesized and tested. Although this truly astonishing feat of engineering and modern biotechnology allows the redesign of existing enzymes to acquire a new substrate specificity, it still requires a large amount of time, capital and technical capability.

Concurrently with these strides in computational protein design, the cost of sequencing DNA plummeted after the turn of the century. With the reduced cost of sequencing, the number of sequences in public databases of naturally occurring proteins has grown exponentially. This new, large source of information can be utilized to enable rational enzyme design, as long as it can be coupled with accurate modeling of the protein sequences.

This work first describes a novel approach to reengineering enzymes (Genome Enzyme Orthologue Mining; GEO) that utilizes the vast number of protein sequences in modern databases along with extensive computational modeling, and achieves results comparable to state-of-the-art rational enzyme design methods. Then, inspired by the success of this new method and aware of its reliance on the accuracy of the protein models, we created a computational benchmark both to measure the accuracy of our models and to improve it by encoding additional information about the structure, derived from mechanistic studies (Catalytic Geometry constraints; CG). Lastly, we use the improved-accuracy method to automatically model hundreds of putative enzyme sequences and dock substrates into them, extracting important features that are then used to inform experiments and design. This is used to reengineer a ribonucleotide reductase to catalyze an aldehyde-deformylating oxygenase reaction.

These chapters advance the field of rational enzyme engineering, by providing a novel technique that may enable efficient routes to rationally design enzymes for reactions of interest. These chapters also advance the field of homology modeling, in the specific domain in which the researcher is modeling an enzyme with a known chemical reaction. Lastly, these chapters and techniques lead to an example which utilizes highly accurate computational models to create features which can help guide the rational design of enzyme catalysts.

APA, Harvard, Vancouver, ISO, and other styles
9

Syed, Zeeshan Hassan 1980. "Computational methods for physiological data". Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54671.

Full text of the source
Abstract:
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, 2009.
Author is also affiliated with the MIT Dept. of Electrical Engineering and Computer Science. Cataloged from PDF version of thesis.
Includes bibliographical references (p. 177-188).
Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care, by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, which are often widely considered to be noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics. Furthermore, these results were independent of information in echocardiography, clinical characteristics, and biomarkers.
Our symbolic analysis techniques also identified groups of patients exhibiting a varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
by Zeeshan Hassan Syed.
Ph.D.
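The symbolic abstraction step described above can be made concrete with a small SAX-style discretizer: segment the z-normalised waveform, then map each segment mean to a letter using equiprobable breakpoints. This is a generic scheme for illustration, not the thesis's exact ECG symbolisation.

```python
import numpy as np

def symbolize(signal, n_segments=50, alphabet="abcd"):
    """Piecewise-aggregate a z-normalised signal and map segment means
    to letters via equiprobable standard-normal breakpoints."""
    x = (signal - signal.mean()) / signal.std()
    means = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    breakpoints = [-0.67, 0.0, 0.67]          # quartiles of N(0, 1)
    return "".join(alphabet[i] for i in np.searchsorted(breakpoints, means))

rng = np.random.default_rng(1)
ecg_like = np.sin(np.linspace(0, 20 * np.pi, 5000)) + 0.1 * rng.normal(size=5000)
print(symbolize(ecg_like)[:25])
```

Once waveforms become strings, standard sequence tools (clustering of n-gram frequencies, pattern search) can scan large multi-patient datasets for high-risk patterns efficiently.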
APA, Harvard, Vancouver, ISO, and other styles
10

Fei, Bingxin. "Computational Methods for Option Pricing". Digital WPI, 2011. https://digitalcommons.wpi.edu/etd-theses/381.

Full text of the source
Abstract:
This thesis uses Monte Carlo methods to simulate stock prices in order to price European call options with control variates. American put options are priced separately using the binomial model. Finally, we use this information to form a portfolio position in an Interactive Brokers paper-trading account.
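Both pricing techniques named in the abstract are small enough to sketch. Below, a European call is priced by Monte Carlo under Black-Scholes dynamics with the discounted terminal stock price as control variate (its expectation is exactly S0), and an American put by a Cox-Ross-Rubinstein binomial tree. All parameter values are illustrative.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0
rng = np.random.default_rng(42)

# --- European call: Monte Carlo with a control variate ---
Z = rng.standard_normal(100_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
control = np.exp(-r * T) * ST                       # E[control] = S0 exactly
beta = np.cov(payoff, control)[0, 1] / control.var()
price_cv = payoff.mean() - beta * (control.mean() - S0)
print(f"call, plain MC {payoff.mean():.4f}  with control variate {price_cv:.4f}")

# --- American put: binomial (CRR) tree with early exercise ---
def american_put_crr(S0, K, r, sigma, T, steps=500):
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt)); d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(steps + 1)
    V = np.maximum(K - S, 0.0)                      # payoff at maturity
    for _ in range(steps):
        V = disc * (p * V[:-1] + (1 - p) * V[1:])   # discounted continuation value
        S = S[:-1] / u                              # asset prices one step earlier
        V = np.maximum(V, K - S)                    # early-exercise check
    return V[0]

print("American put (binomial):", round(american_put_crr(S0, K, r, sigma, T), 4))
```

The control variate works because the payoff and the discounted terminal price are strongly correlated; the regression coefficient beta removes the shared noise and typically cuts the standard error severalfold.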
APA, Harvard, Vancouver, ISO, and other styles
11

Yang, Xin; Barlow, Jesse L.; Zha, Hongyuan. "Computational methods for manifold learning". [University Park, Pa.] : Pennsylvania State University, 2007. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-2256/index.html.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Blount, Steven Michael 1958. "Computational methods for stochastic epidemics". Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288714.

Full text of the source
Abstract:
Compartmental models constructed for stochastic epidemics are usually difficult to analyze mathematically or computationally. Researchers have mostly resorted to deterministic approximations or simulation to investigate these models. This dissertation describes three original computational methods for analyzing compartmental models of stochastic epidemics. The first method is the Markov Process Method, which computes the probability law of the epidemic by solving the Chapman-Kolmogorov ordinary differential equations as an initial value problem using standard numerical analysis techniques. It is limited to models with small populations and few compartments, and requires sophisticated numerical analysis tools and relatively extensive computer resources. The second method is the Probability Vector Method, which can estimate the first few moments of a discrete-time epidemic model over a limited time period (i.e., if Y(t) is the number of individuals in a given compartment, then this method can estimate E[Y(t)^r] for small positive integers r). Size restrictions limit the maximum order of the moment that can be computed. For compartmental models with a constant, homogeneous population, this method requires modest computational resources to estimate the first two moments of Y(t). The third method is the Linear Extrapolation Method, which computes the moments of a compartmental model with a large population by extrapolating from the given moments of the same model with smaller populations. This method is limited to models that have some alternate way of calculating the moments for small populations. These moments should be computed exactly from probabilistic principles. When this is not practical, any method that can produce accurate estimates of these moments for small populations can be used. Two compartmental epidemic models are analyzed using these three methods. First, the simple susceptible/infective epidemic is used to illustrate each method and serves as a benchmark for accuracy and performance. These computations show that each algorithm is capable of producing acceptably accurate solutions (at least for the specific parameters that were used). Next, an HIV/AIDS model is analyzed and the numerical results are presented and compared with the deterministic and simulation solutions. Only the Probability Vector Method could compete with simulation on the larger (i.e., more compartments) HIV/AIDS model.
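For the simple susceptible/infective model, the Markov Process Method amounts to integrating the Chapman-Kolmogorov (master) equations over the states k = 0, ..., N, where k counts infectives. A minimal sketch, with an illustrative infection rate lambda_k = beta k (N - k) / N:

```python
import numpy as np
from scipy.integrate import solve_ivp

N, beta = 50, 0.3
k = np.arange(N + 1)
lam = beta * k * (N - k) / N                 # transition rate k -> k+1

def master(t, p):
    dp = -lam * p                            # probability outflow from each state
    dp[1:] += lam[:-1] * p[:-1]              # inflow from state k-1
    return dp

p0 = np.zeros(N + 1); p0[1] = 1.0            # start with a single infective
sol = solve_ivp(master, (0.0, 30.0), p0, t_eval=np.linspace(0, 30, 7))
print("E[Y(t)] at output times:", (k @ sol.y).round(2))
```

The state space here has only N + 1 entries; with several compartments it grows combinatorially, which is exactly why the dissertation turns to the moment-based and extrapolation methods for larger models.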
APA, Harvard, Vancouver, ISO, and other styles
13

Montanucci, Ludovica <1978>. "Computational methods for genome screening". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/847/1/Tesi_Montanucci_Ludovica.pdf.

Full text of the source
Abstract:
Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences for disclosing the information that they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of the genome sequences of different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences coming from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes has been analyzed, and it has been shown that a cross-genomic approach can allow the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences equal to 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method has been trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
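The two statistical ingredients named above, PCA on composition vectors and an SVM classifier, combine into a short pipeline. The random vectors below are placeholders for real 64-dimensional codon-frequency profiles of thermophilic and mesophilic genomes; the accuracy printed has no biological meaning.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_meso = rng.normal(0.0, 1.0, size=(200, 64))     # stand-in codon compositions
X_thermo = rng.normal(0.4, 1.0, size=(200, 64))   # slightly shifted composition
X = np.vstack([X_meso, X_thermo])
y = np.array([0] * 200 + [1] * 200)               # 0 = mesophilic, 1 = thermophilic

pca = PCA(n_components=5).fit(X)                  # main axes of codon variation
print("variance explained:", pca.explained_variance_ratio_.round(3))

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```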
APA, Harvard, Vancouver, ISO, and other styles
14

Montanucci, Ludovica <1978>. "Computational methods for genome screening". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/847/.

Full text of the source
Abstract:
Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences for disclosing the information that they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work has been the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of the genome sequences of different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences coming from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes has been analyzed, and it has been shown that a cross-genomic approach can allow the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences equal to 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method has been trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
APA, Harvard, Vancouver, ISO, and other styles
15

Carnimeo, Ivan. "Computational methods for spectroscopic properties". Doctoral thesis, Scuola Normale Superiore, 2014. http://hdl.handle.net/11384/85802.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

SALIS, SAMUELE. "Computational methods for transport properties". Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/248738.

Full text of the source
Abstract:
Through antimicrobial resistance, many bacteria can survive an ever larger number of antibiotics. This is true in particular for the category of bacteria classified as Gram-negative. These bacteria differ from others by the presence of an outer membrane, which is able to protect them from the fast access (and consequently the action) of antibiotics. The increasing capability of bacteria to survive many kinds of drugs has given rise to Multiple Drug Resistance (MDR). New antibiotics could help to mitigate the MDR problem, but the poor understanding of permeability through outer membranes has led to an ever smaller number of newly patented antibiotics. This is due, on the one hand, to a lack of experimental methods able to explain permeation in sufficient detail and, on the other, to the difficulty of reaching the typical time scales (ms or more) of these processes in simulation. The category of antibiotics studied in this thesis can permeate the membrane by crossing porins (beta-barrel proteins nestled in the bacterial outer membrane), so permeation happens when we observe transport of the antibiotic through a porin. In this thesis we focus on computational methods that are suitable for increasing our understanding of transport processes. We start with a post-processing algorithm that can be used to extract, from an electrophysiology time series, transport events apparently shorter than the temporal sensitivity of the experimental device; we continue with another post-processing algorithm that allows the extraction of the real transition time from a metadynamics simulation, thereby sidestepping the timescale problem in computer simulations; and we finish with an ultra-coarse-grained model that can be used to study transport properties through a bacterial channel. Finally, we list the results obtained using the three aforementioned methods and summarise the thesis with conclusions.
APA, Harvard, Vancouver, ISO, and other styles
17

Robertz, Daniel. "Formal computational methods for control theory". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=981070019.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Taher, Leila. "Computational methods for splice site prediction". [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=978938631.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Kolev, Tzanio Valentinov. "Least-squares methods for computational electromagnetics". Texas A&M University, 2004. http://hdl.handle.net/1969.1/1115.

Full text of the source
Abstract:
The modeling of electromagnetic phenomena described by Maxwell's equations is of critical importance in many practical applications. The numerical simulation of these equations is challenging and much more involved than initially believed. Consequently, many discretization techniques, most of them quite complicated, have been proposed. In this dissertation, we present and analyze a new methodology for approximation of the time-harmonic Maxwell's equations. It is an extension of the negative-norm least-squares finite element approach which has been applied successfully to a variety of other problems. The main advantages of our method are that it uses simple, piecewise polynomial, finite element spaces, while giving quasi-optimal approximation, even for solutions with low regularity (such as the ones found in practical applications). The numerical solution can be efficiently computed using standard and well-known tools, such as iterative methods and eigensolvers for symmetric and positive definite systems (e.g. PCG and LOBPCG) and preconditioners for second-order problems (e.g. multigrid). Additionally, approximation of varying polynomial degrees is allowed and spurious eigenmodes are provably avoided. We consider the following problems related to Maxwell's equations in the frequency domain: the magnetostatic problem, the electrostatic problem, the eigenvalue problem and the full time-harmonic system. For each of these problems, we present a natural (very) weak variational formulation assuming minimal regularity of the solution. In each case, we prove error estimates for the approximation with two different discrete least-squares methods. We also show how to deal with problems posed on domains that are multiply connected or have multiple boundary components. Besides the theoretical analysis of the methods, the dissertation provides various numerical results in two and three dimensions that illustrate and support the theory.
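Schematically (this is the generic shape of a negative-norm least-squares formulation, not the dissertation's exact functional), the time-harmonic first-order system

```latex
\nabla \times \mathbf{E} = -\,i\omega\mu\,\mathbf{H}, \qquad
\nabla \times \mathbf{H} = i\omega\varepsilon\,\mathbf{E} + \mathbf{J},
```

is solved by minimizing a residual functional of the form

```latex
J(\mathbf{E},\mathbf{H}) \;=\;
\big\| \nabla\times\mathbf{E} + i\omega\mu\,\mathbf{H} \big\|_{-1}^{2}
\;+\;
\big\| \nabla\times\mathbf{H} - i\omega\varepsilon\,\mathbf{E} - \mathbf{J} \big\|_{-1}^{2}
```

over simple piecewise-polynomial spaces, with the negative-order norms realized computationally through a preconditioner, which is where tools such as multigrid and PCG enter.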
APA, Harvard, Vancouver, ISO, and other styles
20

Leung, Chi-Yin. "Computational methods for integral equation analysis". Thesis, Imperial College London, 1995. http://hdl.handle.net/10044/1/8139.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Miller, David J.; Ghosh, Avijit. "New methods in computational systems biology". Philadelphia, Pa. : Drexel University, 2008. http://hdl.handle.net/1860/2810.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Taghi, Hajian Mozafar. "Computational methods for discrete programming problems". Thesis, Brunel University, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315490.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Limin, and 李丽敏. "Machine learning methods for computational biology". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44546749.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

Meng, Lingling, and 孟玲玲. "Computational electromagnetics methods for IC modeling". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/195993.

Full text of the source
Abstract:
Two kinds of computational electromagnetics (CEM) methodology are discussed for the challenges in integrated circuit (IC) and printed circuit board (PCB) design. One is an extension of the Cagniard-de Hoop method that provides analytic time-domain expressions for the field constituents, making up for some drawbacks of numerical techniques that lose power in super-high-frequency simulation or for extremely fine structures. A line source excited by an electromagnetic pulse is modeled and analyzed for a thin sheet with high-contrast dielectric and conductive properties. The responses of reflection and transmission to the power exponential pulse and the power exponential monocycle pulse are studied using this newly proposed method, consequently serving as a benchmark for practical design. The other method focuses on improving the broadband accuracy of the mixed-form fast multipole algorithm (MF-FMA) by applying rotation techniques. Coordinate-system rotation with a rotation matrix and pseudo-spectral projection rotation based on the fast Fourier transform (FFT) are discussed. Through rotation, the translation matrices become very sparse, which enables us to save storage as well as CPU time. Either technique has certain advantages depending on the number of harmonics used. Hence, increasing the number of multipoles in the low-frequency regime to shift up the transition region of the MF-FMA is a feasible way to improve the overall accuracy significantly. Numerical results for an interconnect and for a sphere with a large number of unknowns are given to verify the effectiveness of the rotation methods.
Electrical and Electronic Engineering
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
25

SANTI, MARCIO RODRIGUES DE. "COMPUTATIONAL METHODS FOR GEOLOGICAL SECTIONS RESTAURATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2002. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=2795@1.

Full text of the source
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This work presents a new approach to the restoration of geological cross-sections that is based on physical modeling and numerical simulation. The main purpose is to introduce Continuum Mechanics concepts into the geological restoration process in order to consider the physical properties of the materials during the simulation of the movement of a rock block along a fault. The adopted strategy uses a dynamic relaxation algorithm to solve the equation system that arises from the numerical simulation based on the Finite Element Method, together with some specific boundary conditions to represent the movement of the rock block over the fault. As development environment, a cross-section restoration system was adopted, composed of a group of usual geometric transformations from the classical approach to the problem. This system adopts a geometric modeling technology based on a data structure that is capable of completely representing the topology of a planar subdivision. The proposed numerical simulation is implemented inside this system and integrates three different modules: a pre-processing module, where the required input data can be easily generated; an analysis module, in which the dynamic relaxation method has been implemented; and a post-processing module, where the results of the numerical simulation can be viewed. The palinspastic nature of the restoration problem is taken into account by means of a user-friendly graphics interface that was specifically designed for the system. The graphics interface and the geological attribute classes were completely reorganized with two purposes: first, to implement a graphical interface based on a decision tree to manage the user tasks involved in the restoration process, which includes trial-and-error steps; and second, to provide support for the implementation of numerical simulation in the restoration process. The ideas proposed herein can be considered a first step towards a complete geological cross-section restoration system in which more consistent deformation measures can be incorporated into the governing equations to better represent the mechanical behavior of the rocks; an expansion of the presented system to a three-dimensional environment is currently under investigation.
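The dynamic relaxation idea used in the analysis module can be shown in a few lines: the static FEM equilibrium K u = f is embedded in a fictitious damped dynamical system that is stepped explicitly until it settles at the static solution. The stiffness matrix, load, and the mass/damping/time-step values below are all illustrative.

```python
import numpy as np

K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])          # toy stiffness matrix
f = np.array([1.0, 2.0, 1.0])               # toy load vector

m, c, dt = 1.0, 1.5, 0.2                     # fictitious mass, damping, pseudo-time step
u = np.zeros(3); v = np.zeros(3)
for _ in range(500):
    residual = f - K @ u                     # out-of-balance force
    a = (residual - c * v) / m               # fictitious acceleration
    v += dt * a
    u += dt * v
print(u)                                     # matches np.linalg.solve(K, f)
```

The attraction in a restoration setting is that no global system is ever factorized: each pseudo-time step needs only matrix-vector products, which suits large meshes and contact-like boundary conditions along faults.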
APA, Harvard, Vancouver, ISO, and other styles
26

Molaro, Mark Christopher. "Computational statistical methods in chemical engineering". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/111286.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-182).
Recent advances in theory and practice have introduced a wide variety of tools from machine learning that can be applied to data-intensive chemical engineering problems. This thesis covers applications of statistical learning spanning a range in the relative importance of data versus existing detailed theory. In each application, the quantity and quality of data available from experimental systems are used in conjunction with an understanding of the theoretical physical laws governing system behavior, to the extent they are available. A detailed generative parametric model for optical spectra of multicomponent mixtures is introduced. The application of interest is the quantification of uncertainty associated with estimating the relative abundance of mixtures of carbon nanotubes in solution. This work describes a detailed analysis of sources of uncertainty in the estimation of the relative abundance of chemical species in solution from optical spectroscopy. In particular, the quantification of uncertainty in mixtures with parametric uncertainty in pure-component spectra is addressed. Markov Chain Monte Carlo methods are utilized to quantify uncertainty in these situations, and the inaccuracy and potential for error in simpler methods is demonstrated. Strategies to improve estimation accuracy and reduce uncertainty in practical experimental situations are developed, including when multiple measurements are available and with sequential data. The utilization of computational Bayesian inference in chemometric problems shows great promise in a wide variety of practical experimental applications. A related deconvolution problem is addressed in which a detailed physical model is not available, but the objective of analysis is to map from a measured vector-valued signal to a sum of an unknown number of discrete contributions. The data analyzed in this application are electrical signals generated from a free-surface electrospinning apparatus. In this information-poor system, MAP estimation is used to reduce the variance in estimates of the physical parameters of interest. The formulation of the estimation problem in a probabilistic context allows for the introduction of prior knowledge to compensate for a high-dimensional, ill-conditioned inverse problem. The estimates from this work are used to develop a productivity model, expanding on previous work and showing how the uncertainty from estimation impacts system understanding. A new machine learning based method for monitoring for anomalous behavior in production oil wells is reported. The method entails a transformation of the available time series of measurements into a high-dimensional feature-space representation. This transformation yields results which can be treated as static independent measurements. A new method for feature selection in one-class classification problems is developed, based on approximate knowledge of the state of the system. An extension of feature-space transformation methods on time series data is introduced to handle multivariate data in large, computationally burdensome domains by using sparse feature extraction methods. As a whole, these projects demonstrate the application of modern statistical modeling methods to achieve superior results in data-driven chemical engineering challenges.
by Mark Christopher Molaro.
Ph. D.
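The Metropolis-sampling core of such an uncertainty analysis fits in a short script. The two Gaussian "pure-component spectra", the noise level and the sampler settings below are synthetic stand-ins; the thesis's models additionally place parametric uncertainty on the pure spectra themselves.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0, 1, 200)
pure = np.vstack([np.exp(-(grid - 0.3) ** 2 / 0.01),    # pure-component spectrum 1
                  np.exp(-(grid - 0.6) ** 2 / 0.02)])   # pure-component spectrum 2
true_ab = np.array([0.7, 0.3])
data = true_ab @ pure + rng.normal(0.0, 0.02, grid.size)

def log_post(ab, sigma=0.02):
    if np.any(ab < 0):                       # flat prior on nonnegative abundances
        return -np.inf
    resid = data - ab @ pure
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

ab = np.array([0.5, 0.5])
lp = log_post(ab)
samples = []
for _ in range(20000):
    prop = ab + rng.normal(0.0, 0.005, 2)    # random-walk Metropolis proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        ab, lp = prop, lp_prop
    samples.append(ab)
samples = np.array(samples[5000:])           # discard burn-in
print("posterior mean:", samples.mean(axis=0).round(3))
print("posterior std: ", samples.std(axis=0).round(3))
```

The posterior spread, not just the point estimate, is the quantity of interest here: it is what a simple least-squares fit under-reports when the pure spectra are themselves uncertain.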
APA, Harvard, Vancouver, ISO, and other styles
27

Kao, Chung-Yao 1972. "Efficient computational methods for robustness analysis". Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/29258.

Full text of the source
Abstract:
Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002.
Includes bibliographical references (p. 209-215).
Issues of robust stability and performance have dominated the field of systems and control theory because of their practical importance. The recently developed Integral Quadratic Constraint (IQC) based analysis method provides a framework for systematically checking robustness properties of large complex dynamical systems. In IQC analysis, the system to be analyzed is represented as a nominal, Linear Time-Invariant (LTI) subsystem interconnected with a perturbation term. The perturbation is characterized in terms of IQCs. The robustness condition is expressed as a feasibility problem which can be solved using interior-point algorithms. Although the systems to be analyzed have nominal LTI subsystems in many applications, this is not always the case. A typical example is the problem of robustness analysis of the oscillatory behavior of nonlinear systems, where the nominal subsystem is generally Linear Periodically Time-Varying (LPTV). The objective of the first part of this thesis is to develop new techniques for robustness analysis of LPTV systems. Two different approaches are proposed. In the first approach, the harmonic terms of the LPTV nominal model are extracted, and the system is transformed into the standard setup for robustness analysis. Robustness analysis is then performed on the transformed system based on the IQC analysis method. In the second approach, we allow the nominal system to remain periodic, and we extend the IQC analysis method to include the case where the nominal system is periodically time-varying.
The robustness condition of this new approach is posed as a semi-infinite convex feasibility problem, which requires a new method to solve. A computational algorithm is developed for checking the robustness condition. In the second part of the thesis, we consider the optimization problems arising from IQC analysis. The conventional way of solving these problems is to transform them into semi-definite programs, which are then solved using interior-point algorithms. The disadvantage of this approach is that the transformation introduces additional decision variables. In many situations, these auxiliary decision variables become the main computational burden, and the conventional method then becomes very inefficient and time consuming. In the second part of the thesis, a number of specialized algorithms are developed to solve these problems in a more efficient fashion. The crucial advantage in this development is that it avoids the equivalent transformation. The results of numerical experiments confirm that these algorithms can solve a problem arising from IQC analysis much faster than the conventional approach does.
by Chung-Yao Kao.
Sc.D.
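In its standard frequency-domain form (Megretski and Rantzer, 1997), a bounded causal perturbation \( \Delta \) satisfies the IQC defined by a multiplier \( \Pi \) if, for all square-integrable \( v \) with \( w = \Delta(v) \),

```latex
\int_{-\infty}^{\infty}
\begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix}^{*}
\Pi(j\omega)
\begin{bmatrix} \hat{v}(j\omega) \\ \hat{w}(j\omega) \end{bmatrix}
\, d\omega \;\ge\; 0,
```

and the feedback interconnection of a stable LTI system \( G \) with \( \Delta \) is stable if, for some \( \epsilon > 0 \),

```latex
\begin{bmatrix} G(j\omega) \\ I \end{bmatrix}^{*}
\Pi(j\omega)
\begin{bmatrix} G(j\omega) \\ I \end{bmatrix}
\;\preceq\; -\,\epsilon I
\qquad \text{for all } \omega \in \mathbb{R}.
```

Searching over a parametrized family of multipliers turns this into the feasibility problem mentioned in the abstract; the Kalman-Yakubovich-Popov lemma converts the frequency sweep into the LMI whose auxiliary variables the thesis's specialized algorithms are designed to avoid.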
APA, Harvard, Vancouver, ISO, and other styles
28

Elia, Nicola. "Computational methods for multi-objective control". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10679.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Kotta, Anwesh. "Condition Monitoring: Using Computational Intelligence Methods". Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187100.

Full text of the source
Abstract:
Machine tool components are widely used in many industrial applications. Accordingly, a reliable health monitoring system is necessary to detect defects in these components in order to monitor machinery performance and avoid malfunction. Even though several techniques have been reported for fault detection and diagnosis, it is a challenging task to implement a condition monitoring system in real-world applications, due to the complexity of the structures involved and noisy operating environments. The primary objective of this thesis is to develop novel intelligent algorithms for reliable fault diagnosis of machine tool components. Another objective is to use a Micro Electro Mechanical System (MEMS) sensor and interface it with Raspberry Pi hardware for real-time condition monitoring. First, a knowledge-based approach with morphological operators and a Fuzzy Inference System is proposed; the effectiveness of this approach lies in the selection of structuring elements (SEs). When evaluated on different classes of bearing fault signals, it is able to detect the fault frequencies effectively. Second, an analytical approach with a multi-class support vector machine is proposed; this method has the distinction of learning on its own without any prior knowledge, and its effectiveness depends on the selected features and the kernel used for convergence. Results have shown that the RBF (Radial Basis Function) kernel, commonly known as the Gaussian kernel, performs well in identifying faults with low computation time. The idea of prototyping these methods led to using a Micro Electro Mechanical System (MEMS) sensor for data acquisition and real-time condition monitoring. An LIS3DH accelerometer sensor is used for data acquisition from the spindle, capturing high-frequency fault signals. The measured data are analyzed and compared with those from an industrial sensor, the K-Shear accelerometer type 8792A.
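A compact version of the analytical branch, statistical features plus an RBF-kernel SVM, is sketched below on synthetic vibration segments. The impulse model, feature set and all settings are illustrative rather than the thesis's experimental pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(seg):
    """RMS, kurtosis and crest factor of a vibration segment."""
    rms = np.sqrt(np.mean(seg ** 2))
    kurt = np.mean((seg - seg.mean()) ** 4) / seg.var() ** 2
    crest = np.max(np.abs(seg)) / rms
    return [rms, kurt, crest]

rng = np.random.default_rng(7)
X, y = [], []
for label, impulse_rate in enumerate([0.0, 0.02, 0.08]):  # healthy + two fault severities
    for _ in range(100):
        seg = rng.normal(0.0, 1.0, 2048)
        spikes = rng.uniform(size=2048) < impulse_rate     # fault-induced impacts
        seg[spikes] += rng.normal(0.0, 8.0, spikes.sum())
        X.append(features(seg)); y.append(label)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On real hardware, the segments would come from the MEMS accelerometer stream; only the feature extraction and a pre-trained classifier need to run on the Raspberry Pi.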
Style APA, Harvard, Vancouver, ISO itp.
30

Al-Amri, Ibrahim Rasheed. "Computational methods in permutation group theory". Thesis, University of St Andrews, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636485.

Pełny tekst źródła
Streszczenie:
In Chapters 2 and 3 of this thesis, we find the structure of all groups generated by an n-cycle and a 2-cycle or a 3-cycle. When these groups fail to be either Sn or An, we show that they form a certain wreath product or an extension of a wreath product. We also determine, in Chapters 4 and 5, the structure of all groups generated by an n-cycle and the product of two 2-cycles or a 4-cycle; the structure of these groups depends on the results obtained in the previous chapters. In Chapter 6 we give some general results on groups generated by an n-cycle and a k-cycle. In Chapter 7 we calculate the probability of generating a proper subgroup, other than the alternating group, by two elements, one of which is an n-cycle and the other chosen randomly. In Chapters 8 and 9 we give some of the programs, written in the GAP language, which were used in the earlier work and which can be used by other workers in this area.
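The GAP programs themselves are not reproduced here, but the kind of computation described, finding the group generated by an n-cycle and a 2-cycle, can be sketched in Python with sympy's combinatorics module (a stand-in for GAP, not the author's code):

    from math import factorial
    from sympy.combinatorics import Permutation, PermutationGroup

    n = 7
    n_cycle = Permutation(list(range(1, n)) + [0])   # (0 1 2 ... n-1)
    swap = Permutation([1, 0] + list(range(2, n)))   # (0 1)

    # An n-cycle together with an adjacent transposition generates
    # the full symmetric group S_n, so the order should equal n!.
    G = PermutationGroup([n_cycle, swap])
    print(G.order() == factorial(n))  # True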
Style APA, Harvard, Vancouver, ISO itp.
31

Lai, Liang Simon. "Defect correction methods for computational aeroacoustics". Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11452/.

Pełny tekst źródła
Streszczenie:
The idea of the Defect Correction Method (DCM) has been around for a long time. It can be used in a number of different ways and applied to various linear and non-linear problems. Most defect-correction-related methods have been used in conjunction with discretisation methods and two-level multigrid methods. This thesis examines how various iterative methods, for both linear and nonlinear problems, may be built into a unified framework through the use of defect correction. The framework is extended to Computational Aeroacoustics (CAA), where the sound waves generated by pressure fluctuations are typically several orders of magnitude smaller than the pressure variations in the main flow field that account for flow acceleration. A decomposition of variables is used to break a typical flow variable into (1) the mean flow, (2) flow perturbations or aerodynamic sources of sound, and (3) the acoustic perturbation. The framework discussed in this thesis incorporates such variable decomposition. The basic principle of DCM can be applied to recover the propagating acoustic perturbation through a coupling technique. This provides an excellent basis for re-using existing commercial CFD software within the framework and for retrieving the acoustic pressure. Numerical examples demonstrating the defect correction framework on a typical car sun-roof problem were examined, with promising numerical results. To this end, the complete process of coupling the Reynolds-averaged Navier-Stokes equations and the Helmholtz equation is also presented using the DCM framework. The DCM framework is further extended to handle higher-order numerical methods for the numerical solution of partial differential equations, allowing easy re-use of existing software that approximates derivatives with a lower-order discretisation. Numerical experiments were performed to demonstrate the capability of the DCM framework. It is also applied to simplified 2-D problems aimed at understanding Large Eddy Simulation (LES) and filtering techniques. Overall, the DCM framework leads to an efficient and robust software implementation for many CFD and aeroacoustic computations.
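A minimal numerical sketch of the defect correction principle underlying the framework (not the thesis's CAA coupling): a cheap, easily inverted operator B repeatedly corrects the defect of the accurate target operator A. Both operators are illustrative stand-ins:

    import numpy as np

    # Illustrative operators only: A is the accurate target operator,
    # B a crude low-order approximation that is easy to invert.
    rng = np.random.default_rng(1)
    n = 50
    A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
    B = np.eye(n)
    b = rng.standard_normal(n)

    x = np.zeros(n)
    for k in range(200):
        defect = b - A @ x                  # defect of the accurate problem
        if np.linalg.norm(defect) < 1e-10:
            break
        x += np.linalg.solve(B, defect)     # correct with the cheap operator

    print("iterations:", k, "residual:", np.linalg.norm(b - A @ x))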
Style APA, Harvard, Vancouver, ISO itp.
32

Zhu, Zhaochen. "Computational methods in air quality data". HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/402.

Pełny tekst źródła
Streszczenie:
In this thesis, we investigate several computational methods for data assimilation in air quality prediction, with particular attention to sparse matrix structure and to the gradient information underlying the concentrations of pollutant species. In the first part, we study the ensemble Kalman filter (EnKF) for chemical species simulation in air quality forecast data assimilation. The main contribution of this part is to exploit the sparsity of the observations and the matrix structure of the Kalman filter update equations in order to design an algorithm that efficiently computes the analysis of chemical species in the air quality forecast system. The proposed method can also handle combined observations from multiple species. We applied the proposed method and tested its performance on real air quality data assimilation; numerical examples demonstrate the efficiency of the proposed computation of the Kalman filter update and its effectiveness for NO2, NO, CO, SO2, O3, PM2.5 and PM10. In the final part, we set up an automatic workflow connecting the management system of the chemical transport model CMAQ with our proposed data assimilation methods. The setup successfully integrates data assimilation into the management system and raises the accuracy of the predictions to a new level, transforming the system into a real-time, high-precision one. When new observations become available, predictions can be updated almost instantaneously, allowing agencies to make decisions and respond to situations immediately and citizens to protect themselves effectively. It also allows the mathematical algorithm to be industrialized, meaning that improvements in data assimilation translate directly into benefits for the environment, human health and society. This is an encouraging indication for further research into this promising method.
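A generic sketch of the ensemble Kalman filter analysis step on which the first part builds; the observation operator, error covariance and ensemble are illustrative stand-ins, and the thesis's sparse-matrix accelerations are not reproduced:

    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_obs, n_ens = 100, 10, 40        # illustrative sizes

    # Forecast ensemble of pollutant concentrations (stand-in data).
    X = 50.0 + rng.standard_normal((n_state, n_ens))

    # Sparse observation operator: every tenth grid cell is observed.
    H = np.zeros((n_obs, n_state))
    H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0
    R = 0.5 * np.eye(n_obs)                    # observation error covariance
    y = H @ X.mean(axis=1) + rng.multivariate_normal(np.zeros(n_obs), R)

    # Kalman gain K = P H' (H P H' + R)^-1 with P = Xp Xp',
    # computed from ensemble anomalies without forming P itself.
    Xp = (X - X.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
    HXp = H @ Xp
    K = Xp @ HXp.T @ np.linalg.inv(HXp @ HXp.T + R)

    # Stochastic EnKF analysis: each member sees a perturbed observation.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
    Xa = X + K @ (Y - H @ X)
    print("analysis mean of first cell:", Xa[0].mean())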
Style APA, Harvard, Vancouver, ISO itp.
33

Jin, Yan. "Advanced computational methods in portfolio optimisation". Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/39023/.

Pełny tekst źródła
Streszczenie:
Portfolio optimisation is the process of making optimal investment decisions, where a set of assets is selected and a certain amount of the capital is invested in each. Since Markowitz's milestone Mean-Variance (MV) model, research into new portfolio optimisation models and applications has flourished for the last 60 years. Despite its theoretical value, the MV model has been widely criticised for simplistic underlying assumptions that ignore real-world conditions and fail to take into account the market uncertainty of the mean and variance. To correct these, a large number of models have been developed. Extending the traditional MV model with additional features normally makes the problem harder to solve; for example, the introduction of some practical constraints makes it NP-hard. The aim of this thesis is to study various techniques for solving portfolio optimisation problems with different features. The first stage of this thesis focuses on portfolio optimisation problems based on the MV model with progressively more complex real-world constraints. First, a hybrid approach is investigated which utilises exact and metaheuristic methods to optimise asset selection and capital allocation under cardinality and quantity constraints, respectively. The proposed method combines a mathematical programming application with a customised population-based incremental learning procedure. A metaheuristic technique is then studied: a variable neighbourhood search approach with compound neighbourhoods is developed to solve the portfolio optimisation problem with four additional practical constraints (cardinality, quantity, pre-assignment and round-lot). The fast development of state-of-the-art commercial solvers also motivates a study of the performance of exact solvers on variously constrained MV-based problems. The second stage of this thesis moves to a more complicated domain, where stochastic programming is used to capture market uncertainty in future asset prices. In addition, an alternative risk measure, CVaR, one of the most recent downside risk measures, is adopted. Consequently, a two-stage recourse model with CVaR as the risk measure and a comprehensive set of practical constraints is investigated by a hybrid scheme utilising exact and metaheuristic methods; two hybrid approaches are implemented and studied.
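For background, a minimal sketch of the unconstrained core of the MV model on synthetic data: the minimum-variance portfolio minimising w'Σw under full investment, which has a closed-form solution. None of the thesis's cardinality, quantity, pre-assignment or round-lot constraints are included:

    import numpy as np

    # Synthetic daily returns for 5 assets (stand-in data).
    rng = np.random.default_rng(0)
    returns = rng.normal(0.001, 0.02, size=(250, 5))
    Sigma = np.cov(returns, rowvar=False)      # sample covariance

    # Minimum-variance portfolio: minimize w' Sigma w  s.t.  sum(w) = 1.
    # The Lagrangian gives the closed form w = Sigma^-1 1 / (1' Sigma^-1 1).
    ones = np.ones(5)
    w = np.linalg.solve(Sigma, ones)
    w /= ones @ w
    print("weights:", np.round(w, 3), "sum:", w.sum())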
Style APA, Harvard, Vancouver, ISO itp.
34

Holland, Chase Carlton. "Computational Methods for Estimating Rail Life". Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/41436.

Pełny tekst źródła
Streszczenie:
In American rail operations, rails fail due to the combined effects of wear from repetitive wheel contact and the growth of surface and sub-surface cracks and flaws. Rail maintenance includes frequent uncoupled wear and ultrasonic inspections that determine the amount of wear the rail has undergone and the presence of cracks and flaws in its cross section. A rail is removed from service when its wear reaches a pre-determined limit or a flaw is detected. In rail research, the life of a rail is typically estimated using fracture mechanics or fatigue methods and an assumed flaw geometry. Multiple models, ranging from complex elastic-plastic finite element models to simplified representations of a beam on an elastic foundation, have been developed to predict the life of a rail. The majority of rail failure models do not incorporate rail wear into their analysis and assume an unworn rail geometry; to account for rail wear, certain models adopt simplified rail geometries that uncouple wear into top-wear and side-wear. This thesis presents a rail failure model that describes the combined effects of rail wear and crack growth through a functional relationship between input variables describing the geometry, loading, and material properties of a given rail and output variables describing its life characteristics. This relationship takes the form of multiple response surfaces estimating the desired output variables. Finite element models incorporating worn rail profiles and an assumed crack geometry corresponding to a detail fracture are combined to determine the state of stress and strain at the assumed flaw. Strain-life fatigue methods and fracture mechanics concepts are used to develop, from the finite element results, the output variables necessary to describe the life of the rail. The goals of this research are to predict the remaining fatigue life and estimate the crack-growth rate of the rail from the minimum number of geometry, loading, and material property independent variables. The outputs developed to describe the rail's remaining life are intended to support decision making on rail removal.
Master of Science
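To illustrate the strain-life fatigue methods mentioned above, a sketch that solves the Coffin-Manson-Basquin relation eps_a = (sf/E)(2N)^b + ef(2N)^c for the reversals to failure 2N; the material constants are generic placeholders, not the thesis's rail steel values:

    import numpy as np
    from scipy.optimize import brentq

    # Placeholder strain-life constants (illustrative, not rail steel).
    E = 200e3                 # Young's modulus, MPa
    sf, b = 900.0, -0.09      # fatigue strength coefficient / exponent
    ef, c = 0.5, -0.56        # fatigue ductility coefficient / exponent
    strain_amp = 0.002        # applied strain amplitude

    # Coffin-Manson-Basquin: eps_a = (sf/E)(2N)^b + ef(2N)^c.
    def residual(two_n):
        return (sf / E) * two_n**b + ef * two_n**c - strain_amp

    two_n = brentq(residual, 1.0, 1e12)   # reversals to failure, 2N
    print(f"cycles to failure N = {two_n / 2:.3g}")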
Style APA, Harvard, Vancouver, ISO itp.
35

Balsells, Alex T. "Computational Methods for Radiation Therapy Planning". Case Western Reserve University School of Graduate Studies / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=case1557844457085534.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
36

Teixeira, Bellina Ribau. "Computational methods for microarray data analysis". Master's thesis, Universidade de Aveiro, 2009. http://hdl.handle.net/10773/3989.

Pełny tekst źródła
Streszczenie:
Master's degree in Computer and Telematics Engineering
Deoxyribonucleic acid (DNA) microarrays are an important technology for the analysis of gene expression. They allow measuring the expression of genes across several samples in order to, for example, identify genes whose expression varies with the administration of a certain drug. A microarray slide measures the expression level of thousands of genes in a sample at the same time, and an experiment can include various slides, producing a large amount of data that must be processed and analyzed with the aid of computational means. This dissertation includes a review of methods and software tools used in the analysis of microarray experimental data. It then describes the development of a new data analysis module that, using methods for identifying differentially expressed genes, aims to identify genes that are differentially expressed between two or more experimental groups. Finally, the resulting work is presented, describing its graphical interface and structural design.
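A minimal sketch of the kind of test such a module performs, gene-wise t-tests between two groups followed by Benjamini-Hochberg false-discovery-rate correction, on a simulated expression matrix (not the dissertation's code):

    import numpy as np
    from scipy import stats

    # Simulated expression matrix: 1000 genes x 6 samples
    # (3 control, 3 treated); the first 50 genes are truly shifted.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(1000, 6))
    expr[:50, 3:] += 2.0

    # Gene-wise two-sample t-tests.
    t, p = stats.ttest_ind(expr[:, :3], expr[:, 3:], axis=1)

    # Benjamini-Hochberg false-discovery-rate adjustment.
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / (np.arange(m) + 1)
    q = np.empty(m)
    q[order] = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)

    print("genes called at q < 0.05:", int((q < 0.05).sum()))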
Style APA, Harvard, Vancouver, ISO itp.
37

Bai, Lihui. "Computational methods for toll pricing models". [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0006341.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Gibson, Michael Andrew Bruck Jehoshua. "Computational methods for stochastic biological systems /". Diss., Pasadena, Calif. : California Institute of Technology, 2000. http://resolver.caltech.edu/CaltechETD:etd-05132005-154222.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Katz, Aaron Jon. "Meshless methods for computational fluid dynamics /". May be available electronically:, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
40

Viana, Simone A. "Meshless methods applied to computational electromagnetics". Thesis, University of Bath, 2006. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.428380.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
41

Selega, Alina. "Computational methods for RNA integrative biology". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29630.

Pełny tekst źródła
Streszczenie:
Ribonucleic acid (RNA) is an essential molecule which carries out a wide variety of functions within the cell, from its crucial involvement in protein synthesis to catalysing biochemical reactions and regulating gene expression. This diverse functional repertoire is owed to the complex structures that RNA can adopt and to its flexibility as an interacting molecule. It has become possible to experimentally measure these two crucial aspects of RNA's regulatory role thanks to such technological advancements as next-generation sequencing (NGS). NGS methods can rapidly obtain the nucleotide sequence of many molecules in parallel. Designing experiments where only the desired parts of the molecule (or specific parts of the transcriptome) are sequenced makes it possible to study various aspects of RNA biology, and the analysis of NGS data would be insurmountable without computational methods. One such experimental method is RNA structure probing, which aims to infer RNA structure from sequencing of chemically altered transcripts. RNA structure probing data are inherently noisy, affected both by technological biases and by the stochasticity of the underlying process. Most existing methods do not adequately address the issue of noise, resorting to heuristics and limiting the informativeness of their output. In this thesis, a statistical pipeline was developed for modelling RNA structure probing data which explicitly captures biological variability, provides automated bias-correcting strategies, and generates a probabilistic output based on experimental measurements. The output of our method agrees with known RNA structures, can be used to constrain structure prediction algorithms, and remains robust to reduced sequence coverage, thereby increasing the sensitivity of the technology. Another recent experimental innovation maps RNA-protein interactions at very high temporal resolution, making it possible to study rapid binding events happening on a minute time scale. In this thesis, a non-parametric algorithm was developed for identifying significant changes in RNA-protein binding time-series between different conditions. The method was applied to novel yeast RNA-protein binding time-course data to study the role of RNA degradation in stress response. It revealed pervasive changes in the binding to the transcriptome of the yeast transcription termination factor Nab3 and the cytoplasmic exoribonuclease Xrn1 under nutrient stress. This challenged the common assumption that transcriptional changes are the major driver of changes in RNA expression during stress and highlighted the importance of degradation. These findings inspired a dynamical model for RNA expression in which transcription and degradation rates are modelled using RNA-protein binding time-series data.
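The thesis's algorithm is not reproduced here; the sketch below shows a generic permutation test for a condition effect between two sets of binding time-series, a simplified stand-in for the kind of nonparametric comparison described, with simulated replicate data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in binding time-series: replicates x time points per condition.
    control = rng.normal(1.0, 0.2, size=(5, 12))
    stress = rng.normal(1.4, 0.2, size=(5, 12))

    def stat(a, b):
        # Test statistic: difference of mean signal over the time course.
        return a.mean() - b.mean()

    observed = stat(stress, control)
    pooled = np.vstack([stress, control])
    n = len(stress)

    # Null distribution: shuffle condition labels across replicates.
    perm = []
    for _ in range(10000):
        idx = rng.permutation(len(pooled))
        perm.append(stat(pooled[idx[:n]], pooled[idx[n:]]))

    p_value = np.mean(np.abs(perm) >= abs(observed))
    print("permutation p-value:", p_value)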
Style APA, Harvard, Vancouver, ISO itp.
42

Flint, Christopher Robert. "Computational Methods of Lattice Boltzmann MHD". W&M ScholarWorks, 2017. https://scholarworks.wm.edu/etd/1530192360.

Pełny tekst źródła
Streszczenie:
Lattice Boltzmann (LB) methods are a relatively novel approach to Computational Fluid Dynamics (CFD) simulations. These methods simulate the Navier-Stokes and magnetohydrodynamics (MHD) equations on the mesoscopic (quasi-kinetic) scale by solving for a statistical distribution of particles rather than attempting to solve the nonlinear macroscopic equations directly. LB methods allow for highly parallelizable code, since the difficult nonlinear convective derivatives of MHD are replaced by simple linear advection on a lattice. New developments in LB have significantly extended the numerical stability limits of its applicability. These developments include multiple relaxation times (MRT) in the collision operators, entropy maximization to ensure positive definiteness of the distribution functions, and large eddy simulations of MHD turbulence. Improving the limits of this highly parallelizable simulation method makes it an ideal candidate for simulating various fluid and plasma problems, improving both the speed of the simulation and the spatial grid resolution of the LB algorithms on today's high-performance supercomputers. Some of these LB extensions are discussed and tested against various problems in magnetized plasmas.
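For flavour, a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann step for plain hydrodynamics on a periodic grid; the thesis's MHD extension with vector distributions and MRT collisions is considerably more involved:

    import numpy as np

    # D2Q9 lattice: discrete velocities and weights.
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9]*4 + [1/36]*4)
    tau = 0.8                  # relaxation time; viscosity ~ (tau - 0.5)
    nx = ny = 64

    def equilibrium(rho, ux, uy):
        feq = np.empty((9, nx, ny))
        for i in range(9):
            cu = c[i, 0]*ux + c[i, 1]*uy
            feq[i] = w[i]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))
        return feq

    # Initialize with a sinusoidal shear wave (stand-in test flow).
    y = np.arange(ny)
    ux = 0.05*np.sin(2*np.pi*y/ny)[None, :] * np.ones((nx, ny))
    uy = np.zeros((nx, ny))
    rho = np.ones((nx, ny))
    f = equilibrium(rho, ux, uy)

    for step in range(100):
        # Streaming: shift each population along its lattice velocity
        # (periodic boundaries via np.roll).
        for i in range(9):
            f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
        # Macroscopic moments.
        rho = f.sum(axis=0)
        ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
        # BGK collision: linear relaxation toward equilibrium.
        f += (equilibrium(rho, ux, uy) - f) / tau

    print("mean density:", rho.mean())  # conserved, stays ~1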
Style APA, Harvard, Vancouver, ISO itp.
43

Aramini, Riccardo. "Computational inverse scattering via qualitative methods". Doctoral thesis, Università degli studi di Trento, 2011. https://hdl.handle.net/11572/368061.

Pełny tekst źródła
Streszczenie:
This Ph.D. thesis presents a threefold revisitation and reformulation of the linear sampling method (LSM) for the qualitative solution of inverse scattering problems (in the resonance region and in time-harmonic regime): 1) from the viewpoint of its implementation (in a 3D setting), the LSM is recast in appropriate Hilbert spaces, whereby the set of algebraic systems arising from an angular discretization of the far-field equation (written for each sampling point of the numerical grid covering the investigation domain and for each sampling polarization) is replaced by a single functional equation. As a consequence, this 'no-sampling' LSM requires a single regularization procedure, thus resulting in an extremely fast algorithm: complex 3D objects are visualized in around one minute without loss of quality if compared to the traditional implementation; 2) from the viewpoint of its application (in a 2D setting), the LSM is coupled with the reciprocity gap functional in such a way that the influence of scatterers outside the array of receiving antennas is excluded and an inhomogeneous background inside them can be allowed for: then, the resulting 'no-sampling' algorithm proves able to detect tumoural masses inside numerical (but rather realistic) phantoms of the female breast by inverting the data of an appropriate microwave scattering experiment; 3) from the viewpoint of its theoretical foundation, the LSM is physically interpreted as a consequence of the principle of energy conservation (in a lossless background). More precisely, it is shown that the far-field equation at the basis of the LSM (which does not follow from physical laws) can be regarded as a constraint on the power flux of the scattered wave in the far-field region: if the flow lines of the Poynting vector carrying this flux verify some regularity properties (as suggested by numerical simulations), the information contained in the far-field constraint is back-propagated to each point of the background up to the near-field region, and the (approximate) fulfilment of such constraint forces the L^2-norm of any (approximate) solution of the far-field equation to behave as a good indicator function for the unknown scatterer, i.e., to be 'small' inside the scatterer itself and 'large' outside.
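The sampling indicator at the heart of the method can be sketched generically: for each sampling point one solves the Tikhonov-regularized far-field equation and uses the norm of the solution as an indicator. The far-field matrix and right-hand side below are random stand-ins, since real data require a scattering solver:

    import numpy as np

    rng = np.random.default_rng(0)
    m = 64                                    # observation directions
    F = 0.1 * rng.standard_normal((m, m))     # stand-in far-field matrix

    def lsm_indicator(F, rhs, alpha=1e-3):
        # Tikhonov-regularized solution of the far-field equation
        # F g = rhs:  g = (F* F + alpha I)^-1 F* rhs.
        A = F.conj().T @ F + alpha * np.eye(F.shape[1])
        g = np.linalg.solve(A, F.conj().T @ rhs)
        # ||g|| is small inside the scatterer and large outside,
        # so 1/||g|| serves as an indicator function.
        return 1.0 / np.linalg.norm(g)

    rhs = rng.standard_normal(m)              # stand-in test function
    print("indicator value:", lsm_indicator(F, rhs))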
Style APA, Harvard, Vancouver, ISO itp.
44

Aramini, Riccardo. "Computational inverse scattering via qualitative methods". Doctoral thesis, University of Trento, 2011. http://eprints-phd.biblio.unitn.it/556/1/PhD-Thesis-Aramini.pdf.

Pełny tekst źródła
Streszczenie:
This Ph.D. thesis presents a threefold revisitation and reformulation of the linear sampling method (LSM) for the qualitative solution of inverse scattering problems (in the resonance region and in time-harmonic regime): 1) from the viewpoint of its implementation (in a 3D setting), the LSM is recast in appropriate Hilbert spaces, whereby the set of algebraic systems arising from an angular discretization of the far-field equation (written for each sampling point of the numerical grid covering the investigation domain and for each sampling polarization) is replaced by a single functional equation. As a consequence, this 'no-sampling' LSM requires a single regularization procedure, thus resulting in an extremely fast algorithm: complex 3D objects are visualized in around one minute without loss of quality if compared to the traditional implementation; 2) from the viewpoint of its application (in a 2D setting), the LSM is coupled with the reciprocity gap functional in such a way that the influence of scatterers outside the array of receiving antennas is excluded and an inhomogeneous background inside them can be allowed for: then, the resulting 'no-sampling' algorithm proves able to detect tumoural masses inside numerical (but rather realistic) phantoms of the female breast by inverting the data of an appropriate microwave scattering experiment; 3) from the viewpoint of its theoretical foundation, the LSM is physically interpreted as a consequence of the principle of energy conservation (in a lossless background). More precisely, it is shown that the far-field equation at the basis of the LSM (which does not follow from physical laws) can be regarded as a constraint on the power flux of the scattered wave in the far-field region: if the flow lines of the Poynting vector carrying this flux verify some regularity properties (as suggested by numerical simulations), the information contained in the far-field constraint is back-propagated to each point of the background up to the near-field region, and the (approximate) fulfilment of such constraint forces the L^2-norm of any (approximate) solution of the far-field equation to behave as a good indicator function for the unknown scatterer, i.e., to be 'small' inside the scatterer itself and 'large' outside.
Style APA, Harvard, Vancouver, ISO itp.
45

Zagordi, Osvaldo. "Statistical physics methods in computational biology". Doctoral thesis, SISSA, 2007. http://hdl.handle.net/20.500.11767/3971.

Pełny tekst źródła
Streszczenie:
The interest of statistical physics in combinatorial optimization is not new; it suffices to think of such a famous tool as simulated annealing. More recently, the field has also resorted to statistical inference to address some "hard" optimization problems, developing a new class of message-passing algorithms. Three applications to computational biology are presented in this thesis: 1) Boolean networks, a model for gene regulatory networks; 2) haplotype inference, to study the genetic information present in a population; and 3) clustering, a general machine learning tool.
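As a reminder of the classic tool mentioned above, a minimal simulated annealing sketch on a toy one-dimensional energy landscape, purely illustrative of the Metropolis acceptance rule and a cooling schedule (the thesis itself concerns message-passing methods):

    import numpy as np

    rng = np.random.default_rng(0)

    def energy(x):
        # Toy multimodal landscape with its global minimum near x = -1.
        return 0.5*x**4 - x**2 + 0.3*x

    x, T = 3.0, 2.0
    for step in range(5000):
        x_new = x + rng.normal(scale=0.5)
        dE = energy(x_new) - energy(x)
        # Metropolis rule: always accept downhill, sometimes uphill.
        if dE < 0 or rng.random() < np.exp(-dE / T):
            x = x_new
        T *= 0.999                    # geometric cooling schedule

    print("final x:", round(x, 3), "energy:", round(energy(x), 3))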
Style APA, Harvard, Vancouver, ISO itp.
46

Raden, Martin [Verfasser], i Rolf [Akademischer Betreuer] Backofen. "Computational Methods for Lattice Protein Models = Rechnerische Methoden für Gitterproteinmodelle". Freiburg : Universität, 2011. http://d-nb.info/1114828939/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
47

Huismann, Immo. "Computational fluid dynamics on wildly heterogeneous systems". TUDPress, 2018. https://tud.qucosa.de/id/qucosa%3A74002.

Pełny tekst źródła
Streszczenie:
In the last decade, high-order methods have gained increased attention. These combine the convergence properties of spectral methods with the geometrical flexibility of low-order methods. However, the time step is restrictive, necessitating the implicit treatment of diffusion terms in addition to the pressure. Therefore, efficient solution of elliptic equations is of central importance for fast flow solvers. As the operators scale with O(p · N), where N is the number of degrees of freedom and p the polynomial degree, the runtime of the best available multigrid algorithms scales with O(p · N) as well. This super-linear scaling limits the applicability of high-order methods to mid-range polynomial orders and constitutes a major road block on the way to faster flow solvers. This work reduces the super-linear scaling of elliptic solvers to a linear one. First, the static condensation method improves the condition of the system, then the associated operator is cast into matrix-free tensor-product form and factorized to linear complexity. The low increase in the condition and the linear runtime of the operator lead to linearly scaling solvers when increasing the polynomial degree, albeit with low robustness against the number of elements. A p-multigrid with overlapping Schwarz smoothers regains the robustness, but requires inverse operators on the subdomains and in the condensed case these are neither linearly scaling nor matrix-free. Embedding the condensed system into the full one leads to a matrix-free operator and factorization thereof to a linearly scaling inverse. In combination with the previously gained operator a multigrid method with a constant runtime per degree of freedom results, regardless of whether the polynomial degree or the number of elements is increased. Computing on heterogeneous hardware is investigated as a means to attain a higher performance and future-proof the algorithms. A two-level parallelization extends the traditional hybrid programming model by using a coarse-grain layer implementing domain decomposition and a fine-grain parallelization which is hardware-specific. Thereafter, load balancing is investigated on a preconditioned conjugate gradient solver and functional performance models adapted to account for the communication barriers in the algorithm. With the new model, runtime prediction and measurement fit closely with an error margin near 5 %. The devised methods are combined into a flow solver which attains the same throughput when computing with p = 16 as with p = 8, preserving the linear scaling. Furthermore, the multigrid method reduces the cost of implicit treatment of the pressure to the one for explicit treatment of the convection terms. Lastly, benchmarks confirm that the solver outperforms established high-order codes.
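The matrix-free tensor-product idea behind the linear scaling can be sketched as follows: a 2-D operator Ay ⊗ Ax is never assembled but applied through two small 1-D matrix products (the standard sum-factorization trick). The operators and sizes below are illustrative stand-ins, not the thesis's condensed operators:

    import numpy as np

    p = 16                              # polynomial degree (illustrative)
    n = p + 1
    rng = np.random.default_rng(0)

    Ax = rng.standard_normal((n, n))    # 1-D operator in x (stand-in)
    Ay = rng.standard_normal((n, n))    # 1-D operator in y (stand-in)
    u = rng.standard_normal((n, n))     # element-local coefficients

    # Matrix-free application of (Ay kron Ax): two small dense
    # products instead of one (n^2 x n^2) matrix-vector product.
    v = np.einsum('ij,jk,lk->il', Ax, u, Ay)

    # Check against the assembled Kronecker operator.
    v_ref = (np.kron(Ay, Ax) @ u.reshape(-1, order='F'))
    v_ref = v_ref.reshape((n, n), order='F')
    print(np.allclose(v, v_ref))        # True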
Style APA, Harvard, Vancouver, ISO itp.
48

Hagdahl, Stefan. "Hybrid methods for computational electromagnetics in the frequency domain". Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1655.

Pełny tekst źródła
Streszczenie:

In this thesis we study hybrid numerical methods to be used in computational electromagnetics. We restrict the methods to the spectral domain and to scattering problems. The hybrids consist of combinations of Boundary Element Methods and the Geometrical Theory of Diffraction.

In the thesis three hybrid methods will be presented. One method has been developed from a theoretical idea into an industrial code; the two other methods will be presented mainly from a theoretical perspective. We will also give short introductions to the Boundary Element Method and the Geometrical Theory of Diffraction from a theoretical and implementational point of view.

Keywords: Maxwell's equations, Geometrical Theory of Diffraction, Boundary Element Method, Hybrid methods, Electromagnetic Scattering

Style APA, Harvard, Vancouver, ISO itp.
49

Lin, Zhipeng. "Computational methods in financial mathematics course project". Worcester, Mass. : Worcester Polytechnic Institute, 2009. http://www.wpi.edu/Pubs/ETD/Available/etd-050509-115331/.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Yu, Jingyuan. "Discovering Twitter through Computational Social Science Methods". Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671609.

Pełny tekst źródła
Streszczenie:
As Twitter has come to cover people's daily lives, it has become one of the most important information exchange platforms and has quickly attracted scientists' attention. Researchers around the world have focused on social science and internet studies using Twitter data as a real-world sample, and numerous analytic tools and algorithms have been designed in the last decade. The present doctoral thesis consists of three studies. First, given the 14 years (until 2020) of history since the foundation of Twitter, an explosion of related scientific publications has been witnessed, but the current research landscape on this social media platform remained unknown. To fill this research gap, we performed a bibliometric analysis of Twitter-related studies to analyze how they evolved over time and to provide a general, macro-level description of the Twitter research academic environment. Second, since many analytic software tools are currently available for Twitter research, a practical question for junior researchers is how to choose the most appropriate software for their own research project. To address this, we reviewed some of the integrated frameworks considered most relevant for social science research; given that junior social science researchers may face financial constraints, we narrowed our scope to free and low-cost software. Third, given the current public health crisis, we noted that social media are among the most accessed information and news sources for the public. During a pandemic, how health issues and diseases are framed in news releases shapes the public's understanding of the epidemic outbreak and their attitudes and behaviors. Hence, we used Twitter as an easily accessible news source to analyze the evolution of Spanish news frames during the COVID-19 pandemic. Overall, the three studies are closely associated with the application of computational methods, including online data collection, text mining, complex networks and data visualization. This doctoral project has explored how people study and use Twitter on three different levels: the academic, the practical and the empirical.
Style APA, Harvard, Vancouver, ISO itp.