Dissertations / Theses on the topic 'Computational approaches'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Computational approaches.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Banks, Eric 1976. "Computational approaches to gene finding." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/81523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shutova, Ekaterina. "Computational approaches to figurative language." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609681.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wynn, Hamish Henry. "Computational approaches to peptidomimetic design." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Traore, Seydou. "Computational approaches toward protein design." Thesis, Toulouse, INSA, 2014. http://www.theses.fr/2014ISAT0033/document.

Full text
Abstract:
Computational Protein Design (CPD) is a very young research field which aims at providing predictive tools to complement protein engineering. Indeed, in addition to the theoretical understanding of fundamental properties and functions of proteins, protein engineering has important applications in a broad range of fields, including biomedical applications, biotechnology, nanobiotechnology and the design of green reagents. CPD seeks to accelerate the design of proteins with desired properties by enabling the exploration of larger sequence spaces while limiting the financial and human costs at the experimental level. To succeed in this endeavor, CPD requires three appropriately conceived ingredients: 1) a realistic model of the design system; 2) an accurate definition of objective functions for the target biochemical function or physico-chemical property; 3) and finally an efficient optimization framework to handle large combinatorial sizes. In this thesis, we addressed CPD problems with a special focus on combinatorial optimization. In a first series of studies, we applied for the first time the Cost Function Network optimization framework to solve CPD problems and found that, in comparison to other existing methods, it brings a speedup of several orders of magnitude on a wide range of real CPD instances that include the stability design of proteins, protein-protein and protein-ligand complexes. A tailored criterion to define the mutation space of residues was also introduced in order to constrain output sequences to those expected by natural evolution, through the integration of structural properties of amino acids in the protein environment. The developed methods were finally integrated into a CPD-dedicated software package in order to make them easily accessible to the scientific community.
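
As a rough illustration of the combinatorial problem this abstract refers to, the sketch below enumerates amino-acid choices at a few designable positions and keeps the assignment with the lowest sum of unary and pairwise energies. All positions, residue choices and energy values are invented, and real CPD solvers such as the Cost Function Network methods mentioned above avoid this brute-force enumeration.

```python
# Toy sketch of the combinatorial core of Computational Protein Design (CPD):
# pick one amino-acid choice per designable position so that the sum of unary
# and pairwise energies is minimal.  Energies below are made up; real CPD
# solvers (e.g. Cost Function Networks) avoid brute-force enumeration.
from itertools import product

choices = {0: ["ALA", "LEU"], 1: ["SER", "PHE"], 2: ["GLY", "TRP"]}

unary = {  # E_i(a): energy of choice a at position i (arbitrary numbers)
    (0, "ALA"): -1.0, (0, "LEU"): -0.5,
    (1, "SER"): -0.2, (1, "PHE"): -1.1,
    (2, "GLY"):  0.0, (2, "TRP"): -0.8,
}
pairwise = {  # E_ij(a, b): interaction energy between positions i < j
    (0, 1, "ALA", "PHE"): -0.4, (0, 1, "LEU", "PHE"): 0.6,
    (1, 2, "PHE", "TRP"): -0.9, (1, 2, "SER", "GLY"): -0.1,
}

def energy(assignment):
    e = sum(unary[(i, a)] for i, a in enumerate(assignment))
    for i in range(len(assignment)):
        for j in range(i + 1, len(assignment)):
            e += pairwise.get((i, j, assignment[i], assignment[j]), 0.0)
    return e

best = min(product(*(choices[i] for i in sorted(choices))), key=energy)
print(best, energy(best))
```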
APA, Harvard, Vancouver, ISO, and other styles
5

Moreno, Nascimento Érica Cristina. "Understanding Acetylcholinesterase Inhibitors: Computational Modeling Approaches." Doctoral thesis, Universitat Jaume I, 2017. http://hdl.handle.net/10803/406125.

Full text
Abstract:
This doctoral thesis is a theoretical study of the inhibition of acetylcholinesterase by molecules that block the protein's active site. A set of 44 inhibitors, divided into 8 distinct classes, was studied using QM/MM methods, molecular docking, and statistical analysis based on multivariate data analysis, pattern recognition and clustering algorithms. The interactions were studied and the binding free energy (Gbind) was calculated using hybrid QM/MM MD methods combined with the FEP and BIE methods. In this way, we can understand, at the atomic level, the interactions between the enzyme residues, the structural water molecules and the various inhibitors, which are fundamental for the design of new inhibitors and drugs with direct application to Alzheimer's disease.
APA, Harvard, Vancouver, ISO, and other styles
6

Pérez, Llamas Christian 1976. "Computational approaches for integrative cancer genomics." Doctoral thesis, Universitat Pompeu Fabra, 2015. http://hdl.handle.net/10803/328729.

Full text
Abstract:
Given the complexity and heterogeneity of cancer, the development of new high-throughput genome-wide technologies has opened new possibilities for its study. Several projects around the globe are exploiting these technologies to generate unprecedented amounts of data on cancer genomes. Their analysis, integration and exploration remain a key challenge in the field. In this dissertation, we first present Gitools, a tool for accessing biological databases, analysing high-throughput data, and visualising multi-dimensional results with interactive heatmaps. Then, we show IntOGen, the methodology employed for the collection and organization of the data, the methods used for its analysis, and how the results and analyses were made available to other researchers. Finally, we compare several methods for impact prediction of non-synonymous mutations, showing that new tools specifically designed for cancer outperform those traditionally used for general diseases, and also the need for using other sources of information for better prediction of cancer mutations.
APA, Harvard, Vancouver, ISO, and other styles
7

Cid, Samper Fernando 1991. "Computational approaches to characterize RNP granules." Doctoral thesis, Universitat Pompeu Fabra, 2020. http://hdl.handle.net/10803/668449.

Full text
Abstract:
Ribonucleoprotein (RNP) granules are liquid-liquid phase-separated complexes composed mainly of proteins and RNA. They are responsible for many processes involved in RNA regulation. Alterations in the dynamics of these protein-RNA complexes are associated with the appearance of several neurodegenerative disorders such as Amyotrophic Lateral Sclerosis (ALS) or Fragile X Tremor Ataxia Syndrome (FXTAS). Yet many aspects of their organization, as well as the specific roles of the RNA in the formation and function of these complexes, are still unknown. In order to study RNP granule structure and formation, we integrated several state-of-the-art high-throughput datasets. These include protein and RNA composition obtained from RNP pull-downs, protein-RNA interaction data from eCLIP experiments, and transcriptome-wide secondary structure information (produced by PARS). We used network analysis and clustering algorithms to understand the fundamental properties of granule RNAs. By integrating these properties, we produced a model to identify scaffolding RNAs, which are able to recruit many protein components into RNP granules. We found that the main protein components of stress granules (a kind of RNP granule) are connected through protein-RNA interactions. We also analyzed the contribution of RNA-RNA interactions and RNA post-transcriptional modifications to the internal organization of granules. We applied these findings to understand the biochemical pathophysiology of FXTAS, employing some novel experimental data as well. In FXTAS, a mutation in the FMR1 gene produces a 5' microsatellite repetition that enhances its scaffolding ability. This mutated mRNA is able to sequester important proteins, such as the splicing factor TRA2A, into nuclear RNP granules, impeding their normal function and thereby producing some of the symptoms associated with the progress of the disease. A better understanding of the principles governing granule formation and structure will enable the development of novel therapies (e.g. aptamers) to mitigate the development of several neurodegenerative diseases.
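
A minimal sketch of the network-analysis idea behind scaffold identification: build a bipartite protein-RNA interaction graph and rank RNAs by how many distinct granule proteins they contact. The interaction pairs are invented, and degree is only a crude proxy for the scaffolding model described in the abstract.

```python
# Minimal sketch: rank RNAs by the number of distinct granule proteins they
# bind in a bipartite protein-RNA interaction graph.  Interaction pairs are
# invented; the thesis integrates eCLIP, PARS and composition data.
import networkx as nx

interactions = [  # (protein, RNA) pairs, hypothetical
    ("G3BP1", "rna_A"), ("TIA1", "rna_A"), ("FUS", "rna_A"),
    ("G3BP1", "rna_B"), ("TIA1", "rna_C"),
]

G = nx.Graph()
for protein, rna in interactions:
    G.add_node(protein, kind="protein")
    G.add_node(rna, kind="rna")
    G.add_edge(protein, rna)

rnas = [n for n, d in G.nodes(data=True) if d["kind"] == "rna"]
scaffold_ranking = sorted(rnas, key=G.degree, reverse=True)
print(scaffold_ranking)  # rna_A first: it contacts the most proteins
```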
APA, Harvard, Vancouver, ISO, and other styles
8

Fang, Jianzhong. "Computational approaches to visual object detection." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.416393.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Vartapetiance, Anna. "Computational approaches for verbal deception detection." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/807037/.

Full text
Abstract:
Deception exists in all aspects of life and is particularly evident on the Web. Deception includes child sexual predators grooming victims online, medical news headlines with little medical evidence or scientific rigour, individuals claiming others’ work as their own, and systematic deception of company shareholders and institutional investors leading to corporate collapses. This thesis explores the potential for automatic detection of deception. We investigate the nature of deception and the related cues, focusing in particular on Verbal Cues, and concluding that they cannot be readily generalised. We demonstrate how deception-specific features, based on sound hypotheses, can overcome related limitations by presenting approaches for three different examples of deception – namely Child Sexual Predator Detection (SPD), Authorship Identification (AI) and Intrinsic Plagiarism Detection (IPD). We further show how our approaches result in competitive levels of reliability. For SPD we develop our approach largely based on the commonality of requests for key personal information. To address AI, we introduce approaches based on a frequency-mean-variance and a frequency-only framework in order to detect strong associations between co-occurring patterns of a limited number of stopwords. Our IPD approaches are based on simple commonality of words at document level and usage of proper nouns; document sections lacking commonality can be identified as plagiarised. The frameworks of the International Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse (PAN) competitions provided an independent evaluation of the approaches. The SPD approach obtained an F1 score of 0.48. F1 scores of 0.47, 0.53 and 0.57 were achieved in AI tasks for PAN2012, 2013 and 2014 respectively. IPD yielded an overall accuracy of 91%. Through post-competition adaptations we also show how to improve the approaches and the scores and demonstrate the importance of suitable datasets and how most approaches are not easily transferable between various types of deception.
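
For reference, the F1 scores quoted above combine precision and recall as shown below; the counts in this sketch are invented and chosen only to land near the reported SPD score.

```python
# How an F1 score is computed from true positives, false positives and false
# negatives.  The counts are invented, picked to give roughly the 0.48 reported
# for the SPD task.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=120, fp=130, fn=130), 2))  # ~0.48
```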
APA, Harvard, Vancouver, ISO, and other styles
10

Kaimal, Vivek. "Computational approaches to study microRNA networks." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1298041682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Stetson, Lindsay C. "Computational Approaches for Cancer Precision Medicine." Case Western Reserve University School of Graduate Studies / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=case1428050439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Smith, Kevin J. "Computational approaches to fragment based screening." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/17574/.

Full text
Abstract:
Polarization is an often-neglected term in molecular modelling, and this is particularly the case in docking. However, the growing interest in fragment-based drug design, coupled with the small size of fragments that makes them amenable to quantum mechanical treatment, has created new opportunities for including polarization, anisotropic electrostatics and realistic repulsion potentials in docking. We have shown that polarization implemented as induced charges can offer in the region of a 10-15% improvement in native docking results, as judged by the percentage of poses within a rather tight threshold of 0.5 or 1.0 Å, where accurate prediction of binding interactions is more likely. This is a significant improvement given the quality of current commercial docking programs (such as Glide, used here). This improvement is most apparent when the correct pose is known a priori, so that the extent of polarization is correctly modelled, and scoring is based on force fields that do not scale the electrostatics. The introduction of specific active-site water molecules was shown to have a far greater effect than the polarization, probably because of the introduction of 3 additional full charges, rather than the introduction of smaller charge perturbations. With active-site waters, polarization is more likely to improve the docking when the water molecule is carefully orientated using quantum mechanical/molecular mechanics (QM/MM) methods. The placement of such water molecules is a matter of great current interest; we have shown that the water molecule can be placed with some degree of reliability simply by docking with the ligand present, provided that the water makes good hydrogen bonding interactions (these are the very conditions under which it is desirable to include the specific active-site water). Anisotropic electrostatics and exponential repulsion for rigid fragments were investigated using Orient and compared to QM/MM methods; all methods merited further research. The general hierarchy is that native docking using Glide (with polarization) > QM/MM (with MM polarization) > Orient-based methods. Thus, we expanded the Glide (with polarization) dataset to include more realistic cross-docking experiments on over 5000 structures. RMSD analysis resulted in many examples of clear improvement for including polarization.
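
The docking results above are judged by the fraction of poses within a 0.5 or 1.0 Å RMSD of the reference pose. Below is a minimal sketch of that RMSD calculation, assuming the two poses list the same atoms in the same order; the coordinates are invented and real evaluations may also need alignment or symmetry handling.

```python
# Minimal sketch: heavy-atom RMSD between a docked pose and the reference pose,
# assuming the two coordinate arrays list the same atoms in the same order.
import numpy as np

reference = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]])
docked    = np.array([[0.1, 0.0, 0.0], [1.4, 0.2, 0.0], [1.6, 1.4, 0.1]])

rmsd = np.sqrt(np.mean(np.sum((docked - reference) ** 2, axis=1)))
print(f"RMSD = {rmsd:.2f} Å -> 'good pose' if below the 0.5 or 1.0 Å threshold")
```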
APA, Harvard, Vancouver, ISO, and other styles
13

Khabirova, Eleonora. "Models of neurodegeneration using computational approaches." Thesis, University of Cambridge, 2016. https://www.repository.cam.ac.uk/handle/1810/274157.

Full text
Abstract:
Alzheimer's disease (AD), one of the most common neurodegenerative diseases, is characterized by neuronal dysfunction and death resulting in progressive cognitive impairment. The main histopathological hallmark of AD is the accumulation and deposition of misfolded Aβ peptide as amyloid plaques; however, the precise role of Aβ toxicity in the disease pathogenesis is still unclear. Moreover, at early stages of the disease the important clinical features of the disorder, in addition to memory loss, are the disruption of circadian rhythms and spatial disorientation. In the present work I first studied the role of Aβ toxicity by comparing the findings of genome-wide association studies in sporadic AD with the results of an RNAi screen in a transgenic C. elegans model of Aβ toxicity. The initial finding was that none of the human orthologues of these worm genes are associated with risk for sporadic Alzheimer's disease, indicating that Aβ toxicity in the worm model may not be equivalent to sporadic AD. Nevertheless, comparing the first-degree physical interactors (+1 interactome) of the GWAS and worm screen-derived gene products uncovered 4 worm genes whose +1 interactome overlap with the GWAS genes is larger than one would expect by chance. Three of these genes form a chaperonin complex and the fourth gene codes for actin, a major substrate of the same chaperonin. Next I evaluated the circadian disruptions in AD by developing a new system to simultaneously monitor the oscillations of the peripheral molecular clock and behavioural rhythms in single Drosophila. Experiments were undertaken on wild-type and Aβ-expressing flies. The results indicate that the robustness of the peripheral clock is not correlated with the robustness of the circadian sleep and locomotor behaviours, indicating that the molecular clock does not directly drive behaviour. This is despite period-length correlations that indicate that the underlying molecular mechanisms that generate both molecular and behavioural rhythms are the same. Rhythmicity in Aβ-expressing flies is worse than in controls. I further investigated the mechanism of spatial orientation in Drosophila. It was established that in the absence of visual stimuli the flies use self-motion cues to orientate themselves within the tubes, and that in a Drosophila model of Aβ toxicity this control function is disrupted.
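
The "+1 interactome overlap larger than one would expect by chance" can be quantified with a standard hypergeometric test on the overlap of two gene sets drawn from a common background. The set sizes below are invented, and the test is a generic stand-in rather than necessarily the statistic used in the thesis.

```python
# One standard way to ask whether the overlap of two gene sets is larger than
# expected by chance: a hypergeometric test.  All set sizes below are invented.
from scipy.stats import hypergeom

background = 20000   # genes in the background set
set_gwas   = 300     # +1 interactome of the GWAS genes
set_screen = 150     # +1 interactome of a worm screen hit
overlap    = 12      # genes in both sets

# P(overlap >= observed) under random sampling without replacement
p = hypergeom.sf(overlap - 1, background, set_gwas, set_screen)
print(f"P(overlap >= {overlap}) = {p:.3g}")
```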
APA, Harvard, Vancouver, ISO, and other styles
14

Marchese, Robinson Richard Liam. "Computational approaches to predicting drug induced toxicity." Thesis, University of Cambridge, 2013. https://www.repository.cam.ac.uk/handle/1810/244242.

Full text
Abstract:
Novel approaches and models for predicting drug induced toxicity in silico are presented. Typically, these were based on Quantitative Structure-Activity Relationships (QSAR). The following endpoints were modelled: mutagenicity, carcinogenicity, inhibition of the hERG ion channel and the associated arrhythmia - Torsades de Pointes. A consensus model was developed based on Derek for WindowsTM and Toxtree and used to filter compounds as part of a collaborative effort resulting in the identification of potential starting points for anti-tuberculosis drugs. Based on the careful selection of data from the literature, binary classifiers were generated for the identification of potent hERG inhibitors. These were found to perform competitively with, or better than, those computational approaches previously presented in the literature. Some of these models were generated using Winnow, in conjunction with a novel proposal for encoding molecular structures as required by this algorithm. The Winnow models were found to perform comparably to models generated using the Support Vector Machine and Random Forest algorithms. These studies also emphasised the variability in results which may be obtained when applying the same approaches to different train/test combinations. Novel approaches to combining chemical information with Ultrafast Shape Recognition (USR) descriptors are introduced: Atom Type USR (ATUSR) and a combination between a proposed Atom Type Fingerprint (ATFP) and USR (USR-ATFP). These were applied to the task of predicting protein-ligand interactions - including the prediction of hERG inhibition. Whilst, for some of the datasets considered, either ATUSR or USR-ATFP was found to perform marginally better than all other descriptor sets to which they were compared, most differences were statistically insignificant. Further work is warranted to determine the advantages which ATUSR and USR-ATFP might offer with respect to established descriptor sets. The first attempts to construct QSAR models for Torsades de Pointes using predicted cardiac ion channel inhibitory potencies as descriptors are presented, along with the first evaluation of experimentally determined inhibitory potencies as an alternative, or complement to, standard descriptors. No (clear) evidence was found that 'predicted' ('experimental') 'IC-descriptors' improve performance. However, their value may lie in the greater interpretability they could confer upon the models. Building upon the work presented in the preceding chapters, this thesis ends with specific proposals for future research directions.
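
Below is a minimal sketch of the kind of binary classifier compared in this work, using scikit-learn's Random Forest on fingerprint-like binary descriptors. The data are random placeholders with a planted toy rule, not the curated hERG literature set used in the thesis.

```python
# Minimal sketch of a binary hERG-style classifier on fingerprint-like
# descriptors, using one of the algorithms compared in the thesis (Random
# Forest).  Data and labelling rule are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(400, 128))            # 128-bit structural fingerprints
y = (X[:, :3].sum(axis=1) >= 2).astype(int)        # toy "potent inhibitor" rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MCC on held-out compounds:", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```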
APA, Harvard, Vancouver, ISO, and other styles
15

Slade, Jr Wayne Homer. "Computational Intelligence Approaches to Ocean Color Inversion." Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/SladeWH2004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Andronescu, Mirela Stefania. "Computational approaches for RNA energy parameter estimation." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2794.

Full text
Abstract:
RNA molecules play important roles, including catalysis of chemical reactions and control of gene expression, and their functions largely depend on their folded structures. Since determining these structures by biochemical means is expensive, there is increased demand for computational predictions of RNA structures. One computational approach is to find the secondary structure (a set of base pairs) that minimizes a free energy function for a given RNA conformation. The forces driving RNA folding can be approximated by means of a free energy model, which associates a free energy parameter to a distinct considered feature. The main goal of this thesis is to develop state-of-the-art computational approaches that can significantly increase the accuracy (i.e., maximize the number of correctly predicted base pairs) of RNA secondary structure prediction methods, by improving and refining the parameters of the underlying RNA free energy model. We propose two general approaches to estimate RNA free energy parameters. The Constraint Generation (CG) approach is based on iteratively generating constraints that enforce known structures to have energies lower than other structures for the same molecule. The Boltzmann Likelihood (BL) approach infers a set of RNA free energy parameters which maximize the conditional likelihood of a set of known RNA structures. We discuss several variants and extensions of these two approaches, including a linear Gaussian Bayesian network that defines relationships between features. Overall, BL gives slightly better results than CG, but it is over ten times more expensive to run. In addition, CG requires software that is much simpler to implement. We obtain significant improvements in the accuracy of RNA minimum free energy secondary structure prediction with and without pseudoknots (regions of non-nested base pairs), when measured on large sets of RNA molecules with known structures. For the Turner model, which has been the gold-standard model without pseudoknots for more than a decade, the average prediction accuracy of our new parameters increases from 60% to 71%. For two models with pseudoknots, we obtain an increase of 9% and 6%, respectively. To the best of our knowledge, our parameters are currently state-of-the-art for the three considered models.
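
A toy illustration of the Constraint Generation idea, assuming the free energy is linear in its parameters, E(y) = theta . c(y), with c(y) a vector of feature counts: each known structure must have lower energy than competing structures, which yields linear constraints on theta that can be checked with a small linear program. The feature counts and margin below are invented, and the real method generates violated constraints iteratively from predicted structures.

```python
# Toy illustration of Constraint Generation for energy parameter estimation:
# require the reference structure's energy to be lower (by a margin) than that
# of competing structures, giving linear constraints on the parameters theta.
import numpy as np
from scipy.optimize import linprog

margin = 0.5
c_known = np.array([4.0, 2.0, 1.0])        # feature counts of the reference structure
c_alts = np.array([[3.0, 3.0, 1.0],        # feature counts of competing structures
                   [5.0, 1.0, 2.0]])

# theta.c_known + margin <= theta.c_alt  <=>  theta.(c_known - c_alt) <= -margin
A_ub = c_known - c_alts
b_ub = np.full(len(c_alts), -margin)

res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=[(-5, 5)] * 3)
print("feasible parameters:", res.x)
```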
APA, Harvard, Vancouver, ISO, and other styles
17

Clegg, Andrew Brian. "Computational-linguistic approaches to biological text mining." Thesis, Birkbeck (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Han, Ningren. "Computational and statistical approaches to optical spectroscopy." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120432.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 223-236).
Compact and smart optical sensors have had a major impact on people's lives over the last decade. Although the spatial information provided by optical imaging systems has already had a major impact, there is untapped potential in the spectroscopic domain. By transforming molecular information into wavelength-domain data, optical spectroscopy techniques have become some of the most popular scientific tools for examining the composition and nature of materials and chemicals in a non-destructive and non-intrusive manner. However, unlike imaging, spectroscopic techniques have not achieved the same level of penetration due to multiple challenges. These challenges have ranged from a lack of sensitive, miniaturized, and low-cost systems, to the general reliance on domain-specific expertise for interpreting complex spectral signals. In this thesis, we aim to address some of these challenges by combining modern computational and statistical techniques with physical domain knowledge. In particular, we focus on three aspects where computational or statistical knowledge has either enabled the realization of a new instrument (with a compact form factor yet still maintaining competitive performance) or deepened statistical insights into analyte detection and quantification in highly mixed or heterogeneous environments. In the first part, we utilize the non-paraxial Talbot effect to build compact and high-performance spectrometers and wave meters that use computational processing for spectral information retrieval without the need for a full-spectrum calibration process. In the second part, we develop an analyte quantification algorithm for Raman spectroscopy based on spectral shaping modeling. It uses a hierarchical Bayesian inference model and reversible-jump Markov chain Monte Carlo (RJMCMC) computation with a minimum training sample size requirement. In the last part, we numerically investigate the spectral characteristics and signal requirements for universal and predictive non-invasive glucose estimation with Raman spectroscopy, using an in vivo skin Raman spectroscopy dataset. These results provide valuable advancements and insights in bringing forth smart compact optical spectroscopic solutions to real-world applications.
by Ningren Han.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
19

Sorathiya, Anilkumar. "Computational modelling approaches to HIV-1 dynamics." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609918.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Scoggins, Randy Keith. "SIGNAL PROCESSING APPROACHES FOR APPEARANCE MATCHING." MSSTATE, 2003. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04162003-143335/.

Full text
Abstract:
The motivation for this work is to study methods of estimating appropriate level-of-detail (LoD) object models by quantifying appearance errors prior to image synthesis. Visualization systems have been developed that employ LoD objects; however, the criteria are often based on heuristics that restrict the form of the object model and rendering method. Also, object illumination is not considered in the LoD selection. This dissertation proposes an image-based scene learning pre-process to determine appropriate LoD for each object in a scene. Scene learning employs sample images of an object, from many views and with a range of geometric representations, to produce a profile of the LoD image error as a function of viewing distance. Signal processing techniques are employed to quantify how images change with respect to object model resolution, viewing distance, and lighting direction. A frequency-space analysis is presented which includes use of the vision system's contrast sensitivity to evaluate perceptible image differences with error metrics. The initial development of scene learning is directed to sampling the object's appearance as a function of viewing distance and object geometry in scene space. A second phase allows local lighting to be incorporated in the scene learning pre-process. Two methods for re-lighting are presented that differ in accuracy and overhead; both allow properties of an object's image to be computed without rendering. In summary, full-resolution objects produce the best image since the 3D scene is as real as possible. A less realistic 3D scene with simpler objects produces a different appearance in an image, but by what amount? My thesis is that such a quantification can be had: namely, that object fidelity in the 3D scene can be loosened further than has previously been shown without introducing significant appearance change in an object, and that the relationship between 3D object realism and appearance can be expressed quantitatively.
APA, Harvard, Vancouver, ISO, and other styles
21

Andersen, Malin. "Computational and experimental approaches to regulatory genetic variation." Doctoral thesis, Stockholm : Bioteknologi, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Vaz, Junior M. "Computational approaches to simulation of metal cutting processes." Thesis, Swansea University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639305.

Full text
Abstract:
The purpose of this thesis is to develop numerical techniques to model and analyse metal forming operations involving material removal and ductile fracture. Due to the diversity and complexity of the physical phenomena involved, several different computational aspects of the problem have been addressed, such as: computational strategies for general thermo-mechanical coupled problems accounting for heat generation due to plastic and frictional work, thermal contact, thermal strains and temperature-dependent properties; ductile fracture criteria for damaged and conventional J2 elasto-plastic materials; and transfer operators for thermo-mechanical coupled problems and error estimates for damaged and conventional J2 elasto-plastic materials. The above techniques made possible studies on the following subjects: application of ductile fracture concepts to material separation in incipient chip formation and blanking; and application of error estimates and re-meshing procedures to high-speed machining. The techniques developed in this thesis provide useful computational tools for the analysis of the phenomena involved in chip formation processes and constitute an advance with respect to the numerical simulation of orthogonal machining.
APA, Harvard, Vancouver, ISO, and other styles
23

Measures, Kathryn M. "Computational approaches to studying protein structures and reactivity." Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.321374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Todorov, Ilian Todorov. "Computational approaches to disordered compounds and solid solutions." Thesis, University of Bristol, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268776.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Diaz, Artiles Ana. "Exercise under artificial gravity - experimental and computational approaches." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98799.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 185-191).
Humans experience strong physiological deconditioning during space missions, primarily due to the weightlessness conditions. Some of these adverse consequences include bone loss, muscle atrophy, sensory-motor deconditioning, and cardiovascular adaptation, which may lead to orthostatic intolerance when astronauts are back on Earth. In order to mitigate the negative effects of weightlessness, several countermeasures are currently in place, particularly very intensive exercise protocols. However, despite these countermeasures, astronaut physiological deconditioning persists, highlighting the need for new approaches to maintain the astronauts' physiological state within acceptable limits. Artificial gravity has long been suggested as a comprehensive countermeasure that is capable of challenging all the physiological systems at the same time, therefore maintaining overall health during extended weightlessness. Ground studies have shown that intermittent artificial gravity using a short-radius centrifuge combined with ergometer exercise is effective in preventing cardiovascular and musculoskeletal deconditioning. However, these studies have been done in very different conditions, and confounding factors between the studies (including centrifuge configuration, exposure time, gravity level, gravity gradient, and use/intensity of exercise) make it very difficult to draw clear conclusions about the stimuli needed to maintain physiological conditioning in space. The first objective of this research effort is to analyze the effects of different artificial gravity levels and ergometer exercise workload on musculoskeletal and cardiovascular functions, motion sickness and comfort. Human experiments are conducted using a new configuration of the MIT Compact Radius Centrifuge, which has been constrained to a radius of 1.4 meters, the upper radial limit to fit within an ISS module without extensive structural alterations. The second objective is to develop a computational model of the cardiovascular system to gain a better understanding of the effects of exercise under a high gravity gradient on the cardiovascular system. The gravity gradient generated when using a short-radius centrifuge has not previously been investigated in detail. The model is validated with the experimental measurements from the MIT CRC. Then, the model is used to explore the cardiovascular responses to new centrifuge configurations and from 0g adapted subjects.
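
The gravity gradient discussed above follows directly from a = ω²r on a short-radius centrifuge. The sketch below uses the 1.4 m radius quoted in the abstract; the 1 g target at the feet and the subject height are assumptions made only for illustration.

```python
# Artificial gravity on a short-radius centrifuge is a = w^2 * r, so it varies
# strongly along the body.  The 1.4 m radius is from the abstract; the 1 g
# target at the feet and the subject height are illustrative assumptions.
import numpy as np

g0 = 9.81
r_feet = 1.4                         # MIT Compact Radius Centrifuge radius (m)
height = 1.0                         # distance feet -> head along the radius, assumed (m)

omega = np.sqrt(g0 / r_feet)         # spin rate giving 1 g at the feet (rad/s)
rpm = omega * 60 / (2 * np.pi)

r = np.linspace(r_feet - height, r_feet, 5)          # head ... feet
g_levels = omega**2 * r / g0
print(f"{rpm:.1f} rpm; g-level head->feet:", np.round(g_levels, 2))
```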
by Ana Diaz Artiles.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
26

Puvanendrampillai, Dushyanthan. "Novel computational approaches to understanding protein-ligand interactions." Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Mirina, Alexandra. "Computational approaches for intelligent processing of biomedical data." Thesis, Yeshiva University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3664552.

Full text
Abstract:

The rapid development of novel experimental techniques has led to the generation of an abundance of biological data, which holds great potential for elucidating many scientific problems. The analysis of such complex heterogeneous information, which we often have to deal with, requires appropriate state-of-the-art analytical methods. Here we demonstrate how an unconventional approach and intelligent data processing can lead to meaningful results.

This work includes three major parts. In the first part we describe a correction methodology for genome-wide association studies (GWAS). We demonstrate the existing bias for the selection of larger genes for downstream analyses in GWA studies and propose a method to adjust for this bias. Thus, we effectively show the need for data preprocessing in order to obtain a biologically relevant result. In the second part, building on the results obtained in the first part, we attempt to elucidate the underlying mechanisms of aging and longevity by conducting a longevity GWAS. Here we took an unconventional approach to the GWAS analysis by applying the idea of genetic buffering. Doing this allowed us to identify pairs of genetic markers that play a role in longevity. Furthermore, we were able to confirm some of them by means of a downstream network analysis. In the third and final part, we discuss the characteristics of chronic lymphocytic leukemia (CLL) B-cells and perform clustering analysis based on immunoglobulin (Ig) mutation patterns. By comparing the sequences of Ig of CLL patients and healthy donors, we show that different Ig heavy chain (IGHV) regions in CLL exhibit similarities with different B-cell subtypes of healthy donors, which raised a question about the single origin of CLL cases.

APA, Harvard, Vancouver, ISO, and other styles
28

Mehio, Wissam. "Computational approaches for identifying inhibitors of protein interactions." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/9503.

Full text
Abstract:
Inter-molecular interaction is at the heart of biological function. Proteins can interact with ligands, peptides, small molecules, and other proteins to serve their structural or functional purpose. With advances in combinatorial chemistry and the development of high throughput binding assays, the available inter-molecular interaction data is increasing exponentially. As the space of testable compounds increases, the complexity and cost of finding a suitable inhibitor for a protein interaction increases. Computational drug discovery plays an important role in minimizing the time and cost needed to study the space of testable compounds. This work focuses on the usage of various computational methods in identifying protein interaction inhibitors and demonstrates the ability of computational drug discovery to contribute to the ever-growing field of molecular interaction. A program to predict the location of binding surfaces on proteins, STP (Mehio et al., Bioinformatics, 2010, in press), has been created based on calculating the propensity of triplet-patterns of surface protein atoms that occur in binding sites. The use of STP in predicting ligand binding sites, allosteric binding sites, enzyme classification numbers, and binding details in multi-unit complexes is demonstrated. STP has been integrated into the in-house high throughput drug discovery pipeline, allowing the identification of inhibitors for proteins whose binding sites are unknown. Another computational paradigm is introduced, creating a virtual library of β-turn peptidomimetics, designed to mimic the interaction of the Baff-Receptor (Baff-R) with the B-Lymphocyte Stimulator (Blys). LIDAEUS (Taylor, et al., Br J Pharmacol, 2008; 153, p. S55-S67) is used to identify chemical groups with favorable binding to Blys. Natural and non-natural sidechains are then used to create a library of synthesizable cyclic hexapeptides that would mimic the Blys:Baff-R interaction. Finally, this work demonstrates the usage and synergy of various in-house computational resources in drug discovery. The ProPep database is a repository used to study trends, motifs, residue pairing frequencies, and amino-acid enrichment propensities in protein-peptide interaction. The LHRLL protein-peptide interaction motif is identified and used with UFSRAT (S. Shave, PhD Thesis, University of Edinburgh, 2010) to conduct ligand-based virtual screening and generate a list of possible antagonists from the EDULISS (K. Hsin, PhD Thesis, University of Edinburgh, 2010) compound repository. A high throughput version of AutoDock (Morris, et al., J Comput Chem, 1998; 19, p. 1639-62) was adapted and used for precision virtual screening of these molecules, resulting in a list of compounds that are likely to inhibit the binding of this motif to several Nuclear Receptors.
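
A minimal sketch of a propensity score of the kind STP is built on: the frequency of a surface pattern within known binding sites relative to its frequency over all protein surfaces. The pattern names and counts below are invented and do not reflect STP's actual atom-triplet definitions.

```python
# Minimal sketch of a propensity score: frequency of a surface pattern inside
# known binding sites divided by its frequency over all surfaces.  Patterns and
# counts are invented; STP's real atom-triplet definitions differ.
import math
from collections import Counter

site_patterns    = Counter({"CNO": 30, "CCO": 15, "CCC": 5})    # counts in binding sites
surface_patterns = Counter({"CNO": 60, "CCO": 90, "CCC": 50})   # counts on all surfaces

n_site, n_surf = sum(site_patterns.values()), sum(surface_patterns.values())
propensity = {
    p: math.log((site_patterns[p] / n_site) / (surface_patterns[p] / n_surf))
    for p in surface_patterns
}
for p, score in sorted(propensity.items(), key=lambda kv: -kv[1]):
    print(p, round(score, 2))   # positive = enriched in binding sites
```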
APA, Harvard, Vancouver, ISO, and other styles
29

Cao, Yi. "Computational approaches for detecting manipulations in capital markets." Thesis, Ulster University, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.675472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Bajuri, Mohd Nazri Bin. "Mechanobiological analyses of healing tendons using computational approaches." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:c6daa0b7-4875-4056-b05e-c35097988b72.

Full text
Abstract:
The healing process of ruptured tendons is problematic due to scar tissue formation and deteriorated material properties. In some cases, it may take nearly a year to complete. Mechanical loading has been shown to positively influence tendon healing; however, the mechanisms remain unclear. Computational mechanobiology methods employed extensively to model bone healing have achieved high fidelity, but have not yet been explored to understand tendon regeneration. The general objective of this thesis is to develop computational approaches to enhance the knowledge of the role that mechanical factors play in fibre re-organisation in healing tendons, by proposing an appropriate constitutive formulation, followed by analysing the mechano-adaptation of the models created when regulated by different biophysical stimuli. An established hyperelastic fibre-reinforced continuum model introduced by Gasser, Ogden and Holzapfel (GOH) was fitted to experimental tensile-testing data of rat Achilles tendons at four timepoints during tendon repair, achieving excellent fits (0.9903 < R2 < 0.9986). A parametric sensitivity study using a three-level central composite design, which is a fractional factorial design method, showed that the collagen-fibre-related parameters in the GOH model had almost equal influence on the fitting. The mechano-adaptation of the healing tendons when regulated by axial and principal strain predicted fibre re-organisation comparable to experimental findings, in contrast to models regulated by deviatoric strain. Also, mechano-adaptive models regulated by deviatoric strain were more spatially and temporally sensitive to different boundary conditions (length and loading magnitudes) than those regulated by axial and principal strain. This thesis concludes that a hyperelastic fibre-reinforced mechano-adaptive model regulated by axial or principal strain is generally capable of describing the mechanobiological behaviours of healing tendons, and that further experiments should focus on establishing the localised structural and material parameters of collagen fibres and their mechano-adaptive behaviours in the healing tissue.
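
As a simplified stand-in for the curve-fitting step described above, the sketch below fits a one-dimensional exponential stress-strain law (typical of collagenous tissue) to synthetic tensile data and reports R². The data and the functional form are placeholders, not the GOH strain-energy function used in the thesis.

```python
# Simplified stand-in for the curve fitting step: fit a 1-D exponential
# stress-strain law to synthetic tensile data and report R^2.  The data and the
# functional form are placeholders, not the full GOH model.
import numpy as np
from scipy.optimize import curve_fit

def stress(strain, a, b):
    return a * (np.exp(b * strain) - 1.0)

strain = np.linspace(0.0, 0.10, 11)                       # synthetic tensile test
data = stress(strain, 2.0, 18.0) + np.random.default_rng(1).normal(0, 0.1, strain.size)

params, _ = curve_fit(stress, strain, data, p0=(1.0, 10.0))
residuals = data - stress(strain, *params)
r2 = 1.0 - np.sum(residuals**2) / np.sum((data - data.mean())**2)
print("fitted (a, b):", np.round(params, 2), " R^2 =", round(r2, 4))
```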
APA, Harvard, Vancouver, ISO, and other styles
31

Yang, Guoli. "Learning in adaptive networks : analytical and computational approaches." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20956.

Full text
Abstract:
The dynamics on networks and the dynamics of networks are usually entangled with each other in many highly connected systems, where the former means the evolution of state and the latter means the adaptation of structure. In this thesis, we will study the coupled dynamics through analytical and computational approaches, where the adaptive networks are driven by learning of various complexities. Firstly, we investigate information diffusion on networks through an adaptive voter model, where two opinions are competing for the dominance. Two types of dynamics facilitate the agreement between neighbours: one is pairwise imitation and the other is link rewiring. As the rewiring strength increases, the network of voters will transform from consensus to fragmentation. By exploring various strategies for structure adaptation and state evolution, our results suggest that network configuration is highly influenced by range-based rewiring and biased imitation. In particular, some approximation techniques are proposed to capture the dynamics analytically through moment-closure differential equations. Secondly, we study an evolutionary model under the framework of natural selection. In a structured community made up of cooperators and cheaters (or defectors), a new-born player will adopt a strategy and reorganise its neighbourhood based on social inheritance. Starting from a cooperative population, an invading cheater may spread in the population occasionally leading to the collapse of cooperation. Such a collapse unfolds rapidly with the change of external conditions, bearing the traits of a critical transition. In order to detect the risk of invasions, some indicators based on population composition and network structure are proposed to signal the fragility of communities. Through the analyses of consistency and accuracy, our results suggest possible avenues for detecting the loss of cooperation in evolving networks. Lastly, we incorporate distributed learning into adaptive agents coordination, which emerges as a consequence of rational individual behaviours. A generic framework of work-learn-adapt (WLA) is proposed to foster the success of agents organisation. To gain higher organisation performance, the division of labour is achieved by a series of events of state evolution and structure adaptation. Importantly, agents are able to adjust their states and structures through quantitative information obtained from distributed learning. The adaptive networks driven by explicit learning pave the way for a better understanding of intelligent organisations in real world.
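
A minimal simulation of the adaptive voter model described above: repeatedly pick an edge whose endpoints disagree, rewire it to a like-minded node with probability p, otherwise let one endpoint imitate the other. The network size, rewiring probability and stopping rule are illustrative choices, not the thesis's exact protocol.

```python
# Minimal adaptive voter model: on each step pick a discordant edge; with
# probability p_rewire the first endpoint rewires to a like-minded node,
# otherwise it imitates its neighbour.  Sizes and p_rewire are illustrative.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(100, 0.05, seed=0)
opinion = {n: random.choice(["A", "B"]) for n in G}
p_rewire = 0.4

for _ in range(20000):
    discordant = [(u, v) for u, v in G.edges if opinion[u] != opinion[v]]
    if not discordant:
        break                              # consensus or fragmentation reached
    u, v = random.choice(discordant)
    if random.random() < p_rewire:
        same = [w for w in G if w != u and opinion[w] == opinion[u] and not G.has_edge(u, w)]
        if same:
            G.remove_edge(u, v)
            G.add_edge(u, random.choice(same))
    else:
        opinion[u] = opinion[v]            # pairwise imitation

sizes = [len(c) for c in nx.connected_components(G)]
print("components:", sizes, " opinions:", list(opinion.values()).count("A"), "A")
```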
APA, Harvard, Vancouver, ISO, and other styles
32

Zhou, Mengshi. "Integrated Computational Drug Discovery Approaches for Neuropsychiatric Disorders." Case Western Reserve University School of Graduate Studies / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=case1595622079403654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Mapiye, Darlington Shingirirai. "Computational genomics approaches for kidney diseases in Africa." University of the Western Cape, 2015. http://hdl.handle.net/11394/4958.

Full text
Abstract:
Philosophiae Doctor - PhD
End-stage renal disease (ESRD), a more severe form of kidney disease, is considered to be a complex trait that may involve multiple processes which work together on a background of significant genetic susceptibility. Black Africans have been shown to bear an unequal burden of this disease compared to white Europeans, Americans and Caucasians. Despite this, most of the genetic and epidemiological advances made in understanding the aetiology of kidney diseases have been made in other populations outside of sub-Saharan Africa (SSA). Very little research has been undertaken to investigate key genetic factors that drive ESRD in Africans compared to patients from the rest of the world. Therefore, the primary aim of this Bioinformatics thesis was twofold: firstly, to develop and apply a whole exome sequencing (WES) analysis pipeline and use it to understand a genetic mechanism underlying ESRD in a South African population of mixed ancestry; I hypothesized that the pipeline would enable the discovery of highly penetrant rare variants with large effect sizes, which are expected to explain an important fraction of the genetic aetiology and pathogenesis of ESRD in these African patients. Secondly, the aim was to develop and set up a multicenter clinical database that would capture a plethora of clinical data for patients with Lupus, one of the risk factors of ESRD. From WES of six family members (five cases and one control), a total of 23 196 SNVs, 1445 insertions and 1340 deletions overlapped amongst all affected family members. The variants were consistent with an autosomal dominant inheritance pattern inferred in this family. Of these, only 1550 SNVs, 67 insertions and 112 deletions were present in all affected family members but absent in the unaffected family member. Following detailed evaluation of evidence for variant implication and pathogenicity, only 3 very rare heterozygous missense variants in 3 genes, COL4A1 [p.R476W], ICAM1 [p.P352L] and COL16A1 [p.T116M], were considered potentially disease causing. Computational relatedness analysis revealed the approximate amount of DNA shared by family members and confirmed the reported relatedness. Genotyping of the Y chromosome was additionally performed to assist in confirming sample identity. The clinical database has been designed and is being piloted at Groote Schuur Hospital at the University of Cape Town. Currently, about 290 patients have already been entered into the registry. The resources and methodologies developed in this thesis have the potential to contribute not only to the understanding of ESRD and its risk factors, but also to the successful application of WES in clinical practice. Importantly, this work contributes significant information on the genetics of ESRD based on an African family and will also improve scientific infrastructure on the African continent. Clinical databasing will go a long way to enable clinicians to collect and store standardised clinical data for their patients.
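
The family-based filtering step described above (variants carried by every affected member but absent from the unaffected member) reduces to simple set operations per variant. The variant identifiers below are hypothetical stand-ins for chrom:pos:ref:alt calls from a WES pipeline.

```python
# The core family filter from the abstract: keep variants present in every
# affected member and absent from the unaffected member.  Variant IDs are
# hypothetical stand-ins for chrom:pos:ref:alt calls.
affected = [
    {"1:1000:A:G", "7:2500:C:T", "13:110800:C:T"},
    {"1:1000:A:G", "13:110800:C:T", "19:10300:C:T"},
    {"1:1000:A:G", "13:110800:C:T", "19:10300:C:T", "2:999:G:A"},
]
unaffected = {"1:1000:A:G", "5:400:T:C"}

shared = set.intersection(*affected)
candidates = shared - unaffected
print(candidates)   # {'13:110800:C:T'} -> carried by all cases, absent in the control
```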
APA, Harvard, Vancouver, ISO, and other styles
34

Sohn, Michael B. "Novel Computational and Statistical Approaches in Metagenomic Studies." Diss., The University of Arizona, 2015. http://hdl.handle.net/10150/556866.

Full text
Abstract:
Metagenomics has great potential to discover previously unattainable information about microbial communities. The simplest, but extremely powerful, approach for studying the characteristics of a microbial community is the analysis of differential abundance, which tries to identify differentially abundant features (e.g. species or genes) across different communities. For instance, detection of differentially abundant microbes across healthy and diseased groups can enable us to identify potential pathogens or probiotics. However, the analysis of differential abundance could mislead us about the characteristics of microbial communities if the counts or abundances of features on different scales are not properly normalized within and between communities. An important prerequisite for the analysis of differential abundance is to accurately estimate the composition of microbial communities, which is commonly known as the analysis of taxonomic composition. Most prevalent approaches rely solely on the results of an alignment tool such as BLAST, limiting their estimation accuracy to high ranks of the taxonomy tree. In this study, two novel methods are developed: one for the analysis of taxonomic composition, called Taxonomic Analysis by Elimination and Correction (TAEC), and the other for the analysis of differential abundance, called Ratio Approach for Identifying Differential Abundance (RAIDA). TAEC utilizes the alignment similarity between known genomes in addition to the similarity between query sequences and sequences of known genomes. It is comprehensively tested on simulated datasets of diverse bacterial community complexity. Compared with other available methods designed for estimating taxonomic composition at a relatively low taxonomic rank, TAEC demonstrates greater accuracy in estimating the abundance of bacteria in a given microbial sample. RAIDA utilizes an invariant property of the ratio between the abundances of features: a ratio between the relative abundances of two features is the same as the ratio between their absolute abundances. In comprehensive simulation studies, the performance of RAIDA is consistently powerful, and in some situations it greatly surpasses other existing methods for the analysis of differential abundance in metagenomic studies.
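
The ratio idea behind RAIDA can be sketched as follows: unknown per-sample scaling cancels in ratios between features, so log-ratios against a reference feature can be compared across groups directly. The counts below are invented, and the plain t-test is a simplification of the actual RAIDA model.

```python
# The ratio idea behind RAIDA: per-sample scaling cancels in ratios between
# features, so log-ratios against a reference feature can be compared across
# groups.  Counts are invented; the real RAIDA model is more involved.
import numpy as np
from scipy.stats import ttest_ind

# rows = features, columns = samples (relative abundances on arbitrary scales)
group1 = np.array([[10, 20, 15],     # feature of interest
                   [50, 100, 80]])   # reference feature
group2 = np.array([[40, 90, 60],
                   [50, 110, 70]])

logratio1 = np.log(group1[0] / group1[1])
logratio2 = np.log(group2[0] / group2[1])
stat, pval = ttest_ind(logratio1, logratio2)
print(f"log-ratio shift p-value = {pval:.3f}")
```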
APA, Harvard, Vancouver, ISO, and other styles
35

Wallace, Ian Patrick. "Improved computational approaches to classical electric energy problems." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28922.

Full text
Abstract:
This thesis considers three separate but connected problems regarding energy networks: the load flow problem, the optimal power flow problem, and the islanding problem. All three problems are non-convex nonlinear problems, and so have the potential of returning local solutions. The goal of this thesis is to find solution methods for each of these problems that will minimize the chances of returning a local solution. The thesis first considers the load flow problem and looks into a novel approach to solving load flows, the Holomorphic Embedding Load Flow Method (HELM). The current literature does not provide any HELM models that can accurately handle general power networks of realistic sizes containing PV and PQ buses. This thesis expands upon previous work to present models of HELM capable of solving general networks efficiently, with computational results for the standard IEEE test cases provided for comparison. The thesis next considers the optimal power flow problem, and creates a framework for a load-flow-based OPF solver. The OPF solver is designed with incorporating HELM as the load flow solver in mind, and is tested on IEEE test cases to compare it with other available OPF solvers. The OPF solvers are also tested with modified test cases known to have local solutions, to show how a LF-OPF solver using HELM is more likely to find the global optimal solution than the other available OPF solvers. The thesis finally investigates solving a full AC-islanding problem, which can be considered an extension of the transmission switching problem, using a standard MINLP solver and comparing the results to solutions obtained from approximations to the AC problem. After analysing the results of the AC-islanding problem in detail, alterations are made to the standard MINLP solver to allow better results to be obtained, all the while considering the trade-off between results and elapsed time.
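
For orientation, the sketch below shows what a load flow solves on the smallest possible example: a slack bus feeding one PQ load through a line impedance, solved here by a plain Gauss-type fixed-point iteration. This is not HELM; it only illustrates the nonlinear equations that HELM embeds holomorphically, and the impedance and load values are illustrative.

```python
# Smallest possible load flow: a slack bus at 1.0 pu feeds one PQ load through
# an impedance.  Solved by a plain Gauss-type fixed-point iteration, not HELM.
import cmath

z = 0.01 + 0.05j          # line impedance (pu)
s_load = 0.8 + 0.3j       # complex power drawn at the PQ bus (pu)

v = 1.0 + 0.0j            # initial guess for the PQ bus voltage
for _ in range(50):
    # power balance at the PQ bus: (1 - v)/z = conj(s_load / v)
    v = 1.0 - z * (s_load / v).conjugate()

print(f"|V| = {abs(v):.4f} pu, angle = {cmath.phase(v):.4f} rad")
```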
APA, Harvard, Vancouver, ISO, and other styles
36

Li, Hao. "On Wave Based Computational Approaches For Heterogeneous Media." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN001/document.

Full text
Abstract:
This thesis focuses on the development of computational strategies for solving mid-frequency Helmholtz problems in heterogeneous media. It builds on the Variational Theory of Complex Rays (VTCR) and enriches the function space the method uses with Airy functions, when the square of the wave number of the medium varies linearly. It also addresses a generalisation of the prediction of the solution for media whose wave number varies in any other way. To this end, zero-order and first-order approximations are defined, which satisfy the governing equations locally in a certain averaged sense over the computational subdomains. Several theoretical demonstrations of the performance of the method are carried out, and several numerical examples illustrate the results. The complexity chosen for these examples shows that the selected approach makes it possible to predict the vibrational behaviour of complex problems, such as the oscillatory regime of waves in a harbour. They also show that it is entirely feasible to mix the computational strategies developed here with classically used ones, such as the finite element method, in order to build computational strategies usable for low and mid frequencies at the same time.
This thesis develops numerical approaches to solve the mid-frequency heterogeneous Helmholtz problem. When the square of the wave number varies linearly in the medium, an extended Variational Theory of Complex Rays (VTCR) is considered, with shape functions, namely Airy wave functions, that satisfy the governing equation. A general way to handle heterogeneous media with the Weak Trefftz Discontinuous Galerkin (WTDG) method is then proposed, with no a priori restriction on the wave number. A general approximated solution of the governing equation is developed locally, the gradient of the wave number being the small parameter. In this way, zero-order and first-order approximations are defined, namely the Zero Order WTDG and the First Order WTDG. Their shape functions only satisfy the local governing equation in an average sense. Theoretical demonstrations and academic examples of the approaches are addressed. The extended VTCR and the WTDG are then both applied to solve a harbour agitation problem. Finally, a FEM/WAVE WTDG is further developed to achieve a mixed use of the Finite Element Method (FEM) approximation and the wave approximation in the same subdomains, at the same time, for a frequency bandwidth including both low and mid frequencies.
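The role of the Airy functions can be seen in one dimension: when the squared wave number varies linearly, Ai solves the Helmholtz-type equation exactly. The sketch below, with arbitrary coefficients unrelated to the thesis's test cases, checks this numerically:

```python
import numpy as np
from scipy.special import airy

# Illustrative 1D check (not the thesis's VTCR/WTDG code): when the squared
# wave number varies linearly, k2(x) = a*x + b, Airy functions solve
# u'' + k2(x) u = 0.  The coefficients a and b are arbitrary assumptions.
a, b = 40.0, 5.0
x = np.linspace(0.0, 1.0, 2001)
t = -(a * x + b) / a**(2.0 / 3.0)     # change of variable mapping to Airy's equation
u = airy(t)[0]                        # Ai(t) evaluated on the grid

# Finite-difference check of the residual u'' + (a x + b) u, which should be ~0
h = x[1] - x[0]
residual = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 + (a * x[1:-1] + b) * u[1:-1]
print("max residual:", np.abs(residual).max(), " max |u|:", np.abs(u).max())
```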
APA, Harvard, Vancouver, ISO, and other styles
37

Parra, Farré Genís. "Computational identification of genes: ab initio and comparative approaches." Doctoral thesis, Universitat Pompeu Fabra, 2004. http://hdl.handle.net/10803/7082.

Full text
Abstract:
The work presented here studies the recognition of the signals that delimit and define protein-coding genes, as well as their applicability in gene prediction programs. The thesis also explores the use of comparative genomics to improve gene identification in different species simultaneously. The development of two computational gene prediction programs, geneid and sgp2, is also described. The geneid program identifies the genes encoded in an anonymous DNA sequence based on their intrinsic properties (mainly splicing signals and differential codon usage). sgp2 makes it possible to use the comparison between two genomes, which must lie at a certain optimal evolutionary distance, to improve gene prediction, under the hypothesis that coding regions are more conserved than regions that do not code for proteins.
The motivation of this thesis is to give some insight into how genes are encoded and recognized by the cell machinery, and to use this information to find genes in unannotated genomic sequences. One of the objectives is the development of tools to identify eukaryotic genes through the modeling and recognition of their intrinsic signals and properties. This thesis also addresses another problem: how the sequences of related genomes can contribute to the identification of genes. The value of comparative genomics is illustrated by the sequencing of the mouse genome for the purpose of annotating the human genome. Comparative gene prediction programs exploit these data under the assumption that regions conserved between related species correspond to functional regions (coding genes among them). Thus, this thesis also describes a gene prediction program that combines ab initio gene prediction with comparative information between two genomes to improve the accuracy of the predictions.
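Gene finders such as geneid score intrinsic signals like splice sites; a common way to do so is with a position weight matrix (PWM). The sketch below is purely illustrative: the training sites are invented, and real matrices are estimated from large collections of annotated donor sites.

```python
import numpy as np

# Illustrative position-weight-matrix (PWM) scoring of candidate donor splice
# sites, the kind of intrinsic signal ab initio gene finders rely on.
# The training sites below are invented toy examples.
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
training_sites = ["CAGGTAAGT", "AAGGTGAGT", "CAGGTAAGG", "TTGGTAAGA"]

counts = np.ones((4, len(training_sites[0])))       # pseudocounts avoid log(0)
for site in training_sites:
    for pos, base in enumerate(site):
        counts[BASES[base], pos] += 1
pwm = np.log2(counts / counts.sum(axis=0) / 0.25)   # log-odds vs. uniform background

def score(site: str) -> float:
    """Log-odds score of a 9-mer centred on a putative GT donor site."""
    return sum(pwm[BASES[b], i] for i, b in enumerate(site))

print(score("CAGGTAAGT"))   # consensus-like site scores high
print(score("CTACCTTCA"))   # unrelated 9-mer scores low
```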
APA, Harvard, Vancouver, ISO, and other styles
38

Carl, Merlin [Verfasser]. "Alternative Finestructural and Computational Approaches to Constructibility / Merlin Carl." Bonn : Universitäts- und Landesbibliothek Bonn, 2011. http://d-nb.info/1016012713/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hartford, Alan Hughes. "Computational approaches for maximum likelihood estimation for nonlinear mixed models." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20000719-081254.

Full text
Abstract:

The nonlinear mixed model is an important tool for analyzing pharmacokinetic and other repeated-measures data. In particular, these models are used when the measured response for an individual has a nonlinear relationship with unknown, random, individual-specific parameters. Ideally, the method of maximum likelihood is used to find estimates for the parameters of the model after integrating out the random effects in the conditional likelihood. However, closed-form solutions to the integral are generally not available. As a result, methods have previously been developed to find approximate maximum likelihood estimates for the parameters in the nonlinear mixed model. These approximate methods include First Order linearization, Laplace's approximation, importance sampling, and Gaussian quadrature. The methods are available today in several software packages for models of limited sophistication; constant conditional error variance is required for proper utilization of most software. In addition, distributional assumptions are needed. This work investigates how robust two of these methods, First Order linearization and Laplace's approximation, are to these assumptions. The finding is that Laplace's approximation performs well, resulting in better estimation than First Order linearization when both models converge to a solution.

A method must provide good estimates of the likelihood at points in the parameter space near the solution. This work compares this ability among the numerical integration techniques: Gaussian quadrature, importance sampling, and Laplace's approximation. A new "scaled" and "centered" version of Gaussian quadrature is found to be the most accurate technique. In addition, the technique requires evaluation of the integrand at only a few abscissas. Laplace's method also performs well; it is more accurate than importance sampling with even 100 importance samples over two dimensions. Even so, Laplace's method still does not perform as well as Gaussian quadrature. Overall, Laplace's approximation performs better than expected, and is shown to be a reliable method while still being computationally less demanding.

This work also introduces a new method to maximize the likelihood. This method can be sharpened to any desired level of accuracy. Stochastic approximation is incorporated to continue sampling until enough information is gathered to result in accurate estimation. This new method is shown to work well for linear mixed models, but is not yet successful for the nonlinear mixed model.
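As an illustration of the kind of Gaussian quadrature discussed above, the sketch below approximates the marginal likelihood of a single subject in a simple nonlinear mixed model with one normal random effect. The model, data and parameter values are assumptions, and the thesis's "scaled" and "centered" rule additionally adapts the nodes to each subject.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

# Minimal sketch (not the thesis's code): Gauss-Hermite approximation of the
# marginal likelihood for one subject of a nonlinear mixed model with a single
# normal random effect.  Data and parameters below are assumed values.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # sampling times
y = np.array([3.6, 3.1, 2.2, 1.2, 0.4])          # observed responses
theta, tau, sigma = np.log(0.35), 0.3, 0.25      # fixed effect, RE sd, error sd

def conditional_lik(b):
    """Likelihood of the data given the random effect b (shift of the log-rate)."""
    pred = 4.0 * np.exp(-np.exp(theta + b) * t)  # nonlinear mean function
    return norm.pdf(y, loc=pred, scale=sigma).prod()

# Gauss-Hermite rule for the integral of h(b) * N(b; 0, tau^2) db
z, w = hermgauss(15)
marginal = sum(wk * conditional_lik(np.sqrt(2.0) * tau * zk)
               for zk, wk in zip(z, w)) / np.sqrt(np.pi)
print("approximate marginal likelihood:", marginal)
```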

APA, Harvard, Vancouver, ISO, and other styles
40

Dulk, Paul den. "Computational approaches to affective processes: evolutionary and neural perspectives." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2002. http://dare.uva.nl/document/61946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Swett, Rebecca Jane. "Computational approaches to anti-toxin therapies and biomarker identification." Thesis, Wayne State University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3601751.

Full text
Abstract:

This work describes the fundamental study of two bacterial toxins with computational methods, the rational design of a potent inhibitor using molecular dynamics, and the development of two bioinformatic methods for mining genomic data. Clostridium difficile is an opportunistic bacillus that produces two large glucosylating toxins. These toxins, TcdA and TcdB, cause severe intestinal damage. As Clostridium difficile harbors considerable antibiotic resistance, one treatment strategy is to prevent the tissue damage that the toxins cause. The catalytic glucosyltransferase domain of TcdA and TcdB was studied using molecular dynamics in the presence of both a protein-protein binding partner and several substrates. These experiments were combined with lead-optimization techniques to create a potent irreversible inhibitor which protects 95% of cells in vitro. Dynamics studies on a TcdB cysteine protease domain were performed to characterize an allosteric communication pathway. Comparative analysis of the static and dynamic properties of the TcdA and TcdB glucosyltransferase domains was carried out to determine the basis for the differential lethality of these toxins. Large-scale biological data are readily available in the post-genomic era, but they can be difficult to use effectively. Two bioinformatic methods were developed to process whole-genome data. Software was developed to return all genes containing a motif in a single genome. This provides a list of genes which may be within the same regulatory network or targeted by a specific DNA-binding factor. A second bioinformatic method was created to link the data from genome-wide association studies (GWAS) to specific genes. GWAS studies are frequently subjected to statistical analysis, but mutations are rarely investigated structurally. HyDn-SNP-S allows a researcher to find mutations in a gene that correlate with a GWAS-studied phenotype. Across human DNA polymerases, this resulted in strongly predictive haplotypes for breast and prostate cancer. Molecular dynamics applied to DNA Polymerase Lambda suggested a structural explanation for the decrease in polymerase fidelity with that mutant. When applied to histone deacetylases, mutations were found that alter substrate binding and post-translational modification.
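The motif-search software described above returns, for a genome, every gene containing a given motif. A minimal sketch of that idea, with invented gene sequences and motif and operating on a small dictionary rather than a whole genome, might look like this:

```python
import re

# Minimal sketch of a motif scan over gene sequences (illustrative only;
# the gene names, sequences, and motif below are invented).
genes = {
    "geneA": "ATGCATTGACGTTACGGATCCAATGA",
    "geneB": "ATGGGGCCCTTTAAACGTAGCTAGTGA",
    "geneC": "ATGTTGACGTTAGACGTTACCCTGA",
}
# IUPAC-style motif TGACGTYA (Y = C or T) expressed as a regular expression
motif = re.compile("TGACGT[CT]A")

hits = {name: [m.start() for m in motif.finditer(seq)] for name, seq in genes.items()}
for name, positions in hits.items():
    if positions:
        print(f"{name}: motif found at position(s) {positions}")
```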

APA, Harvard, Vancouver, ISO, and other styles
42

Yalamanchili, Hari Krishna. "Computational approaches for protein functions and gene association networks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206477.

Full text
Abstract:
Molecular biology revolves primarily around proteins and genes (DNA and RNA), which collaborate with each other to facilitate various biomolecular systems. Thus, to comprehend any biological phenomenon, from basic cell division to complex cancer, it is fundamental to decode the functional dynamics of proteins and genes. Recently, computational approaches have been widely used to supplement traditional experimental approaches. However, each automated approach has its own advantages and limitations. In this thesis, major shortcomings of existing computational approaches are identified and alternative fast yet precise methods are proposed. First, a strong need for reliable automated protein function prediction is identified: almost half of all protein functional interpretations remain enigmatic, and the lack of a universal functional vocabulary further elevates the problem. NRProF, a novel neural-response-based method, is proposed for protein functional annotation. The neural response algorithm simulates how the human brain classifies images; the same principle is applied here to classify proteins. Taking the hierarchical structure of the Gene Ontology (GO) as background, NRProF classifies a protein of interest into a specific GO category and thus assigns the corresponding function. Having established reliable protein functional annotations, protein and gene collaborations are studied next. Interactions between transcription factors (TFs) and transcription factor binding sites (TFBSs) are fundamental for gene regulation and are highly specific, even against an evolutionary background. To explain this binding specificity, a Co-Evo (co-evolutionary) relationship is hypothesized. Pearson correlation and mutual information (MI) metrics are used to validate the hypothesis. Residue-level MI is used to infer the specific binding residues of TFs and the corresponding TFBSs, assisting a thorough understanding of the gene regulatory mechanism and aiding targeted gene therapies. After comprehending TF and TFBS associations, the interplay between genes is abstracted as gene regulatory networks. Several methods using expression correlations have been proposed to infer gene networks. However, most of them ignore the embedded dynamic delay induced by complex molecular interactions and other cellular mechanisms involved in gene regulation. The delay is rather obvious in high-frequency time-series expression data. DDGni, a novel network inference strategy, is proposed by adopting the gapped Smith-Waterman algorithm. Gaps accommodate expression delays, and local alignment unveils short regulatory windows, which traditional methods overlook. In addition to gene-level expression data, recent studies demonstrated the merits of exon-level RNA-Seq data in profiling splice variants and constructing gene networks. However, the large number of exons versus the small sample size limits their practical application. SpliceNet, a novel method based on the Large Dimensional Trace, is proposed to infer isoform-specific co-expression networks from exon-level RNA-Seq data. It provides a more comprehensive picture for our understanding of complex diseases by inferring network rewiring between normal and diseased samples at isoform resolution. It can be applied to any exon-level RNA-Seq or exon array data. In summary, this thesis first identifies major shortcomings of existing computational approaches to the functional association of proteins and genes, and then develops several tools, namely NRProF, Co-Evo, DDGni and SpliceNet. Collectively, they offer a comprehensive picture of the biomolecular system under study.
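As a small illustration of the residue-level mutual information used in the Co-Evo analysis, the sketch below computes MI between one TF alignment column and one paired TFBS column; the observations are invented, and real analyses use many columns and corrections for alignment bias.

```python
from collections import Counter
from math import log2

# Minimal sketch of residue-level mutual information between a TF alignment
# column and a paired TFBS alignment column (the "species" observations below
# are invented for illustration).
tf_column   = ["R", "R", "K", "K", "R", "K", "R", "K"]   # residue seen in each species
tfbs_column = ["G", "G", "T", "T", "G", "T", "G", "A"]   # base paired with it

n = len(tf_column)
p_x  = Counter(tf_column)
p_y  = Counter(tfbs_column)
p_xy = Counter(zip(tf_column, tfbs_column))

mi = sum((c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
         for (x, y), c in p_xy.items())
print(f"mutual information: {mi:.3f} bits")   # high MI suggests co-evolving positions
```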
APA, Harvard, Vancouver, ISO, and other styles
43

Cross, Simon St John. "Chemical, biological and computational approaches toward novel antibody applications." Thesis, University of Nottingham, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395456.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chung, Andy Heung Wing. "Novel mathematical and computational approaches for modelling biological systems." Thesis, University of Sussex, 2016. http://sro.sussex.ac.uk/id/eprint/60405/.

Full text
Abstract:
This work presents the development, analysis and subsequent simulation of mathematical models aimed at providing a basis for modelling atherosclerosis. This cardiovascular disease is characterized by the growth of plaque in artery walls, forming lesions that protrude into the lumen. The rupture of these lesions contributes greatly to the number of cases of stroke and myocardial infarction, two of the main causes of death in the UK. Any work to understand the processes by which the disease initiates and progresses has the ultimate aim of limiting the disease through either its prevention or medical treatment, and thus contributes a relevant addition to the growing body of research. The literature supports the view that the cause of atherosclerotic lesions is an inflammatory process: succinctly put, excess amounts of certain biochemical species fed into the artery wall via the bloodstream spur the focal accumulation of extraneous cells. Therefore, suitable components of a mathematical model would include descriptions of the interactions of the various biochemical species and their movement in space and time. The models considered here are in the form of partial differential equations. Specifically, the following models are examined: first, a system of reaction-diffusion equations with coupling between surface and bulk species; second, a problem of optimisation to identify an unknown boundary; and finally, a system of advection-reaction-diffusion equations to model the assembly of keratin networks inside cells. These equations are approximated and solved computationally using the finite element method. The methods and algorithms shown aim to provide more accurate and efficient means of obtaining solutions to such equations. Each model in this work is extensible, and with elements from each model combined, they have the scope to form a platform for a fuller model of atherosclerosis.
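To give a flavour of the reaction-diffusion models and the finite element discretisation mentioned above, here is a minimal 1D sketch of a single reaction-diffusion equation with linear elements and a semi-implicit time step. All parameters are assumptions; the thesis's models involve surface-bulk coupling and higher-dimensional domains.

```python
import numpy as np

# Minimal 1D sketch: u_t = D u_xx + u(1 - u) on [0, 1] with no-flux boundaries,
# discretised with P1 finite elements and a semi-implicit time step.
# Parameters and initial data are assumed, not taken from the thesis.
n, D, dt, steps = 101, 1e-3, 0.05, 200
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)

# Assemble the P1 mass (M) and stiffness (K) matrices on a uniform mesh
M = np.zeros((n, n)); K = np.zeros((n, n))
for e in range(n - 1):
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    idx = [e, e + 1]
    M[np.ix_(idx, idx)] += Me
    K[np.ix_(idx, idx)] += Ke

u = 0.5 * np.exp(-200.0 * (x - 0.3) ** 2)     # small initial bump of the species
A = M + dt * D * K                            # diffusion treated implicitly
for _ in range(steps):
    rhs = M @ (u + dt * u * (1.0 - u))        # logistic reaction, treated explicitly
    u = np.linalg.solve(A, rhs)

print("mean of u after time stepping:", u.mean())   # the bump spreads and grows toward u = 1
```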
APA, Harvard, Vancouver, ISO, and other styles
45

Mohammed, Shiras Chakkungal. "Digital Detail – Computational Approaches for Multi Performative Building Skins." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1262259520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Moutoussis, Michael. "Defensive avoidance in paranoid delusions : experimental and computational approaches." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/defensive-avoidance-in-paranoid-delusions-experimental-and-computational-approaches(e36dbfcf-9341-43a0-be41-087f9b22d994).html.

Full text
Abstract:
This abstract summarises the thesis entitled Defensive Avoidance in Paranoid Delusions: Experimental and Computational Approaches, submitted by Michael Moutoussis to The University of Manchester for the degree of Doctor of Philosophy (PhD) in the Faculty of Medical and Human Sciences, in 2011. The possible aetiological role of defensive avoidance in paranoia was investigated in this work. First, the psychological significance of the Conditioned Avoidance Response (CAR) was reappraised. The CAR activates normal threat-processing mechanisms that may be pathologically over-activated in the anticipation of threats in paranoia. This may apply both to external threats and to threats to the self-esteem. A temporal-difference computational model of the CAR suggested that a dopamine-independent process may signal that a particular state has led to a worse-than-expected outcome. On the contrary, learning about actions is likely to involve dopamine in signalling both worse-than-expected and better-than-expected outcomes. The psychological mode of action of dopamine-blocking drugs may involve dampening (1) the vigour of the avoidance response and (2) the prediction-error signals that drive action learning. Excessive anticipation of negative events might lead to inappropriately perceived high costs of delaying decisions. Efforts to avoid such costs might explain the Jumping-to-Conclusions (JTC) bias found in paranoid patients. Two decision-theoretical models were used to analyse data from the 'beads-in-a-jar' task. One model employed an ideal-observer Bayesian approach; a control model made decisions by weighing evidence against a fixed threshold of certainty. We found no support for our 'high cost' hypothesis. According to both models, the JTC bias was better explained by higher levels of 'cognitive noise' (relative to motivation) in paranoid patients. This 'noise' appears to limit the ability of paranoid patients to be influenced by cognitively distant possibilities. It was further hypothesised that excessive avoidance of negative aspects of the self may fuel paranoia. This was investigated empirically. Important self-attributes were elicited in paranoid patients and controls. Conscious and non-conscious avoidance were assessed while negative thoughts about the self were presented. Both 'deserved' and 'undeserved' persecutory beliefs were associated with high avoidance/control strategies in general, but not with increased avoidance of negative thoughts about the self. On the basis of the present studies, the former is therefore considerably more likely than the latter to play an aetiological role in paranoia. This work has introduced novel computational methods, especially useful in the study of 'hidden' psychological variables. It supported and deepened some key hypotheses about paranoia and provided consistent evidence against other important aetiological hypotheses. These contributions have substantial implications for research and for some aspects of clinical practice.
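The fixed-certainty-threshold model of the beads-in-a-jar task can be sketched in a few lines: update the posterior over jars with Bayes' rule after each bead and stop once a threshold is crossed. The bead sequence, jar composition and threshold below are assumptions, not the study's parameters.

```python
# Minimal sketch of the 'beads-in-a-jar' task: Bayesian posterior over which
# jar is being sampled, with a fixed-certainty stopping rule (one of the two
# model classes mentioned above).  All numbers are assumed for illustration.
P_MAJORITY = 0.85            # proportion of red beads in the "mostly red" jar
THRESHOLD = 0.90             # decide as soon as the posterior passes this
beads = "RRBRRRBR"           # observed draws: R = red, B = blue

posterior = 0.5              # prior probability of the "mostly red" jar
for draw_count, bead in enumerate(beads, start=1):
    like_red_jar  = P_MAJORITY if bead == "R" else 1 - P_MAJORITY
    like_blue_jar = 1 - P_MAJORITY if bead == "R" else P_MAJORITY
    evidence = like_red_jar * posterior + like_blue_jar * (1 - posterior)
    posterior = like_red_jar * posterior / evidence       # Bayes' rule
    print(f"after {draw_count} beads: P(red jar) = {posterior:.3f}")
    if posterior >= THRESHOLD or posterior <= 1 - THRESHOLD:
        print(f"decision reached after {draw_count} draws")
        break
```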
APA, Harvard, Vancouver, ISO, and other styles
47

Moore, Jimmy Daniel. "Computational approaches for the interpretation of ToF-SIMS data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/computational-approaches-for-the-interpretation-of-tofsims-data(2b097f73-f9e8-4870-89d3-35a7ad14546f).html.

Full text
Abstract:
High surface sensitivity and lateral resolution imaging make Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) a unique and powerful tool for biological analysis. Many of these biological systems, including drug-cell interactions, require both the identification and the location of specific chemicals. ToF-SIMS, used in imaging mode, is making great strides towards the goal of single-cell and tissue analysis. The experiments, however, result in huge volumes of data. Here, advanced computational approaches employing sophisticated techniques to convert these data into knowledge are introduced. This thesis aims to produce a framework for data analysis, integrating novel algorithms, image analysis and 3D visualisation. New schemes outlined in this thesis address the issues of the immense size of 3D image stacks and the complexity contained within the enormous wealth of information in ToF-SIMS data. To deal with the issues of size and complexity of ToF-SIMS data, new techniques for processing image data are investigated. Automated compression routines for ToF-SIMS images using a peak-picking routine tailored for ToF-SIMS are evaluated. New user-friendly GUIs capable of processing and visualising very large image stacks are introduced as part of a toolkit designed to streamline the process of multivariate analysis and image processing. Along with this, two well-known classification routines, namely AdaBoost and SVMs, are also applied to ToF-SIMS data of several bacterial strains to test their ability to classify SIMS data accurately. This thesis presents several new approaches to the data processing and interpretation of ToF-SIMS data.
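As a minimal illustration of the SVM classification experiments mentioned above, the sketch below trains a scikit-learn SVM on synthetic peak-intensity vectors standing in for spectra of two bacterial strains; the data are simulated, not ToF-SIMS measurements.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Minimal sketch of SVM classification of spectra, in the spirit of the
# bacterial-strain experiments described above.  The "spectra" are synthetic.
rng = np.random.default_rng(0)
n_per_class, n_channels = 60, 200
strain_a = rng.normal(0.0, 1.0, (n_per_class, n_channels))
strain_b = rng.normal(0.0, 1.0, (n_per_class, n_channels))
strain_b[:, 10] += 1.5        # strain B has two slightly enhanced "peaks"
strain_b[:, 55] += 1.0

X = np.vstack([strain_a, strain_b])
y = np.array([0] * n_per_class + [1] * n_per_class)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```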
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Lei. "Understanding protein motions by computational modeling and statistical approaches." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
49

Petrone, Paula Marcela. "Computational approaches to conformational change and specificity in biomolecules." May be available electronically, 2009. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Weirather, Jason Lee. "Computational approaches to the study of human trypanosomatid infections." Thesis, The University of Iowa, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3609102.

Full text
Abstract:

Trypanosomatids cause human diseases such as leishmaniasis and African trypanosomiasis. Trypanosomatids are protists from the order Trypanosomatida and include species of the genera Trypanosoma and Leishmania, which occupy a similar ecological niche. Both have digenic life-stages, alternating between an insect vector and a range of mammalian hosts. However, the strategies used to subvert the host immune system differ greatly between species, as do the clinical outcomes of infection. The genomes of both the host and the parasite instruct us about the strategies the pathogens use to subvert the human immune system and about adaptations by the human host that allow us to better survive infections. We have applied unsupervised learning algorithms to aid visualization of amino acid sequence similarity and the potential for recombination events within Trypanosoma brucei's large repertoire of variant surface glycoproteins (VSGs). Methods developed here reveal five groups of VSGs within a single sequenced genome of T. brucei, indicating many likely recombination events occurring between VSGs of the same type, but not between those of different types. These tools and methods can be broadly applied to identify groups of non-coding regulatory sequences within other trypanosomatid genomes. To aid the detection, quantification, and species identification of Leishmania DNA isolated from environmental or clinical specimens, we developed a set of quantitative PCR primers and probes targeting a taxonomically and geographically broad spectrum of Leishmania species. This assay has been applied to DNA extracted from both human and canine hosts as well as the sand fly vector, demonstrating its flexibility and utility in a variety of research applications. Within the host genomes, fine-mapping SNP analysis was performed to detect polymorphisms in a family study of subjects in a region of Northeast Brazil that is endemic for Leishmania infantum chagasi, the parasite causing visceral leishmaniasis. These studies identified associations between genetic loci and the development of visceral leishmaniasis, with a single polymorphism associated with an asymptomatic outcome after infection. The methods and results presented here capitalize on the large amount of genomics data becoming available and will improve our understanding of both parasite and host genetics and their role in human disease.
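The VSG grouping relies on unsupervised analysis of sequence similarity. The sketch below illustrates the general idea on invented sequences by comparing k-mer count profiles; the thesis's pipeline works on alignment-based similarity over full-length VSGs.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of grouping sequences by k-mer profile similarity
# (illustrative only; the sequences are invented and far shorter than VSGs).
sequences = {
    "vsg1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEV",
    "vsg2": "MKTAYIAKQRQISFVKSHFSRQAEERLGLIEV",
    "vsg3": "MASNDYTQQATQSYGAYPTQPGQGYSQQSSQP",
    "vsg4": "MASNDYTQQATQSYGAYPTQPGQGYAQQSSQP",
}

def kmer_profile(seq: str, k: int = 3) -> Counter:
    """Count overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(p: Counter, q: Counter) -> float:
    """Cosine similarity between two k-mer count profiles."""
    shared = set(p) & set(q)
    dot = sum(p[k] * q[k] for k in shared)
    norm = (sum(v * v for v in p.values()) * sum(v * v for v in q.values())) ** 0.5
    return dot / norm

profiles = {name: kmer_profile(seq) for name, seq in sequences.items()}
for a, b in combinations(sequences, 2):
    print(f"{a} vs {b}: similarity {cosine(profiles[a], profiles[b]):.2f}")
# High-similarity pairs (vsg1/vsg2, vsg3/vsg4) fall into the same group.
```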

APA, Harvard, Vancouver, ISO, and other styles
