Theses / dissertations on the topic "Structure-Based approaches"

Follow this link to see other types of publications on the topic: Structure-Based approaches.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 theses / dissertations for your research on the topic "Structure-Based approaches".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever one is available in the metadata.

Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.

1

Vankayala, Sai Lakshmana Kumar. "Computational Approaches for Structure Based Drug Design and Protein Structure-Function Prediction". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4601.

Full text of the source
Abstract:
This dissertation consists of a series of chapters interwoven by a common goal: solving interesting biological problems with various computational methodologies. These techniques provide meaningful physical insights that advance the scientific fields of interest. Chapter 1 concerns the importance of computational tools such as docking studies in advancing structure-based drug design, and addresses the principal issues that hamper docking success: scoring functions, sampling algorithms, and receptor flexibility. Information about the different kinds of flexible docking, in terms of accuracy, time limitations, and success stories, is presented. The importance of induced fit docking studies, in comparison to traditional MD simulations, for predicting absolute binding modes is then explained. Chapters 2 and 3 focus on understanding how sickle cell disease progresses through the production of sickled hemoglobin and affects sickle cell patients, and how hydroxyurea (HU), the only FDA-approved treatment for sickle cell disease, acts to relieve its effects. The primary mechanism of action is believed to be associated with the pharmacological elevation of nitric oxide (NO) in the blood; however, the exact details of this mechanism are still unclear. HU interacts with oxy- and deoxyHb, resulting in slow NO production rates, which does not correlate with the observed increase in NO concentration in patients undergoing HU therapy. The discrepancy can be attributed to HU competing for other heme-based enzymes such as catalase and peroxidases. In these two chapters, we investigate the atomic-level details of this process using a combination of flexible-ligand / flexible-receptor virtual screening (i.e., induced fit docking, IFD) coupled with an energetic analysis that decomposes interaction energies at the atomic level.
Using these tools we were able to elucidate the previously unknown substrate binding modes of a series of hydroxyurea analogues to human hemoglobin and catalase, and the concomitant structural changes of the enzymes. Our results are consistent with kinetic and EPR measurements of hydroxyurea-hemoglobin reactions, and a full mechanism is proposed that offers new insights into possibly improving substrate binding and/or reactivity. Finally, in chapter 4, we developed a 3D bioactive structure of O6-alkylguanine-DNA alkyltransferase (AGT), a DNA repair protein, using a Monte Carlo conformational search. AGT is known to prevent the DNA damage, mutations, and apoptosis arising from alkylated guanines. Various benzylguanine analogues of O6-methylguanine were tested for activity as potential inhibitors; the nature and position of the methyl and aminomethyl substitutions profoundly affected their activity, and molecular modeling of their interactions with the alkyltransferase provided a molecular explanation for these results. The squared correlation coefficient (R²) between E-model scores (obtained from GLIDE XP/QPLD docking calculations) and log(ED) values, obtained via linear regression analysis, was 0.96. The models indicate that ortho-substitution causes a steric clash interfering with binding, whereas the meta-aminomethyl substitution allows the amino group to form an additional hydrogen bond with the protein. Using this model for virtual screening resulted in the identification of seven lead compounds with novel scaffolds from the National Cancer Institute Diversity Set 2.
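The kind of linear-regression validation described above (docking scores versus log(ED) values, summarized by R²) can be sketched as follows. The data points and function names are invented for illustration and are not the thesis's actual values:

```python
# Sketch: least-squares fit of (hypothetical) docking scores against
# log(ED) values, reporting the squared correlation coefficient R^2.

def linear_fit_r2(x, y):
    """Least-squares line y ~ a*x + b and the squared Pearson correlation."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx                    # slope
    b = my - a * mx                  # intercept
    r2 = (sxy * sxy) / (sxx * syy)  # squared correlation coefficient
    return a, b, r2

# Hypothetical score/activity pairs, not the thesis data:
scores = [-60.0, -55.0, -52.0, -48.0, -45.0]
log_ed = [0.5, 1.1, 1.4, 1.9, 2.2]
slope, intercept, r2 = linear_fit_r2(scores, log_ed)
```

An R² close to 1, as reported in the abstract (0.96), indicates that the docking-derived scores explain most of the variance in the measured activities.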
2

Tosatto, Silvio Carlo Ermanno. "Protein structure prediction improving and automating knowledge-based approaches /". [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10605023.

Full text of the source
3

Emami, Fatemesadat. "Prediction of Thermodynamic Properties by Structure-Based Group Contribution Approaches". University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1217270074.

Full text of the source
4

Selmadji, Anfel. "From monolithic architectural style to microservice one : structure-based and task-based approaches". Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS026/document.

Full text of the source
Abstract:
Software technologies are constantly evolving to facilitate the development, deployment, and maintenance of applications in different areas. In parallel, these applications evolve continuously to guarantee an adequate quality of service, and they become more and more complex. Such evolution often involves increased development and maintenance costs, which can become even higher when these applications are deployed on recent execution infrastructures such as the cloud. Nowadays, reducing these costs and improving the quality of applications are central objectives of software engineering. Recently, microservices have emerged as an example of a technology, or architectural style, that helps achieve these objectives.

While microservices can be used to develop new applications, there are monolithic applications (i.e., monoliths) built as a single unit whose owners (e.g., companies) want to maintain and deploy in the cloud. In this case, it is common to consider rewriting these applications from scratch or migrating them towards recent architectural styles. Rewriting an application or migrating it manually can quickly become a long, error-prone, and expensive task; an automatic migration therefore appears as an evident solution.

The ultimate aim of our dissertation is to contribute to automating the migration of monolithic object-oriented (OO) applications to microservices. This migration consists of two steps: microservice identification and microservice packaging. We focus on microservice identification based on source code analysis, and propose two approaches.

The first identifies microservices from the source code of a monolithic OO application by relying on code structure, data accesses, and software architect recommendations. The originality of our approach can be viewed from three aspects. Firstly, microservices are identified based on the evaluation of a well-defined function measuring their quality; this function relies on metrics reflecting the "semantics" of the concept "microservice". Secondly, software architect recommendations are exploited only when they are available. Finally, two algorithmic models have been used to partition the classes of an OO application into microservices: clustering and genetic algorithms.

The second approach extracts from OO source code a workflow that can be used as an input to some existing microservice identification approaches. A workflow describes the sequencing of the tasks constituting an application according to two formalisms: control flow and/or data flow. Extracting a workflow from source code requires the ability to map OO concepts onto workflow ones.

To validate both approaches, we implemented two prototypes and conducted experiments on several case studies. The identified microservices were evaluated qualitatively and quantitatively, and the extracted workflows were evaluated manually against test suites. The results show, respectively, the relevance of the identified microservices and the correctness of the extracted workflows.
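The clustering half of the first approach can be illustrated with a minimal sketch: classes of a monolith are grouped into candidate microservices by greedily merging the most strongly coupled clusters. The class names, coupling weights, and the simple average-linkage criterion are all invented for illustration; the thesis's actual approach additionally uses data accesses, a quality function over microservice "semantics", and a genetic algorithm, none of which are modeled here:

```python
# Sketch: agglomerative clustering of a monolith's classes into candidate
# microservices, driven by a (hypothetical) class-coupling matrix.

def cluster_classes(coupling, k):
    """Repeatedly merge the pair of clusters with the strongest average
    inter-cluster coupling until only k clusters remain."""
    clusters = [{c} for c in coupling]

    def strength(a, b):
        pairs = [(x, y) for x in a for y in b]
        return sum(coupling[x].get(y, 0) + coupling[y].get(x, 0)
                   for x, y in pairs) / len(pairs)

    while len(clusters) > k:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: strength(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] |= clusters.pop(j)
    return [sorted(c) for c in clusters]

# Hypothetical call-count graph between five classes of a small monolith:
coupling = {
    "Order":   {"Invoice": 9, "Cart": 7},
    "Invoice": {"Order": 5},
    "Cart":    {"Order": 6},
    "User":    {"Auth": 8},
    "Auth":    {"User": 4},
}
services = cluster_classes(coupling, 2)
```

On this toy graph the billing-related classes and the authentication-related classes end up in separate candidate services, which is the intuition behind structure-based identification.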
5

Stehr, Henning [Verfasser]. "Graph-based approaches to protein structure- and function prediction / Henning Stehr". Berlin : Freie Universität Berlin, 2011. http://d-nb.info/1026266157/34.

Full text of the source
6

BIANCO, GIULIA. "Structure-based approaches applied to the study of pharmaceutical relevant targets". Doctoral thesis, Università degli Studi di Cagliari, 2016. http://hdl.handle.net/11584/266709.

Full text of the source
Abstract:
Computer-aided drug design/discovery (CADD) methods have become complementary to traditional and modern drug discovery approaches; indeed, CADD is useful for improving and speeding up the detection and optimization of bioactive molecules. The present study is focused on the application of structure-based approaches to the study of pharmaceutically relevant targets. The introduction provides a quick overview of the fundamentals of computational chemistry and structure-based methods, while the subsequent chapters treat the main targets investigated with these methods. In particular, we focused our attention on the reverse transcriptase of HIV-1, monoamine oxidase B, and VP35 of Ebola virus. The last chapter is dedicated to the validation of covalent docking performed with AutoDock.
7

Annadurai, Sivakumar. "Lead generation using a privileged structure-based approach". Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/213119.

Full text of the source
Abstract:
Pharmaceutical Sciences
Ph.D.
In drug discovery there are several approaches to lead generation; one traditional approach involves the synthesis and screening of a structurally diverse compound library against a number of biological targets to identify high-affinity lead compounds. The use of a 'privileged' structure-based compound library represents a viable approach that can yield drug-like lead compounds. Privileged structures are defined as ligand substructures that may be used to generate high-affinity leads for more than one type of receptor. Examples include phenyl-substituted monocycles such as biphenyls and diphenylmethane derivatives, 1,4-dihydropyridines, and fused ring systems such as chromones, quinoxalines, quinazolines, 2-benzoxazolones, indoles, benzimidazoles, and benzofurans. There are several instances in the literature describing the development of compound libraries based on privileged structures with reportedly high hit rates, and privileged structure-based approaches have been used with notable success in the identification of high-affinity ligands, especially for G-protein coupled receptors (GPCRs). The 2-aminothiazole scaffold (fused and non-fused) may be considered a privileged structure because of its occurrence in a wide variety of pharmaceuticals: it is found in antibacterials, anti-inflammatory agents, glutamate transporter (GLT-1) modulators, and serotonin and muscarinic ligands. The present study involves the synthesis of a 2-aminothiazole (fused and non-fused) based compound library (60 compounds) incorporating bioactive fragments shown to produce hits against the biological targets of interest. Microwave-assisted organic synthesis (MAOS) has been employed at key steps of scaffold synthesis, as well as in Suzuki coupling, to generate the target aminothiazoles. Preliminary biological screening has resulted in the identification of some promising lead compounds.
Trifluoromethoxy-substituted aminothiazoles were found to be potent antimicrobials, with MIC values in the range of 4-16 µg/mL. Furanone-based aminothiazoles showed affinity for muscarinic receptors. Piperidine-based aminothiazoles showed greater than 90% of the control (8-OH-DPAT) specific agonist response at the 5-HT1A receptor subtype. The ClogP values of the most potent antimicrobials were in the range of 4.5-6.2, indicating the high lipophilicity of the compounds. High lipophilicity is known to cause solubility issues that may hamper future development; therefore, in an effort to make compounds with intermediate lipophilicity, the phenyl core of the potent aminothiazoles will be replaced with a pyridine core using literature procedures (pyridine-core-containing aminothiazoles showed ClogP < 4). Future plans include expanding the library, improving compound yields, and evaluating the compounds as modulators of the glutamate transporter (GLT-1). The work could be extended to other privileged structures such as 2-aminooxazole, 2-aminobenzoxazole, 2-aminoimidazole, and 2-aminobenzimidazole; these mono- and bicyclic heterocycles may be considered bioisosteres of 2-aminothiazole.
Temple University--Theses
8

Rosenberger, David. "From the bottom up - A systematic study of structure based coarse graining approaches". Phd thesis, TUprints, 2019. https://tuprints.ulb.tu-darmstadt.de/8509/1/Phd_thesis.pdf.

Full text of the source
Abstract:
Computer simulations of soft matter require a compromise between computational efficiency and the resolution of the model studied. Highly resolved models can give insights into the interactions between individual atoms in a soft material, but since these atomistic models are computationally expensive, they are limited to small length scales and short time scales, which makes it difficult to compare simulation results with those from laboratory experiments. Continuum models, on the other hand, enable the study of soft matter at larger length and longer time scales and are computationally less expensive, but they focus on macroscopic properties rather than their atomistic origin. A possible way to bridge the gap between these scales is to perform simulations at an intermediate level of resolution. The problem at this mesoscopic scale is the lack of accurate models; hence, new ones have to be built. The process of constructing mesoscopic models from information at the atomistic scale is commonly referred to as bottom-up coarse graining: lowering the resolution of an atomistic model to make it applicable at larger length and time scales. The major goal of this Ph.D. thesis is to increase the knowledge of so-called structure-based bottom-up coarse-graining techniques. These methods enable the derivation of coarse-grained (CG) models that accurately reproduce the structure of an atomistic or fine-grained (FG) model at the mesoscopic scale. The shortcomings of different structure-based methods are carefully analyzed, and new approaches to overcome them are presented.
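One widely used structure-based coarse-graining scheme of the kind the thesis analyzes is iterative Boltzmann inversion (IBI), in which the pair potential is corrected until the CG model's radial distribution function g(r) matches the fine-grained target. Below is a minimal numerical sketch of a single update step, with synthetic g(r) values standing in for simulation output:

```python
# Sketch of one IBI iteration: V_{i+1}(r) = V_i(r) + kT * ln(g_i(r)/g_t(r)).
# Where the CG model over-structures (g_current > g_target) the potential
# is raised, pushing particles apart; where it under-structures, lowered.

import math

KBT = 1.0  # energies in units of k_B * T

def ibi_update(potential, g_current, g_target, eps=1e-12):
    """Apply the IBI correction on a discretized r-grid; eps guards the
    log against zero g(r) values inside the core region."""
    return [v + KBT * math.log(max(gc, eps) / max(gt, eps))
            for v, gc, gt in zip(potential, g_current, g_target)]

# Synthetic g(r) values on a coarse four-point grid (not simulation data):
g_target  = [0.0, 0.8, 1.2, 1.0]
g_current = [0.0, 0.6, 1.5, 1.0]
v0 = [0.0, 0.0, 0.0, 0.0]
v1 = ibi_update(v0, g_current, g_target)
```

In practice the update is iterated, with a CG simulation between steps to re-measure g(r), until the target structure is reproduced.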
9

Checa, Ruano Luis. "Structure-based design of antiviral drugs against respiratory viruses using in silico approaches". Electronic Thesis or Diss., Sorbonne université, 2024. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2024SORUS0743.pdf.

Full text of the source
Abstract:
Protein-protein interactions (PPI) play crucial roles in many biological pathways and are being increasingly explored as potential therapeutic targets, including for treating infectious diseases. However, designing small-molecule modulators for PPI remains challenging, as PPI interfaces have not evolved to bind small molecules the way conventional drug targets such as enzymes or membrane receptors have. Therefore, proof of their druggability must be established on a case-by-case basis. In this context, computational approaches can be useful in assisting the design of PPI modulators. This work aims to develop new in silico drug design protocols specifically tailored to PPI targets, with the goal of designing new antiviral drugs against two PPI targets: the respiratory syncytial virus (RSV) and SARS-CoV-2.
10

Speidel, Joshua A. "Computational approaches to structure based ligand design : an illustration for P/CAF bromodomain ligands /". Access full-text from WCMC, 2007. http://proquest.umi.com/pqdweb?did=1453183061&sid=21&Fmt=2&clientId=8424&RQT=309&VName=PQD.

Full text of the source
11

Smith, Breland Elise. "Small Molecule Approaches Toward Therapeutics for Alzheimer's Disease and Colon Cancer". Diss., The University of Arizona, 2014. http://hdl.handle.net/10150/337213.

Full text of the source
Abstract:
The research described in this dissertation is focused on the knowledge-based, often in silico assisted, design, targeted synthesis, and biological evaluation of small molecules of interest for two translational medicinal chemistry projects. The first project (Part 1) is aimed at the identification of blood-brain barrier (BBB) penetrant dual-specificity tyrosine phosphorylation-regulated kinase-1A (DYRK1A) inhibitors as a potential disease-modifying approach to mitigate the cognitive deficits associated with Alzheimer's neurodegeneration. Two major series with potent activity against DYRK1A were identified, in addition to a number of other chemotype sub-series that also exhibit promising activity. Extensive profiling of active analogs revealed interesting biological activity and selectivity, which led to the identification of two analogs for in vivo studies and revealed new opportunities for further investigation into other kinase targets implicated in neurodegeneration and into polypharmacological approaches. The second project (Part 2) is focused on the development of compounds that inhibit PGE₂ production without affecting cyclooxygenase (COX) activity, as a novel approach to treating cancer. Compounds were designed with the intention of inhibiting microsomal prostaglandin E₂ synthase-1 (mPGES-1); however, biological evaluation revealed phenotypically active compounds in a cell-based assay with an unknown mechanism of action. Further profiling revealed promising anticancer activity in xenograft mouse models. In addition, PGE₂ has been implicated in an immune-evasion mechanism of F. tularensis, a strain of bacteria that remains an exploitable threat in biowarfare; thus, a small number of analogs were evaluated in a cell model of F. tularensis infection-stimulated PGE₂ production.
12

Cuzzolin, Alberto. "Novel in silico approaches to depict the protein-ligand recognition events". Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424818.

Full text of the source
Abstract:
The discovery and commercialization of a new drug is a long and expensive process, divided into different phases during which the physico-chemical and therapeutic properties of the compounds are determined. In particular, the aim of the first phase is to verify whether the compound recognizes and interacts efficiently with the target protein. In the last decade, several computational tools have been developed and used to support experimentalists. For this purpose, scientists have to deal with highly complex systems that are difficult to study as a whole; thus, the developers of methods and algorithms have to strongly simplify the treatment of the system. Moreover, the time required to obtain results depends on the computational resources (hardware) available. Fortunately, technological progress has increased the computing power available at low cost, enabling the development of new and more complex techniques. During this Ph.D. project we focused on the development and improvement of in silico methods that make it possible to answer certain questions while saving time and money. Furthermore, these methods were implemented in software with a Graphical User Interface (GUI) to enhance user-friendliness. Computational techniques often require a deep understanding of the theoretical aspects of the methodology as well as good informatics proficiency, such as handling different file types and managing hardware. For this reason, the software we developed is organized as pipelines, to automate the entire process and make these tools useful to non-expert users as well. Finally, these methodologies were applied in several research projects, demonstrating their usefulness by elucidating, for the first time, interesting aspects of the ligand-protein recognition pathway.
13

Sarti, Edoardo. "Assessing the structure of proteins and protein complexes through physical and statistical approaches". Doctoral thesis, SISSA, 2015. http://hdl.handle.net/20.500.11767/4863.

Full text of the source
Abstract:
Determining the correct state of a protein or a protein complex is of paramount importance for current medical and pharmaceutical research. The stable conformation of such systems depends on two processes, protein folding and protein-protein interaction. Over the last 50 years both processes have been fruitfully studied, yet a complete understanding has still not been reached, and the accuracy and efficiency of the approaches for studying these problems are not yet optimal. This thesis is devoted to devising physical and statistical methods for recognizing the native state of a protein or a protein complex. The studies are mostly based on BACH, a knowledge-based potential originally designed for the discrimination of native structures in protein folding problems. The BACH method is analyzed and extended: first, a new method to account for protein-solvent interaction is presented; then, we describe an extension of BACH aimed at assessing the quality of protein complexes in protein-protein interaction problems. Finally, we present a procedure for predicting the structure of a complex based on a hierarchy of approaches ranging from rigid docking up to molecular dynamics in explicit solvent. The reliability of the approaches we propose is always benchmarked against a selection of other state-of-the-art scoring functions that obtained good results in the CASP and CAPRI competitions.
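The general idea behind a knowledge-based potential such as BACH can be sketched as follows: contact frequencies observed in native structures are converted into pseudo-energies via an inverse-Boltzmann relation, and a candidate structure is scored by summing over its contacts. The residue classes, probabilities, and contact lists below are invented for illustration and do not reproduce BACH's actual statistical terms:

```python
# Sketch of a statistical (knowledge-based) pair potential:
# E(a, b) = -ln( P_obs(a, b) / P_ref(a, b) ), in units of kT.

import math

def pair_energies(observed, reference):
    """Inverse-Boltzmann conversion of contact probabilities to energies."""
    return {pair: -math.log(observed[pair] / reference[pair])
            for pair in observed}

def score(model_contacts, energies):
    """Score a candidate structure by summing its contact energies."""
    return sum(energies[p] for p in model_contacts)

# Hypothetical contact probabilities for two residue-pair classes
# (hydrophobic-hydrophobic vs. hydrophobic-polar):
observed  = {("HYD", "HYD"): 0.40, ("HYD", "POL"): 0.10}
reference = {("HYD", "HYD"): 0.25, ("HYD", "POL"): 0.25}
E = pair_energies(observed, reference)

# A model burying hydrophobic pairs scores lower (better) than a decoy
# exposing them to polar contacts:
native_like = [("HYD", "HYD"), ("HYD", "HYD")]
decoy       = [("HYD", "POL"), ("HYD", "POL")]
```

Discriminating native structures from decoys then amounts to ranking candidates by this score, which is the setting in which such potentials are benchmarked in CASP and CAPRI.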
14

IOVINELLI, DANIELE. "Structure-based approaches for the rapid identification of tumor microenvironment nutrients, inhibitors and allosteric modulators against soluble and membrane proteins". Doctoral thesis, Università di Siena, 2022. http://hdl.handle.net/11365/1203165.

Full text of the source
Abstract:
Bioinformatics is an interdisciplinary field that develops methods and software tools for understanding biological data, particularly when the data sets are large and complex; it combines biology, computer science, information engineering, mathematics, and statistics to analyze and interpret biological data through in silico analyses. During the three years of this doctorate we focused on structural bioinformatics, and in particular on studies enabling the search for potential inhibitors and allosteric modulators against both soluble and membrane proteins, related in particular to cancer and infectious diseases. All the projects were automated using the Python and Bash programming languages, with Linux as the main operating system. The projects' results are very promising, and some of them are still in progress.
15

Mardhiah, Ulfah [Verfasser]. "Determination of biotic and abiotic factors influencing soil structure development in a riparian system based on observational and experimental approaches / Ulfah Mardhiah". Berlin : Freie Universität Berlin, 2015. http://d-nb.info/1068504838/34.

Full text of the source
16

Rosenberger, David [Verfasser], Nico van der [Akademischer Betreuer] Vegt and Martin [Akademischer Betreuer] Hanke-Bourgeois. "From the bottom up - A systematic study of structure based coarse graining approaches / David Rosenberger ; Nico van der Vegt, Martin Hanke-Bourgeois". Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2019. http://d-nb.info/1186258497/34.

Full text of the source
17

Carreno, Velazquez Thalia Lizbeth. "Structure-based drug discovery approaches to identify modulators of the Nrf2 pathway and glutamate receptors AMPA GluA2 and Kainate GluK1 and GluK2". Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/75046/.

Full text of the source
Abstract:
Nrf2 project: The protein nuclear factor erythroid 2-related factor 2 (Nrf2) is a transcription factor that provides protection against oxidative stress, and dysfunction of this pathway has been suggested to be implicated in many neurodegenerative diseases. The aim of this thesis was to identify novel Nrf2 activators that disrupt the protein-protein interaction between Nrf2 and Keap1 and thereby induce increased expression of antioxidant enzymes and protective genes. The crystal structure of the Keap1-Nrf2 interface was used to perform a virtual screen, and compounds from the screen were assayed using a cellular nuclear complementation assay that measures the nuclear translocation of Nrf2 from the cytosol. Although two novel compounds were found to increase Nrf2 nuclear translocation, they had low activity, and further characterisation did not provide sufficient evidence of a robust Nrf2-Keap1 interaction. iGluRs project: AMPA and kainate receptors are ionotropic glutamate receptors (iGluRs) that are important for excitatory transmission and synaptic plasticity and are linked to several neurological disorders such as epilepsy, schizophrenia, and autism. This project aimed to find novel allosteric modulators binding in the ligand-binding domain (LBD) of the GluA2 subtype of AMPA receptors and the GluK1 and GluK2 subtypes of kainate receptors, using protein purification and X-ray crystallography methodologies. Fragment screening for GluA2 identified eight novel fragments, five of which were located at the dimer interface and three in a novel site near the glycine-threonine dipeptide linker. As regards kainate receptors, structural information on the GluK1 and GluK2 LBDs was obtained; both proteins were soaked with in-house fragments, with one compound displaying 20% occupancy at the GluK2 dimer interface. These data form the basis of future studies in the search for novel drugs for the treatment of epilepsy and schizophrenia.
18

Manzenrieder, Florian. "New approaches to discover protease inhibitors : by de novo rational structure based design (BACE1) and by development and use of ³¹P NMR as versatile tool to screen compound libraries /". München : Verl. Dr. Hut, 2009. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=017356959&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

19

Verrastro, Ivan. "The redoxomics of PTEN : walking a fine line between damage and signaling : mass spectrometry-based approaches to study the effect of oxidation on PTEN function, structure, and protein-protein interactions". Thesis, Aston University, 2016. http://publications.aston.ac.uk/28891/.

Abstract:
The research described in this PhD thesis focuses on proteomics approaches to study the effect of oxidation on the modification status and protein-protein interactions of PTEN, a redox-sensitive phosphatase involved in a number of cellular processes including metabolism, apoptosis, cell proliferation, and survival. While direct evidence of a redox regulation of PTEN and its downstream signaling has been reported, the effect of cellular oxidative stress or direct PTEN oxidation on PTEN structure and interactome is still poorly defined. In a first study, GST-tagged PTEN was directly oxidized over a range of hypochlorous acid (HOCl) concentrations, assayed for phosphatase activity, and oxidative post-translational modifications (oxPTMs) were quantified using LC-MS/MS-based label-free methods. In a second study, GST-tagged PTEN was prepared in a reduced and reversibly H2O2-oxidized form, immobilized on a resin support and incubated with HCT116 cell lysate to capture PTEN-interacting proteins, which were analyzed by LC-MS/MS and comparatively quantified using label-free methods. In parallel experiments, HCT116 cells transfected with a GFP-tagged PTEN were treated with H2O2 and PTEN-interacting proteins immunoprecipitated using standard methods. Several high-abundance HOCl-induced oxPTMs were mapped, including those taking place at amino acids known to be important for PTEN phosphatase activity and protein-protein interactions, such as Met35, Tyr155, Tyr240 and Tyr315. A PTEN redox interactome was also characterized, which identified a number of PTEN-interacting proteins that vary with the reversible inactivation of PTEN caused by H2O2 oxidation. These included new PTEN interactors as well as the redox proteins peroxiredoxin-1 (Prdx1) and thioredoxin (Trx), which are known to be involved in the recycling of the PTEN active site following H2O2-induced reversible inactivation.
The results suggest that the oxidative modification of PTEN causes functional alterations in PTEN structure and interactome, with fundamental implications for the PTEN signaling role in many cellular processes, such as those involved in the pathophysiology of disease and ageing.
20

Ryan, A. "Adult learner strategies in foreign language grammar learning : A task-based study of approaches to the learning of grammatical structure in a micro-language, with a discussion of their implications for language teaching and materials". Thesis, University of Edinburgh, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375802.

21

Bae, Kyounghwa. "Bayesian model-based approaches with MCMC computation to some bioinformatics problems". Texas A&M University, 2005. http://hdl.handle.net/1969.1/2396.

Abstract:
Bioinformatics applications can address the transfer of information at several stages of the central dogma of molecular biology, including transcription and translation. This dissertation focuses on using Bayesian models to interpret biological data in bioinformatics, using Markov chain Monte Carlo (MCMC) as the inference method. First, we use our approach to interpret data at the transcription level. We propose a two-level hierarchical Bayesian model for variable selection on cDNA microarray data. A cDNA microarray quantifies the mRNA levels of thousands of genes simultaneously in a single sample. By observing the expression patterns of genes under various treatment conditions, important clues about gene function can be obtained. We consider a multivariate Bayesian regression model and assign priors that favor sparseness in terms of the number of variables (genes) used. We introduce the use of different priors to promote different degrees of sparseness using a unified two-level hierarchical Bayesian model. Second, we apply our method to a problem related to the translation level. We develop hidden Markov models to model linker/non-linker sequence regions in a protein sequence. We use a linker index to exploit differences in amino acid composition between regions from sequence information alone. A goal of protein structure prediction is to take an amino acid sequence (represented as a sequence of letters) and predict its tertiary structure. The identification of linker regions in a protein sequence is valuable in predicting the three-dimensional structure. Because of the complexities of both models encountered in practice, we employ Markov chain Monte Carlo methods, particularly Gibbs sampling (Gelfand and Smith, 1990), for parameter inference.
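The Gibbs sampling mentioned in this abstract alternates draws from each parameter's full conditional distribution. As a hedged illustration only (a toy two-level normal model with known unit observation variance and fixed between-group variance, not the dissertation's actual microarray or HMM models), a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 groups (think "genes"), each with noisy observations of a group mean.
true_means = rng.normal(0.0, 2.0, size=5)
data = [rng.normal(m, 1.0, size=20) for m in true_means]

# Two-level model with known unit observation variance (a simplification):
#   y_ij ~ N(theta_i, 1),   theta_i ~ N(mu, tau2),   flat prior on mu
tau2 = 1.0
mu = 0.0
draws = []
for _ in range(2000):
    # Sample each group mean from its conjugate normal full conditional.
    prec = np.array([len(y) + 1.0 / tau2 for y in data])           # posterior precision
    mean = np.array([y.sum() + mu / tau2 for y in data]) / prec    # posterior mean
    thetas = rng.normal(mean, np.sqrt(1.0 / prec))
    # Sample the global mean given the current group means.
    mu = rng.normal(thetas.mean(), np.sqrt(tau2 / len(thetas)))
    draws.append(thetas)

posterior_mean = np.mean(draws[500:], axis=0)  # discard burn-in
print(posterior_mean.round(2))
```

In the dissertation's setting the same scheme would cycle over variable-selection indicators or HMM state paths instead of group means.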
22

Panei, Francesco Paolo. "Advanced computational techniques to aid the rational design of small molecules targeting RNA". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS106.

Abstract:
RNA molecules have recently gained huge relevance as therapeutic targets. The direct targeting of RNA with small-molecule drugs stands out for its wide applicability to different classes of RNAs. Despite this potential, the field is still in its infancy and the number of available RNA-targeted drugs remains limited. A major challenge is the highly flexible and elusive nature of RNA targets. Nonetheless, RNA flexibility also presents unique opportunities that could be leveraged to enhance the efficacy and selectivity of newly designed therapeutic agents. To this end, computer-aided drug design techniques emerge as a natural and comprehensive approach. However, existing tools do not fully account for the flexibility of RNA. This PhD project aims to build a computational framework for the rational design of compounds targeting RNA. The first essential step for any structure-based approach is the analysis of the available structural knowledge. However, a comprehensive, curated, and regularly updated repository for the scientific community was lacking. To fill this gap, I created HARIBOSS ("Harnessing RIBOnucleic acid - Small molecule Structures"), a database of all the experimentally determined structures of RNA-small molecule complexes retrieved from the PDB. HARIBOSS is available via a dedicated web interface (https://hariboss.pasteur.cloud) and is regularly updated with all the structures resolved by X-ray, NMR, and cryo-EM in which ligands with drug-like properties interact with RNA molecules. Each HARIBOSS entry is annotated with the physico-chemical properties of the ligands and RNA pockets. The constantly updated HARIBOSS repository will facilitate the exploration of drug-like compounds known to bind RNA, the analysis of ligand and pocket properties and, ultimately, the development of in silico strategies to identify RNA-targeting small molecules.
At the time of its release, it was possible to highlight that the majority of RNA binding pockets are unsuitable for interactions with drug-like molecules, owing to their lower hydrophobicity and greater solvent exposure compared to protein binding sites. However, this picture emerges from a static depiction of RNA, which may not fully capture its interaction mechanisms with small molecules. In a broader perspective, more advanced computational techniques were needed to account effectively for RNA flexibility in the characterization of potential binding sites. In this direction, I implemented SHAMAN, a computational technique to identify potential small-molecule binding sites in RNA structural ensembles. SHAMAN enables the exploration of the target RNA conformational landscape through atomistic molecular dynamics. Simultaneously, it efficiently identifies RNA pockets using small probe compounds whose exploration of the RNA surface is accelerated by enhanced-sampling techniques. In a benchmark encompassing diverse large, structured riboswitches as well as small, flexible viral RNAs, SHAMAN accurately located experimentally resolved pockets, ranking them as preferred probe hotspots. Notably, SHAMAN's accuracy was superior to that of other tools working on static RNA structures in the realistic drug-discovery scenario where only apo structures of the target are available. This establishes SHAMAN as a robust platform for future drug design endeavors targeting RNA with small molecules, especially considering its potential applicability in virtual screening campaigns. Overall, my research contributes to enhancing our understanding and utilization of RNA as a target for small-molecule drugs, paving the way for more effective drug design strategies in this evolving field.
23

Abdalla, Hassan Hamed. "An intelligent multi-controller structure : a knowledge-based approach". Thesis, University of Newcastle Upon Tyne, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387245.

24

Greco, T. "NETWORK META-ANALYSIS: A NOVEL APPROACH BASED ON A HIERARCHICAL DATA STRUCTURE". Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/344198.

Abstract:
INTRODUCTION
Meta-analysis is a powerful tool to cumulate and summarize the knowledge in a research field through statistical instruments, and to identify the overall measure of a treatment's effect by combining several study-specific results. However, it is a controversial tool, because even small violations of certain rules can lead to misleading conclusions. Pooling data through meta-analysis can create problems, such as non-linear correlations, multifactorial rather than unifactorial effects, limited coverage, or inhomogeneous data that fails to connect with the hypothesis. When head-to-head treatment comparisons are not available or conclusive, the limitations of standard (i.e. pairwise) meta-analyses can be overcome by network meta-analysis (NMA), which can provide estimates of treatment efficacy or safety for multiple treatment regimens. Different treatment strategies are analyzed by statistical inference methods rather than simply summing up trials that evaluated the same intervention compared to another intervention, standard care, or placebo. If a first trial compares drug A to drug B, showing that drug A is significantly superior to drug B, and a second trial investigates the same or a similar patient population comparing drug B versus drug C (demonstrating that drug B is equivalent to drug C), NMA may allow one to infer that drug A is also potentially superior to drug C for this given patient population, even though there was no direct test of drug A against drug C.
CONTENTS
In this thesis we provide and discuss methods to overcome the limits of standard (univariate) meta-analysis, focusing on the ability to cope with multiple treatments and to deal with correlated data, where correlation can derive from multiple endpoints, time-varying responses or clustered observations.
In the first chapter we explore the principal steps (from writing a prospective protocol of analysis to the interpretation of results) in order to minimize the risk of conducting a mediocre meta-analysis and to support researchers in accurately evaluating published findings. The second chapter is an overview of the conceptual and practical issues of a network meta-analysis. We start from general considerations on network meta-analysis to specifically appraise how to collect study data, structure the analytical network, and specify the requirements for different models and parameter interpretations. Specifically, we outline the key steps, from literature search to sensitivity analysis, necessary to perform a valid network meta-analysis on binomial data. In the third part of this work, we focus our attention on data which can be analyzed with a binomial model, applying the Bayesian hierarchical approach and using Markov chain Monte Carlo methods. We also apply this analytical approach to a case study on the beneficial effects of anesthetic agents in order to further clarify the statistical details of the models, diagnostics, and computations. We present a practical guide with the actual WinBUGS and SAS codes to allow transparency and ease of replication of all steps that are required when carrying out such quantitative syntheses. In the fourth chapter we propose an alternative frequentist approach to estimate consistency and inconsistency models for a network meta-analysis. We discuss the multilevel network meta-analysis, which includes a three-level data structure: subjects within studies at the first level, studies within study designs at the second level, and design configurations at the third level.
We discuss multilevel modeling, which may be carried out within widely available statistical programs such as SAS software, and we compare the results of a published Bayesian network meta-analysis on a binary endpoint which examines the effect on mortality of desflurane, isoflurane, sevoflurane, and total intravenous anaesthetics at the longest follow-up available. In the final chapter we compare the Bayesian and the novel frequentist multilevel approach in performing network meta-analysis on publicly available data, and we investigate the descriptive characteristics that may contribute to decreasing or increasing the potential difference between the estimates derived from the two approaches. The two approaches were compared in terms of the difference between the pooled estimates or their standardized values, and of the Euclidean distance.
BAYESIAN NETWORK META-ANALYSIS
Suppose that J trials provide mixed comparisons among K treatments and that a is the trial-specific reference treatment. The random-effect model is defined by:
y_ja = β_0 + e_ja, for j = 1, 2, ..., J; a = 1, 2, ..., K-1
y_jk = β_0j + δ_j,ak + e_jk, for j = 1, 2, ..., J; a = 2, 3, ..., K [abstract truncated]
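The indirect inference described in this abstract (drug A versus drug C through the common comparator B) can be sketched with the standard Bucher adjusted indirect comparison; this is a deliberate simplification of the hierarchical model the thesis develops, and the numbers below are hypothetical:

```python
import math

def indirect_comparison(d_ab, se_ab, d_cb, se_cb):
    """Adjusted indirect comparison (Bucher): estimate the A-vs-C effect
    from an A-vs-B trial and a C-vs-B trial sharing the common comparator B."""
    d_ac = d_ab - d_cb                          # effects relative to B cancel out
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)  # variances add (independent trials)
    return d_ac, se_ac

# Hypothetical log odds ratios: A vs B = -0.5 (A better), C vs B = +0.1.
d_ac, se_ac = indirect_comparison(-0.5, 0.2, 0.1, 0.25)
print(round(d_ac, 2), round(se_ac, 2))  # -0.6 0.32
```

Note that the indirect estimate is less precise than either direct comparison, which is one motivation for the full hierarchical NMA treatment.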
25

Hossain, Muhammad Shazzad. "New mechanism-based design approaches for spudcan foundations in clay". University of Western Australia. School of Civil and Resource Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0103.

Abstract:
[Truncated abstract] Three-legged mobile jack-up rigs supported on spudcan foundations are used to perform most offshore drilling in shallow to moderate water depths, and are now capable of operating in water depths up to 130 m. With the gradual move towards heavier rigs in deeper water, and continuing high accident rates during preloading of the spudcan foundations, appraisal of the performance and safety of jack-up rigs has become increasingly important. A crucial aspect of this is to improve understanding of the mechanisms of soil flow around spudcan foundations undergoing continuous large penetration, and to provide accurate estimates of spudcan penetration resistance, avoiding excessive conservatism. Spudcan foundations undergo progressive penetration during preloading, contrasting with onshore practice where a footing is placed at the base of a pre-excavated hole or trench. However, spudcan penetration is generally assessed within the framework used for onshore foundations, considering the bearing resistance of spudcans pre-placed at different depths within the soil profile. The lack of accurate design approaches that take proper account of the nature of spudcan continuous penetration, which is particularly important in layered soil profiles, is an important factor in the high rate of accidents. ... It was found that when a spudcan penetrated into single layer clay, there were three distinct penetration mechanisms: during initial penetration, soil flow extended upwards to the surface leading to surface heave and formation of a cavity above the spudcan; with further penetration, soil began to flow back gradually onto the top of the spudcan; during deep penetration, soil back-flow continued to occur while the initial cavity remained unchanged. 
For spudcan penetration in stiff-over-soft clay, four interesting aspects of the soil flow mechanisms were identified: (a) vertically downward motion of the soil and consequent deformation of the layer interface; (b) trapping of the stronger material beneath the spudcan, with this material being carried down into the underlying soft layer; (c) delayed back-flow of soil around the spudcan into the cavity formed above the spudcan; (d) eventual localised flow around the embedded spudcan, surrounded by strong soil. At some stage during continuous spudcan penetration, the soil starts to flow back into the cavity above the spudcan. The resulting back-flow provides a seal above the penetrating spudcan and limits the cavity depth. It was shown that the current offshore design guidelines are based on the wrong criterion for when back-flow occurs. New design charts with robust expressions were developed to estimate the point of back-flow and hence the cavity depth above the installed spudcan. Load-penetration responses were presented in terms of normalised soil properties and geometry factors for both single layer and two-layer clay profiles, taking full account of the observed flow mechanisms. Further, guidelines were suggested to evaluate the likelihood and severity of spudcan punch-through failure in layered clays. Finally, the effect of strain-rate and strain-softening was examined, in an attempt to model real soil behaviour more closely. Adjustment factors were proposed to modify the design approaches developed on the basis of ideal elastic-perfectly plastic soil behaviour.
26

Sadawi, Noureddin. "A rule-based approach for recognition of chemical structure diagrams". Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/4325/.

Abstract:
In the chemical literature, much information is given in the form of diagrams depicting chemical structures. In order to access this information electronically, diagrams have to be recognized and translated into a processable format. Although a number of approaches have been proposed for the recognition of molecule diagrams in the literature, they traditionally employ procedural methods with limited flexibility and extensibility. This thesis presents a novel approach that models the principal recognition steps for molecule diagrams in a strictly rule-based system. We develop a framework that enables the definition of a set of rules for the recognition of different bond types and arrangements as well as for resolving possible ambiguities. This allows us to view the diagram recognition problem as a process of rewriting an initial set of geometric artefacts into a graph representation of a chemical diagram without the need to adhere to a rigid procedure. We demonstrate the flexibility of the approach by extending it to capture new bond types and compositions. In experimental evaluation we show that an implementation of our approach outperforms the currently available leading open-source system. Finally, we discuss how our framework could be applied to other automatic diagram recognition tasks.
27

Abdul, Shukor Shazmin. "A geometrical-based approach to recognise structure of complex interiors". Thesis, University of Warwick, 2013. http://wrap.warwick.ac.uk/55722/.

Abstract:
3D modelling of building interiors has gained a lot of interest recently, specifically since the rise of Building Information Modeling (BIM). A number of methods have been developed in the past; however, most of them are limited to modelling non-complex interiors. 3D laser scanners are the preferred sensor to collect the 3D data; however, the cost of state-of-the-art laser scanners is prohibitive to many. Other types of sensors could also be used to generate the 3D data, but they have limitations, especially when dealing with clutter and occlusions. This research has developed a platform to produce 3D models of building interiors while adapting a low-cost, low-level laser scanner to generate the 3D interior data. The PreSuRe algorithm developed here, which introduces a new pipeline for modelling building interiors, combines novel methods with adapted existing approaches to produce 3D models of various interiors, from sparse rooms to complex interiors with non-ideal geometrical structure that are highly cluttered and occluded. This approach has successfully reconstructed the structure of interiors with above 96% accuracy, even with a high amount of noise and clutter. The time taken to produce the resulting model is almost real-time, compared to existing techniques which may take hours to generate the reconstruction. The produced model is also equipped with semantic information, which differentiates the model from a regular 3D CAD drawing and can be used to assist professionals and experts in related fields.
28

Villemagne, Baptiste. "Conception, synthèse et dévelopement d'inhibiteurs du répresseur transcriptionnel mycobactérien ETHR selon une approche par fragments. Une nouvelle approche dans la lutte contre la tuberculose". Thesis, Lille 2, 2012. http://www.theses.fr/2012LIL2S052/document.

Abstract:
Tuberculosis (TB) remains the leading cause of death due to a single infective agent, with more than 1.5 million people killed each year. In 2011, the World Health Organization (WHO) estimated that one third of the world's population is infected with Mycobacterium tuberculosis, the pathogen responsible for the disease. This phenomenon may be due to an explosive escalation of TB incidence that occurred in the 1980s, driven by the emergence of both resistant strains and the HIV epidemic. In 2000, EthR, a mycobacterial transcriptional repressor, was identified as a key modulator of ethionamide (ETH) bioactivation. ETH is one of the main second-line drugs used to treat drug-resistant strains. In 2009, it was shown that co-administration of ETH and drug-like inhibitors of EthR was able to boost ETH activity threefold in a mouse model of TB infection, thus validating the target for a new therapeutic strategy. This work deals with the discovery and optimisation of new EthR inhibitors, based on a small molecule, called a "fragment", co-crystallized with the protein. We combined in silico screening, in vitro evaluation of the hit compounds, study of co-crystal structures and medicinal chemistry to develop three complementary approaches called "fragment growing", "fragment merging" and "fragment linking" that led to the discovery of very potent inhibitors. Based on these results, we are currently selecting a potential candidate for new in vivo experiments.
29

Pekilis, Barry. "An Ontology-Based Approach To Concern-Specific Dynamic Software Structure Monitoring". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2836.

Abstract:
Software reliability has not kept pace with computing hardware. Despite the use of reliability improvement techniques and methods, faults remain that lead to software errors and failures. Runtime monitoring can improve software reliability by detecting certain errors before failures occur. Monitoring is also useful for online and electronic services, where resource management directly impacts reliability and quality. For example, resource ownership errors can accumulate over time (e.g., as resource leaks) and result in software aging. Early detection of errors allows more time for corrective action before failures or service outages occur. In addition, the ability to monitor individual software concerns, such as application resource ownership structure, can help support autonomic computing for self-healing, self-adapting and self-optimizing software.

This thesis introduces ResOwn - an application resource ownership ontology for interactive session-oriented services. ResOwn provides software monitoring with enriched concepts of application resource ownership borrowed from real-world legal and ownership ontologies. ResOwn is formally defined in OWL-DL (Web Ontology Language Description Logic), verified using an off-the-shelf reasoner, and tested using the call processing software for a small private branch exchange (PBX). The ResOwn Prime Directive states that every object in an operational software system is a resource, an owner, or both simultaneously. Resources produce benefits. Beneficiary owners may receive resource benefits. Nonbeneficiary owners may only manage resources. This approach distinguishes resource ownership use from management and supports the ability to detect when a resource's role-based runtime capacity has been exceeded.

This thesis also presents a greybox approach to concern-specific, dynamic software structure monitoring, including a monitor architecture, a greybox interpreter, and algorithms for deriving a monitoring model from a monitored target's formal specifications. The target's requirements and design are assumed to be specified in SDL, a formalism based on communicating extended finite state machines. Greybox abstraction, applicable to both behavior and structure, provides direction on which parts, and how much, of the target to instrument, and what types of resource errors to detect.

The approach was manually evaluated using a number of resource allocation and ownership scenarios. These scenarios were obtained by collecting actual call traces from an instrumented PBX. The results of an analytical evaluation of ResOwn and the monitoring approach are presented in a discussion of key advantages and known limitations. Conclusions and recommended future work are discussed at the end of the thesis.
Estilos ABNT, Harvard, Vancouver, APA, etc.
30

Mirzoev, Alexander. "Multiscale simulations of soft matter: systematic structure-based coarse-graining approach". Doctoral thesis, Stockholms universitet, Institutionen för material- och miljökemi (MMK), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-94756.

Texto completo da fonte
Resumo:
The soft matter field considers a wide class of objects such as liquids, polymers, gels, colloids, liquid crystals and biological macromolecules, which have complex internal structure and conformational flexibility, leading to phenomena and properties that span multiple spatial and time scales. Existing computer simulation methods are able to cover these scales, but with different resolutions, and the ability to link them together in a multiscale simulation is highly desirable. The present work addresses a systematic multiscaling approach for soft matter studies, using structure-based coarse-graining (CG) methods such as iterative Boltzmann inversion and inverse Monte Carlo. A new software package, MagiC, implementing these methods is introduced. The software, developed for effective CG potential derivation, is applied to ionic water solution and to a water solution of DMPC lipids, and the thermodynamic transferability of the obtained potentials is studied. The effective inter-ionic, solvent-mediated potentials derived for NaCl successfully reproduce structural properties obtained in explicit solvent simulation, which indicates the promise of structure-based coarse-graining for studies of ion-DNA and other polyelectrolyte systems. The potentials have a temperature dependence dominated mostly by the electrostatic long-range part, which can be described by a temperature-dependent effective dielectric permittivity, leaving the short-range part of the potential thermodynamically transferable. For CG simulations of lipids, a 10-bead water-free model of dimyristoylphosphatidylcholine is introduced. Four atomistic reference systems, having different lipid/water ratios, are used to derive the effective bead-bead potentials, which are used for subsequent coarse-grained simulations of lipid bilayers.
A significant influence of the lipid/water ratio in the reference system on the properties of the simulated bilayers is noted; however, it can be mitigated by additional angle-bending interactions. At the same time, the obtained bilayers have a stable structure with correct density profiles. The model provides acceptable agreement between the properties of the coarse-grained and atomistic bilayers, a liquid crystal-gel phase transition with temperature change, as well as realistic self-aggregation behavior, which results in the formation of a bilayer, bicelle or vesicle from a dispersed lipid solution in a large-scale simulation.
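The iterative Boltzmann inversion mentioned above refines a tabulated CG potential so that the radial distribution function g(r) of the coarse-grained simulation converges to the atomistic target. A minimal numpy sketch of one update step follows; the damping factor is a common stabilization device, an assumption here rather than a detail taken from the thesis:

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, damping=0.2):
    """One iterative Boltzmann inversion step on a tabulated potential:
        V_{n+1}(r) = V_n(r) + damping * kT * ln(g_n(r) / g_target(r)).
    All arrays are tabulated on the same r-grid; bins where either RDF is
    zero are left unchanged to avoid log(0)."""
    g_current = np.asarray(g_current, float)
    g_target = np.asarray(g_target, float)
    mask = (g_current > 0) & (g_target > 0)
    dV = np.zeros_like(np.asarray(V, float))
    dV[mask] = kT * np.log(g_current[mask] / g_target[mask])
    return V + damping * dV
```

Where the simulated g(r) exceeds the target, the potential becomes more repulsive; where it falls short, more attractive; the loop of simulate-measure-update is repeated until the RDFs match.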

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 4: Submitted. 

 

Estilos ABNT, Harvard, Vancouver, APA, etc.
31

Chimni, Jasbinder Singh. "An approach to computer-based support for work breakdown structure development". Carleton University dissertation, Management Studies, Ottawa, 1989.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
32

Di, Lascio Francesca Marta Lilja <1979&gt. "Analyzing the dependence structure of microarray data: a copula–based approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/670/1/Tesi_Di_Lascio_Francesca_Marta_Lilja.pdf.

Texto completo da fonte
Resumo:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not previously been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K–means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model–based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K–means and hierarchical bottom–up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying conditions such as the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter, and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, together with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model–based clustering using different measures of performance, such as the percentage of well–identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log–likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
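A minimal sketch of the copula machinery behind such an algorithm: rank-transform each margin to pseudo-observations, then score a candidate pairing of observations by the maximized copula log-likelihood. The one-parameter Clayton family and the grid search below are illustrative assumptions, not the CoClust R interface:

```python
import numpy as np

def pseudo_obs(x):
    """Rank-transform a sample to (0,1), as in the Inference-for-Margins step."""
    ranks = np.argsort(np.argsort(x)) + 1.0
    return ranks / (len(x) + 1.0)

def clayton_loglik(u, v, theta):
    """Log-density of the bivariate Clayton copula, summed over observations:
    c(u,v;t) = (1+t) * (u*v)^(-(t+1)) * (u^-t + v^-t - 1)^(-(2+1/t))."""
    s = u**(-theta) + v**(-theta) - 1.0
    return np.sum(np.log(1.0 + theta)
                  - (theta + 1.0) * (np.log(u) + np.log(v))
                  - (2.0 + 1.0 / theta) * np.log(s))

def max_loglik(x, y, thetas=np.linspace(0.1, 10.0, 100)):
    """Maximized copula log-likelihood for one candidate pairing of
    observations; CoClust-style algorithms compare this score across
    candidate allocations and keep the best."""
    u, v = pseudo_obs(x), pseudo_obs(y)
    lls = [clayton_loglik(u, v, t) for t in thetas]
    i = int(np.argmax(lls))
    return thetas[i], lls[i]
```

A strongly dependent pairing scores a much higher maximized log-likelihood than an independent one, which is exactly the signal a copula-based clustering criterion exploits.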
Estilos ABNT, Harvard, Vancouver, APA, etc.
33

Di, Lascio Francesca Marta Lilja <1979&gt. "Analyzing the dependence structure of microarray data: a copula–based approach". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/670/.

Texto completo da fonte
Resumo:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not previously been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. The attention focuses on the K–means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model–based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K–means and hierarchical bottom–up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying conditions such as the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter, and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, together with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model–based clustering using different measures of performance, such as the percentage of well–identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log–likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
Estilos ABNT, Harvard, Vancouver, APA, etc.
34

Güell, Riera Oriol. "A network-based approach to cell metabolism: from structure to flux balances". Doctoral thesis, Universitat de Barcelona, 2015. http://hdl.handle.net/10803/292364.

Texto completo da fonte
Resumo:
The thesis "A network-based approach to cell metabolism: from structure to flux balances" shows how a vision of cell metabolism as a whole allows new mechanisms and responses to be unveiled that are impossible to reach by traditional reductionist approaches. Different lines of research have been pursued, and each one has yielded new insights into the properties of the cell metabolism of three organisms: Mycoplasma pneumoniae, Escherichia coli, and Staphylococcus aureus. To do so, tools from complex network science and Systems Biology have been used. The first line of study analyzes how the structure of the metabolic networks of the three organisms responds to perturbations, i.e., when a reaction or a set of reactions is forced to be non-operative. The applied algorithm propagates a structural cascade from an initially inactivated reaction. This study determines that evolutionary pressure favors the ability of efficient metabolic regulation at the expense of losing robustness to reaction failures. The second line of study focuses on the application of Flux Balance Analysis (FBA), a technique that computes the fluxes of all reactions composing a metabolic network, assuming that the biological objective of the organism is to maximize its growth rate. The study of synthetic lethal pairs in E. coli and M. pneumoniae with FBA allows the identification of two protection mechanisms, called plasticity and redundancy. Plasticity acts as a backup mechanism that reorganizes metabolic fluxes, turning on inactive reactions when coessential counterparts are removed, in order to maintain viability in a specific medium. Redundancy corresponds to the simultaneous use of different flux channels, which ensures viability and in addition increases growth. The third part combines FBA with the Disparity Filter in E. coli and M. pneumoniae to obtain metabolic backbones, which are reduced versions of metabolic networks composed of the most relevant connections, relevance being determined by the importance of the chemical fluxes. The disparity filter recognizes metabolic connections that are important for long-term evolution, related to ancestral pathways, and also identifies connections that are important for short-term adaptation, related to pathways whose reactions quickly adapt to external stimuli. The last line of study examines whether the assumption of maximizing the growth rate leads to a representative solution. Although FBA gives a single solution, there exist many other solutions that are chemically feasible but do not maximize growth, and that form part of the whole flux space. This line of study therefore computes all possible solutions, obtaining the whole space of flux solutions of E. coli. The information contained in the whole space of solutions provides an entire map of phenotypes with which to evaluate the behavior and capabilities of metabolism. It is found that the FBA solution is eccentric compared to the mean of solutions. In addition, the whole flux-solution map can be used to calibrate the deviation of FBA from experimental observations. Finally, in the map it is possible to find solutions that perform aerobic fermentation, a process impossible to recover with FBA computations unless extra constraints are used. The obtained results could have medical applications, for example in the study of the metabolism of cancer cells, as a way to investigate how to prevent these cells from proliferating in the human body.
A complete view of cell metabolism, i.e., one taking into account all the reactions that compose it, allows the discovery of new mechanisms and responses that are impossible to obtain with traditional reductionist methods. The study of a complete metabolic network requires tools from Systems Biology and Complex Network Science, and this thesis shows how combining tools from these two fields can reveal new properties of metabolic networks. The metabolic networks of three bacteria have been studied with the following tools: (1) a cascade algorithm, used to study whether metabolic networks can survive the inactivation of particular reactions; (2) Flux Balance Analysis, used to compute the fluxes through the reactions composing the metabolic network under the assumption that the biological objective of the organism is to maximize its growth rate; (3) the Disparity Filter, which yields reduced versions of metabolic networks, facilitating their study and analysis; and (4) Hit-And-Run, which obtains all metabolic solutions regardless of whether they maximize the organism's growth. This thesis shows that the cell metabolism of living organisms has evolved so as to survive the inactivation of its component reactions. Additionally, the metabolic pathways responsible for the evolutionary and adaptive processes occurring in metabolic networks are identified. It is also shown that Flux Balance Analysis gives a flux solution that is not representative of all possible solutions; this does not invalidate the technique, but its assumptions yield a particular solution that is biologically meaningful yet very different from the rest of the solutions. The results obtained in this thesis could be used in medical applications, for example studying the metabolism of cancer cells, with a view to preventing these cells from proliferating in the human body.
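Flux Balance Analysis as described above is a linear program: maximize the biomass flux subject to the steady-state mass balance S·v = 0 and flux bounds. A hedged sketch using scipy follows; the toy two-reaction network in the test is invented for illustration and is not one of the thesis's reconstructions:

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, biomass_idx):
    """Flux Balance Analysis as a linear program:
        maximize v[biomass_idx]
        subject to  S @ v = 0   (steady state, one row per metabolite)
                    lb <= v <= ub
    Returns the optimal flux vector and the growth (biomass) flux."""
    S = np.asarray(S, float)
    n = S.shape[1]
    c = np.zeros(n)
    c[biomass_idx] = -1.0  # linprog minimizes, so negate the objective
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x, -res.fun
```

For instance, with one metabolite A, an uptake reaction producing A and a growth reaction consuming it, the growth flux is pinned to the uptake bound, which is the kind of single optimum whose representativeness the last line of study questions.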
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Hench, Jürgen Christian Hans. "A combined threading and genetic algorithm based approach to predict protein structure". [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976774739.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Bratcher, Holly Bea. "Meningococcal genome dynamics : an allele based, population approach to define lineage structure". Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:08db20f8-2bda-4322-a6bc-5745139bbbad.

Texto completo da fonte
Resumo:
Advances in genome sequencing technologies have rapidly expanded the number of bacterial genome sequences available for study, permitting the emergence of the discipline of population genomics. Bioinformatics platforms were used to exploit this resource by providing data in an easily accessible and uniform format. De novo assembly, combined with gene-by-gene annotation, generated high-quality draft genomes in which the majority of protein-encoding genes were present with high accuracy. The approach catalogued diversity efficiently and was a practical way of interpreting whole-genome sequence data for a large bacterial population. The core and accessory genome of the hyperinvasive meningococcal Lineage 3 was described, and mobile genetic elements and previously undescribed proteins were found to shape the lineage's evolution and population structure. Commensal carriage of the meningococcus was examined using temporally paired isolates. Long-term carriage was found, and comparison of the genome pairs revealed a highly conserved set of core genes. The methods used generated novel insights into the biology of the meningococcus and improved our understanding of the whole population structure, not just disease-causing lineages. This work contributes to knowledge of the genomic evolution of bacteria and of population structure within a species.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Hong, Soonyoung. "An effective data mining approach for structure damage identification". The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1194903908.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

SALAM, ABDUL. "Model-based and data-based frequency domain design of fixed structure robust controller: a polynomial optimization approach". Doctoral thesis, Politecnico di Torino, 2022. https://hdl.handle.net/11583/2972836.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Brucet, Balmaña Sandra. "Zooplankton structure and dynamics in Mediterranean marshes (Empordà Wetlands): a size-based approach". Doctoral thesis, Universitat de Girona, 2003. http://hdl.handle.net/10803/7649.

Texto completo da fonte
Resumo:
The zooplankton community structure (composition, diversity, dynamics and trophic relationships) of Mediterranean marshes has been analysed by means of a size-based approach.
In temporary basins, the shape of the biomass-size spectra is related to the hydrological cycle. Linear spectra are more frequent in flooding situations, when nutrient input causes population growth of small-sized organisms, more than compensating for the effect of competitive interactions. During confinement conditions, the scarcity of food decreases zooplankton growth and increases intra- and interspecific interactions among zooplankton organisms, which favour the largest sizes and thus lead to the appearance of curved spectra.
Temporary and permanent basins have a similar taxonomic composition, but the latter have higher species diversity, a more simplified temporal pattern and a size distribution dominated mainly by smaller sizes. In permanent basins, zooplankton growth is conditioned not only by the availability of resources but also by the variable predation of planktivorous fish, so that the temporal variability of the spectra may also be a result of temporal differences in fish predation.
Size diversity seems to be a better indicator of the degree of community structure than species diversity. The tendency of size diversity to increase during succession makes it useful for discriminating between succession stages, something that cannot be achieved by analysing species diversity alone, since species diversity can be high both under large, frequent disturbances and under small, rare ones.
Differences in amino acid composition found among the stages of copepod species indicate a gradual change in diet during the life cycle of these copepods, providing evidence of food-niche partitioning during ontogeny, whereas Daphnia species show a relatively constant amino acid composition. There is a relationship between the degree of trophic niche overlap among the stages of the different species and nutrient concentration. Copepods, which have low trophic niche overlap among stages, are dominant in food-limited environments, probably because trophic niche partitioning during development allows them to reduce intraspecific competition between adults, juveniles and nauplii. Daphnia species are only dominant in water bodies or periods with high productivity, probably due to the high trophic niche overlap between juveniles and adults. These findings suggest that, in addition to the effects of interspecific competition, predation and abiotic factors, intraspecific competition might also play an important role in structuring zooplankton assemblages.
The structure of the zooplankton community of the Empordà wetlands (species composition, dynamics, diversity and trophic relationships) has been studied using a size-based approach.
The approach is based on modelling the biomass-size spectrum of the zooplankton community with the Pareto distribution. The shape of the zooplankton biomass-size spectrum was observed to change with environmental conditions: linear spectra are more frequent during water inflow, since nutrient inputs drive the growth of small-sized organisms and thus overcompensate the effect of competitive interactions. Curved spectra are more frequent under confinement conditions, when resources are scarce and the ecological interactions among organisms become more relevant, so that large-sized species are favoured over small ones.
The zooplankton communities of the different marsh basins have a similar taxonomic composition but differ in species diversity, seasonal pattern and size distribution. In the seasonal pattern of the temporary basins, six situations can be distinguished, conditioned by the hydrological cycle and dominated by the following species: Synchaeta spp., Diacyclops bicuspidatus, Eurytemora velox, Calanipeda aquae-dulcis, Cletocamptus confluens and Brachionus plicatilis. The permanent basin, although more diverse than the temporary ones, has a simpler seasonal pattern, with only two situations: that of Synchaeta spp. and that of C. aquae-dulcis. This reduced seasonal pattern and a size distribution dominated mainly by small organisms are explained by fish predation pressure in this basin. Thus, the variation in the shape of the biomass-size spectrum of the zooplankton communities of the permanent basin is related not only to the hydrological cycle but also to fish predation pressure.
The Pareto distribution can be used to compute a size diversity index (μs'). For the zooplankton community, size diversity proved to be a better indicator of the degree of community structure than species diversity, whose increases are often not due to a highly structured community. Its tendency to increase along succession means that size diversity can discriminate between successional stages, which is not possible with species diversity, since the latter can reach high values both under large, frequent disturbances and under small, rare ones. In temporary basins, high size diversity values coincide with periods dominated by a calanoid species, which represent the most stable situations in these basins.
Amino acid composition (AAC) analysis shows that the copepod species dominating the Empordà wetlands undergo a gradual change in biochemical composition along their ontogeny. These differences in AAC among stages are not due to phylogenetic differences or environmental conditions, but to variations in diet. The different copepod species thus show trophic niche partitioning among their developmental stages. The Daphnia species, in contrast, show a relatively constant AAC during development, indicating that all stages feed on the same resource, i.e., juveniles and adults overlap in trophic niche. The relationship found between the nutrient concentration of the basins and the degree of overlap among stages of the dominant species shows that trophic niche partitioning between juveniles and adults is a possible mechanism for avoiding intraspecific competition. Copepods, with low overlap among stages, dominate in food-limited environments, since trophic niche partitioning during development allows them to reduce competition for food among stages. In Daphnia species, the high overlap between juveniles and adults restricts them to basins or periods of high productivity, so as to avoid intraspecific competition. Intraspecific competition therefore plays an important role in structuring the zooplankton community, together with the two other commonly cited factors, predation and interspecific competition.
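The Pareto-based size spectrum above admits a closed-form maximum-likelihood fit of the shape exponent; a minimal sketch follows (the function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def pareto_mle(sizes, x_min=None):
    """Maximum-likelihood estimate of the Pareto shape exponent for a
    body-size sample above a lower cutoff x_min:
        alpha_hat = n / sum(ln(x_i / x_min)).
    A steeper (larger) alpha means the spectrum is dominated by small sizes."""
    sizes = np.asarray(sizes, dtype=float)
    if x_min is None:
        x_min = sizes.min()
    return len(sizes) / np.sum(np.log(sizes / x_min))
```

Comparing fitted exponents between flooding and confinement samples is one simple way to quantify the shift between small-size-dominated and large-size-favoured spectra.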
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Boulkeroua, Wassila Abdelli. "The application of the fragment-based screening approach to RmlA protein and PA1645 structure". Thesis, University of St Andrews, 2013. http://hdl.handle.net/10023/4477.

Texto completo da fonte
Resumo:
P. aeruginosa is a serious human bacterial pathogen. This thesis describes attempts to use structural biology to identify new starting points for drugs against P. aeruginosa. A number of fragment-based screening techniques were used in order to identify potential inhibitors of the P. aeruginosa RmlA protein, the first enzyme in the L-rhamnose pathway. A 500-compound "Rule of 3" fragment library (Maybridge) was investigated. The first approach was the application of Differential Scanning Fluorimetry (DSF) to detect ligands that bind and stabilize the RmlA protein. The stabilisation of RmlA was determined by thermal unfolding in the presence of each of the 500 compounds; 21 of these compounds were found to increase the protein's stability. The library was then screened by NMR spectroscopy for binding to RmlA. Two techniques were evaluated: STD and WaterLOGSY. 106 compounds gave positive results in both NMR experiments. These hits were then tested in a simple STD competition-binding experiment with dTTP, a natural RmlA substrate, in order to identify those binding at the active or allosteric site; 21 of the 106 compounds were observed to compete with dTTP. The results were compared to those of the DSF screening. Compounds that tested positive both in the dTTP competition-binding STD experiment and in the DSF screening were tested for their ability to inhibit RmlA in a biological assay. A coupled enzyme assay was used to monitor RmlA activity. Only one compound, 3-pyridin-3-ylaniline, showed significant inhibition of the enzyme activity. The PA1645 protein from P. aeruginosa has been identified as essential. The protein was overexpressed, purified and crystallised. Data were collected at Diamond on beamline I03 and phases were determined by S-SAD at a wavelength of 1.6 Å. Final coordinates have been deposited in the Protein Data Bank under entry code 2XU8. The structure has 3 molecules in the asymmetric unit.
There is some ambiguity as to the validity of the proposed trimeric arrangement, with results from solution and crystal disagreeing. In summary, a fragment-based screening approach was applied to the RmlA protein, using the DSF technique, a number of ligand-based NMR experiments and a coupled enzyme biological assay; 3-pyridin-3-ylaniline was the only compound that showed significant inhibition of the enzyme activity. The structure of PA1645 from P. aeruginosa has been solved. This work will help the design of new drugs to combat multi-drug-resistant P. aeruginosa and MTB.
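The DSF screen above ranks fragments by the thermal shift they induce in the target. A minimal sketch of extracting an apparent Tm from a melt curve as the temperature of maximum dF/dT follows; real pipelines fit or smooth the curve first, so this is an illustration of the principle rather than the thesis's protocol:

```python
import numpy as np

def melting_temperature(temps, fluor):
    """Apparent Tm from a DSF melt curve: the temperature at which the
    first derivative dF/dT of the fluorescence signal is maximal."""
    temps = np.asarray(temps, float)
    dF = np.gradient(np.asarray(fluor, float), temps)
    return temps[int(np.argmax(dF))]

def thermal_shift(temps, fluor_apo, fluor_holo):
    """Delta-Tm = Tm(protein + fragment) - Tm(protein alone).
    A stabilizing fragment raises Tm, giving a positive shift."""
    return (melting_temperature(temps, fluor_holo)
            - melting_temperature(temps, fluor_apo))
```

Fragments are then ranked by delta-Tm, and those above a chosen threshold (21 compounds in the screen above) are carried forward as stabilizers.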
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

O'Leary, Brian. "A Vertex-Based Approach to the Statistical and Machine Learning Analyses of Brain Structure". University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1576254162111087.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

ZHU, SHICHAO. "RECURSIVE MULTI-MODEL UPDATING OF BUILDING STRUCTURE: A NEW SENSITIVITY BASED FINITE ELEMENT APPROACH". Doctoral thesis, Politecnico di Torino, 2016. http://hdl.handle.net/11583/2643111.

Texto completo da fonte
Resumo:
An invaluable tool for structural health monitoring and damage detection, parametric system identification through model updating is an inverse problem, affected by several kinds of modelling assumptions and measurement errors. By minimizing the discrepancy between the measured data and the simulated response, traditional model-updating techniques identify one single optimal model that behaves similarly to the real structure. Due to several sources of error, this mathematical optimum may be far from the true solution and lead to misleading conclusions about the structural state. Instead of merely locating the global minimum, the generation of several alternatives should therefore be preferred, capable of expressing near-optimal solutions while being as different as possible from each other in physical terms. The present work accomplishes this goal through a new recursive, direct-search, multi-model updating technique, where multiple models are first created and separately solved for their respective minima, and then a selection of quasi-optimal alternatives is retained and classified through data mining and clustering algorithms. The main novelty of the approach is the recursive strategy adopted for minimizing the objective function, where convergence towards optimality is sped up by sequentially changing only selected subsets of parameters, depending on their respective influence on the error function. The approach consists of two steps. First, a sensitivity analysis is performed: each input parameter is allowed to vary within a small interval of fractional variation around its nominal value, so that the partial derivatives can be computed numerically. For each parameter, its sensitivities to all the responses are then summed up and used as that parameter's sensitivity indicator. According to the sensitivity indicators, the parameters are divided into a user-specified number of subsets.
Every subset is then updated recursively, in an order determined by the sensitivity indicators.
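The two-step sensitivity analysis described above can be sketched as follows; `model` stands for any function mapping parameters to responses (e.g., an FE model returning natural frequencies), and the rank-and-split grouping is an illustrative assumption rather than the thesis's finite element implementation:

```python
import numpy as np

def sensitivity_indicators(model, params, frac=0.01):
    """Finite-difference sensitivity of each parameter: perturb it by a small
    fractional variation around its nominal value, then sum the magnitudes of
    the resulting response derivatives over all responses. That sum is the
    parameter's sensitivity indicator."""
    params = np.asarray(params, dtype=float)
    base = np.asarray(model(params), dtype=float)
    indicators = np.zeros(len(params))
    for i, p in enumerate(params):
        perturbed = params.copy()
        dp = frac * p if p != 0 else frac
        perturbed[i] = p + dp
        resp = np.asarray(model(perturbed), dtype=float)
        indicators[i] = np.sum(np.abs((resp - base) / dp))
    return indicators

def split_by_sensitivity(indicators, n_subsets):
    """Rank parameters by their indicator and split them into n_subsets
    groups, most sensitive first; each group is then updated in turn."""
    order = np.argsort(indicators)[::-1]
    return [list(chunk) for chunk in np.array_split(order, n_subsets)]
```

Updating the most sensitive subset first concentrates the early iterations on the parameters that move the error function most, which is what speeds up convergence in the recursive scheme.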
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Thomas, Sherine Elizabeth. "Targeting Mycobacterium abscessus infection in cystic fibrosis : a structure-guided fragment-based drug discovery approach". Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289126.

Full text of the source
Abstract:
Recent years have seen the emergence of Mycobacterium abscessus, a highly drug-resistant non-tuberculous mycobacterium that causes life-threatening infections in people with chronic lung conditions such as cystic fibrosis. This opportunistic pathogen is refractory to treatment with standard anti-tuberculosis drugs and most currently available antibiotics, often resulting in accelerated lung-function decline. This project aims to use a structure-guided, fragment-based drug discovery approach to develop effective drugs to treat M. abscessus infections. In the early stage of the project, three bacterial targets were identified based on analysis of the structural proteome of M. abscessus and prior knowledge of M. tuberculosis drug targets, followed by gene-knockout studies to determine target essentiality for bacterial survival. The three M. abscessus targets were then cloned, expressed and purified, and suitable crystallization conditions were identified, leading to the determination of high-resolution structures. Further, a large number of starting fragments hitting the three target proteins were identified using a combination of biophysical screening methods and crystal structures of the complexes. For target 3, PPAT (phosphopantetheine adenylyltransferase), chemical linking of two fragments followed by iterative fragment elaboration yielded two compounds with low-micromolar affinities in vitro. However, these compounds afforded only low inhibitory activity against whole-cell M. abscessus. All starting fragments of target 2, PurC (SAICAR synthase), occupied the ATP indole pocket. Efforts were then made to identify further fragment hits by screening diverse libraries; sub-structure searches based on these initial hits, together with virtual screening, helped to identify potential analogues amenable to further medicinal-chemistry intervention.
Fragment hits of target 1, TrmD (tRNA-(N1G37) methyltransferase), were prioritized, and two chemical series were developed using fragment-growing and fragment-merging approaches. Iterative cycles of fragment elaboration, aided by crystallography and by biophysical and biochemical assays, led to the development of several potential lead candidates with low-nanomolar in vitro affinities. Two such compounds afforded moderate inhibition of M. abscessus and stronger inhibition of M. tuberculosis and S. aureus cultures. Further chemical modifications of these and other compounds are now underway to optimize cellular and in vivo activities, with the aim of ultimately presenting them as early-stage clinical candidates.
44

Nguyen, Khac Duy. "Structural damage identification using experimental modal parameters via correlation approach". Thesis, Queensland University of Technology, 2018. https://eprints.qut.edu.au/117289/2/Khac%20Duy%20Nguyen.pdf.

Full text of the source
Abstract:
This research provides a new damage-identification strategy using experimental modal parameters via a correlation approach. Two damage-identification algorithms based on the modal strain energy-eigenvalue ratio (MSEE) are presented. First, a method using a simplified form of the MSEE, called the geometric modal strain energy-eigenvalue ratio (GMSEE), is developed. Second, the original method is modified to use the full MSEE, which proves more capable of identifying damage when fewer vibration modes are available. The performance of the proposed algorithms has been successfully validated on a numerical model and on experimental models of various scales, from small to large.
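The abstract does not give the MSEE formula itself; as a loose, hypothetical illustration of the general idea behind modal-strain-energy damage indexing, the sketch below compares storey strain energies (normalized by the eigenvalue) between a healthy and a damaged toy shear-building model with unit storey masses. The `damage_index` form is an assumption for illustration, not the thesis's actual MSEE definition:

```python
import numpy as np
from scipy.linalg import eigh

def stiffness(k):
    """Global stiffness of an n-storey shear building with storey stiffnesses k."""
    n = len(k)
    K = np.zeros((n, n))
    for j in range(n):
        K[j, j] += k[j]
        if j + 1 < n:
            K[j, j] += k[j + 1]
            K[j, j + 1] -= k[j + 1]
            K[j + 1, j] -= k[j + 1]
    return K

def element_energy(phi, k, j):
    """Modal strain energy stored in storey j for mode shape phi."""
    drift = phi[j] - (phi[j - 1] if j > 0 else 0.0)
    return 0.5 * k[j] * drift ** 2

def damage_index(k_healthy, k_damaged, n_modes=1):
    """Per-storey index: (strain energy / eigenvalue), damaged over healthy."""
    n = len(k_healthy)
    M = np.eye(n)  # unit storey masses, for illustration
    w_h, V_h = eigh(stiffness(k_healthy), M)
    w_d, V_d = eigh(stiffness(k_damaged), M)
    idx = np.zeros(n)
    for m in range(n_modes):  # fundamental mode here; more could be accumulated
        for j in range(n):
            e_d = element_energy(V_d[:, m], k_healthy, j) / w_d[m]
            e_h = element_energy(V_h[:, m], k_healthy, j) / w_h[m]
            idx[j] += e_d / (e_h + 1e-12)
    return idx / n_modes

k0 = np.array([2000.0, 2000.0, 2000.0, 2000.0])
kd = k0.copy()
kd[2] *= 0.7                     # 30% stiffness loss in the third storey
scores = damage_index(k0, kd)
damaged_storey = int(np.argmax(scores))   # peaks at the softened storey
```

The softened storey attracts a larger interstorey drift, so its energy ratio stands out, which is the correlation idea the abstract exploits.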
45

Ashkani, Zadeh Kianosh. "Seismic analysis of the RC integral bridges using performance-based design approach including soil structure interaction". Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/45407.

Full text of the source
Abstract:
Bridges in high-seismic-risk zones are designed and built to withstand damage when subjected to earthquakes. However, there have been cases of bridge collapse around the world due to design flaws in the last few decades. To avoid failure and minimize seismic risk, the collapse issue should be appropriately addressed in the next generation of bridge design codes. One important subject that needs to be addressed is Soil-Structure Interaction (SSI), especially when the supporting soil is soft. In this research, SSI is incorporated within a performance-based engineering framework to assess the behaviour of RC integral bridges. Three-dimensional nonlinear models of three types of integral bridges with different skew angles are built. For each bridge type, two archetype models are constructed, with and without the effect of SSI. CALTRANS springs and multi-purpose dynamic Winkler models are employed to simulate the soil in the SSI simulation. Relative displacement and drift of the abutment backwall and pier columns are taken as the engineering demand parameters (EDPs), and the spectral acceleration of the ground motions is chosen as the intensity measure (IM). Incremental dynamic analysis (IDA) with a set of 20 well-selected ground motions is employed to determine the EDPs and the probability of collapse. The study shows that integral abutment bridge models that consider soil-structure interaction mostly exhibit smaller relative displacement capacity/demand ratios; neglecting SSI can therefore overestimate the relative displacement capacity of the structural components in this type of bridge. In addition, SSI can increase the ductility of the pier columns while decreasing the ductility of the abutments. The Collapse Margin Ratio (CMR) is considered here as the primary parameter characterizing the collapse safety of the structures.
It is found that the probability of collapse of the SSI archetype models is higher than that of their corresponding non-SSI models; consequently, the CMR of each SSI archetype model is smaller than that of its non-SSI counterpart.
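The IDA-based collapse assessment described above follows a standard pattern: fit a lognormal fragility to the collapse intensities of the ground-motion set, then read off the probability of collapse and the CMR. A minimal sketch, with hypothetical collapse intensities and a hypothetical MCE-level spectral acceleration (not values from the thesis):

```python
import math
import statistics

def fragility_from_ida(collapse_sa):
    """Fit a lognormal collapse fragility to IDA collapse intensities (Sa, in g)."""
    logs = [math.log(s) for s in collapse_sa]
    mu = statistics.mean(logs)       # ln(median collapse intensity)
    beta = statistics.stdev(logs)    # record-to-record dispersion
    return math.exp(mu), beta

def p_collapse(sa, median, beta):
    """Probability of collapse at intensity sa under the fitted fragility (lognormal CDF)."""
    z = (math.log(sa) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical collapse intensities from 20 ground motions (g)
ida_sa = [0.8, 0.9, 1.0, 1.1, 1.2, 1.2, 1.3, 1.4, 1.4, 1.5,
          1.5, 1.6, 1.7, 1.8, 1.9, 2.0, 2.1, 2.2, 2.4, 2.6]
median, beta = fragility_from_ida(ida_sa)
sa_mce = 0.9                  # hypothetical MCE-level spectral acceleration (g)
cmr = median / sa_mce         # Collapse Margin Ratio
pc = p_collapse(sa_mce, median, beta)
```

An SSI archetype whose collapse intensities shift downward would yield a smaller `median`, hence a smaller `cmr` and a larger `pc`, matching the trend the abstract reports.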
46

Mohammadhosseini, Ali. "A search for optimal structure of carbon-based porous adsorbents for hydrogen storage : numerical modeling approach". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4736.

Full text of the source
Abstract:
The main goal of the research presented in this thesis has been a search for an optimal carbon-based porous structure capable of achieving the hydrogen storage capacity defined by the US Department of Energy (DOE) for mobile applications, using adsorption at room temperature and at medium-level pressures below 120 bar. The hydrogen is assumed to be stored in a tank filled with adsorbents, to be used in transport applications, mainly fuel-cell-driven vehicles. Known carbon-based adsorbents have low storage capacity; in this work I have therefore defined the basic parameters responsible for the capacity deficiency of such materials. Special attention has been paid to the local pore geometry of the adsorbents. I have investigated the local pore structure of carbon-based adsorbents, and I present the design principles and the hydrogen adsorption capacity of three-dimensional architectures of new carbon frameworks, a promising class of potential hydrogen-storage materials that has not been studied so far. Apart from maximizing the density of hydrogen taken up by this family of structures, I have aimed at characterizing adsorption in this new category of adsorbents. This is hoped to provide guidance on how their physical properties can be designed, or 'tuned', to optimize their storage properties. The obtained results seem to achieve this aim and thus provide a good basis for future research.
47

La Monica, Gabriele. "Correlation between cell line chemosensitivity and protein expression pattern as new approach for the design of targeted anticancer small molecules". Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/573085.

Full text of the source
Abstract:
BACKGROUND AND RATIONALE: Over the past few decades, several databases with significant amounts of biological data related to cancer cells and anticancer agents have been developed (e.g. the National Cancer Institute database, NCI; the Cancer Cell Line Encyclopedia, CCLE; the Genomics of Drug Sensitivity in Cancer portal, GDSC). The huge amount of heterogeneous biological data extractable from these databanks (above all, drug response and protein expression) provides a real foundation for predictive cancer chemogenomics, which investigates the relationships between genomic traits and the response of cancer cells to drug treatment, with the aim of identifying novel therapeutic molecules and targets. Very recently, many computational and statistical approaches have been proposed to integrate and correlate these heterogeneous biological data sequences (protein expression - drug response), with the aim of assigning the putative mechanism of action of anticancer small molecules with unknown biological targets. The main limitation of all these computational methods is the need for experimental drug-response data (after-screening data). From this point of view, the ability to predict in silico the antiproliferative activity of new or untested small molecules against specific cell lines would enable correlations between the predicted drug response and the protein expression of the desired target to be found from the very earliest stages of research. Such an innovative approach would allow compounds whose molecular mechanisms are most likely connected with the target of interest to be selected before the in vitro assays, a critical aid in the design of new targeted anticancer agents. RESULTS: In the present study, we aimed to develop a new computational protocol based on the correlation of drug-activity and protein-expression data to support the discovery of new targeted anticancer agents.
Compared with the approaches reported in the literature, the main novelty of the proposed protocol is the use of predicted antiproliferative activity data instead of experimental data. To this end, in the first phase of the research a new in silico Antiproliferative Activity Predictor (AAP) tool, able to predict the anticancer activity (expressed as GI50) of new or untested small molecules against the NCI-60 panel, was developed. This ligand-based tool, which drew on the research group's consolidated expertise in the manipulation of molecular descriptors, was adequately validated, and the reliability of the prediction was further confirmed by the analysis of an in-house database and the subsequent evaluation of a set of molecules selected by the NCI for the one-dose/five-dose antiproliferative assays. In the second part of the study, a new computational method to correlate drug-activity data with protein-expression data was proposed and evaluated by analysing several case studies of targeted drugs tested by the NCI, confirming the reliability of the proposed method for biological data analysis. In the last part of the project, the proposed correlation approach was applied to the design of new small molecules as selective inhibitors of Cdc25 phosphatase, a well-known protein involved in carcinogenic processes. By means of this innovative approach, integrated with other classical ligand- and structure-based techniques, it was possible to screen a large database of molecular structures and to select those with the optimal relationship to the focused target. In vitro antiproliferative and enzymatic inhibition assays of the selected compounds led to the identification of new, structurally heterogeneous inhibitors of Cdc25 proteins and confirmed the results of the in silico analysis.
CONCLUSIONS: Collectively, the obtained results show that correlating protein-expression patterns with chemosensitivity is an innovative, alternative, and effective method for identifying new modulators of selected targets. In contrast to traditional in silico methods, the proposed protocol allows the selection of molecular structures with heterogeneous scaffolds that are not strictly tied to the binding site and whose physicochemical features may be more suitable for all the pathways involved in the overall mechanism. The biological assays further corroborate the robustness and reliability of this new approach and encourage its application in targeted anticancer drug discovery.
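The core correlation step, comparing a compound's (predicted) sensitivity profile with a candidate target's expression across cell lines, can be illustrated with a minimal sketch. The cell lines, activity values, expression values, and thresholds below are hypothetical, and Spearman rank correlation is one plausible choice of statistic, not necessarily the one used in the thesis:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data across a small panel of cell lines:
# predicted -log(GI50) of one compound, and expression of a candidate target.
cell_lines = ["A549", "HCT-116", "MCF7", "SK-MEL-28", "PC-3", "OVCAR-8"]
predicted_pgi50 = np.array([6.2, 5.1, 7.0, 4.8, 6.6, 5.5])
target_expression = np.array([2.1, 1.0, 2.4, 0.7, 2.8, 1.3])

rho, pval = spearmanr(predicted_pgi50, target_expression)
# A high positive rank correlation suggests the compound's predicted
# sensitivity pattern tracks expression of the candidate target, so it can
# be prioritized before any in vitro assay (hypothetical cut-offs below).
is_candidate = rho > 0.6 and pval < 0.05
```

In the protocol described by the abstract, this comparison would be repeated over a large screened database, retaining only compounds whose predicted response profile correlates with the expression pattern of the focused target (here, Cdc25).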
48

Donnelly, Stephen Kevin. "Ethnic identity redefinition during acquisition of one's ancestral language (Irish) : an approach based on identity structure analysis". Thesis, University of Ulster, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.259582.

Full text of the source
49

McCoy, D. B. "Identity transition in persons undergoing elective interval sterilisation and vasectomy : An approach based on identity structure analysis". Thesis, University of Ulster, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378751.

Full text of the source
50

Ruiz-Gómez, Gloria, John C. Hawkins, Jenny Philipp, Georg Künze, Robert Wodtke, Reik Löser, Karim Fahmy and M. Teresa Pisabarro. "Rational Structure-Based Rescaffolding Approach to De Novo Design of Interleukin 10 (IL-10) Receptor-1 Mimetics". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-215877.

Full text of the source
Abstract:
Tackling protein interfaces with small molecules capable of modulating protein-protein interactions remains a challenge in structure-based ligand design. Particularly arduous are cases in which the epitopes involved in molecular recognition have a non-structured and discontinuous nature. Here, the basic strategy of translating continuous binding epitopes into mimetic scaffolds cannot be applied, and other innovative approaches are required. We present a structure-based rational approach involving a regular-expression syntax, inspired by the well-established PROSITE, to define minimal descriptors of the geometric and functional constraints that signify the functionalities relevant for recognition in protein interfaces of a non-continuous and unstructured nature. These descriptors feed a search engine that explores the currently available three-dimensional chemical space of the Protein Data Bank (PDB) to identify, in a straightforward manner, regular architectures containing the desired functionalities, which can then be used as templates to guide the rational design of small natural-like scaffolds mimicking the targeted recognition site. The application of this rescaffolding strategy to the discovery of natural scaffolds incorporating a selection of functionalities of interleukin-10 receptor-1 (IL-10R1) relevant for its interaction with interleukin-10 (IL-10) has resulted in the de novo design of a new class of potent IL-10 peptidomimetic ligands.
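As a rough illustration of the descriptor idea (not the authors' actual syntax or search engine), the sketch below encodes a descriptor as a set of required functional groups plus pairwise distance windows; a candidate scaffold matches if some assignment of its groups satisfies all constraints. All group names, coordinates, and distance ranges are hypothetical:

```python
import itertools
import math

# Hypothetical descriptor: required functionalities with pairwise distance
# windows (in Å), loosely in the spirit of a PROSITE-like geometric syntax.
descriptor = {
    "groups": ["hbond_donor", "aromatic", "hbond_acceptor"],
    "distances": {(0, 1): (4.0, 7.0), (1, 2): (3.0, 6.0), (0, 2): (6.0, 11.0)},
}

def matches(descriptor, functionalities):
    """functionalities: list of (group_type, (x, y, z)) from a candidate scaffold."""
    wanted = descriptor["groups"]
    # Candidate scaffold groups that could fill each descriptor slot.
    candidates = [[f for f in functionalities if f[0] == g] for g in wanted]
    # Try every assignment of scaffold groups to descriptor slots.
    for combo in itertools.product(*candidates):
        if len({id(f) for f in combo}) < len(combo):
            continue  # one scaffold group cannot fill two slots
        if all(lo <= math.dist(combo[i][1], combo[j][1]) <= hi
               for (i, j), (lo, hi) in descriptor["distances"].items()):
            return True
    return False

# Hypothetical candidate scaffold extracted from a 3D structure.
scaffold = [("hbond_donor", (0.0, 0.0, 0.0)),
            ("aromatic", (5.0, 0.0, 0.0)),
            ("hbond_acceptor", (9.0, 0.0, 0.0))]
```

A search engine in this spirit would evaluate `matches` against functionalities extracted from every PDB entry, returning architectures that present the targeted receptor functionalities in the right geometry.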