
Doctoral dissertations on the topic "Gene mapping – Computer simulation"


Consult the 18 best doctoral dissertations on the topic "Gene mapping – Computer simulation".

An "Add to bibliography" button appears next to every work in the list. Use it, and we will automatically create the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever such details are available in the work's metadata.

Browse doctoral dissertations from many disciplines and compile your bibliography correctly.

1

Kruczkiewicz, Peter. "A comparative genomic framework for the in silico design and assessment of molecular typing methods using whole-genome sequence data with application to Listeria monocytogenes". Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Biological Sciences, c2013, 2013. http://hdl.handle.net/10133/3391.

Abstract:
Although increased genome sequencing efforts have improved our understanding of genomic variability within many bacterial species, there has been limited application of this knowledge to assessing current molecular typing methods and developing novel ones. This thesis reports a novel in silico comparative genomic framework in which the performance of typing methods is assessed on the basis of the discriminatory power of the method and its concordance with a whole-genome phylogeny. Using this framework, we designed a comparative genomic fingerprinting (CGF) assay for Listeria monocytogenes through optimized molecular marker selection. In silico validation and assessment of the CGF assay against two other molecular typing methods for L. monocytogenes (multilocus sequence typing (MLST) and multiple virulence locus sequence typing (MVLST)) revealed that the CGF assay outperformed both. Hence, optimized molecular marker selection can be used to produce highly discriminatory assays with high concordance to whole-genome phylogenies. The framework described in this thesis can be used to assess current molecular typing methods against whole-genome phylogenies and to design the next generation of high-performance molecular typing methods from whole-genome sequence data.
xiii, 100 leaves : ill. ; 29 cm
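
The discriminatory power the abstract refers to is conventionally measured with Simpson's index of diversity (the Hunter-Gaston form). A minimal sketch of that measure, as an illustration rather than the thesis's own code:

    from collections import Counter

    def discriminatory_power(type_assignments):
        """Simpson's index of diversity (Hunter-Gaston form): the probability
        that two randomly drawn isolates receive different types."""
        n = len(type_assignments)
        counts = Counter(type_assignments).values()
        return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

    # Example: 6 isolates assigned to 3 CGF types
    print(discriminatory_power(["A", "A", "B", "B", "B", "C"]))  # 0.7333...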
2

Akhtar, Mahmood (Electrical Engineering & Telecommunications, Faculty of Engineering, UNSW). "Genomic sequence processing: gene finding in eukaryotes". University of New South Wales, Electrical Engineering & Telecommunications, 2008. http://handle.unsw.edu.au/1959.4/40912.

Abstract:
Of the many existing eukaryotic gene finding software programs, none can guarantee accurate identification of genomic protein coding regions and other biological signals central to the pathway from DNA to protein. Eukaryotic gene finding is difficult mainly due to the non-contiguous and non-continuous nature of genes. Existing approaches are heavily dependent on the compositional statistics of the sequences they learn from and are not equally suitable for all types of sequences. This thesis first develops efficient digital signal processing-based methods for the identification of genomic protein coding regions, and then combines the optimum signal processing-based non-data-driven technique with an existing data-driven statistical method in a novel system demonstrating improved identification of acceptor splice sites. Most well-known DNA symbolic-to-numeric representations map the DNA information into three or four numerical sequences, potentially increasing the computational requirement of the sequence analyzer. The proposed mapping schemes, to be used for signal processing-based gene and exon prediction, incorporate DNA structural properties in the representation, in addition to reducing complexity in subsequent processing. A detailed comparison of all DNA representations, in terms of computational complexity and relative accuracy for the gene and exon prediction problem, reveals the newly proposed "paired numeric" to be the best DNA representation. Existing signal processing-based techniques rely mostly on the period-3 behaviour of exons to obtain one-dimensional gene and exon prediction features, and are not well equipped to capture the complementary properties of exonic/intronic regions or to deal with the background noise in detection of exons at the nucleotide level. These issues are addressed in this thesis by proposing six one-dimensional and three multi-dimensional signal processing-based gene and exon prediction features. All features have been evaluated using standard datasets such as Burset/Guigo1996, HMR195, and the GENSCAN test set. This is the first time that different gene and exon prediction features have been compared using substantial databases and nucleotide-level metrics. Furthermore, the first investigation of the suitability of different window sizes for period-3 exon detection is performed. Finally, the optimum signal processing-based gene and exon prediction scheme from our evaluations is combined with a data-driven statistical technique for the recognition of acceptor splice sites. The proposed DSP-statistical hybrid achieves a 43% reduction in false positives over WWAM, as used in GENSCAN.
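
The period-3 behaviour the abstract mentions is the classic spectral signature of coding DNA: a DFT of a nucleotide indicator sequence peaks at frequency N/3 inside exons. A minimal sketch of that idea (our illustration; the thesis's contribution is better numeric mappings and features built on this):

    import numpy as np

    def period3_spectrum(seq, window=351):
        """Slide a window along the DNA string and measure the power at
        period 3, summed over binary indicator sequences for A, C, G, T."""
        k = window // 3  # DFT bin corresponding to period 3
        scores = []
        for start in range(0, len(seq) - window + 1):
            win = seq[start:start + window]
            power = 0.0
            for base in "ACGT":
                x = np.array([1.0 if b == base else 0.0 for b in win])
                power += abs(np.fft.fft(x)[k]) ** 2
            scores.append(power)
        return np.array(scores)  # peaks suggest protein-coding (exonic) regions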
3

Hon, Wing-hong, and 韓永康. "Analysis of DNA shuffling by computer simulation". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B27771027.

4

Luotsinen, Linus Jan. "AUTONOMOUS ENVIRONMENTAL MAPPING IN MULTI-AGENT UAV SYSTEMS". Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4421.

Abstract:
Many researchers and aviation specialists consider UAVs the cutting edge of modern flight technology. This thesis discusses methods for efficient autonomous environmental mapping in a multi-agent domain. An algorithm that emphasizes teamwork, by sharing each agent's local map information and exploration intentions, is presented as a solution to the mapping problem (a sketch of the idea follows this record). General theories on how to model and implement rational autonomous behaviour for UAV agents are presented, and three different human and tactical behaviour modeling techniques are evaluated, of which the author found the CxBR paradigm the most promising. Finally, in order to test and quantify the theories presented in this thesis, a simulation environment was developed. This software allows UAV agents to operate in a visual 3-D environment with mountains, various other terrain types, danger points, and enemies to model unexpected events.
M.S.
Department of Electrical and Computer Engineering, Engineering and Computer Science
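
A sketch of the cooperative idea from the abstract: each agent picks an unexplored frontier cell while penalizing cells another agent has already announced as its exploration intention. The grid representation and names are illustrative, not from the thesis:

    def choose_frontier(agent_pos, frontiers, claimed, penalty=5.0):
        """Pick the cheapest unexplored frontier cell, penalizing cells
        already claimed as another agent's exploration intention."""
        def cost(cell):
            d = abs(cell[0] - agent_pos[0]) + abs(cell[1] - agent_pos[1])
            return d + (penalty if cell in claimed else 0.0)
        return min(frontiers, key=cost)

    # Two agents sharing intentions over a common frontier list
    frontiers = [(2, 3), (8, 1), (5, 5)]
    a1 = choose_frontier((0, 0), frontiers, claimed=set())   # picks (2, 3)
    a2 = choose_frontier((1, 1), frontiers, claimed={a1})    # avoids it, picks (8, 1)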
5

Ayar, Yusuf Yavuz. "Design And Simulation Of A Flash Translation Layer Algorithm". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611995/index.pdf.

Abstract:
Flash memories are widely used as storage media in electronic devices such as USB flash drives, mobile phones and cameras. Flash memory offers a portable, non-volatile design that can be carried anywhere without data loss, and it is durable against temperature and humidity. With all these advantages, flash memory grows more popular by the day. However, it also has disadvantages, such as the erase-before-write restriction and the erase limitation of each individual block: every writable unit must be erased before an update operation, and every block can only be erased a fixed number of times. The Flash Translation Layer (FTL) is the solution to these disadvantages. The FTL is a software module inside the flash memory, working between the operating system and the memory; it mitigates these disadvantages by implementing garbage collection, an address mapping scheme, error correction and many other mechanisms. Various existing FTL designs are reviewed in terms of their advantages and disadvantages. The study aims at designing, implementing and simulating a NAND-type FTL algorithm.
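
A minimal sketch of the page-level address mapping such an FTL maintains (a hypothetical structure, not the thesis's algorithm): because flash pages cannot be overwritten in place, every update is redirected to a free physical page and the old copy is left for garbage collection.

    class PageMapFTL:
        """Toy page-level FTL: out-of-place updates via a logical-to-physical map."""
        def __init__(self, num_pages):
            self.l2p = {}                       # logical page -> physical page
            self.free = list(range(num_pages))  # free physical pages
            self.invalid = set()                # stale pages awaiting garbage collection

        def write(self, lpn, data, flash):
            ppn = self.free.pop(0)              # erase-before-write: never overwrite in place
            flash[ppn] = data
            if lpn in self.l2p:
                self.invalid.add(self.l2p[lpn])  # old copy becomes garbage
            self.l2p[lpn] = ppn

        def read(self, lpn, flash):
            return flash[self.l2p[lpn]]

    flash = [None] * 8
    ftl = PageMapFTL(8)
    ftl.write(0, "v1", flash)
    ftl.write(0, "v2", flash)   # the update goes to a new physical page
    print(ftl.read(0, flash))   # "v2"; one stale page now waits for GC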
6

Hao, Guoliang. "Imaging of the atria and cardiac conduction system : from experiment to computer modelling". Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/imaging-of-the-atria-and-cardiac-conduction-system--from-experiment-to-computer-modelling(3e5dba52-70f3-4fa8-890d-adfe2380086c).html.

Abstract:
Background: Experimental mapping and computer modelling provide important platforms to study the fundamental mechanisms underlying normal and abnormal activation of the heart. However, accurate computer modelling requires detailed anatomical models and needs support and validation from experimental data.
Aims: 1) Construction of detailed anatomical heart models with the cardiac conduction system (CCS). 2) Mapping of the electrical activation sequence in rabbit atria to support and validate computer simulation. 3) Mapping of the spontaneous activity in the atrioventricular ring tissues (AV rings), which consist of nodal-like myocytes and can be a source of atrial tachycardia.
Methods: High-resolution magnetic resonance imaging (MRI) and computed tomography (CT) were used to provide two-dimensional (2D) images for the construction of the detailed anatomical heart models. Immunohistochemistry and Masson's trichrome staining were used to distinguish the CCS in the heart. LabVIEW was used in the development of a multi-electrode mapping system, which was then employed to map the electrical activation sequence of the rabbit atria. A cellular automaton model was used to simulate electrical activation of the rabbit atria.
Results: 1) Three detailed anatomical models were constructed: a three-dimensional (3D) anatomical model of the rabbit heart (the whole of the atria and part of the ventricles), a 3D anatomical model of the rat heart with the CCS and AV rings, and a 3D anatomical model of the human atrioventricular node. 2) A multi-electrode mapping system was developed. 3) The electrical activation sequence of the rabbit atria was mapped in detail using the multi-electrode mapping system, and the conduction velocity in the rabbit atria was measured. The mapping data showed that the coronary sinus and the left superior vena cava do not provide an interatrial conduction route during sinus rhythm in the rabbit heart. 4) Electrical activation of the rabbit atria was simulated with the support of the 3D anatomical model of the rabbit atria and the experimental mapping data. 5) The spontaneous activity in the rat AV rings was mapped using the multi-electrode mapping system.
Conclusions: The detailed anatomical models developed in this study can support accurate computer simulation and can also be used in anatomical teaching and research. The experimental mapping data from the rabbit atria can support and validate computer simulation. The computer simulation study demonstrated the importance of anatomical structure and electrophysiological heterogeneity, and this study also demonstrated that the AV rings could potentially act as ectopic pacemakers.
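
One quantity this kind of multi-electrode mapping yields is conduction velocity, estimated from activation times at electrodes with known spacing. A minimal sketch with illustrative data, not the thesis's analysis code:

    import numpy as np

    def conduction_velocity(positions_mm, activation_ms):
        """Fit a plane t(x, y) = a*x + b*y + c to activation times;
        conduction velocity is 1/|gradient| along the wavefront normal."""
        A = np.column_stack([positions_mm, np.ones(len(positions_mm))])
        (a, b, c), *_ = np.linalg.lstsq(A, activation_ms, rcond=None)
        return 1.0 / np.hypot(a, b)   # mm/ms, i.e. m/s

    electrodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)  # 1 mm grid
    times = np.array([0.0, 2.0, 0.0, 2.0])  # wave travelling along x
    print(conduction_velocity(electrodes, times))  # 0.5 mm/ms = 0.5 m/s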
7

Walter, Matthew R. "Sparse Bayesian information filters for localization and mapping". Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/46498.

Abstract:
Thesis (S.M.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2008.
Includes bibliographical references (p. 159-170).
This thesis formulates an estimation framework for Simultaneous Localization and Mapping (SLAM) that addresses the problem of scalability in large environments. We describe an estimation-theoretic algorithm that achieves significant gains in computational efficiency while maintaining consistent estimates for the vehicle pose and the map of the environment. We specifically address the feature-based SLAM problem in which the robot represents the environment as a collection of landmarks. The thesis takes a Bayesian approach whereby we maintain a joint posterior over the vehicle pose and feature states, conditioned upon measurement data. We model the distribution as Gaussian and parametrize the posterior in the canonical form, in terms of the information (inverse covariance) matrix; a generic update in this form is sketched after this record. When sparse, this representation is amenable to computationally efficient Bayesian SLAM filtering. However, while a large majority of the elements within the normalized information matrix are very small in magnitude, it is fully populated nonetheless. Recent feature-based SLAM filters achieve the scalability benefits of a sparse parametrization by explicitly pruning these weak links in an effort to enforce sparsity. We analyze one such algorithm, the Sparse Extended Information Filter (SEIF), which has laid much of the groundwork concerning the computational benefits of the sparse canonical form. The thesis performs a detailed analysis of the process by which the SEIF approximates the sparsity of the information matrix and reveals key insights into the consequences of different sparsification strategies. We demonstrate that the SEIF yields a sparse approximation to the posterior that is inconsistent, suffering from exaggerated confidence estimates. This overconfidence has detrimental effects on important aspects of the SLAM process and affects the higher-level goal of producing accurate maps for subsequent localization and path planning. This thesis proposes an alternative scalable filter that maintains sparsity while preserving the consistency of the distribution. We leverage insights into the natural structure of the feature-based canonical parametrization and derive a method that actively maintains an exactly sparse posterior. Our algorithm exploits the structure of the parametrization to achieve gains in efficiency, with a computational cost that scales linearly with the size of the map. Unlike similar techniques that sacrifice consistency for improved scalability, our algorithm performs inference over a posterior that is conservative relative to the nominal Gaussian distribution. Consequently, we preserve the consistency of the pose and map estimates and avoid the effects of an overconfident posterior. We demonstrate our filter alongside the SEIF and the standard EKF both in simulation and on two real-world datasets. While we maintain the computational advantages of an exactly sparse representation, the results show convincingly that our method yields conservative estimates for the robot pose and map that are nearly identical to those of the original Gaussian distribution as produced by the EKF, but at much less computational expense. The thesis concludes with an extension of our SLAM filter to a complex underwater environment. We describe a systems-level framework for localization and mapping relative to a ship hull with an Autonomous Underwater Vehicle (AUV) equipped with a forward-looking sonar. The approach utilizes our filter to fuse measurements of vehicle attitude and motion from onboard sensors with data from sonar images of the hull. We employ the system to perform three-dimensional, 6-DOF SLAM on a ship hull.
by Matthew R. Walter.
S.M.
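
The canonical (information) form makes the measurement update additive and cheap, which is what the sparsity argument in the abstract builds on. A generic extended information filter update step, sketched under standard assumptions rather than taken from the thesis's exactly sparse algorithm:

    import numpy as np

    def eif_measurement_update(Lam, eta, H, R, z, h_x0, x0):
        """Extended information filter update for z = h(x) + v, v ~ N(0, R).
        Lam (information matrix) and eta (information vector) are updated
        additively; with a sparse Jacobian H only a few entries change."""
        Rinv = np.linalg.inv(R)
        innov = z - h_x0 + H @ x0        # relinearized measurement
        Lam = Lam + H.T @ Rinv @ H       # information gain is additive
        eta = eta + H.T @ Rinv @ innov
        return Lam, eta

    # Recover the mean only when needed: mu = np.linalg.solve(Lam, eta)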
8

Barran, Brian Arthur. "View dependent fluid dynamics". Texas A&M University, 2006. http://hdl.handle.net/1969.1/3827.

Abstract:
This thesis presents a method for simulating fluids on a view-dependent grid structure to exploit level of detail with distance to the viewer. Current computer graphics techniques, such as the Stable Fluids and Particle Level Set methods, are modified to support a nonuniform simulation grid. In addition, infinite fluid boundary conditions are introduced that allow fluid to flow freely into or out of the simulation domain, achieving the effect of large, boundary-free bodies of fluid. Finally, a physically based rendering method known as photon mapping is used in conjunction with ray tracing to generate realistic images of water with caustics. These methods were implemented as a C++ application framework capable of simulating and rendering fluid in a variety of user-defined coordinate systems.
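
The Stable Fluids method the thesis builds on advects quantities semi-Lagrangianly: trace each grid point backward through the velocity field and interpolate. A minimal 1D sketch of that unconditionally stable step, on a uniform grid rather than the thesis's nonuniform one:

    import numpy as np

    def advect(q, u, dt, dx):
        """Semi-Lagrangian advection: q_new(x) = q(x - u*dt), linearly
        interpolated. Stable for any dt, hence Stable Fluids' large steps."""
        n = len(q)
        x = np.arange(n) * dx
        x_back = np.clip(x - u * dt, 0, (n - 1) * dx)  # backtrace departure points
        i = np.minimum(x_back // dx, n - 2).astype(int)
        frac = x_back / dx - i
        return (1 - frac) * q[i] + frac * q[i + 1]

    q = np.zeros(10); q[2] = 1.0
    print(advect(q, u=np.full(10, 1.0), dt=1.0, dx=1.0))  # pulse moves right one cell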
9

Cai, Xinye. "A multi-objective GP-PSO hybrid algorithm for gene regulatory network modeling". Diss., Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1492.

10

Agyapong-Kodua, Kwabena. "Multi-product cost and value stream modelling in support of business process analysis". Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/5585.

11

He, Yiyang. "A Physically Based Pipeline for Real-Time Simulation and Rendering of Realistic Fire and Smoke". Thesis, Stockholms universitet, Numerisk analys och datalogi (NADA), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-160401.

Abstract:
With the rapidly growing computational power of modern computers, physically based rendering has found its way into real-world applications. Real-time simulation and rendering of fire and smoke have become a major research interest in the video game industry and will remain an important research direction in computer graphics. Visually recreating realistic dynamic fire and smoke is a complicated problem, and solving it requires knowledge from various areas, ranging from computer graphics and image processing to computational physics and chemistry. Even though most of these areas are well studied separately, new challenges emerge when they are combined. This thesis focuses on three aspects of the problem, dynamics, real-time performance and realism, and proposes a solution in the form of a GPGPU pipeline, along with its implementation. Three main areas of the problem are discussed in detail: fluid simulation, volumetric radiance estimation and volumetric rendering, with the emphasis on the first two. The results are evaluated against the three aspects, with graphical demonstrations and performance measurements. Uniform grids are used with a finite difference (FD) discretization scheme to simplify the computation. FD schemes are easy to implement in parallel, especially with ComputeShader, which is well supported in the Unity engine. The whole implementation can easily be integrated into any real-world application in Unity or other game engines that support DirectX 11 or higher.
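
One FD building block such a pipeline needs is a Jacobi iteration for the pressure Poisson equation; each cell update reads only its four neighbours, which is why it maps cleanly onto a compute shader where every cell is an independent GPU thread. A serial Python sketch of the same update (illustrative, not the thesis's shader code):

    import numpy as np

    def jacobi_pressure(div, iters=40, h=1.0):
        """Solve lap(p) = div with Jacobi sweeps; each update is purely
        local, so every cell can be computed by an independent thread."""
        p = np.zeros_like(div)
        for _ in range(iters):
            p_new = p.copy()
            p_new[1:-1, 1:-1] = (p[:-2, 1:-1] + p[2:, 1:-1] +
                                 p[1:-1, :-2] + p[1:-1, 2:] -
                                 h * h * div[1:-1, 1:-1]) / 4.0
            p = p_new
        return p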
12

Diaz, Espinosa Carlos Andrés. "Uma aplicação de navegação robótica autônoma através de visão computacional estéreo". [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263062.

Advisor: Paulo Roberto Gardel Kurka
Master's dissertation – Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract:
The present work describes a technique for autonomous navigation that uses stereoscopic camera images to estimate the movement of a robot in an unknown environment. A one-dimensional image point correlation method is developed to identify similar image points in a scene. Boundary and contour segmentation methods are used to extract the principal characteristics of the images. A depth map is built, using a triangulation process, for the points with greatest similarity among the scene objects (sketched below). Finally, the two-dimensional movement of the robot is estimated through epipolar relations between two or more correlated points in pairs of images. Experiments in virtual environments and practical robot tests verify the viability and robustness of the proposed techniques in robotic navigation applications.
Master's degree in Mechanical Engineering, Solid Mechanics and Mechanical Design
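
The depth map in the abstract comes from stereo triangulation: for rectified cameras, depth is inversely proportional to disparity. A minimal sketch under the usual pinhole assumptions (focal length f in pixels and baseline B are illustrative values, not the thesis's calibration):

    def stereo_depth(x_left, x_right, f_px, baseline_m):
        """Depth from a rectified stereo pair: Z = f * B / disparity."""
        disparity = x_left - x_right          # in pixels; > 0 for a point in front
        if disparity <= 0:
            raise ValueError("point at infinity or bad correspondence")
        return f_px * baseline_m / disparity

    # A matched point 40 px apart, f = 800 px, 10 cm baseline -> 2.0 m away
    print(stereo_depth(x_left=420, x_right=380, f_px=800, baseline_m=0.10))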
13

Refahi, Yassin. "Modélisation multiéchelle de perturbation de la phyllotaxie d'Arabidopsis thaliana". PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2011. http://tel.archives-ouvertes.fr/tel-00859869.

Abstract:
In this thesis we are interested in how plant structure emerges from the functioning of the shoot apical meristem. To this end, we study the structure of the apical meristem of Arabidopsis thaliana at different scales. The thesis begins by studying, at the macroscopic scale, plants whose phyllotaxis has been perturbed, and by developing mathematical tools to quantify and analyze these perturbations. We then study, at a more microscopic scale, the possible causes of such perturbations. To do so, we tested an extended version of a model proposed by Douady and Couder (1996) in which several key parameters are modified by different sources of noise. This modelling study suggests that the stability of the size of the central zone may be a key factor in the robustness of phyllotaxis. While realistic 3D models of the inhibition fields around primordia have been developed recently, such a study is still missing for realistic 3D tissues in the case of the central zone. This finally leads us to analyze in depth the gene regulatory network that controls the size of the central zone in the meristem. We implemented a 3D version of a central zone model from the literature and tested it on 3D meristems obtained from 3D laser microscopy images.
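
A sketch of the Douady-Couder-style dynamics the thesis extends: a new primordium appears on the central zone boundary where the inhibition exerted by existing primordia is minimal. All parameters here are illustrative, not taken from the thesis:

    import numpy as np

    def next_primordium_angle(angles, ages, R=1.0, v=0.1, n_dirs=360):
        """Place the next primordium where the inhibition from existing
        primordia (decaying as 1/d^2, drifting outward with age) is minimal."""
        thetas = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
        radii = R + v * ages                    # older primordia sit further out
        px, py = radii * np.cos(angles), radii * np.sin(angles)
        best_theta, best_E = 0.0, np.inf
        for theta in thetas:
            x, y = R * np.cos(theta), R * np.sin(theta)
            E = np.sum(1.0 / ((x - px) ** 2 + (y - py) ** 2 + 1e-9))
            if E < best_E:
                best_theta, best_E = theta, E
        return best_theta

    angles, ages = [], []
    for _ in range(12):
        theta = next_primordium_angle(np.array(angles), np.array(ages)) if angles else 0.0
        ages = [a + 1 for a in ages] + [1]
        angles.append(theta)
    # successive differences of `angles` settle into a regular divergence angle;
    # with suitable parameters this approaches the golden angle (~137.5 degrees)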
14

Costa, Mário Jorge Nunes. "Realização de Prática de Física em Bancada e Simulação Computacional para Promover o Desenvolvimento da Aprendizagem Significativa e Colaborativa". Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=9445.

Abstract:
Brazilian educational performance, as measured by PISA, reflects the economic and social inequalities between the planet's northern and southern hemispheres. This research investigates how the design and delivery of a collaborative bench-experimentation activity, supported by computational simulation and modeling activities, can favor the development of meaningful learning. The activities emphasized the construction and (re)signification of physics concepts, specifically electricity and electrical circuits. Students' prior knowledge was first assessed with probing questionnaires. Theoretical lessons followed, focused on building advance organizers. Activities were then carried out making pedagogical use of educational software for simulating and modeling resistor circuits, PhET and Crocodile, in which the students interrelated and/or (re)signified concepts; they performed measurements of electrical quantities and the other proposed activities under the mediation of the teacher-researcher. In a subsequent stage, the students carried out the bench experimentation practice on the same topic of electrical circuits, starting from the study of lamp brightness, to (re)signify their knowledge. In all laboratory activities, data were collected through audio and video recordings; the students' written answers and reports in the scripts of the simulation, modeling and bench activities; and questionnaires probing prior knowledge and evaluating the pedagogical practice. The research is qualitative, exploratory action research. The theoretical-methodological framework draws mainly on Dorneles, Araújo and Veit, on the use of simulation software and learning difficulties; Ribeiro et al., on integrating experimentation and simulation laboratories to facilitate collaborative learning, in which Ausubel, Novak and Valente stand out; Moraes, Galiazzi and Okada, on the cognitive mapping of discursive textual analysis; and Almeida, Prado and Góes, on the qualitative analysis of multidimensional data with the CHIC software.
Preliminary analysis of the field data shows that the simulation and modeling activities contributed to forming advance organizers for concepts of electricity and for reading and interpreting electrical measurements, and that the subsequent experimentation activity helped the students (re)signify their knowledge of electricity and electrical circuits, supporting the development of meaningful learning. The results also indicate that, through the integration of bench experimentation with simulation and modeling software, the students, working collaboratively and to a lesser extent cooperatively, (re)signified and reworked their knowledge of resistor circuits, although at times they showed learning difficulties, being unable to express their conceptions and arguments so as to properly appropriate the concepts of electricity.
15

Fischer, Stephan. "Modélisation de l'évolution de la taille des génomes et de leur densité en gènes par mutations locales et grands réarrangements chromosomiques". PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00924831.

Abstract:
Although many genome sequences are now known, the evolutionary mechanisms that determine genome size, and in particular its proportion of non-coding DNA, are still debated. While many mechanisms that enlarge genomes (proliferation of transposable elements, creation of new genes by duplication, ...) are clearly identified, the mechanisms limiting genome size are less well established. Darwinian selection could directly disfavor less compact genomes, under the hypothesis that a large amount of DNA to replicate limits the organism's reproduction rate. As this hypothesis is contradicted by several datasets, other non-selective mechanisms have been proposed, such as genetic drift and/or a mutational bias making small DNA deletions more frequent than small insertions. In this manuscript, we show with a matrix population model that genome size can also be limited by the spontaneous dynamics of duplications and large deletions, which tends to shorten genomes even if both types of rearrangements occur at the same frequency. In the absence of Darwinian selection, we prove the existence of a stationary distribution for genome size even if duplications are twice as frequent as deletions. To test whether Darwinian selection can counteract this spontaneous dynamics, we simulate the model numerically with a fitness function that directly favors the genomes containing the most genes, while keeping duplications twice as frequent as deletions. In this scenario, where everything seemed to push genomes to grow indefinitely, genome size nevertheless remains bounded. Our study thus reveals a new force capable of limiting genome growth. By exhibiting counter-intuitive behaviors in a minimalist model, it also underlines the limits of mere "thought experiments" for reasoning about evolution. We propose a mathematical model of the structural evolution of genomes that emphasizes the influence of the different mutation mechanisms. It is a discrete-time matrix population model with an infinite number of possible genomic states. The population size is infinite, which eliminates genetic drift. The mutations taken into account are point mutations and small insertions and deletions, but also chromosomal rearrangements induced by ectopic DNA recombination: inversions, translocations, large deletions and duplications. For convenience, we assume that the size of rearranged segments follows a uniform distribution, but the main analytical result is then generalized to other distributions. Since mutations can change the number of genes and the amount of intergenic DNA, the genome is free to vary in size and compactness, which lets us study the influence of mutation rates on the equilibrium genome structure. In the first part of the thesis, we provide a mathematical analysis of the case without selection, i.e., when the probability of reproduction is identical whatever the structure of the genome.
Using Doeblin's theorem, we show that a stationary distribution exists for genome size if the duplication rate per base per generation does not exceed 2.58 times the rate of large deletions. Indeed, under the model's assumptions, these two mutation types determine the genome's spontaneous dynamics, while small insertions and deletions have very little impact. Moreover, even if duplication and large deletion sizes are distributed perfectly symmetrically, their joint effect is not symmetric, and deletions prevail over duplications. Thus, if deletion and duplication sizes are uniformly distributed, it takes on average more than 2.58 duplications to compensate for one large deletion, and the duplication rate must be nearly three times the deletion rate for genome size to grow to infinity. The impact of large deletions is such that, under the model's assumptions, this last result remains valid even in the presence of a selection mechanism directly favoring the addition of new genes. Even though such a selective mechanism should intuitively push genomes to grow indefinitely, the influence of deletions quickly limits their growth. In summary, the analytical study predicts that large rearrangements delimit a set of stable sizes within which genomes can evolve, with selection determining the precise equilibrium size within this set. In the second part of the thesis, we implement the model numerically in order to simulate the evolution of genome size under selection. By choosing a fitness function that is unbounded and strictly increasing with the number of genes, we test the model's behavior under extreme conditions pushing genomes to grow indefinitely. Yet under these conditions, the numerical model confirms that genome size is essentially controlled by the rates of duplications and large deletions. Moreover, this limit concerns total genome size and thus applies to coding and non-coding DNA alike. In particular, we recover the threshold of 2.58 duplications per deletion below which genome size remains finite, as predicted analytically. The numerical model even shows that, under some conditions, mean genome size decreases as the duplication rate increases, a surprising phenomenon linked to the structural instability of large genomes. Similarly, increasing the selective advantage of large genomes can paradoxically shrink genomes on average. Finally, we show that while small insertions and deletions, inversions and translocations have a limited effect on genome size, they strongly influence the proportion of non-coding DNA.
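
One way to see where a threshold like 2.58 can arise, assuming (as the abstract does for convenience) that rearranged segment sizes are uniform on the current genome size: a duplication then multiplies the size by $1+U$ and a large deletion by $1-U$, with $U \sim \mathrm{Uniform}(0,1)$. In log-scale,

$\mathbb{E}[\ln(1+U)] = \int_0^1 \ln(1+u)\,du = 2\ln 2 - 1 \approx 0.386$, while $\mathbb{E}[\ln(1-U)] = \int_0^1 \ln(1-u)\,du = -1$,

so on average one large deletion undoes $1/(2\ln 2 - 1) \approx 2.59$ duplications, consistent with the abstract's claim that duplications must be roughly 2.58 times as frequent as large deletions for genomes to keep growing. This is our back-of-the-envelope reading, not the thesis's proof, which relies on Doeblin's theorem.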
16

Li, Xuejing. "Using machine learning to predict gene expression and discover sequence motifs". Thesis, 2012. https://doi.org/10.7916/D81N874M.

Abstract:
Recently, large amounts of experimental data for complex biological systems have become available. We use tools and algorithms from machine learning to build data-driven predictive models. We first present a novel algorithm to discover gene sequence motifs associated with temporal expression patterns of genes. Our algorithm, which is based on partial least squares (PLS) regression, directly models the flow of information from gene sequence to gene expression, learning cis-regulatory motifs and characterizing associated gene expression patterns; it outperforms traditional computational methods such as clustering in motif discovery. We then present a study extending a machine learning model of transcriptional regulation, predictive of genetic regulatory response, to Caenorhabditis elegans, with meaningful results both in terms of prediction accuracy on the test experiments and biological information extracted from the regulatory program. The model discovers DNA binding sites ab initio, and we also present a case study where we detect a signal of lineage-specific regulation. Finally, we present a comparative study on learning predictive models for motif discovery based on different boosting algorithms: Adaptive Boosting (AdaBoost), Linear Programming Boosting (LPBoost) and Totally Corrective Boosting (TotalBoost). We evaluate and compare the performance of the three boosting algorithms via both statistical and biological validation, for hypoxia response in Saccharomyces cerevisiae.
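
A sketch of the boosting setup such a comparison rests on: AdaBoost over decision stumps, where each stump asks whether a candidate k-mer occurs in a sequence. The feature encoding and labels below are illustrative, not the thesis's data:

    import numpy as np

    def adaboost_stumps(X, y, rounds=50):
        """AdaBoost with presence/absence stumps. X: (n, m) binary matrix,
        X[i, j] = 1 if motif j occurs in sequence i; y in {-1, +1}
        (e.g. induced vs. repressed under hypoxia)."""
        n, m = X.shape
        w = np.full(n, 1.0 / n)
        model = []  # (motif index, polarity, alpha)
        for _ in range(rounds):
            errs = np.array([[w @ (np.where(X[:, j] == 1, s, -s) != y)
                              for s in (1, -1)] for j in range(m)])
            j, si = np.unravel_index(np.argmin(errs), errs.shape)
            s = 1 if si == 0 else -1
            err = errs[j, si]
            alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
            pred = np.where(X[:, j] == 1, s, -s)
            w *= np.exp(-alpha * y * pred); w /= w.sum()
            model.append((j, s, alpha))  # large alpha marks a discriminative motif
        return model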
17

Anderssen, Edwin Cheere. "'n Rekenaargebaseerde model vir die voorstelling van tyd-ruimtelike aspekte met verwysing na historiese veldslae". Thesis, 2014. http://hdl.handle.net/10210/10562.

Streszczenie:
Ph.D.
Until recently the majority of computer aided instruction (CAI) programs available for the teaching of history only provided elementary facilities such as drill-and-practice exercises. Some of the more advanced systems use simulation techniques to create fictitious historical situations; these take the form of computer games in which participants make decisions about the historical situations with which they are confronted. The initial aim of this study was to develop a CAI system for the teaching of history in which historical field battles could be simulated, or more correctly, in which a particular field battle situation could be reconstructed. By using the system, a student could get a better understanding of the different factors which played a role during a specific battle. It soon became clear, though, that the original aims were too broad and too general. The decision was therefore made to undertake a study of the dynamic interrelationships of time and space with reference to field battles. A model was developed which provides a framework for the transformation of often unstructured and diffuse time and space relationships into more specific, structured values which can be loaded into the database of a computer. Historical field battles are used as a vehicle to outline the functioning of the model. After a history teacher or historian has analysed and restructured a specific field battle into relations that can be computerized, a history student can interactively formulate his questions on the time-space relationships of the battle under study. In the field battle model, the concept of an "event" plays an important role. An event defines an action or activity which took place during a field battle; two of its major constituents are the time when the event took place and the geographical position where it occurred. Much of the work reported in this thesis therefore covers the development of algorithms for the representation of time and space relations. Algorithms were developed for the interactive drawing of geographical maps of the area where the battle took place. The main building blocks of a geographical map are points, icons, lines and areas, and special attention was given to the representation of these entities. Due to the limited viewing area available on the screen of a micro computer, an area clipping algorithm was developed for the display of selected parts of the map. Time which is observed under operational conditions during a field battle is referred to as "perceived time". Perceived time is often vague and even unreliable. An algorithm was developed through which these vague time references are transformed into more specific "clock time" values. The algorithm constructs a time network, using the vaguely known times of occurrence of events, to sequence the events relative to each other. By solving this network, the times of occurrence of the events forming part of the network are determined to a fair degree of accuracy. These time values and other relevant information are entered into the database of a micro computer system, to be used for instructional purposes.
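
A sketch of the time-network idea in the abstract: events with a few known clock times and "A happened before B" constraints; bounds are propagated forward and backward, and each unknown event is placed midway between its bounds. This is an illustration of the general approach, not the thesis's algorithm:

    def estimate_times(events, known, before):
        """events: list of names; known: {event: clock time};
        before: list of (a, b) meaning a happened before b."""
        lo = {e: known.get(e, float("-inf")) for e in events}
        hi = {e: known.get(e, float("inf")) for e in events}
        for _ in events:  # relax constraints until bounds settle
            for a, b in before:
                lo[b] = max(lo[b], lo[a])
                hi[a] = min(hi[a], hi[b])
        return {e: known.get(e, (lo[e] + hi[e]) / 2) for e in events}

    # A cavalry charge known only to fall between two timed events
    times = estimate_times(
        ["bombardment", "charge", "retreat"],
        known={"bombardment": 9.0, "retreat": 13.0},
        before=[("bombardment", "charge"), ("charge", "retreat")],
    )
    print(times["charge"])  # 11.0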
18

Malakar, Preeti. "Integrated Parallel Simulations and Visualization for Large-Scale Weather Applications". Thesis, 2013. http://etd.iisc.ernet.in/2005/3907.

Abstract:
The emergence of the exascale era necessitates development of new techniques to efficiently perform high-performance scientific simulations, online data analysis and on-the-fly visualization. Critical applications like cyclone tracking and earthquake modeling require high-fidelity, high-performance simulations involving large-scale computations, and they generate huge amounts of data. Faster simulations and simultaneous online data analysis and visualization enable scientists to provide real-time guidance to policy makers. In this thesis, we present a set of techniques for efficient high-fidelity simulations, online data analysis and visualization in environments with varying resource configurations. First, we present a strategy for improving the throughput of weather simulations with multiple regions of interest. We propose parallel execution of these nested simulations based on partitioning the 2D process grid into disjoint rectangular regions associated with each subdomain. The process grid partitioning is obtained from a Huffman tree constructed from the relative execution times of the subdomains (see the sketch after this record). We propose a novel combination of performance prediction, processor allocation methods and topology-aware mapping of the regions on torus interconnects, and observe up to 33% gain over the default strategy in weather models. Second, we propose a processor reallocation heuristic that minimizes data redistribution cost while reallocating processors in the case of dynamic regions of interest. This algorithm is based on a hierarchical diffusion approach that uses a novel tree reorganization strategy. We have also developed a parallel data analysis algorithm to detect regions of interest within a domain. This helps improve the performance of detailed simulations of multiple weather phenomena like depressions and clouds, thereby increasing the lead time to severe weather phenomena like tornadoes and storm surges. Our method reduces the redistribution time by 25% over a simple partition-from-scratch method. We also show that it is important to consider resource constraints like I/O bandwidth, disk space and network bandwidth for continuous simulation and smooth visualization. High simulation rates on modern-day processors combined with high I/O bandwidth can lead to rapid accumulation of data at the simulation site and eventual stalling of simulations. We show that formulating the problem as an optimization problem can determine optimal execution parameters enabling smooth simulation and visualization. This approach proves beneficial for resource-constrained environments, whereas a naive greedy strategy leads to stalling and disk overflow; our optimization method provides about 30% higher simulation rate and consumes about 25-50% less storage space than the naive greedy approach. We have then developed an integrated adaptive steering framework, InSt, that analyzes the combined effect of user-driven steering and automatic tuning of application parameters, based on resource constraints and the criticality needs of the application, to determine the final parameters for the simulations. It is important to allow climate scientists to steer an ongoing simulation, especially in the case of critical applications, and InSt takes into account both the scientists' steering inputs and the criticality needs of the application. Finally, we have developed algorithms to minimize the lag between the time when the simulation produces an output frame and the time when the frame is visualized. Reducing this lag lets scientists get an on-the-fly view of the simulation and concurrently visualize important events in it. We present most-recent, auto-clustering and adaptive algorithms for reducing lag. The lag-reduction algorithms adapt to the available resource parameters and the number of pending frames to be sent to the visualization site by transferring a representative subset of frames. Our adaptive algorithm reduces lag by 72% and provides 37% greater representativeness than most-recent for slow networks.
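
A sketch of the Huffman-based partitioning the abstract describes: subdomains with larger relative execution times receive proportionally larger slices of the 2D process grid, and a Huffman tree built over those times drives the recursive rectangle splits. Subdomain names and times below are illustrative:

    import heapq, itertools

    def weight(node):
        return node[2] if node[0] == "leaf" else node[3]

    def huffman_tree(times):
        """Merge the two lightest subdomains repeatedly, Huffman-style."""
        tick = itertools.count()
        heap = [(w, next(tick), ("leaf", name, w)) for name, w in times.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            w1, _, n1 = heapq.heappop(heap)
            w2, _, n2 = heapq.heappop(heap)
            heapq.heappush(heap, (w1 + w2, next(tick), ("node", n1, n2, w1 + w2)))
        return heap[0][2]

    def assign(node, x, y, w, h):
        """Split the process-grid rectangle proportionally to subtree weights,
        cutting along the longer side so regions stay roughly rectangular."""
        if node[0] == "leaf":
            return {node[1]: (x, y, w, h)}
        _, left, right, total = node
        f = weight(left) / total
        if w >= h:
            return {**assign(left, x, y, w * f, h),
                    **assign(right, x + w * f, y, w * (1 - f), h)}
        return {**assign(left, x, y, w, h * f),
                **assign(right, x, y + h * f, w, h * (1 - f))}

    # Relative execution times of nested subdomains -> regions of a 16x8 grid
    tree = huffman_tree({"storm": 4.0, "cyclone": 2.0, "coast": 1.0, "ocean": 1.0})
    print(assign(tree, 0, 0, 16, 8))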