Dissertations / Theses on the topic 'Other theoretical computer science and computational mathematics'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 19 dissertations / theses for your research on the topic 'Other theoretical computer science and computational mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Williamson, Alexander James. "Methods, rules and limits of successful self-assembly." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:9eb549f9-3372-4a38-9370-a9b0e58ca26b.

Full text
Abstract:
The self-assembly of structured particles into monodisperse clusters is a challenge on the nano-, micro- and even macro-scale. While biological systems are able to self-assemble with comparative ease, many aspects of this self-assembly are not fully understood. In this thesis, we look at the strategies and rules that can be applied to encourage the formation of monodisperse clusters. Though much of the inspiration is biological in nature, the simulations use a simple minimal patchy particle model and are thus applicable to a wide range of systems. The topics that this thesis addresses include:
Encapsulation: We show how clusters can be used to encapsulate objects and demonstrate that such 'templates' can be used to control the assembly mechanisms and enhance the formation of more complex objects.
Hierarchical self-assembly: We investigate the use of hierarchical mechanisms in enhancing the formation of clusters. We find that, while we are able to extend the ranges where we see successful assembly by using a hierarchical assembly pathway, it does not straightforwardly provide a route to enhance the complexity of structures that can be formed.
Pore formation: We use our simple model to investigate a particular biological example, namely the self-assembly and formation of heptameric alpha-haemolysin pores, and show that pore insertion is key to rationalising experimental results on this system.
Phase re-entrance: We look at the computation of equilibrium phase diagrams for self-assembling systems, using a variety of techniques, particularly focusing on the possible presence of an unusual liquid-vapour phase re-entrance that has been suggested by dynamical simulations.
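As a concrete flavour of the 'simple minimal patchy particle model' mentioned above, the sketch below implements a Kern-Frenkel-style pair interaction, a standard minimal patchy potential. It is an illustration, not the thesis's exact parametrisation; all parameter values and the world-frame patch vectors are assumptions:

    import numpy as np

    def patchy_pair_energy(r_vec, patches_i, patches_j,
                           sigma=1.0, well=0.2, eps=1.0, cos_cutoff=0.9):
        """Kern-Frenkel-style square-well interaction between two patchy particles.
        A bond forms only when the particles are in range AND a patch on each
        points at the other; this limited 'valence' is what steers assembly
        toward specific monodisperse clusters rather than bulk aggregates."""
        r = np.linalg.norm(r_vec)
        if r < sigma:
            return np.inf                      # hard-core overlap
        if r > sigma + well:
            return 0.0                         # beyond the attractive well
        u = r_vec / r
        i_ok = any(np.dot(p, u) > cos_cutoff for p in patches_i)
        j_ok = any(np.dot(p, -u) > cos_cutoff for p in patches_j)
        return -eps if (i_ok and j_ok) else 0.0

    # two opposite patches per particle favour chains over compact clusters
    patches = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
    print(patchy_pair_energy(np.array([1.1, 0.0, 0.0]), patches, patches))  # -1.0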
2

Blakey, Edward William. "A model-independent theory of computational complexity : from patience to precision and beyond." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:5db40e2c-4a22-470d-9283-3b59b99793dc.

Full text
Abstract:
The field of computational complexity theory--which chiefly aims to quantify the difficulty encountered when performing calculations--is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, 'time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc. In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween. Our notions of resource, overall complexity, normality, etc. form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
3

Dunn, Sara-Jane Nicole. "Towards a computational model of the colonic crypt with a realistic, deformable geometry." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:c3c9440a-52ac-4a3d-8e1c-5dc276b8eb6c.

Full text
Abstract:
Colorectal cancer (CRC) is one of the most prevalent and deadly forms of cancer. Its high mortality rate is associated with difficulties in early detection, which is crucial to survival. The onset of CRC is marked by macroscopic changes in intestinal tissue, originating from a deviation in the healthy cell dynamics of glands known as the crypts of Lieberkuhn. It is believed that accumulated genetic alterations confer on mutated cells the ability to persist in the crypts, which can lead to the formation of a benign tumour through localised proliferation. Stress on the crypt walls can lead to buckling, or crypt fission, and the further spread of mutant cells. Elucidating the initial perturbations in crypt dynamics is not possible experimentally, but such investigations could be made using a predictive, computational model. This thesis proposes a new discrete crypt model, which focuses on the interaction between cell- and tissue-level behaviour, while incorporating key subcellular components. The model contains a novel description of the role of the surrounding tissue and musculature, which allows the shape of the crypt to evolve and deform. A two-dimensional (2D) cross-sectional geometry is considered. Simulation results reveal how the shape of the crypt base may contribute mechanically to the asymmetric division events typically associated with the stem cells in this region. The model predicts that epithelial cell migration may arise due to feedback between cell loss at the crypt collar and density-dependent cell division, a hypothesis that can be investigated in a wet lab. Further, in silico experiments illustrate how this framework can be used to investigate the spread of mutations, and conclude that a reduction in cell migration is key to conferring persistence on mutant cell populations. A three-dimensional (3D) model is proposed to remove the spatial restrictions imposed on cell migration in 2D, and preliminary simulation results agree with the hypotheses generated in 2D. Computational limitations that currently restrict extension to a realistic 3D geometry are discussed. These models enable investigation of the role that mechanical forces play in regulating tissue homeostasis, and make a significant contribution to the theoretical study of the onset of crypt deformation under pre-cancerous conditions.
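Discrete crypt models of this kind commonly represent cells as points connected to their neighbours by linear springs and evolved with overdamped dynamics. The sketch below shows a minimal update step of that type; it is a generic illustration with made-up parameters, whereas the thesis's model additionally couples subcellular signalling and a deformable tissue boundary:

    import numpy as np

    def relax_step(pos, springs, dt=0.01, mu=1.0, k=15.0, rest=1.0):
        """One explicit step of the overdamped law mu * dx/dt = sum of spring forces."""
        force = np.zeros_like(pos)
        for i, j in springs:                      # pairs of neighbouring cell centres
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            f = k * (r - rest) * d / r            # attractive if stretched, repulsive if compressed
            force[i] += f
            force[j] -= f
        return pos + dt * force / mu

    pos = np.array([[0.0, 0.0], [1.5, 0.0], [3.1, 0.0]])   # three cells in a row
    for _ in range(500):
        pos = relax_step(pos, [(0, 1), (1, 2)])
    print(pos)   # neighbouring cells relax towards the rest separation of 1.0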
4

Leonard, Katherine H. L. "Mathematical and computational modelling of tissue engineered bone in a hydrostatic bioreactor." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:05845740-1a74-4e19-95ea-6b5229d1af27.

Full text
Abstract:
In vitro tissue engineering is a method for developing living and functional tissues external to the body, often within a device called a bioreactor to control the chemical and mechanical environment. However, the quality of bone tissue engineered products is currently inadequate for clinical use as the implant cannot bear weight. In an effort to improve the quality of the construct, hydrostatic pressure, the pressure in a fluid at equilibrium that is required to balance the force exerted by the weight of the fluid above, has been investigated as a mechanical stimulus for promoting extracellular matrix deposition and mineralisation within bone tissue. Thus far, little research has been performed into understanding the response of bone tissue cells to mechanical stimulation. In this thesis we investigate an in vitro bone tissue engineering experimental setup, whereby human mesenchymal stem cells are seeded within a collagen gel and cultured in a hydrostatic pressure bioreactor. In collaboration with experimentalists, a suite of mathematical models of increasing complexity is developed, and appropriate numerical methods are used to simulate these models. Each of the models investigates different aspects of the experimental setup, from focusing on global quantities of interest through to investigating their detailed local spatial distribution. The aim of this work is to increase understanding of the underlying physical processes which drive the growth and development of the construct, and to identify which factors contribute to the highly heterogeneous spatial distribution of the mineralised extracellular matrix seen experimentally. The first model considered is a purely temporal one, which examines the evolution, in response to the applied pressure, of the cells, the fluid, and the solid substrate, where the substrate accounts for the initial collagen scaffold and the deposited extracellular matrix along with its attendant mineralisation. We demonstrate that including the history of the mechanical loading of cells is important in determining the quantity of deposited substrate. The second and third models extend this non-spatial model, and examine biochemically- and biomechanically-induced spatial patterning separately. The first of these spatial models demonstrates that nutrient diffusion, along with nutrient-dependent mass transfer terms, qualitatively reproduces the heterogeneous spatial effects seen experimentally. The second, multiphase, model is used to investigate whether the magnitude of the shear stresses generated by fluid flow can qualitatively explain the heterogeneous mineralisation seen in the experiments. Numerical simulations reveal that the spatial distribution of the fluid shear stress magnitude is highly heterogeneous, which could be related to the spatial heterogeneity in the mineralisation seen experimentally.
5

Dutta, Sara. "A multi-scale computational investigation of cardiac electrophysiology and arrhythmias in acute ischaemia." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f5f68d8b-7a60-4109-91c8-6b1d80c7ee5b.

Full text
Abstract:
Sudden cardiac death is one of the leading causes of mortality in the western world. One of the main factors is myocardial ischaemia, a mismatch between blood demand and supply to the heart, which may lead to disturbed cardiac excitation patterns, known as arrhythmias. Ischaemia is a dynamic and complex process, characterised by many electrophysiological changes that vary through space and time. Ischaemia-induced arrhythmic mechanisms, and the safety and efficacy of certain therapies, are still not fully understood. Most experimental studies are carried out in animals, owing to the ethical and practical limitations of human experiments. Extrapolating mechanisms from animal to human is therefore challenging, but can be facilitated by in silico models. Since the first cardiac cell model was built over 50 years ago, computer simulations have provided a wealth of information and insight that is not possible to obtain through experiments alone. Mathematical models and computational simulations therefore provide a powerful and complementary tool for the study of multi-scale problems. The aim of this thesis is to investigate pro-arrhythmic electrophysiological consequences of acute myocardial ischaemia, using a multi-scale computational modelling and simulation framework. Firstly, we present a novel method, combining computational simulations and optical mapping experiments, to characterise ischaemia-induced spatial differences modulating arrhythmic risk in rabbit hearts. Secondly, we use computer models to extend our investigation of acute ischaemia to humans, by carrying out a thorough analysis of recent human action potential models under varied ischaemic conditions, to test their applicability to simulating ischaemia. Finally, we combine state-of-the-art knowledge and techniques to build a human whole-ventricles model, in which we investigate how anti-arrhythmic drugs modulate arrhythmic mechanisms in the presence of ischaemia.
6

Kay, Sophie Kate. "Cell fate mechanisms in colorectal cancer." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f19bf73d-0c0e-4fff-9589-bf43f9ff12f0.

Full text
Abstract:
Colorectal cancer (CRC) arises in part from the dysregulation of cellular proliferation, associated with the canonical Wnt pathway, and differentiation, effected by the Notch signalling network. In this thesis, we develop a mathematical model of ordinary differential equations (ODEs) for the coupled interaction of the Notch and Wnt pathways in cells of the human intestinal epithelium. Our central aim is to understand the role of such crosstalk in the genesis and treatment of CRC. An embedding of this model in cells of a simulated colonic tissue enables computational exploration of the cell fate response to spatially inhomogeneous growth cues in the healthy intestinal epithelium. We also examine an alternative, rule-based model from the literature, which employs a simple binary approach to pathway activity, in which the Notch and Wnt pathways are constitutively on or off. Comparison of the two models demonstrates the substantial advantages of the equation-based paradigm, through its delivery of stable and robust cell fate patterning, and its versatility for exploring the multiscale consequences of a variety of subcellular phenomena. Extension of the ODE-based model to include mutant cells facilitates the study of Notch-mediated therapeutic approaches to CRC. We find a marked synergy between the application of γ-secretase inhibitors and Hath1 stabilisers in the treatment of early-stage intestinal polyps. This combined treatment is an efficient means of inducing mitotic arrest in the cell population of the intestinal epithelium through enforced conversion to a secretory phenotype and is highlighted as a viable route for further theoretical, experimental and clinical study.
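To give a flavour of the equation-based paradigm argued for here, the toy ODE system below couples two mutually repressive pathway activities and lets a cell settle into one dominant fate. The functional forms and parameters are hypothetical placeholders, not the thesis's Notch-Wnt model:

    from scipy.integrate import solve_ivp

    def crosstalk(t, y, a=1.0, b=1.0, k=3.0, d=1.0):
        """Toy crosstalk: each pathway's production is repressed by the other."""
        n, w = y
        dn = a / (1.0 + (k * w) ** 2) - d * n    # Notch-like activity
        dw = b / (1.0 + (k * n) ** 2) - d * w    # Wnt-like activity
        return [dn, dw]

    # with cooperative (squared) repression this toy system is bistable, so the
    # initial bias decides which fate the cell converges to
    sol = solve_ivp(crosstalk, (0.0, 30.0), [0.9, 0.1])
    print(sol.y[:, -1])   # settles near the Notch-dominant steady state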
7

Björck, Olof. "Creating Interactive Visualizations for Twitter Datasets using D3." Thesis, Uppsala universitet, Matematiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-351802.

Full text
Abstract:
Project Meme Evolution Programme (Project MEP) is a research program directed by Raazesh Sainudiin, Uppsala University, Sweden, that collects and analyzes datasets from Twitter. Twitter can be used to understand how ideas spread in social media. This project aims to produce interactive visualizations for datasets collected in Project MEP. Such interactive visualizations will facilitate exploratory data analysis in Project MEP. Several technologies had to be learned to produce the visualizations, most notably JavaScript, D3, and Scala. Three interactive visualizations were produced: one that allows for exploration of a Twitter user timeline and two that allow for exploration and understanding of a Twitter retweet network. The interactive visualizations are accessible as Scala functions and in a website developed in this project and uploaded to GitHub. The interactive visualizations contain some known bugs, but they still allow for useful exploratory data analysis of Project MEP datasets, and the project goal is therefore considered met.
Project Meme Evolution Programme
8

Wredh, Simon, Anton Kroner, and Tomas Berg. "A Comparison of Three Time-stepping Methods for the LLG Equation in Dynamic Micromagnetics." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-323537.

Full text
Abstract:
Micromagnetism is the study of magnetic materials on the microscopic length scale (nanometres to micrometres). At this scale quantum mechanical effects are not taken into account, but it is small enough that certain macroscopic effects of magnetism in a material can be neglected. The Landau-Lifshitz-Gilbert (LLG) equation is used within micromagnetism to determine the time evolution of the magnetisation vector field in a ferromagnetic solid. It is a highly nonlinear partial differential equation, which makes it very difficult to solve analytically. Numerical methods have therefore been developed for approximating the solution using computers. In this report we compare the performance of three different numerical methods for the LLG equation: the implicit midpoint method (IMP), the midpoint with extrapolation method (MPE), and the Gauss-Seidel projection method (GSPM). It was found that all methods have the expected convergence rates: second order for IMP and MPE, and first order for GSPM. The energy-conserving properties of the schemes were analysed, and neither MPE nor GSPM conserves energy. The computational time required by the IMP method was found to be very large in comparison to the other two. Suggestions for different areas of use for each method are provided.
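For a single magnetic moment in a constant effective field, the implicit midpoint method compared in the report fits in a few lines; the implicit stage is solved here by fixed-point iteration. This is a deliberately minimal sketch (in a real micromagnetic code the effective field depends on the whole magnetisation field), but it already shows IMP's key property that the magnetisation norm is preserved exactly, since the LLG right-hand side is orthogonal to the midpoint magnetisation:

    import numpy as np

    def llg_rhs(m, H, gamma=1.0, alpha=0.1):
        """Landau-Lifshitz form of the LLG right-hand side for a single moment."""
        pre = -gamma / (1.0 + alpha ** 2)
        return pre * (np.cross(m, H) + alpha * np.cross(m, np.cross(m, H)))

    def implicit_midpoint_step(m, H, dt, iters=20):
        """One IMP step: m_new = m + dt * f((m + m_new)/2), via fixed-point iteration."""
        m_new = m.copy()
        for _ in range(iters):
            m_new = m + dt * llg_rhs(0.5 * (m + m_new), H)
        return m_new

    m = np.array([1.0, 0.0, 0.0])          # initial moment
    H = np.array([0.0, 0.0, 1.0])          # constant field along z
    for _ in range(1000):
        m = implicit_midpoint_step(m, H, dt=0.05)
    print(np.linalg.norm(m), m[2])         # |m| stays 1; damping aligns m with H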
9

Ueda, Maria. "Programmering i matematik ur elevernas perspektiv : En fallstudie i en niondeklass." Thesis, KTH, Lärande, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291255.

Full text
Abstract:
Decision makers in Sweden and internationally have concluded that teaching programming is important. Programming was thus introduced into the Swedish curriculum in 2018, with parts of the programming education to be conducted in the subject of mathematics. Many mathematics teachers, however, feel uncertain about how this teaching should be carried out in practice. The purpose of this work is to study programming in the subject of mathematics from the perspective of students in years 7-9, especially in terms of mathematical learning and learning how to think when programming (computational thinking). This is done as a case study in a ninth-grade class that was taught programming during 6 lessons on 5 occasions. The study consists of classroom observations, short questionnaires and interviews with the students. The conclusion of the case study is that the students are predominantly positive about programming in mathematics and see it as something new and different. They see programming as creative compared to other mathematics lessons and appreciate the direct response the computers give. However, they have difficulty seeing direct learning in mathematics, except that they get to use variables. It is not enough for the programming tasks to contain mathematics for the students to experience that they learn mathematics when they program. Which signs of computational thinking appear after the programming lessons depends on how computational thinking is defined. In this work, computational thinking is divided into six main concepts: abstraction, algorithmic thinking, automation, decomposition, troubleshooting and generalization. The students in this work show signs of algorithmic thinking, express that they appreciate automation, and learn to work through troubleshooting. In addition, they describe that they collaborate and communicate more in the programming lessons than in other mathematics lessons, and they appreciate being able to create their own programming tasks and work with open problems.
10

Xi, Jiahe. "Cardiac mechanical model personalisation and its clinical applications." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:0db4cf52-4f64-4ee0-8933-3fb49d64aee6.

Full text
Abstract:
An increasingly important research area within the field of cardiac modelling is the development and study of methods of model-based parameter estimation from clinical measurements of cardiac function. This provides a powerful approach for the quantification of cardiac function, with the potential to ultimately lead to the improved stratification and treatment of individuals with pathological myocardial mechanics. In particular, the diastolic function (i.e., blood filling) of the left ventricle (LV) is affected by its capacity for relaxation, or the decay in residual active tension (AT), the inhibition of which limits the relaxation of the LV chamber, which in turn affects its compliance (or its reciprocal, stiffness). The clinical determination of these two factors, corresponding to the diastolic residual AT and the passive constitutive parameters (stiffness) in the cardiac mechanical model, is thus essential for assessing LV diastolic function. However, these parameters are difficult to assess in vivo, and the traditional criterion for diagnosing diastolic dysfunction is subject to many limitations and controversies. In this context, the objective of this study is to develop applicable model-based methodologies to estimate in vivo, from 4D imaging measurements and LV cavity pressure recordings, these clinically relevant parameters (passive stiffness and active diastolic residual tension) in computational cardiac mechanical models, which enable the quantification of key clinical indices characterising cardiac diastolic dysfunction. Firstly, a sequential data assimilation framework has been developed, covering various types of existing Kalman filters, outlined in chapter 3. Based on these developments, chapter 4 demonstrates that the novel reduced-order unscented Kalman filter can accurately retrieve the homogeneous and regionally varying constitutive parameters from synthetic noisy motion measurements. This work has been published in Xi et al. 2011a. Secondly, this thesis has investigated the development of methods that can be applied in clinical practice, which has, in turn, introduced additional difficulties and opportunities. This thesis presents the first study in the literature, to the best of our knowledge, to estimate human constitutive parameters using clinical data, and demonstrates, for the first time, that while an end-diastolic MR measurement does not constrain the mechanical parameters uniquely, it does provide a potentially robust indicator of myocardial stiffness. This work has been published in Xi et al. 2011b. However, an unresolved issue in patients with diastolic dysfunction is that the estimation of myocardial stiffness cannot be decoupled from the diastolic residual AT, because of the impaired ventricular relaxation during diastole. To further address this problem, chapter 6 presents the first study to estimate diastolic parameters of the LV from cine and tagged MRI measurements and LV cavity pressure recordings, separating the passive myocardial constitutive properties and the diastolic residual AT. We apply this framework to three clinical cases, and the results show that the estimated constitutive parameters and residual active tension appear to be promising candidates to delineate healthy and pathological cases. This work has been published in Xi et al. 2012a. Nevertheless, the need to invasively acquire the LV pressure measurement limits the wide application of this approach. Chapter 7 addresses this issue by analysing the feasibility of using two kinds of non-invasively available pressure measurements for the purpose of inverse parameter estimation. This work has been submitted for publication in Xi et al. 2012b.
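At the heart of the unscented Kalman filtering used above is the unscented transform: a handful of sigma points are pushed through the forward model, and the parameter estimate is corrected against the measurement. Below is a minimal, generic unscented update for a static parameter vector, with a hypothetical scalar forward model h standing in for the cardiac mechanics simulator; the reduced-order filter in the thesis is considerably more elaborate:

    import numpy as np

    def ukf_parameter_update(theta, P, y_obs, h, R, kappa=1.0):
        """One unscented Kalman update of (theta, P) given y_obs = h(theta) + noise."""
        n = theta.size
        S = np.linalg.cholesky((n + kappa) * P)          # sigma-point offsets
        sigmas = [theta] + [theta + S[:, i] for i in range(n)] \
                         + [theta - S[:, i] for i in range(n)]
        W = np.full(2 * n + 1, 0.5 / (n + kappa))
        W[0] = kappa / (n + kappa)
        Y = np.array([h(s) for s in sigmas])             # propagate through the model
        y_mean = W @ Y
        Pyy = R + sum(w * np.outer(y - y_mean, y - y_mean) for w, y in zip(W, Y))
        Pty = sum(w * np.outer(s - theta, y - y_mean) for w, s, y in zip(W, sigmas, Y))
        K = Pty @ np.linalg.inv(Pyy)                     # Kalman gain
        return theta + K @ (y_obs - y_mean), P - K @ Pyy @ K.T

    h = lambda th: np.array([np.exp(-th[0])])            # hypothetical forward model
    theta, P = np.array([0.5]), np.array([[0.25]])
    for y in [0.40, 0.42, 0.38]:                         # synthetic observations
        theta, P = ukf_parameter_update(theta, P, np.array([y]), h, np.array([[1e-3]]))
    print(theta)                                         # moves towards -ln(0.40) ~ 0.92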
11

Bosson, Maël. "Adaptive algorithms for computational chemistry and interactive modeling." PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00846458.

Full text
Abstract:
At the atomic scale, interactive physically-based modeling tools are more and more in demand. Unfortunately, solving the underlying physics equations at interactive rates is computationally challenging. In this dissertation, we propose new algorithms that allow for interactive modeling of chemical structures. We first present a modeling tool to construct structural models of hydrocarbon systems. The physically-based feedbacks are based on the Brenner potential. In order to be able to interactively edit systems containing numerous atoms, we introduce a new adaptive simulation algorithm. Then, we introduce what we believe to be the first interactive quantum chemistry simulation algorithm at the Atom Superposition and Electron Delocalization Molecular Orbital (ASED-MO) level of theory. This method is based on the divide-and-conquer (D&C) approach, which we show is accurate and efficient for this non-self-consistent semi-empirical theory. We then propose a novel Block-Adaptive Quantum Mechanics (BAQM) approach to interactive quantum chemistry. BAQM constrains some nuclei positions and some electronic degrees of freedom on the fly to simplify the simulation. Finally, we demonstrate several applications, including one study of graphane formation, interactive simulation for education purposes, and virtual prototyping at the atomic scale, both on desktop computers and in virtual reality environments.
12

Nalluri, Joseph Jayakar. "NETWORK ANALYTICS FOR THE MIRNA REGULOME AND MIRNA-DISEASE INTERACTIONS." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/5012.

Full text
Abstract:
miRNAs are non-coding RNAs, approximately 22 nucleotides in length, that inhibit gene expression at the post-transcriptional level. By virtue of this gene regulation mechanism, miRNAs play a critical role in several biological processes and patho-physiological conditions, including cancers. miRNA behavior is the result of a multi-level complex interaction network involving miRNA-mRNA, TF-miRNA-gene, and miRNA-chemical interactions; hence the precise patterns through which a miRNA regulates a given disease are still elusive. Herein, I have developed an integrative genomics pipeline to (i) build a miRNA regulomics and data analytics repository, and (ii) create and model these interactions as networks, using optimization techniques, motif-based analyses, network inference strategies and influence diffusion concepts to predict miRNA regulation and its role in diseases, especially cancers. With these methods, we are able to determine the regulatory behavior of miRNAs, identify potential causal miRNAs in specific diseases, and suggest potential biomarkers and targets for drug and medicinal therapeutics.
13

Bauer, Pavol. "Parallelism in Event-Based Computations with Applications in Biology." Doctoral thesis, Uppsala universitet, Tillämpad beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-332009.

Full text
Abstract:
Event-based models find frequent usage in fields such as computational physics and biology, as they may contain both continuous and discrete state variables and may incorporate both deterministic and stochastic state transitions. If the state transitions are stochastic, computer-generated random numbers are used to obtain the model solution; this type of event-based computation is also known as Monte Carlo simulation. In this thesis, I study different approaches to executing event-based computations on parallel computers. This ultimately allows users to retrieve their simulation results in a fraction of the original computation time. As system sizes continue to grow and models must be simulated over ever longer time scales, such an approach is necessary for current computational tasks. More specifically, I propose several ways to asynchronously simulate such models on parallel shared-memory computers, for example using parallel discrete-event simulation or task-based computing. The particular event-based models studied herein find applications in systems biology, computational epidemiology and computational neuroscience. In the presented studies, the proposed methods allow for high efficiency of the parallel simulation, typically scaling well with the number of computer cores used. As the scaling typically depends on individual model properties, the studies also investigate which quantities have the greatest impact on simulation performance. Finally, the presented studies include other insights into event-based computations, such as methods for estimating parameter sensitivity in stochastic models and for simulating models that include both deterministic and stochastic state transitions.
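The canonical event-based Monte Carlo computation in systems biology is Gillespie's stochastic simulation algorithm, in which exponentially distributed waiting times drive discrete state transitions; the thesis is concerned with executing computations of this kind in parallel, e.g. over many trajectories or spatial subvolumes. A minimal serial version, with an illustrative birth-death example, might look like this:

    import numpy as np

    def gillespie(x0, stoich, rates, propensities, t_end, seed=0):
        """Exact stochastic simulation of a reaction network (direct method)."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, np.asarray(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = propensities(x, rates)            # propensity a_j(x) of each reaction
            a0 = a.sum()
            if a0 == 0.0:                         # no reaction can fire any more
                break
            t += rng.exponential(1.0 / a0)        # waiting time to the next event
            j = rng.choice(len(a), p=a / a0)      # which reaction fires
            x += stoich[j]
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

    # birth-death process: 0 -> X at rate k1, X -> 0 at rate k2 * x
    stoich = np.array([[1.0], [-1.0]])
    prop = lambda x, k: np.array([k[0], k[1] * x[0]])
    times, states = gillespie([10], stoich, [5.0, 0.5], prop, t_end=20.0)
    print(states[-1])   # fluctuates around the mean copy number k1 / k2 = 10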
UPMARC
14

Olofsson, Nils. "Kidney Dynamic Model Enrichment." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-242315.

Full text
Abstract:
This thesis explores and explains a method that uses discrete curvature as a feature to find regions of vertices on a kidney surface mesh that are likely to indicate the presence of an underlying tumor. Vertices are tagged based on curvature type, and mathematical morphology is used to form regions on the mesh. The size and location of the tumor are approximated by fitting a sphere to this region. The method is intended to be employed in noninvasive radiotherapy with a dynamic soft-tissue model. It could also provide an alternative to the volumetric methods used to segment tumors. A validation is made using the images from which the kidney mesh was constructed, in which the tumor is visible for comparison with the method's result. The dynamic kidney model is validated using the Hausdorff distance, and it is explained how this can be computed efficiently using bounding volume hierarchies. Both the tumor-finding method and the dynamic model show promising results, since they lie within the limits used by practitioners during therapy.
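The Hausdorff distance used for the validation measures the worst-case disagreement between two surfaces. The sketch below computes it for point samples, with a k-d tree playing the pruning role that the thesis assigns to bounding volume hierarchies, so the brute-force O(|A||B|) scan is avoided; the point sets here are synthetic:

    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two point sets A and B."""
        d_ab = cKDTree(B).query(A)[0].max()   # sup over A of the distance to B
        d_ba = cKDTree(A).query(B)[0].max()   # sup over B of the distance to A
        return max(d_ab, d_ba)

    rng = np.random.default_rng(0)
    A = rng.normal(size=(1000, 3))            # sample of the reference surface
    B = A + 0.01 * rng.normal(size=A.shape)   # slightly deformed copy
    print(hausdorff(A, B))                    # small: the two surfaces nearly agree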
15

Prost, Jean-Philippe. "Modelling Syntactic Gradience with Loose Constraint-based Parsing." PhD thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00352828.

Full text
Abstract:
The grammaticality of a sentence is usually conceived as a binary notion: a sentence is either grammatical or ungrammatical. However, a growing body of work is concerned with intermediate degrees of acceptability, to which the term gradience sometimes refers. To date, the majority of that work has concentrated on the study of human judgements of syntactic gradience. This study explores the possibility of building a robust model that agrees with those human judgements.
We suggest broadening the concepts of Intersective Gradience and Subsective Gradience, proposed by Aarts for modelling graded judgements, to ill-formed language. Under this new model, the problem raised by gradience is that of classifying an utterance into a particular category, according to criteria based on the utterance's syntactic characteristics. We extend the notion of Intersective Gradience (IG) so that it concerns choosing the best solution among a set of candidates, and that of Subsective Gradience (SG) so that it concerns computing the degree of typicality of that structure within its category. IG is then modelled by an optimality criterion, while SG is modelled by computing a degree of grammatical acceptability. As for the syntactic characteristics needed to classify an utterance, our review of different representational frameworks for natural-language syntax shows that they can easily be represented in a Model-Theoretic Syntax framework. We opt for Property Grammars (PG), which offer precisely the possibility of modelling the characterisation of an utterance. We present here a fully automated solution for modelling syntactic gradience, which proceeds by characterising a well-formed or ill-formed sentence, generating an optimal parse tree, and computing a degree of grammatical acceptability for the utterance.
Through the development of this new model, this work makes three main contributions.
First, we specify a logical system for PG that allows its formalisation to be revisited from a model-theoretic perspective. In particular, it formalises the constraint satisfaction and constraint relaxation mechanisms at work in PG, as well as the way they license the projection of a category during parsing. This new system introduces the notion of loose satisfaction, along with a first-order-logic formulation for reasoning about an utterance.
Second, we present our implementation of Loose Satisfaction Chart Parsing (LSCP), which we prove always generates a complete and optimal parse. The approach is based on dynamic programming together with the mechanisms described above. Although of high complexity, this algorithmic solution performs well enough to let us experiment with our gradience model.
And third, having postulated that the prediction of human acceptability judgements can be based on factors derived from LSCP, we present a numerical model for estimating the degree of grammatical acceptability of an utterance. We measure a good correlation between these scores and human judgements of grammatical acceptability. Moreover, our model turns out to perform better than a pre-existing model that we use as a reference, which, for its part, was evaluated using manually generated parses.
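LSCP shares its dynamic-programming skeleton with classical chart parsing, filling a table of analyses for ever longer spans of the sentence. The toy CKY recogniser below shows that skeleton on a hypothetical grammar in Chomsky normal form; the loose constraint satisfaction and optimality scoring that distinguish LSCP are not shown:

    grammar = {                 # hypothetical toy grammar in Chomsky normal form
        ("NP", "VP"): "S",
        ("Det", "N"): "NP",
        ("V", "NP"): "VP",
    }
    lexicon = {"the": "Det", "dog": "N", "cat": "N", "saw": "V"}

    def cky_recognise(words):
        n = len(words)
        chart = [[set() for _ in range(n + 1)] for _ in range(n)]
        for i, w in enumerate(words):                 # width-1 spans from the lexicon
            chart[i][i + 1].add(lexicon[w])
        for width in range(2, n + 1):                 # combine shorter spans
            for i in range(n - width + 1):
                j = i + width
                for k in range(i + 1, j):             # split point
                    for B in chart[i][k]:
                        for C in chart[k][j]:
                            if (B, C) in grammar:
                                chart[i][j].add(grammar[(B, C)])
        return "S" in chart[0][n]

    print(cky_recognise("the dog saw the cat".split()))   # True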
16

Mishkinis, Anton. "Extension des méthodes de géométrie algorithmique aux structures fractales." PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00991384.

Full text
Abstract:
Defining shapes by such iterative processes generates structures with interesting specific properties: roughness, lacunarity, and so on. However, classical geometric models are not suited to describing these shapes. With the aim of developing an iterative modeller for designing fractal objects described by BCIFS, we have developed a set of generic tools and algorithms that allow us to evaluate, characterise and analyse various geometric properties of fractals (localisation, computation of the convex hull, of the distance from a point, etc.). We have identified the properties of standard operations (intersection, union, offset, ...) that make it possible to compute an approximate image of fractals and, moreover, to optimise these approximation algorithms. In some cases, it is possible to construct a CIFS with a generalised Hutchinson operator whose attractor is sufficiently close, with respect to the Hausdorff metric, to the result of the operation. We have developed a generic algorithm for computing such CIFS to a given precision. We have defined the self-similarity property of an operation, which determines the set of transformations used in the resulting iterative system. To construct an exact CIFS of the image, if one exists, all the necessary similarities must be proved manually. We also make explicit the condition on the operation under which the result can be represented by an IFS with a generalised Hutchinson operator; in that case, only this condition needs to be proved manually.
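The attractor of an iterated function system can be approximated cheaply with the classic 'chaos game': repeatedly apply a randomly chosen contraction and record the points visited. The sketch below does this for the Sierpinski triangle, a textbook IFS rather than one of the thesis's BCIFS models, and then estimates the attractor's convex hull, one of the geometric properties the thesis computes:

    import numpy as np
    from scipy.spatial import ConvexHull

    # three contractions of ratio 1/2 towards the triangle's corners
    corners = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])]
    maps = [lambda p, v=v: 0.5 * (p + v) for v in corners]

    rng = np.random.default_rng(0)
    p = np.array([0.2, 0.2])
    points = []
    for _ in range(20000):
        p = maps[rng.integers(len(maps))](p)   # apply a random contraction
        points.append(p.copy())
    points = np.array(points)                  # scatter-plot these to see the attractor

    print(ConvexHull(points).volume)           # ~0.5, the area of the triangular hull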
17

Szekely, Tamas. "Stochastic modelling and simulation in cell biology." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f9b8dbe6-d96d-414c-ac06-909cff639f8c.

Full text
Abstract:
Modelling and simulation are essential to modern research in cell biology. This thesis follows a journey starting from the construction of new stochastic methods for discrete biochemical systems to using them to simulate a population of interacting haematopoietic stem cell lineages. The first part of this thesis is on discrete stochastic methods. We develop two new methods, the stochastic extrapolation framework and the Stochastic Bulirsch-Stoer method. These are based on the Richardson extrapolation technique, which is widely used in ordinary differential equation solvers. We believed that it would also be useful in the stochastic regime, and this turned out to be true. The stochastic extrapolation framework is a scheme that admits any stochastic method with a fixed stepsize and known global error expansion. It can improve the weak order of the moments of these methods by cancelling the leading terms in the global error. Using numerical simulations, we demonstrate that this is the case up to second order, and postulate that it also holds for higher orders. Our simulations show that extrapolation can greatly improve the accuracy of a numerical method. The Stochastic Bulirsch-Stoer method is another highly accurate stochastic solver. Furthermore, using numerical simulations we find that it is better able than competing methods to retain its high accuracy for larger timesteps, meaning it remains accurate even when simulation time is sped up. This is a useful property for simulating the complex systems that researchers are often interested in today. The second part of the thesis is concerned with modelling a haematopoietic stem cell system, which consists of many interacting niche lineages. We use a vectorised tau-leap method to examine the differences between a deterministic and a stochastic model of the system, and investigate how coupling niche lineages affects the dynamics of the system at the homeostatic state as well as after a perturbation. We find that larger coupling allows the system to find the optimal steady-state blood cell levels. In addition, when the perturbation is applied randomly to the entire system, larger coupling also results in smaller post-perturbation cell fluctuations compared to non-coupled cells. In brief, this thesis contains four main sets of contributions: two new high-accuracy discrete stochastic methods that have been numerically tested; an improvement, namely vectorisation together with a common stepsize-adaptation scheme, that can be used with any leaping method; and an investigation of the effects of coupling lineages in a heterogeneous population of haematopoietic stem cell niche lineages.
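The extrapolation idea fits in a few lines: if a method's weak error has a known expansion in the stepsize, runs at stepsizes dt and dt/2 can be combined to cancel the leading term. The sketch below applies plain Richardson extrapolation to the weak-order-one Euler-Maruyama method on geometric Brownian motion, where the exact mean is known; the thesis's framework generalises this to higher orders and other methods, and the parameters here are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    def em_mean(x0, mu, sigma, T, n_steps, n_paths):
        """Euler-Maruyama estimate of E[X_T] for dX = mu*X dt + sigma*X dW."""
        dt = T / n_steps
        x = np.full(n_paths, x0)
        for _ in range(n_steps):
            x = x + mu * x * dt + sigma * x * rng.normal(0.0, np.sqrt(dt), n_paths)
        return x.mean()

    x0, mu, sigma, T = 1.0, 0.5, 0.3, 1.0
    coarse = em_mean(x0, mu, sigma, T, 25, 200_000)    # stepsize dt
    fine = em_mean(x0, mu, sigma, T, 50, 200_000)      # stepsize dt/2
    extrap = 2.0 * fine - coarse                       # cancels the O(dt) weak-error term
    print(coarse, fine, extrap, x0 * np.exp(mu * T))   # extrap is closest to the exact mean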
18

Edvinsson, Marcus. "Implementing the circularly polarized light method for determining wall thickness of cellulosic fibres." Thesis, Uppsala universitet, Bildanalys och människa-datorinteraktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-174066.

Full text
Abstract:
The wall thickness of pulp fibers plays a major role in the paper industry, but it is currently not possible to measure this property without manual laboratory work. In 2007, researcher Ho Fan Jang patented a technique to automatically measure fiber wall thickness, combining the unique optical properties of pulp fibers with image analysis. In short, the method uses an optical system to create images whose color values represent the retardation at a particular wavelength rather than the intensity. A device based on this patent has since been developed by Eurocon Analyzer. This thesis investigates the software aspects of this technique, using sample images generated by the Eurocon Analyzer prototype. The software developed in this thesis is subdivided into three parts, considered independently: first, the problem of solving for wall thickness from the colors in the images; second, the image analysis process of identifying fibers and good points at which to measure them; and last, an investigation of how statistical analysis can be applied to improve results and derive other useful properties such as fiber coarseness. Several problems must be overcome to use this technique. One such problem is that it may be difficult to disambiguate the colors produced by fibers of different thickness; this complication may be reduced by using image analysis and statistical analysis. Another challenge is that theoretical values often differ greatly from the observed values, which makes the computational aspect of the method problematic. The results of this thesis show that the effects of these problems can be greatly reduced and that the method offers promising results. The results clearly distinguish between different pulp samples and show their expected characteristics, but more qualitative reference measurements are needed in order to draw conclusions about the correctness of the results.
19

Wang, Bei. "Separating Features from Noise with Persistence and Statistics." Diss., 2010. http://hdl.handle.net/10161/2982.

Full text
Abstract:

In this thesis, we explore techniques in statistics and persistent homology which detect features among data sets such as graphs, triangulations and point clouds. We accompany our theorems with algorithms and experiments to demonstrate their effectiveness in practice.

We start with the derivation of graph scan statistics, a measure useful to assess the statistical significance of a subgraph in terms of edge density. We cluster graphs into densely-connected subgraphs based on this measure. We give algorithms for finding such clusterings and experiment on real-world data.

We next study statistics on persistence, for piecewise-linear functions defined on the triangulations of topological spaces. We derive persistence pairing probabilities among vertices in the triangulation. We also provide upper bounds for total persistence in expectation.

We continue by examining the elevation function defined on the triangulation of a surface. Its local maxima, obtained by persistence pairing, are useful in describing features of the triangulations of protein surfaces. We describe an algorithm to compute these local maxima, with a run-time ten thousand times faster in practice than the previous method. We connect this improvement with the total Gaussian curvature of the surfaces.

Finally, we study a stratification learning problem: given a point cloud sampled from a stratified space, which points belong to the same stratum, at a given scale level? We assess the local structure of a point in relation to its neighbors using kernel and cokernel persistent homology. We prove the effectiveness of such assessment through several inference theorems, under the assumption of a dense sample. The topological inference theorem relates the sample density to the homological feature size. The probabilistic inference theorem provides sample estimates to assess the local structure with confidence. We describe an algorithm that computes the kernel and cokernel persistence diagrams and prove its correctness. We further experiment on simple synthetic data.
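As one concrete reading of the scan-statistic idea, the snippet below scores how surprising a candidate cluster's edge count is under a simple null in which every possible edge appears independently with the whole graph's background edge probability. This null model and the numbers are purely illustrative, not the statistic derived in the thesis:

    from scipy import stats

    def edge_density_pvalue(n_sub, m_sub, n, m):
        """One-sided p-value for observing >= m_sub edges among n_sub vertices,
        under an Erdos-Renyi-style null with the graph's global edge probability."""
        p_global = m / (n * (n - 1) / 2.0)      # background edge probability
        trials = n_sub * (n_sub - 1) // 2       # possible edges inside the subgraph
        return stats.binom.sf(m_sub - 1, trials, p_global)

    # 10 vertices sharing 30 edges inside a sparse 1000-vertex, 5000-edge graph
    print(edge_density_pvalue(n_sub=10, m_sub=30, n=1000, m=5000))   # tiny: a dense cluster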

