
Dissertations / Theses on the topic 'Mathematical and computational ophthalmology'


Consult the top 50 dissertations / theses for your research on the topic 'Mathematical and computational ophthalmology.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Andrews, Brian. "Computational Solutions for Medical Issues in Ophthalmology." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case15275972120621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Macdougall, Lindsey C. "Mathematical modelling of retinal metabolism." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/30615/.

Full text
Abstract:
Age-related macular degeneration and diabetic retinopathy, in which the cells at the back of the eye degrade due to age and diabetes respectively, are prevalent causes of vision loss in adults. We formulate mathematical models of retinal metabolic regulation to investigate defects that may be responsible for pathology. Continuum PDE models are developed to test whether rod photoreceptors, light detecting cells in the eye, may regulate their energy demand by adapting their length under light and dark conditions. These models assume photoreceptor length depends on the availability of nutrients, such as oxygen, which diffuse and are consumed within the photoreceptor. Our results suggest that the length is limited by oxygen and phosphocreatine shuttle-derived ATP under dark and light conditions respectively. Parameter sensitivity analysis indicates that lowered mitochondrial efficiency due to ageing may be responsible for the damage to and death of photoreceptors that are characteristic of age-related macular degeneration. In the latter part of this thesis we shift our focus to the inner retina and examine how metabolite levels in the tissue surrounding the neurons (highly sensitive, excitable cells that transmit electrical signals) are regulated by glial cells. For instance, stimulated neurons activate their neighbours via the release of the neurotransmitter glutamate, while glial cells regulate neuronal activity via glutamate uptake. Diabetes produces large fluctuations in blood glucose levels, and eventually results in neuronal cell death, causing vision loss. We generate an ODE model for the exchange of key metabolites between neurons and surrounding cells. Using numerical and analytical techniques, we use the model to show that the fluctuations in blood glucose and metabolic changes associated with diabetes may result in abnormally high glutamate levels in the inner retina, which could lead to neuronal damage via excitotoxicity (unregulated neuronal stimulation).
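The continuum models described above rest on diffusion-consumption balances for metabolites such as oxygen. A minimal steady-state sketch of that kind of balance along a photoreceptor is given below; the notation (C, D, Q, L) is assumed for illustration and this is not the system actually analysed in the thesis.

```latex
% Illustrative steady-state diffusion-consumption balance for oxygen along a
% photoreceptor of length L (assumed notation: C = O2 concentration,
% D = diffusivity, Q = uptake rate, C_ch = choroidal supply).
\begin{align*}
  D\,\frac{d^{2}C}{dx^{2}} - Q(C) &= 0, \qquad 0 < x < L,\\
  C(0) &= C_{\mathrm{ch}}, \qquad \left.\frac{dC}{dx}\right|_{x=L} = 0,\\
  Q(C) &= \frac{Q_{\max}\,C}{C + K_{m}} \quad \text{(Michaelis--Menten uptake)} .
\end{align*}
```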
APA, Harvard, Vancouver, ISO, and other styles
3

Casarin, Stefano. "Mathematical models in computational surgery." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS008/document.

Full text
Abstract:
Computational surgery is a new science that aims to bring surgery and computational sciences together in order to achieve significant improvements in both fields. With the evolution of new surgical techniques, a close collaboration between surgeons and computational scientists has become unavoidable and essential to optimize surgical care. Extensive use of mathematical models is the cornerstone of this new field. The present thesis shows how a systematic approach to a clinical problem brought us to answer open questions in the field of surgery by using mathematical models on a large scale. In general, our approach includes (i) an overview of the problem, (ii) the identification of which physiological system or systems must be studied to address the question, and (iii) a mathematical modeling effort, which has always been driven by the pursuit of a compromise between system complexity and closeness to physiological reality. In the first part, we focused on the optimization of the boundary conditions to be applied to a bioreactor used to re-populate lung tissue from a donor. A geometrical model of the tracheobronchial tree combined with a solute deposition model allowed us to retrieve the set of pressures to be applied to the pumps serving the bioreactor in order to reach an optimal distribution of nourishment across the lung scaffold. In the second part, we focused on the issue of post-surgical restenosis of vein grafts used to bypass arterial occlusions. We replicated the event of restenosis with several mathematical models that allow us to study the clinical evidence and to test hypotheses with an escalating level of complexity and accuracy. Finally, we developed a solid framework to test the effect of gene therapies aimed at limiting restenosis. Interestingly, we found that by controlling a specific group of genes, lumen patency doubles after a month of follow-up. With the results achieved, we showed how mathematical modeling can be used as a powerful tool for surgical innovation.
APA, Harvard, Vancouver, ISO, and other styles
4

He, Xiaoyin. "CHARACTERIZATION OF CORNEAL BIOMECHANICAL PROPERTIES USING EXPERIMENTAL AND COMPUTATIONAL METHODS." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1280178567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Baker, Nathan Andrew. "Mathematical and computational modeling of biomolecular systems /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2001. http://wwwlib.umi.com/cr/ucsd/fullcit?p3007138.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Remias, Michael George. "Computational studies of some fuzzy mathematical problems." Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/1147.

Full text
Abstract:
In modelling and optimizing real world systems and processes, one usually ends up with a linear or nonlinear programming problem, namely maximizing one or more objective functions subject to a set of constraint equations or inequalities. For many cases, the constraints do not need to be satisfied exactly, and the coefficients involved in the model are imprecise in nature and have to be described by fuzzy numbers to reflect the real world nature. The resulting mathematical programming problem is referred to as a fuzzy mathematical programming problem. Over the past decades, a great deal of work has been conducted to study fuzzy mathematical programming problems and a large volume of results has been obtained. However, many issues have not been resolved. This research is thus undertaken to study two types of fuzzy mathematical programming problems. The first type of problem is fuzzy linear programming in which the objective function contains fuzzy numbers. To solve this type of problem, we firstly introduce the concept of fuzzy max order and non-dominated optimal solution to fuzzy mathematical programming problems within the framework of fuzzy mathematics. Then, based on the new concept introduced, various theorems are developed, which involve converting the fuzzy linear programming problem to a four-objective linear programming problem of non-fuzzy numbers. The theoretical results and methods developed are then validated and their applications for solving fuzzy linear problems are demonstrated through examples. The second type of problem which we tackle in this research is fuzzy linear programming in which the constraint equations or inequalities contain fuzzy numbers. For this work, we first introduce a new concept, the α-fuzzy max order. Based on this concept, the general framework of an α-fuzzy max order method is developed for solving fuzzy linear programming problems with fuzzy parameters in the constraints. For the special cases in which the constraints consist of inequalities containing fuzzy numbers with isosceles triangular or trapezoidal membership functions, we prove that the feasible solution space can be determined by the respective 3n or 4n non-fuzzy inequalities. For the general cases in which the constraints contain fuzzy numbers with any other form of membership functions, robust numerical algorithms have been developed for the determination of the feasible solution space and the optimal solution to the fuzzy linear programming problem in which the constraints contain fuzzy parameters. Further, by using the results for both the first and second types of problems, general algorithms have also been developed for general fuzzy linear programming problems in which both the objective function and the constraint inequalities contain fuzzy numbers with any form of membership functions. Some examples are then presented to validate the theoretical results and the algorithms developed, and to demonstrate their applications.
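To make the idea of turning a fuzzy coefficient into crisp constraints concrete, one common α-cut reading of a single constraint with a symmetric triangular fuzzy coefficient can be sketched as follows. This is illustrative notation only; the thesis's fuzzy max order and α-fuzzy max order constructions differ in detail.

```latex
% \tilde{a} is a symmetric triangular fuzzy number centred at a with spread \delta;
% its alpha-cut is an interval, and a robust reading of the fuzzy constraint
% keeps the worst case of that interval (illustrative, not the thesis's construction).
\begin{align*}
  \tilde{a} = (a-\delta,\; a,\; a+\delta), \qquad
  [\tilde{a}]_{\alpha} = \bigl[\,a-(1-\alpha)\delta,\; a+(1-\alpha)\delta\,\bigr],
  \qquad \alpha \in [0,1],\\[4pt]
  \tilde{a}\,x \;\lesssim\; b
  \;\;\Longrightarrow\;\;
  \bigl(a+(1-\alpha)\delta\bigr)\,x \;\le\; b
  \qquad \text{for a prescribed level } \alpha \text{ and } x \ge 0 .
\end{align*}
```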
APA, Harvard, Vancouver, ISO, and other styles
7

Thorn, Graeme John. "Mathematical and computational modelling of friction stir welding." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.426545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Southern, James Alastair. "Mathematical and computational modelling of ultrasound elasticity imaging." Thesis, University of Oxford, 2006. http://ora.ox.ac.uk/objects/uuid:242fddf0-ef9c-4a90-88f5-c7b41f4bda5a.

Full text
Abstract:
In this thesis a parameter recovery method for use in ultrasound elasticity imaging is developed. Elasticity imaging is a method for using a series of ultrasound images (and the displacement field between them) to estimate the spatial variation of the stiffness of the tissue being imaged. Currently iterative methods are used to do this: a model of tissue mechanics is assumed and a large number of simulations using varying parameters are compared to the actual displacement field. The aim of this work is to develop a solution method that works back from the known displacement field to determine the tissue properties, reducing the number of simulations that must be performed to one. The parameter recovery method is based on the formulation and direct solution of the 2-d linear elasticity inverse problem using finite element methods. The inverse problem is analyzed mathematically and the existence and uniqueness of solutions is described for varying numbers of displacement fields and appropriate boundary conditions. It is shown to be hyperbolic (and so difficult to solve numerically) and then reformulated as a minimization problem with hyperbolic Euler-Lagrange equations. A finite element solution of the minimization problem is developed and implemented. The results of the finite element implementation are shown to work well in recovering the parameters used in numerical simulations of the linear elasticity forward problem so long as these are continuous. The method is shown to be robust in dealing with small errors in displacement estimation and larger errors in the boundary values of the parameters. The method is also tested on displacement fields calculated from series of real ultrasound images. The validity of modelling the ultrasound elasticity imaging process as a 2-d problem is discussed. The assumption of plane strain is shown not to be valid and methods for extending the parameter recovery method to 3 dimensions once 3-d ultrasound becomes more widely used are described (but not implemented).
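For orientation, the plane linear elasticity forward problem referred to above can be stated as below; in the elasticity-imaging inverse problem the displacement u is measured and the stiffness parameters are the unknowns. This is a standard textbook formulation assumed here for illustration, not text quoted from the thesis.

```latex
% Plane linear elasticity with Lame parameters \mu, \lambda and displacement u.
% Forward problem: given \mu, \lambda, find u.
% Elasticity-imaging inverse problem: given measured u, recover \mu (the stiffness).
\begin{align*}
  \nabla\cdot\sigma = 0 \ \text{ in } \Omega, \qquad
  \sigma = 2\mu\,\varepsilon(u) + \lambda\,(\nabla\cdot u)\,I, \qquad
  \varepsilon(u) = \tfrac12\bigl(\nabla u + \nabla u^{\mathsf{T}}\bigr).
\end{align*}
```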
APA, Harvard, Vancouver, ISO, and other styles
9

GIRIBONE, PIER GIUSEPPE. "Mathematical modeling in Quantitative Finance and Computational Economics." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1046108.

Full text
Abstract:
The first part of my PhD Thesis deals with different Machine Learning techniques, mainly applied to solve financial engineering and risk management issues. After a short literature review, every chapter analyzes a particular topic linked to the implementation of these models, showing the most suitable methodologies able to solve it efficiently. The following topics are therefore covered:
- Data Fitting and Regression
- Forecasting
- Classification
- Outlier Detection and Data Quality
- Pricing

Every chapter provides the theoretical explanation of the model, the description of the implementation in a numerical computing environment and the solution for real case studies. Among others, the main technologies discussed in this work are the following:
- Shallow Multi-Layer networks
- Feed-forward and static networks
- Radial Basis Function (RBF) networks
- Recurrent and Dynamic Neural Networks
- Nonlinear Autoregressive (NAR) networks and Nonlinear Autoregressive networks with exogenous variables (NARX)
- Deep Neural networks
- Convolutional Networks (Conv Net)
- Fuzzy C-Means (FCM) clustering
- Self-Organizing Maps (SOM) and Kohonen networks
- Neural Networks with Circular Neurons
- Auto-Associative Neural Networks (AANN) and Auto-encoders for Nonlinear Principal Component Analysis (NLPCA)

The second part of my PhD Thesis deals with the problem of Optimal Control in Quantitative Finance and Labour Economics. Even if the fields of application are hugely different, they share the same mathematical instrument for their solution: the Bellman principle of optimality. After a short literature review that introduces the financial and economic problems solved in this part, the following four chapters show the most popular pricing techniques used to evaluate an option: closed formulas, Partial Differential Equations (PDE), lattice methods and Stochastic Differential Equations (SDE). Chapter 6 addresses the problem of early exercise in option pricing and shows how to apply the principle of optimality in the models presented in the previous chapters. The following pricing methodologies are covered:
- Stochastic Trees and Lattice models (Cox-Ross-Rubinstein, Tian, Jarrow-Rudd, Drifted CRR, Leisen-Reimer, CRR Trinomial, Adaptive Mesh Method (AMM), Pentanomial and Heptanomial Trees)
- PDE numerical schemes (Finite Difference Method - FDM, Finite Element Method - FEM and Radial Basis Function - RBF)
- SDE numerical solution (Longstaff-Schwartz Monte Carlo)
- Quasi-closed formulas (Roll-Geske-Whaley, Barone-Adesi-Whaley, Bjerksund-Stensland model)

The last two chapters examine two important Labour Economics dynamic problems in the field of Optimal Control Theory: Implicit Contracts and Wage Bargaining. They share the same solution procedure, which can be synthesized in these steps:
- Infinite-horizon deterministic optimal control problem formulation. The solution for this kind of problem can be found by applying the Hamilton-Jacobi-Bellman (HJB) Equation.
- Design of a Markov Decision Chain for the numerical solution of the previous problem.
- Infinite-horizon stochastic optimal control problem formulation. After the validation of the discretization scheme in the deterministic context, the Markov Decision Chain can be extended in order to solve the stochastic version of the problem. In particular, an Ornstein-Uhlenbeck process has been introduced in the model.

The third part of my PhD Thesis deals with Forecasting and Risk Management in Energy Markets. The first chapter introduces the two studies presented in this field through a short literature review and the regulatory framework. The second chapter suggests some quantitative methods with the aim of managing the main risks of Guarantees of Origin (GOs). Given that GO trading is rather recent, it implements an innovative integrated control system in order to handle market and counterparty risks. The following techniques are covered:
- Market Risk: historical, parametric and Monte Carlo VaR, with a special focus on volatility modeling (historical, implied, GARCH, SABR).
- Liquidity Risk: bid-ask spread analysis.
- Counterparty Risk: Probability of Default estimation starting from listed CDS premia, traded bond prices and statement analysis (KMV model).

The third chapter deals with the energy spot price forecasting problem. The aim of the study is to establish a time horizon within which it is reasonable to predict prices. State-of-the-art architectures based on Deep Learning methods are implemented in order to solve this econometric issue. The analyzed techniques are:
- A multi-layered Nonlinear Autoregressive (NAR) network (endogenous variable: prices).
- A multi-layered Nonlinear Autoregressive network with an exogenous variable (NARX) (endogenous variable: prices; exogenous variable: demand).
- A Long Short-Term Memory (LSTM) network with one feature (prices).
- A Long Short-Term Memory (LSTM) network with two features (prices and demand).
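As a concrete illustration of the lattice methods and Bellman-style early-exercise logic listed above, a minimal Cox-Ross-Rubinstein binomial tree for an American put could be sketched as follows. Parameter names and values are assumptions for illustration, not code from the thesis.

```python
import math

def crr_american_put(S0, K, r, sigma, T, n):
    """Price an American put on a Cox-Ross-Rubinstein binomial tree.

    S0: spot, K: strike, r: risk-free rate, sigma: volatility,
    T: maturity in years, n: number of time steps.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))      # up factor
    d = 1.0 / u                              # down factor
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Option values at maturity for every terminal node (j up moves).
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # Backward induction with the Bellman early-exercise comparison.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            continuation = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(i - j), 0.0)
            values[j] = max(continuation, exercise)
    return values[0]

if __name__ == "__main__":
    # Illustrative parameters only.
    print(crr_american_put(S0=100.0, K=100.0, r=0.03, sigma=0.2, T=1.0, n=500))
```

The max over continuation and immediate exercise in the inner loop is exactly the dynamic-programming step that the Bellman principle of optimality prescribes.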
APA, Harvard, Vancouver, ISO, and other styles
10

Dutta, Jayanta. "Computational aspects of some mathematical and numerical problems." Thesis, University of North Bengal, 2015. http://ir.nbu.ac.in/handle/123456789/1850.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Stahl, Levi Russell. "OBJECT ORIENTED DEVELOPMENT OF A MATHEMATICAL EQUATION EDITOR." MSSTATE, 2005. http://sun.library.msstate.edu/ETD-db/theses/available/etd-07062005-173340/.

Full text
Abstract:
Computers since their inception have been used to solve engineering problems. Toward support of next-generation, customizable, generalized software, a mathematical equation editor has been designed, developed, and tested using object oriented (OO) programming techniques. The motivating purpose of this equation editor is to allow a user to graphically define mathematical equations to be solved in a computational partial differential equation-based problem solving environment. The OO scripting language Python was used in conjunction with the OO GUI toolkit Qt to create the editor. Analysis of the underlying abstraction of a general equation yielded the key concept of an information-holding bounding box. Such boxes hierarchically contain every character and symbol in an equation. Specific rules were formulated to spatially arrange a set of boxes into a properly formatted equation. Robust insertion logic of alphanumeric characters, mathematical symbols, and common function names was implemented for intuitive point-and-click equation building.
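A toy rendering of the information-holding bounding box abstraction described above might look like the following sketch; the class names and layout rules are assumptions for illustration, not the editor's actual design.

```python
class Box:
    """A rectangular region that knows its size and where it sits."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x, self.y = 0.0, 0.0          # set by the parent during layout

class CharBox(Box):
    """Leaf box holding a single character or symbol."""
    def __init__(self, char, width=10, height=14):
        super().__init__(width, height)
        self.char = char

class RowBox(Box):
    """Composite box that lays its children out left to right."""
    def __init__(self, children):
        super().__init__(sum(c.width for c in children),
                         max(c.height for c in children))
        self.children = children

    def layout(self, x=0.0, y=0.0):
        self.x, self.y = x, y
        cursor = x
        for child in self.children:
            child.x, child.y = cursor, y
            cursor += child.width

# "a+b" as a row of three character boxes
expr = RowBox([CharBox("a"), CharBox("+"), CharBox("b")])
expr.layout()
print([(c.char, c.x) for c in expr.children])   # [('a', 0.0), ('+', 10.0), ('b', 20.0)]
```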
APA, Harvard, Vancouver, ISO, and other styles
12

Persson, Mikael. "Cableharness selection for gearboxes using mathematical optimization." Thesis, KTH, Optimeringslära och systemteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209929.

Full text
Abstract:
The Scania modular product system enables the production of thousands of different versions of gearboxes. If each version uses a unique cable harness, this leads to large costs for storage and production. It is desirable to find a smaller set of cable harnesses that fits the needs of all gearboxes. In this report we present two mathematical programming models to accomplish this while minimizing the cost of production and storage. We propose a procedure for partitioning the data into smaller subsets without losing model accuracy. We also show how the solution to the first model may be used as a warm-start solution for the second model. The report focuses on cables for gearbox control systems used in heavy trucks manufactured by Scania. Results from testing the models against data provided by Scania are presented. These results suggest that a substantial reduction in production cost can be achieved. Findings from this project can be used in similar situations, for example engine control system cables and general vehicle electric wiring.
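A generic covering/assignment formulation of this kind of harness-selection problem can be sketched as follows; it is an assumed illustrative model, not the two models actually developed in the report.

```latex
% y_j = 1 if harness j is kept in stock; x_{ij} = 1 if gearbox variant i is built
% with harness j; f_j = storage cost of stocking j; c_{ij} = production cost of
% using harness j (possibly over-specified) on variant i. Illustrative only.
\begin{align*}
  \min \;& \sum_{j} f_j\, y_j \;+\; \sum_{i}\sum_{j} c_{ij}\, x_{ij} \\
  \text{s.t. } & \sum_{j \,:\, j \text{ covers } i} x_{ij} = 1
      && \text{for every gearbox variant } i,\\
  & x_{ij} \le y_j, \qquad x_{ij},\, y_j \in \{0,1\}.
\end{align*}
```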
APA, Harvard, Vancouver, ISO, and other styles
13

Jaroudi, Rym. "Inverse Mathematical Models for Brain Tumour Growth." Licentiate thesis, Linköpings universitet, Tekniska fakulteten, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-141982.

Full text
Abstract:
We study the following well-established model of reaction-diffusion type for brain tumour growth: ∂u/∂t = ∇ · (D(x)∇u) + ρ u(1 − u). This equation describes the change over time of the normalised tumour cell density u as a consequence of two biological phenomena: proliferation and diffusion. We discuss a mathematical method for the inverse problem of locating the brain tumour source (origin) based on the reaction-diffusion model. Our approach consists in recovering the initial spatial distribution of the tumour cells starting from a later state, which can be given by a medical image. We use the nonlinear Landweber regularization method to solve the inverse problem as a sequence of well-posed forward problems. We give full 3-dimensional simulations of the tumour in time on two types of data, the 3D Shepp-Logan phantom and an MRI T1-weighted brain scan from the Internet Brain Segmentation Repository (IBSR). These simulations are obtained using standard finite difference discretisation of the space and time derivatives, generating a simplistic approach that performs well. We also give a variational formulation for the model to open the possibility of alternative derivations and modifications of the model. Simulations with synthetic images show the accuracy of our approach for locating brain tumour sources.
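A minimal sketch of one explicit finite-difference step for the forward reaction-diffusion solve is given below, assuming constant diffusion, logistic proliferation and periodic boundaries on a 2D grid for brevity; the thesis works in 3D on brain geometries, so this is only an illustration.

```python
import numpy as np

def forward_step(u, D, rho, dx, dt):
    """One explicit finite-difference step of du/dt = D*Laplacian(u) + rho*u*(1-u).

    u is the normalised tumour cell density on a square grid; np.roll imposes
    periodic boundaries, standing in for the no-flux brain boundary of the thesis.
    """
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * (D * lap + rho * u * (1.0 - u))

# Illustrative run: a Gaussian seed stands in for the (unknown) tumour source.
n, dx, D, rho = 128, 1.0, 0.5, 0.05
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
u = 0.5 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 20.0)
dt = 0.2 * dx**2 / (4 * D)        # well inside the explicit stability limit dx^2/(4D)
for _ in range(200):
    u = forward_step(u, D, rho, dx, dt)
print(float(u.max()))
```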
APA, Harvard, Vancouver, ISO, and other styles
14

Sawko, Robert. "Mathematical and computational methods of non-Newtonian, multiphase flows." Thesis, Cranfield University, 2012. http://dspace.lib.cranfield.ac.uk/handle/1826/7264.

Full text
Abstract:
The research presented in this thesis is concerned with the development of numerical techniques and mathematical models for non-Newtonian fluids and two-phase flows in pipes and channels. Single-phase, turbulent flow calculations of non-Newtonian fluids were performed initially. Based on the literature, a revised approach to wall modelling is proposed and implemented. The approach uses analytical and experimental analyses of the turbulent boundary layer structure. A comparison with the standard approach is presented. The interaction between turbulence and non-Newtonian behaviour is studied by examining the rate of strain induced by fluctuating components of velocity. A statistical analysis of published DNS data is performed. Finally, a model is proposed where the turbulent rate of strain is determined from turbulence quantities used by the Reynolds-averaged Navier-Stokes model and used in the calculation of molecular viscosity. For two-phase flow, the solution procedure using periodic boundary conditions was developed under an assumption of a flat interface. The numerical technique was verified by comparing to an analytical result obtained for laminar flow in a channel. An extension to three-dimensional flow is performed. With periodic boundary conditions, standard turbulence models are applied to two-phase stratified flow. Several models and their corrections for two-phase flow are assessed and a new model is proposed. The numerical studies were carried out primarily in the open-source code OpenFOAM, but initial attempts were made in commercial packages such as STAR-CD and FLUENT. Experimental data collected from the literature are used to verify the results, showing good agreement in pressure drops and phase fractions.
APA, Harvard, Vancouver, ISO, and other styles
15

Kelly, David. "Computational mechanics in practice : mathematical adaptions and experimental applications." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.570852.

Full text
Abstract:
The definition and quantification of complexity is a source of debate. A promising answer, from Crutchfield, Shalizi and co-workers, identifies complexity with the amount of information required to optimally predict the future of a process. Computational mechanics computes this quantity for discrete time series, quantifying the complexity and generating minimal, optimally predictive models. Here we adapt and apply these methods to two very different problems. First, we extend computational mechanics to continuous data which cluster around discrete values. This is applied to the analysis of single-molecule experimental data, allowing us to infer hidden Markov models without the necessity of assuming a model architecture and allowing for the inference of degenerate states, giving advantages over previous analysis methods. The new analysis methods are demonstrated to perform well on both simulated data, in high-noise and sparse-data conditions, and experimental data, namely fluorescence resonance energy transfer spectra of Holliday junction conformational dynamics. Secondly, we apply computational mechanics to investigations of the HP model of protein folding. Computational mechanics was used to investigate the properties of the sequence sets folding to the highly designable structures. A hypothesised correlation between a structure's designability and the statistical complexity of its sequence set was unsupported. However, methods were developed to succinctly encapsulate the non-local statistical regularities of sequence sets and used to accurately predict the structure of designing and randomly generated sequences. Finally, limitations of the standard algorithm for reconstructing predictive models are addressed. The algorithm can fail due to pair-wise comparisons of conditional distributions. A clustering method, considering all distributions simultaneously, has been developed. This also makes clear when the algorithm may be effectively employed. A second issue concerns a class of processes for which computational mechanics cannot infer the correct, optimally predictive models. Adaptations to allow the inference of these processes have been devised.
APA, Harvard, Vancouver, ISO, and other styles
16

Kaya, Ilhan. "Mathematical and Computational Methods for Freeform Optical Shape Description." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5959.

Full text
Abstract:
Slow-servo single-point diamond turning as well as advances in computer controlled small lap polishing enable the fabrication of freeform optics, specifically, optical surfaces for imaging applications that are not rotationally symmetric. Freeform optical elements will have a profound importance in the future of optical technology. Orthogonal polynomials added onto conic sections have been extensively used to describe optical surface shapes. The optical testing industry has chosen to represent the departure of a wavefront under test from a reference sphere in terms of orthogonal φ-polynomials, specifically Zernike polynomials. Various forms of polynomials for describing freeform optical surfaces may be considered, however, both in optical design and in support of fabrication. More recently, radial basis functions were also investigated for optical shape description. In the application of orthogonal φ-polynomials to optical freeform shape description, there are important limitations, such as the number of terms required as well as edge-ringing and ill-conditioning in representing the surface with the accuracy demanded by most stringent optics applications. The first part of this dissertation focuses upon describing freeform optical surfaces with φ-polynomials and shows their limitations when including higher orders together with possible remedies. We show that a possible remedy is to use edge clustered-fitting grids. Provided different grid types, we furthermore compared the efficacy of using different types of φ-polynomials, namely Zernike and gradient orthogonal Q-polynomials. In the second part of this thesis, a local, efficient and accurate hybrid method is developed in order to greatly reduce the order of polynomial terms required to achieve higher level of accuracy in freeform shape description that were shown to require thousands of terms including many higher order terms under prior art. This comes at the expense of multiple sub-apertures, and as such computational methods may leverage parallel processing. This new method combines the assets of both radial basis functions and orthogonal φ-polynomials for freeform shape description and is uniquely applicable across any aperture shape due to its locality and stitching principles. Finally in this thesis, in order to comprehend the possible advantages of parallel computing for optical surface descriptions, the benefits of making an effective use of impressive computational power offered by multi-core platforms for the computation of φ-polynomials are investigated. The φ-polynomials, specifically Zernike and gradient orthogonal Q-polynomials, are implemented with a set of recurrence based parallel algorithms on Graphics Processing Units (GPUs). The results show that more than an order of magnitude speedup is possible in the computation of φ-polynomials over a sequential implementation if the recurrence based parallel algorithms are adopted.
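For reference, the radial part of a Zernike polynomial can be evaluated directly from its closed-form sum, as in the sketch below; the thesis itself relies on recurrence-based parallel GPU algorithms rather than this direct formula.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of a Zernike polynomial via the closed-form sum.

    Shown only to fix the definition; recurrence-based GPU evaluation,
    as used in the thesis, avoids the factorials for high orders.
    """
    m = abs(m)
    if (n - m) % 2:          # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

# Spherical-aberration term: R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1, so R_4^0(1) = 1
print(zernike_radial(4, 0, 1.0))
```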
Ph.D.
Doctorate
Electrical Engineering and Computing
Engineering and Computer Science
Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
17

SANTOS, DEBORA ANDREA DE OLIVEIRA. "DECOMPOSITION IN MATHEMATICAL PROGRAMMING APPLIED TO COMPUTATIONAL GREEN NETWORKS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25702@1.

Full text
Abstract:
The growing energy consumption has already become a global concern and currently more than forty countries are involved in research and programs to create mechanisms to save it. This work deals with the energy-aware Traffic Engineering problem applied to the backbone of an IP network in which the routing protocol used is an SPF (Shortest Path First) protocol, such as OSPF (Open Shortest Path First). The proposed approach considers the problems of switching off nodes (routers) and circuits, for energy saving, and of ensuring a maximum utilization level for the circuits, to assure QoS requirements. In order to solve the optimization problem, rather than adopting heuristic methods, we propose direct processing by means of Benders decomposition, breaking a complicated and computationally heavy problem into several smaller ones whose resolution is simpler and whose convergence is faster.
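A generic statement of the Benders split, with the integer on/off decisions kept in the master problem and the continuous variables in the subproblem, can be sketched as follows; this is illustrative notation, not the exact formulation of the thesis.

```latex
% Generic Benders decomposition (illustrative; in the thesis y would encode the
% on/off state of routers and circuits, x the continuous routing variables).
\begin{align*}
  \text{full problem:}\quad & \min_{y \in \{0,1\}^m,\ x \ge 0}\; c^{\top} y + d^{\top} x
      \quad \text{s.t.}\quad B x \ge b - A y, \\[4pt]
  \text{subproblem (fixed } \bar y\text{):}\quad & z(\bar y) = \max_{\pi \ge 0}\; \pi^{\top}(b - A\bar y)
      \quad \text{s.t.}\quad B^{\top}\pi \le d, \\[4pt]
  \text{master:}\quad & \min_{y,\ \eta}\; c^{\top} y + \eta
      \quad \text{s.t.}\quad \eta \ge \pi_k^{\top}(b - A y) \ \ \text{for each stored dual } \pi_k .
\end{align*}
```

Iterating between the master and the subproblem adds one optimality cut per round until the bounds meet, which is the crumbling of one hard problem into many easy ones described in the abstract.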
APA, Harvard, Vancouver, ISO, and other styles
18

Chung, Andy Heung Wing. "Novel mathematical and computational approaches for modelling biological systems." Thesis, University of Sussex, 2016. http://sro.sussex.ac.uk/id/eprint/60405/.

Full text
Abstract:
This work presents the development, analysis and subsequent simulations of mathematical models aimed at providing a basis for modelling atherosclerosis. This cardiovascular disease is characterized by the growth of plaque in artery walls, forming lesions that protrude into the lumen. The rupture of these lesions contributes greatly to the number of cases of stroke and myocardial infarction. These are two of the main causes of death in the UK. Any work to understand the processes by which the disease initiates and progresses has the ultimate aim of limiting the disease through either its prevention or medical treatment and thus contributes a relevant addition to the growing body of research. The literature supports the view that the cause of atherosclerotic lesions is an inflammatory process: succinctly put, excess amounts of certain biochemical species fed into the artery wall via the bloodstream spur the focal accumulation of extraneous cells. Therefore, suitable components of a mathematical model would include descriptions of the interactions of the various biochemical species and their movement in space and time. The models considered here are in the form of partial differential equations. Specifically, the following models are examined: first, a system of reaction-diffusion equations with coupling between surface and bulk species; second, a problem of optimisation to identify an unknown boundary; and finally, a system of advection-reaction-diffusion equations to model the assembly of keratin networks inside cells. These equations are approximated and solved computationally using the finite element method. The methods and algorithms shown aim to provide more accurate and efficient means to obtain solutions to such equations. Each model in this work is extensible and, with elements from each model combined, they have scope to be a platform to give a fuller model of atherosclerosis.
APA, Harvard, Vancouver, ISO, and other styles
19

Ahmed, Israr. "Mathematical and computational modelling of soft and active matter." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/18641/.

Full text
Abstract:
The collective motion of organisms such as flights of birds, swimming of schools of fish, migration of bacteria and movement of herds across long distances is a fascinating phenomenon that has intrigued man for centuries. Long and detailed observations have resulted in numerous hypotheses and theories regarding the collective motion of animals and organisms. In recent years the developments in supercomputers and general computational power, along with highly refined mathematical theories and equations, have enabled the collective motion of particles to be investigated in a logical and systematic manner. Hence, in this study mathematical principles are harnessed along with computational programmes in order to obtain a better understanding of the collective behaviour of particles. Two types of systems have been considered, namely homogeneous and heterogeneous systems, which represent collective motion without and with obstacles respectively. The Vicsek model has been used to investigate the collective behaviour of the particles in 2D and 3D systems. Based on this, a new model was developed: the obstacle avoidance model. This showed the interaction of particles with fixed and moving obstacles. It was established using this model that the collective motion of the particles was very low when higher noise was involved in the system, and higher when lower noise and a smaller interaction radius existed. Very little is known about the collective motion of self-propelled particles in heterogeneous media, especially when noise is added to the system and when the interaction radius between particles and obstacles is changed. In the presence of moving obstacles, particles exhibited a greater collective motion than with fixed obstacles. Collective motion showed non-monotonic behaviour, and the existence of an optimal noise maximised the collective motion. In the presence of moving obstacles there were fluctuations in the value of the order parameter. Studies of collective systems are highly useful for producing artificial swarms of autonomous vehicles, developing effective fishing strategies and understanding human interactions in crowds for devising and implementing efficient and safe crowd control policies. These will help to avoid fatalities in highly crowded situations such as music concerts and sports and entertainment events with large audiences, as well as crowded shopping centres. In this study, a new model termed the obstacle avoidance model is presented which investigates the collective motion of self-propelled particles in a heterogeneous medium. In future work this model can be extended to include a combination of a number of motionless and moving obstacles, hence bringing the modelling closer to reality.
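For reference, the standard 2D Vicsek update and the polar order parameter used to quantify collective motion can be sketched as follows. This is a minimal homogeneous-medium version with assumed parameter values; the obstacle avoidance model adds interactions with fixed or moving obstacles on top of it.

```python
import numpy as np

def vicsek_step(pos, theta, L, r, eta, v, rng):
    """One update of the 2D Vicsek model with periodic boundaries.

    Each particle adopts the mean heading of neighbours within radius r,
    perturbed by uniform angular noise of amplitude eta.
    """
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                      # minimum-image convention
    neighbours = (dx ** 2).sum(-1) <= r ** 2        # includes the particle itself
    # Mean direction of neighbours, via the angle of the summed unit vectors.
    sx = np.where(neighbours, np.cos(theta)[None, :], 0.0).sum(1)
    sy = np.where(neighbours, np.sin(theta)[None, :], 0.0).sum(1)
    theta = np.arctan2(sy, sx) + eta * (rng.random(len(theta)) - 0.5)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L
    return pos, theta

def order_parameter(theta):
    """Polar order: 1 for fully aligned motion, near 0 for disordered motion."""
    return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

rng = np.random.default_rng(0)
N, L = 300, 10.0
pos, theta = rng.random((N, 2)) * L, rng.random(N) * 2 * np.pi
for _ in range(500):
    pos, theta = vicsek_step(pos, theta, L, r=1.0, eta=0.3, v=0.05, rng=rng)
print(order_parameter(theta))
```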
APA, Harvard, Vancouver, ISO, and other styles
20

Rizzi, Barbara <1973>. "Bioimaging of animal embryogenesis: mathematical methods and computational algorithms." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1658/1/Rizzi_Barbara_tesi.pdf.

Full text
Abstract:
Some fundamental biological processes such as embryonic development have been preserved during evolution and are common to species belonging to different phylogenetic positions, but are nowadays largely unknown. The understanding of cell morphodynamics leading to the formation of organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. We design in this work a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been widely validated as a model organism to understand vertebrate development, gene function and healing and repair mechanisms in vertebrates. The embryo is previously labeled through the ubiquitous expression of fluorescent proteins addressed to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing nuclei images either through the generalized form of the Hough transform or by identifying nuclei positions with local maxima after a smoothing preprocessing step. Membrane and nuclei shapes are reconstructed by using PDE-based variational techniques such as the Subjective Surfaces and the Chan-Vese method. Cell tracking is performed by combining information previously detected on cell shapes and positions with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all the cells tracked starting from the late sphere stage with less than 2% error for at least 6 hours. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, as well as the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
APA, Harvard, Vancouver, ISO, and other styles
21

Rizzi, Barbara <1973>. "Bioimaging of animal embryogenesis: mathematical methods and computational algorithms." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/1658/.

Full text
Abstract:
Some fundamental biological processes such as embryonic development have been preserved during evolution and are common to species belonging to different phylogenetic positions, but are nowadays largely unknown. The understanding of cell morphodynamics leading to the formation of organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. We design in this work a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been widely validated as a model organism to understand vertebrate development, gene function and healing and repair mechanisms in vertebrates. The embryo is previously labeled through the ubiquitous expression of fluorescent proteins addressed to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing nuclei images either through the generalized form of the Hough transform or by identifying nuclei positions with local maxima after a smoothing preprocessing step. Membrane and nuclei shapes are reconstructed by using PDE-based variational techniques such as the Subjective Surfaces and the Chan-Vese method. Cell tracking is performed by combining information previously detected on cell shapes and positions with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all the cells tracked starting from the late sphere stage with less than 2% error for at least 6 hours. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, as well as the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
APA, Harvard, Vancouver, ISO, and other styles
22

Pethiyagoda, Ravindra. "Mathematical and computational analysis of Kelvin ship wave patterns." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/101167/1/Ravindra_Pethiyagoda_Thesis.pdf.

Full text
Abstract:
This thesis concerns the form of water wave patterns generated by an idealised ship moving steadily through still water. An efficient numerical scheme for computing these waves with much finer detail than ever before is developed, and the results are used to determine the effect that certain properties, such as the ship's speed and dimensions, have on ship wakes. Two features are used to characterise the ship wave: the ship's apparent wake angle, the half-angle of the characteristic v-pattern of a ship wave, and a spectrogram produced by applying time-frequency analysis to a cross-section of the ship wave.
APA, Harvard, Vancouver, ISO, and other styles
23

Warne, David James. "Computational inference in mathematical biology: Methodological developments and applications." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/202835/1/David_Warne_Thesis.pdf.

Full text
Abstract:
Complexity in living organisms occurs on multiple spatial and temporal scales. The function of tissues depends on interactions of cells, and in turn, cell dynamics depends on intercellular and intracellular biochemical networks. A diverse range of mathematical modelling frameworks are applied in quantitative biology. Effective application of models in practice depends upon reliable statistical inference methods for experimental design, model calibration and model selection. In this thesis, new results are obtained for quantification of contact inhibition and cell motility mechanisms in prostate cancer cells, and novel computationally efficient inference algorithms suited for the study of biochemical systems are developed.
APA, Harvard, Vancouver, ISO, and other styles
24

Nikishkov, Yuri G. "Computational stability analysis of dynamical systems." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/12149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wincure, Benjamin 1966. "Computational modeling of a liquid crystal phase transition." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103308.

Full text
Abstract:
This thesis numerically solves the tensor order parameter continuum theory equations for nematic liquid crystals to investigate liquid crystal texturing mechanisms during an isotropic to nematic phase transition in a bulk unstable isotropic phase and next to solid surfaces. The Time-Dependent Ginzburg-Landau equation with a Landau-de Gennes Helmholtz free energy density description is used to predict the shapes, textures and defect mechanisms that occur in the expanding droplets and films of a 4'-pentyl-4-cyanobiphenyl (5CB) nematic phase immediately after their nucleation from an unstable isotropic phase, due to a temperature quench. To create a robust simulation method able to tackle high curvature, defect nucleation, heterogeneous substrates and phase ordering interfaces, particular attention was paid to adapting the mathematical model and computational methods to what was previously known about the nucleation and growth events that occur experimentally during a bulk 5CB isotropic to nematic phase transition and next to decorated solid surfaces. The numerical simulations provide detailed predictions about (i) growth rates for different temperature quenches, (ii) structure of the isotropic-nematic interface, (iii) shapes of expanding nano and submicron nematic droplets, (iv) texturing within growing nano and submicron nematic droplets, (v) a new defect formation mechanism called "interfacial defect shedding", and (vi) the effect of contact angle and interface curvature next to a solid surface with anchoring switches. The main contributions of this thesis are its detailed predictions that emerge from the liquid crystal simulation results, the careful adaptation of the mathematical model and numerical method to what is currently known about early stage growth in a nematic liquid crystal phase, and the validation of new theory by the simulation results.
APA, Harvard, Vancouver, ISO, and other styles
26

Ahlberg, Marcus, and Eric Fornander. "Test Case Prioritization as a Mathematical Scheduling Problem." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-229728.

Full text
Abstract:
Software testing is an extremely important phase of product development where the objective is to detect hidden bugs. The typically high complexity of today's products makes testing very resource intensive, since numerous test cases have to be generated in order to detect all potential faults. Therefore, improved strategies for the testing process are of high interest for many companies. One area where there is potential for improvement is the order in which test cases are executed to detect faults as quickly as possible, which in research is known as the test case prioritization problem. In this thesis, an extension to this problem is studied where dependencies between test cases are present and the processing times of the test cases are known. As a first result of the thesis, a mathematical model of the test case prioritization problem with dependencies and known processing times as a mathematical scheduling problem is presented. Three different solution algorithms for this problem are subsequently evaluated: a Sidney decomposition algorithm, a heuristic algorithm of our own design and an algorithm based on Smith's rule. The Sidney decomposition algorithm outperformed the others in terms of execution time of the algorithm and objective value of the generated schedule. The evaluation was conducted by simulation with artificial test suites and via a case study in industry with a company in the railway domain.
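As a point of reference for the third algorithm, Smith's rule for minimising total weighted completion time on a single machine can be sketched as follows. Precedence constraints between test cases, which the thesis handles via Sidney decomposition, are ignored in this sketch, and the test-case names and weights are assumptions.

```python
def smiths_rule(jobs):
    """Order jobs to minimise total weighted completion time on one machine.

    jobs: list of (name, processing_time, weight). Smith's rule: sort by the
    weight/processing-time ratio, largest first. Dependencies between test
    cases are not handled here.
    """
    order = sorted(jobs, key=lambda j: j[2] / j[1], reverse=True)
    t, objective = 0.0, 0.0
    for name, p, w in order:
        t += p                      # completion time of this job
        objective += w * t
    return [j[0] for j in order], objective

# Weights could encode the expected fault-detection value of each test case.
schedule, cost = smiths_rule([("tc1", 4, 1), ("tc2", 1, 3), ("tc3", 2, 2)])
print(schedule, cost)   # ['tc2', 'tc3', 'tc1'] 16.0
```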
APA, Harvard, Vancouver, ISO, and other styles
27

Cong, Yang, and 丛阳. "Optimization models and computational methods for systems biology." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B47752841.

Full text
Abstract:
Systems biology is a comprehensive quantitative analysis of the manner in which all the components of a biological system interact functionally over time. Mathematical modeling and computational methods are indispensable in such studies, especially for interpreting and predicting the complex interactions among all the components so as to obtain some desirable system properties. System dynamics, system robustness and control methods are three crucial topics in systems biology. In this thesis, the above properties are studied in four different biological systems. The outbreak and spread of infectious diseases have been questioned and studied for years. Understanding the spread mechanism and predicting the course of a disease enable scientists to evaluate isolation plans that can have significant effects on a particular epidemic. A differential equation model is proposed to study the dynamics of HIV spread in a network of prisons. In prisons, screening and quarantining are both efficient control measures. An optimization model is proposed to study optimal strategies for the control of HIV spread in a prison system. A primordium (plural: primordia) is an organ or tissue in its earliest recognizable stage of development. Primordial development in plants is critical to the proper positioning and development of plant organs. An optimization model and two control mechanisms are proposed to study the dynamics and robustness of primordial systems. Probabilistic Boolean Networks (PBNs) are mathematical models for studying the switching behavior in genetic regulatory networks. An algorithm is proposed to identify singleton and small attractors in PBNs, which correspond to cell types and cell states. The problem is NP-hard in general. Our algorithm is theoretically and computationally demonstrated to be much more efficient than the naive algorithm that examines all the possible states. The goal of studying the long-term behavior of a genetic regulatory network is to find control strategies such that the system can attain desired properties. A control method is proposed to study multiple external interventions while minimizing the control cost. Robustness is a paramount property for living organisms. The impact degree is a measure of the robustness of a metabolic system against the deletion of single or multiple reaction(s). An algorithm is proposed to study the impact degree in the Escherichia coli metabolic system. Moreover, an approximation method based on branching processes is proposed for estimating the impact degree of metabolic networks. The effectiveness of our method is assured by testing with real-world Escherichia coli, Bacillus subtilis, Saccharomyces cerevisiae and Homo sapiens metabolic systems.
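For comparison, the naive exhaustive attractor search mentioned above can be sketched for a deterministic Boolean network as follows. A probabilistic Boolean network additionally samples which predictor function each gene uses at every step; the thesis algorithm avoids this full enumeration, so the code below only illustrates the baseline being improved upon.

```python
from itertools import product

def attractors(update_funcs):
    """Enumerate attractors of a deterministic Boolean network by brute force.

    update_funcs: one function per gene, each mapping the full state tuple to
    that gene's next value. Every one of the 2^n states is examined, which is
    the exhaustive baseline contrasted with the thesis algorithm.
    """
    n = len(update_funcs)
    step = {s: tuple(f(s) for f in update_funcs) for s in product((0, 1), repeat=n)}
    found = set()
    for state in step:
        trail, seen = [], {}
        while state not in seen:
            seen[state] = len(trail)
            trail.append(state)
            state = step[state]
        cycle = tuple(trail[seen[state]:])       # the attractor this state falls into
        # Store a rotation-independent representative of the cycle.
        found.add(min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle))))
    return found

# Toy 2-gene network: x1' = x2, x2' = x1  (two fixed points and one 2-cycle).
print(attractors([lambda s: s[1], lambda s: s[0]]))
```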
Mathematics
Doctoral
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
28

Saigre, Thomas. "Modélisation mathématique, simulation et réduction d’ordre de flux oculaires et leurs interactions : construire le jumeau numérique de l'oeil." Electronic Thesis or Diss., Strasbourg, 2024. http://www.theses.fr/2024STRAD052.

Full text
Abstract:
The human body is a complex system, and the human eye is no exception. Despite medical advances, many pathological questions remain. Mathematical and computational models complement clinical studies by revealing complex pathophysiological mechanisms. The eye, which can be accessed non-invasively, offers useful biological markers for diagnosing diseases. Understanding its behavior, pathologies and associated treatments is therefore essential. This thesis explores the modeling and simulation of ocular flows, notably heat transfer and aqueous humor flow. These approaches require rigorous clinical validation and take into account numerous parameters, both patient-specific and external. A global sensitivity analysis assesses their impact to guide clinicians. These computationally intensive analyses benefit from certified model reduction methods, enabling accurate and faster simulations and favoring the integration of models into clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhu, Tulong. "Meshless methods in computational mechanics." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/11795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Cheung, On-pong, and 張安邦. "Computational fluid dynamics simulations on the natural ventilation behaviour within a building cluster." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45590084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Khoury, Imad. "Mathematical and computational tools for the manipulation of musical cyclic rhythms." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101858.

Full text
Abstract:
This thesis presents and analyzes tools and experiments that aim at achieving multiple yet related goals in the exploration and manipulation of musical cyclic rhythms. The work presented in this thesis may be viewed as a preliminary study for the ultimate future goal of developing a general computational theory of rhythm. Given a family of rhythms, how does one reconstruct its ancestral rhythms? How should one change a rhythm's cycle length while preserving its musicologically salient properties, and hence be able to confirm or disprove popular or historical beliefs regarding its origins and evolution? How should one compare musical rhythms? How should one automatically generate rhythmic patterns? All these questions are addressed and, to a certain extent, solved in our study, and serve as a basis for the development of novel general tools, implemented in Matlab, for the manipulation of rhythms.
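As a small illustration of the kind of questions raised above (and not necessarily the methods used in the thesis), the following Python sketch generates a maximally even "Euclidean" rhythm, a standard construction in the computational rhythm literature defined here only up to rotation, and compares two cyclic rhythms with a rotation-invariant Hamming distance; all names are illustrative.

    def euclidean_rhythm(onsets, steps):
        # Place `onsets` attacks as evenly as possible among `steps` pulses.
        # This arithmetic construction yields the pattern up to rotation.
        return [1 if (i * onsets) % steps < onsets else 0 for i in range(steps)]

    def cyclic_distance(a, b):
        # Rotation-invariant Hamming distance between two cyclic rhythms of equal length.
        assert len(a) == len(b)
        n = len(a)
        return min(sum(a[i] != b[(i + k) % n] for i in range(n)) for k in range(n))

    tresillo = euclidean_rhythm(3, 8)    # [1, 0, 0, 1, 0, 0, 1, 0]
    cinquillo = euclidean_rhythm(5, 8)
    print(tresillo, cinquillo, cyclic_distance(tresillo, cinquillo))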
APA, Harvard, Vancouver, ISO, and other styles
32

Arunakirinathar, Kanagaratnam. "Mathematical and computational aspects of the enhanced strain finite element method." Doctoral thesis, University of Cape Town, 1995. http://hdl.handle.net/11427/15964.

Full text
Abstract:
Bibliography: pages 102-107.
This thesis deals with further investigations of the enhanced strain finite element method, with particular attention given to the analysis of the method for isoparametric elements. It is shown that the results established earlier by B D Reddy and J C Simo for affine-equivalent meshes carry over to the case of isoparametric elements. That is, the method is stable and convergent provided that a set of three conditions is met, and convergence is at the same rate as in the standard method. The three conditions differ in some respects, though, from their counterparts for the affine case. A procedure for recovering the stress is shown to lead to an approximate stress which converges at the optimal rate to the actual stress. The concept of the equivalent parallelogram associated with a quadrilateral is introduced. The quadrilateral may be regarded as a perturbation of this parallelogram, which is most conveniently described by making use of properties of the isoparametric map which defines the quadrilateral. The equivalent parallelogram generates a natural means of defining a regular family of quadrilaterals; this definition is used together with other properties to obtain, in a relatively simple manner, estimates in appropriate seminorms or norms of the isoparametric map and its Jacobian, for use in the determination of finite element interpolation error estimates. With regard to computations, a new basis for enhanced strains is introduced, and various examples have been tested. The results obtained are compared with those obtained using other bases, and with those found from an assumed stress approach. Favourable comparisons are obtained in most cases, with the present basis exhibiting an improvement over existing bases. Convergence of the finite element results is verified; it is observed numerically that the improvement of results due to enhancement is a result of a smaller constant appearing in the error estimates.
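The abstract does not spell out how the equivalent parallelogram is constructed; one natural reading, offered here purely as an illustrative assumption, is to split the bilinear isoparametric map of a quadrilateral into its affine part plus a bilinear perturbation, and to take the parallelogram as the image of the reference square under the affine part. A minimal numpy sketch of that decomposition:

    import numpy as np

    def equivalent_parallelogram(nodes):
        """nodes: 4x2 array of quadrilateral vertices, ordered counter-clockwise,
        corresponding to reference corners (-1,-1), (1,-1), (1,1), (-1,1).
        The bilinear map is x(xi, eta) = a0 + a1*xi + a2*eta + a3*xi*eta;
        dropping the a3 term gives an affine map whose image of the square
        is taken here as the 'equivalent parallelogram' (an assumption)."""
        x1, x2, x3, x4 = nodes
        a0 = (x1 + x2 + x3 + x4) / 4.0
        a1 = (-x1 + x2 + x3 - x4) / 4.0
        a2 = (-x1 - x2 + x3 + x4) / 4.0
        a3 = (x1 - x2 + x3 - x4) / 4.0   # size of a3 measures distortion from a parallelogram
        corners = [a0 - a1 - a2, a0 + a1 - a2, a0 + a1 + a2, a0 - a1 + a2]
        return np.array(corners), a3

    quad = np.array([[0.0, 0.0], [2.0, 0.1], [2.2, 1.0], [0.1, 0.9]])
    para, distortion = equivalent_parallelogram(quad)
    print(para)
    print(distortion)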
APA, Harvard, Vancouver, ISO, and other styles
33

Codony, David. "Mathematical and computational modeling of flexoelectricity at mesoscopic and atomistic scales." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671925.

Full text
Abstract:
This PhD thesis focuses on the development of mathematical and computational models for flexoelectricity, a relatively new electromechanical coupling that is present in any dielectric at the micron and sub-micron scales. The work is framed in the context of both continuum and quantum mechanics, and explores the gap between these two disciplines. On the one hand, the focus is put on the mathematical modeling of the flexoelectric effect by means of continuum (electro-) mechanics, and the development of computational techniques required to numerically solve the associated boundary value problems. The novel computational infrastructure developed in this work is able to predict the performance of engineered devices for electromechanical transduction at sub-micron scales, where flexoelectricity is always present, without any particular restrictions in geometry, material choice, boundary conditions or nonlinearity. The numerical examples within this document show that flexoelectricity can be harnessed in multiple different ways towards the development of breakthrough applications in nanotechnology. On the other hand, the flexoelectric effect is also studied at an atomistic level by means of quantum mechanics. This work proposes a novel methodology to quantify the flexoelectric properties of dielectric materials, by means of connecting ab-initio atomistic simulations with the proposed models at coarser, continuum scales. The developed approach sheds some light on a controversial topic within the density functional theory community, where large disagreements among different theoretical derivations are typically found. The ab-initio computations serve not only to assess the material parameters within the continuum models, but also to validate their inherent assumptions regarding the relevant physics at the nanoscale.
Aquesta tesi doctoral es centra en el desenvolupament de models matemàtics i computacionals per a la flexoelectricitat, un acoblament electromecànic relativament nou que és present en qualsevol material dielèctric a les escales microscòpica i nanoscòpica. El treball s'emmarca tant en el context de la mecànica del medi continu com de la mecànica quàntica, i explora l'espai entre aquestes dues disciplines. Per una banda, s'estudien els models matemàtics de l'efecte flexoelèctric mitjançant la mecànica del medi continu, i es desenvolupen tècniques computacionals necessàries per la resolució numèrica dels problemes de valor de contorn associats. La nova infraestructura computacional desenvolupada en aquest treball és capaç de predir el rendiment de dispositius funcionals per a la transducció electromecànica a la nanoescala, on la flexoelectricitat és sempre present, sense cap tipus de limitació en quant a geometria, propietats materials, condicions de contorn o no-linearitat. Els exemples numèrics en aquest document demostren que la flexoelectricitat es pot aprofitar de diverses maneres per tal de desenvolupar aplicacions nanotecnològiques innovadores. Per altra banda, l'efecte flexoelèctric és estudiat també a nivell atomístic mitjançant la mecànica quàntica. Aquest treball proposa una metodologia nova per quantificar les propietats flexoelèctriques de materials dielèctrics, connectant les simulacions atomístiques amb els models continus proposats. El mètode desenvolupat clarifica un tema controvertit en la comunitat de la teoria del funcional de la densitat (DFT), on els càlculs teòrics estan típicament en desacord entre ells. Les simulacions atomístiques no només serveixen per calcular els paràmetres flexoelèctrics dels materials considerats en models continus, sinó també per validar les hipòtesis en les quals es basen en relació amb les físiques rellevants a la nanoescala.
APA, Harvard, Vancouver, ISO, and other styles
34

Riposo, Julien. "Computational and Mathematical Methods for Data Analysis in Biology and Finance." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066177/document.

Full text
Abstract:
Les mathématiques sont comprises en tant qu’ensemble d’idées abstraites, dans le sens où le monde réel – ou plutôt réalité – n’a pas à intervenir. Pourtant, certains faits mathématiques observables dans des données expérimentales ou simulées peuvent être contre-intuitifs. La thèse est divisée en deux parties : premièrement, on étudie mathématiquement les matrices du genre celles dont nous avons discutées en biologie et finance. En particulier, nous mettons en évidence le fait contre-intuitif suivant : pour ces matrices, le vecteur propre associé à la plus haute valeur propre est très proche de la somme de chacune des lignes de la matrice, colonne par colonne. Nous discutons aussi d’applications en théorie des graphes avec bon nombre de simulations numériques. Dans un second temps, nous attaquons le problème des contacts géniques : à partir d’une carte de contact génique, un vrai défi actuel est de retrouver la structure tridimensionnelle de l’ADN. Nous proposons diverses méthodes d’analyse matricielle de données, dont une met en évidence l’existence, dans le noyau, de zones disjointes où les interactions sont de différents types. Ces zones sont des compartiments nucléaires. Avec d’autres données biologiques, nous mettons en évidence la fonction biologique de chacun de ces compartiments. Les outils d’analyses sont ceux utilisés en finance pour analyser des matrices d’auto-corrélation, ou même des séries temporelles
Mathematics is understood as a set of abstract ideas, in the sense that the real world (or reality) has no way to intervene. However, some mathematical facts observable in experimental or simulated data can be counter-intuitive. The thesis is divided into two parts. First, we mathematically study matrices of the same type as those arising in biology and finance. In particular, we show the following counter-intuitive fact: for these matrices, the eigenvector associated with the highest eigenvalue is close to the vector of row sums of the matrix. We also discuss applications to graph theory, with many numerical simulations and data analyses. Second, we address the genetic contact problem: given a contact map, a real current challenge is to recover the 3D structure of DNA. We propose several matrix-based data analysis methods, one of which reveals disjoint areas in the nucleus where the DNA interactions are of different types. These areas are nuclear compartments. Using other biological data, we characterize the biological function of each of these compartments. The analysis tools are the ones already used in finance to analyze autocorrelation matrices, or even time series.
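To illustrate the counter-intuitive fact stated in the abstract, the following numpy snippet builds a correlation-type matrix with a strong common factor (a toy stand-in, chosen here as an assumption, for the biological and financial matrices studied) and checks that its leading eigenvector is nearly proportional to the vector of row sums:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    # Toy correlation-like matrix: strong common factor plus weak symmetric noise.
    noise = 0.1 * rng.standard_normal((n, n))
    A = np.ones((n, n)) + (noise + noise.T) / 2.0

    eigvals, eigvecs = np.linalg.eigh(A)
    v = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    r = A.sum(axis=1)                     # row sums

    # Cosine similarity between the leading eigenvector and the row-sum vector.
    cos = abs(v @ r) / (np.linalg.norm(v) * np.linalg.norm(r))
    print(f"cosine similarity = {cos:.4f}")   # very close to 1 for this kind of matrix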
APA, Harvard, Vancouver, ISO, and other styles
35

Lao, Bert Juan. "Diversified approach to the mathematical and computational modeling of biological systems." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1562159981&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Calabro, Katherine Weaver. "Improved mathematical and computational tools for modeling photon propagation in tissue." Thesis, Boston University, 2013. https://hdl.handle.net/2144/12720.

Full text
Abstract:
Thesis (Ph.D.)--Boston University
Light interacts with biological tissue through two predominant mechanisms: scattering and absorption, which are sensitive to the size and density of cellular organelles, and to biochemical composition (e.g., hemoglobin), respectively. During the progression of disease, tissues undergo a predictable set of changes in cell morphology and vascularization, which directly affect their scattering and absorption properties. Hence, quantification of these optical property differences can be used to identify physiological biomarkers of disease, with interest often focused on cancer. Diffuse reflectance spectroscopy is a diagnostic tool wherein broadband visible light is transmitted through a fiber optic probe into a turbid medium and, after propagating through the sample, a fraction of the light is collected at the surface as reflectance. The measured reflectance spectrum can be analyzed with appropriate mathematical models to extract the optical properties of the tissue, and from these, a set of physiological properties. A number of models have been developed for this purpose using a variety of approaches, from diffusion theory to computational simulations and empirical observations. However, these models are generally limited to narrow ranges of tissue and probe geometries. In this thesis, reflectance models were developed for a much wider range of measurement parameters, and influences such as the scattering phase function and probe design were investigated rigorously for the first time. The results provide a comprehensive understanding of the factors that influence reflectance, with novel insights that, in some cases, challenge current assumptions in the field. An improved Monte Carlo simulation program, designed to run on a graphics processing unit (GPU), was built to simulate the data used in the development of the reflectance models. Rigorous error analysis was performed to identify how inaccuracies in modeling assumptions can be expected to affect the accuracy of optical property values extracted from experimentally acquired reflectance spectra. From this analysis, probe geometries that offer the best robustness against error in the estimation of physiological properties from tissue are presented. Finally, several in vivo studies demonstrating the use of reflectance spectroscopy for both research and clinical applications are presented.
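As a schematic illustration of the Monte Carlo approach described above, simplified to isotropic scattering, a pencil beam, a semi-infinite homogeneous medium, and no refractive-index mismatch, so it is a sketch rather than the GPU program developed in the thesis, a minimal photon random walk that tallies diffuse reflectance might look like this:

    import numpy as np

    def diffuse_reflectance(mua, mus, n_photons=20_000, seed=0):
        """Fraction of launched photon weight that escapes back through the surface z=0.
        mua, mus: absorption and scattering coefficients (1/mm). Isotropic scattering."""
        rng = np.random.default_rng(seed)
        mut = mua + mus                   # total interaction coefficient
        albedo = mus / mut
        reflected = 0.0
        for _ in range(n_photons):
            pos = np.zeros(3)             # photon starts at the surface
            direction = np.array([0.0, 0.0, 1.0])   # heading into the tissue (+z)
            weight = 1.0
            while weight > 1e-4:
                step = -np.log(rng.random()) / mut
                pos = pos + step * direction
                if pos[2] < 0.0:          # crossed back through the surface: escapes
                    reflected += weight
                    break
                weight *= albedo          # deposit the absorbed fraction of the weight
                # Sample a new isotropic scattering direction.
                cos_t = 2.0 * rng.random() - 1.0
                sin_t = np.sqrt(1.0 - cos_t ** 2)
                phi = 2.0 * np.pi * rng.random()
                direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        return reflected / n_photons

    print(diffuse_reflectance(mua=0.01, mus=10.0))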
APA, Harvard, Vancouver, ISO, and other styles
37

Bartl, Eduard. "Mathematical foundations of graded knowledge spaces." Diss., Online access via UMI:, 2009.

Find full text
Abstract:
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
38

Perera, Sattambiralalage Anura Lalindra. "Mathematical modelling of two-phase flow for industrial applications." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Cotton, Charles E. "A computational study of thiocyanate based laser flash photolysis reporters." Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1348346.

Full text
Abstract:
Radical chemistry has always been a very active area of research, because radicals are both very numerous in variety and very reactive. A radical is any chemical species that possesses one or more unpaired electrons, and these unpaired electrons usually account for the extreme reactivity of the species. This reactivity can be beneficial, as in polymer chemistry: some plastics, for instance, are synthesized through a radical chain reaction. In addition, radicals are used in the synthesis of novel organic compounds with the goal of creating new pharmaceuticals. Radical reactivity can be detrimental as well; radicals have been implicated in a number of ailments including heart disease and cancer. One particular view of cancer cells is that their DNA is somehow mutated, and a radical could cause this mutation. In fact, one radical species in particular, the hydroxyl radical, is known to oxidize DNA.
Unfortunately, the electronic structures of most radicals do not lend themselves to direct study by modern spectroscopic methods. Recently, researchers have discovered that the hydroxyl radical, being very reactive in nature, easily complexes with other species. If these complexes are spectroscopically active, then the radical reactivity can be studied indirectly through a "reporter" molecule. One such approach uses the transient visible absorbance of the complexes of the hydroxyl radical with the thiocyanate anion. There is also experimental evidence suggesting that the thiocyanate anion complexes with other radicals as well. These experiments have been very successful in improving our understanding of radical chemistry, but very little is known about the electronic structure or connectivities of these complexes.
Our research comprises a systematic theoretical study of the structure, vibrational frequencies, and spectroscopic properties of complexes of the hydroxyl radical with the thiocyanate anion. In addition, we investigate the structures, vibrational frequencies, and spectroscopic properties of complexes of the thiocyanate anion with other radical species. The ultimate goal of our research is to determine the feasibility of using the thiocyanate anion as an LFP (laser flash photolysis) reporter for radical species other than the hydroxyl radical.
Our theoretical approach is based on computerized mathematical models of the properties of the species being studied, grounded in quantum mechanics and density functional theory as implemented in the computational chemistry software Gaussian 03. Our study includes calculations that provide the energies, optimized geometries, vibrational frequencies, charge and spin densities, and other properties of the various species: the isolated radicals and anions, complexes, transition states, pre-reactive complexes, and structural isomers.
Department of Chemistry
APA, Harvard, Vancouver, ISO, and other styles
40

Yang, Pengyi. "Ensemble methods and hybrid algorithms for computational and systems biology." Thesis, The University of Sydney, 2012. https://hdl.handle.net/2123/28979.

Full text
Abstract:
Modern molecular biology increasingly relies on the application of high-throughput technologies for studying the function, interaction, and integration of genes, proteins, and a variety of other molecules on a large scale. The application of those high-throughput technologies has led to the exponential growth of biological data, making modern molecular biology a data-intensive science. Huge effort has been directed to the development of robust and efficient computational algorithms in order to make sense of these extremely large and complex biological data, giving rise to several interdisciplinary fields, such as computational and systems biology. Machine learning and data mining are disciplines dealing with knowledge discovery from large data, and their application to computational and systems biology has been extremely fruitful. However, the ever-increasing size and complexity of the biological data require novel computational solutions to be developed. This thesis attempts to contribute to these interdisciplinary fields by developing and applying different ensemble learning methods and hybrid algorithms for solving a variety of problems in computational and systems biology. Through the study of different types of data generated from a variety of biological systems using different high-throughput approaches, we demonstrate that ensemble learning methods and hybrid algorithms are general, flexible, and highly effective tools for computational and systems biology.
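As a generic illustration of ensemble learning on high-dimensional biological-style data (synthetic data and scikit-learn defaults here, not the specific ensemble methods or hybrid algorithms developed in the thesis):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic "gene expression"-like data: many features, few informative ones.
    X, y = make_classification(n_samples=300, n_features=2000, n_informative=20,
                               random_state=0)

    single = LogisticRegression(max_iter=2000)
    forest = RandomForestClassifier(n_estimators=500, random_state=0)

    for name, clf in [("single model", single), ("ensemble", forest)]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean CV accuracy = {scores.mean():.3f}")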
APA, Harvard, Vancouver, ISO, and other styles
41

O'Neill, II Martin Joseph. "Computational Epidemiology - Analyzing Exposure Risk: A Deterministic, Agent-Based Approach." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc11017/.

Full text
Abstract:
Many infectious diseases are spread through interactions between susceptible and infectious individuals. Keeping track of where each exposure to the disease took place, when it took place, and which individuals were involved in the exposure can give public health officials important information that they may use to formulate their interventions. Further, knowing which individuals in the population are at the highest risk of becoming infected with the disease may prove to be a useful tool for public health officials trying to curtail the spread of the disease. Epidemiological models are needed to allow epidemiologists to study the population dynamics of transmission of infectious agents and the potential impact of infectious disease control programs. While many agent-based computational epidemiological models exist in the literature, they focus on the spread of disease rather than exposure risk. These models are designed to simulate very large populations, representing individuals as agents, and using random experiments and probabilities in an attempt to more realistically guide the course of the modeled disease outbreak. The work presented in this thesis focuses on tracking exposure risk to chickenpox in an elementary school setting. This setting is chosen due to the high level of detailed information realistically available to school administrators regarding individuals' schedules and movements. Using an agent-based approach, contacts between individuals are tracked and analyzed with respect to both individuals and locations. The results are then analyzed using a combination of tools from computer science and geographic information science.
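A toy, deterministic sketch of the exposure-tracking idea: agents follow fixed schedules, and every co-location of a susceptible agent with an infectious one is recorded as an exposure with its time and place. The schedules and names below are invented for illustration and are not taken from the thesis.

    from collections import defaultdict

    # Hypothetical period-by-period schedules for a handful of pupils (location per period).
    schedules = {
        "Ann":  ["Room1", "Gym",   "Cafeteria", "Room1"],
        "Ben":  ["Room1", "Room2", "Cafeteria", "Room2"],
        "Cara": ["Room2", "Gym",   "Cafeteria", "Room1"],
        "Dev":  ["Room2", "Room2", "Library",   "Room2"],
    }
    infectious = {"Ann"}                          # initially infectious individuals

    exposures = []                                # (susceptible, infectious, period, location)
    exposure_count = defaultdict(int)             # per-individual exposure risk tally

    n_periods = len(next(iter(schedules.values())))
    for period in range(n_periods):
        # Group agents by where they are in this period.
        by_location = defaultdict(list)
        for person, schedule in schedules.items():
            by_location[schedule[period]].append(person)
        # Every susceptible sharing a location with an infectious agent is exposed.
        for location, people in by_location.items():
            sick_here = [p for p in people if p in infectious]
            for p in people:
                if p not in infectious:
                    for s in sick_here:
                        exposures.append((p, s, period, location))
                        exposure_count[p] += 1

    print(exposures)
    print(dict(exposure_count))   # Ben and Cara accumulate the most exposures to Ann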
APA, Harvard, Vancouver, ISO, and other styles
42

Strömberg, Fredrik. "Computational Aspects of Maass Waveforms." Doctoral thesis, Uppsala University, Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4778.

Full text
Abstract:

The topic of this thesis is the computation of Maass waveforms, and we consider a number of different cases: congruence subgroups of the modular group and Dirichlet characters (chapter 1); congruence subgroups and general multiplier systems and real weight (chapter 2); and noncongruence subgroups (chapter 3). In each case we first discuss the necessary theoretical background. We then outline the algorithm and display some of the results obtained by it.

APA, Harvard, Vancouver, ISO, and other styles
43

Wildman, Raymond A. "Geometry optimization and computational electromagnetics methods and applications /." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 191 p, 2008. http://proquest.umi.com/pqdweb?did=1481670101&sid=23&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Fischer, Manfred M. "Computational Neural Networks: An attractive class of mathematical models for transportation research." WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/4158/1/WSG_DP_5797.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Graham, James. "Biomedical image computing : the development and application of mathematical and computational models." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/biomedical-image-computing-the-development-and-application-of-mathematical-and-computational-models(0e48e05c-3f86-40f2-932a-28df29292cb8).html.

Full text
Abstract:
Biomedical images contain a great deal of information that is useful and a great deal that is not. Computational analysis and interpretation of biomedical images involves extraction of some or all of the useful information. The useless information can take the form of unwanted clutter or noise that can obscure the useful information or inhibit the interpretation. Various mathematical and computational processes may be applied to reduce the effects of noise and distracting content. The most successful approaches involve the use of mathematical or computational models that express the properties of the required information. Interpretation of images involves finding objects or structures in the image that match the properties of the model. This dissertation describes the development and application of different models required for the interpretation of a variety of different image types arising from clinical medicine or biomedical research. These include:
* neural network models,
* Point Distribution Models, and the associated Active Shape Models, which have become part of the research toolkit of many academic and commercial organisations,
* models of the appearance of nerve fibres in noisy confocal microscope images,
* models of pose changes in carpal bones during wrist motion.
A number of different application problems are described, in which variants of these methods have been developed and used:
* cytogenetics,
* proteomics,
* assessing bone quality,
* segmentation of magnetic resonance images,
* measuring nerve fibres,
* inferring 3D motion from 2D cinefluoroscopy sequences.
The methods and applications represented here encompass the progression of biomedical image analysis from early developments, where computational power became adequate to the challenges posed by biomedical image data, to recent, highly computationally-intensive methods.
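A Point Distribution Model is, at its core, principal component analysis applied to aligned landmark shapes. The following numpy sketch (assuming shapes have already been aligned, i.e. skipping the Procrustes step, and using made-up data) shows the mean shape, the principal modes of variation, and how a new shape is generated from the model:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: 50 shapes, each with 20 (x, y) landmarks,
    # assumed already aligned for pose (translation, rotation, scale).
    n_shapes, n_points = 50, 20
    angles = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    base = np.column_stack([np.cos(angles), np.sin(angles)])
    shapes = base[None, :, :] + 0.05 * rng.standard_normal((n_shapes, n_points, 2))

    X = shapes.reshape(n_shapes, -1)          # each shape flattened to a 40-vector
    mean_shape = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
    P = Vt[:3].T                              # first 3 modes of variation (columns)
    var = (S ** 2) / (n_shapes - 1)           # variance explained by each mode

    # Generate a plausible new shape: mean plus a weighted sum of modes,
    # with weights expressed in units of each mode's standard deviation.
    b = np.array([2.0, -1.0, 0.5]) * np.sqrt(var[:3])
    new_shape = (mean_shape + P @ b).reshape(n_points, 2)
    print(new_shape.shape)   # (20, 2)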
APA, Harvard, Vancouver, ISO, and other styles
46

Fogarty, Laurel. "From social learning to culture : mathematical and computational models of cultural evolution." Thesis, University of St Andrews, 2012. http://hdl.handle.net/10023/3598.

Full text
Abstract:
Humans are unique in the extent and complexity of their cultures. As a species, we generate extensive knowledge and innumerable norms, attitudes, traditions, skills, beliefs and technologies that we share with those around us through teaching, imitation and language. These cultural practices have their roots in our uniquely potent ability for social learning. This thesis sets out to elucidate the process of cultural evolution using a series of mathematical and computational models. These models first investigate the evolution of the capacity for social learning, the rare ability to teach, and the evolution of the smart and strategic use of social learning, in the animal lineage. They go on to investigate the implications of these strategies and mechanisms for culture and find that the form human culture takes is dependent on the amount and nature of social learning as well as on the underlying learning strategies deployed. The thesis also investigates the effect that culture has had on the human evolutionary niche. Cultural practices fundamentally change the selection pressures to which humans are subject and these in turn change both our cultures and our genes through gene-culture coevolution. Finally, a demographic cultural niche construction model is presented, which investigates the application of cultural evolution modelling, cultural niche construction theory and demographic models to the growing problem of sex-ratio imbalance in modern China and considers the implications for policy-making. The analyses presented in this thesis support the argument that the uniquely potent human ability to transmit acquired information through teaching, imitation and other forms of social learning, and through this to shape our cultural and ecological environments, has played and continues to play a central role in human evolution.
APA, Harvard, Vancouver, ISO, and other styles
47

Ekaka-a, Nwamue. "Computational and mathematical modelling of plant species interactions in a harsh climate." Thesis, University of Chester, 2009. http://hdl.handle.net/10034/118016.

Full text
Abstract:
This thesis considers the following assumptions, which are based on a few insights about the arctic climate: (1) the arctic climate can be characterised by a growing season called summer and a dormant season called winter; (2) in the summer season growing conditions are reasonably favourable and species are more likely to compete for plentiful resources; (3) in the winter season there is no further growth and the plant populations are instead subjected to fierce weather events such as storms, which are more likely to lead to the destruction of some or all of the biomass. Under these assumptions, is it possible to find those changes in the environment that might cause the interaction to shift from competition (see section 1.9.1) to mutualism (see section 1.9.2)? The primary aim of this thesis is to provide a prototype simulation of the growth of two plant species in the arctic that (1) takes account of different models for the summer and winter seasons and (2) permits the effects of a changing climate on each type of plant species interaction to be seen.
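A minimal numerical sketch of the modelling idea described above: Lotka-Volterra-type competition during a summer growing season alternating with a winter season in which a fixed fraction of biomass is destroyed. All parameter values are invented for illustration and the actual thesis models may differ.

    def summer(N, days=180, dt=0.1,
               r=(0.05, 0.04), K=(100.0, 80.0), a12=0.6, a21=0.5):
        """Competitive growth over one summer (explicit Euler integration)."""
        N1, N2 = N
        for _ in range(int(days / dt)):
            dN1 = r[0] * N1 * (1.0 - (N1 + a12 * N2) / K[0])
            dN2 = r[1] * N2 * (1.0 - (N2 + a21 * N1) / K[1])
            N1 += dt * dN1
            N2 += dt * dN2
        return N1, N2

    def winter(N, loss=(0.3, 0.5)):
        """Winter storms destroy a fixed fraction of each species' biomass."""
        return N[0] * (1.0 - loss[0]), N[1] * (1.0 - loss[1])

    N = (5.0, 5.0)
    for year in range(10):
        N = winter(summer(N))
        print(f"year {year + 1}: species 1 = {N[0]:6.1f}, species 2 = {N[1]:6.1f}")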
APA, Harvard, Vancouver, ISO, and other styles
48

Young, Po-yuk, and 楊寶玉. "Profile of good computational estimators related mathematical variables and common strategies used." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31957626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Henderson, Donald Mackenzie. "A mathematical and computational analysis of the biomechanics of walking theropod dinosaurs." Thesis, University of Bristol, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Leonard, Katherine H. L. "Mathematical and computational modelling of tissue engineered bone in a hydrostatic bioreactor." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:05845740-1a74-4e19-95ea-6b5229d1af27.

Full text
Abstract:
In vitro tissue engineering is a method for developing living and functional tissues external to the body, often within a device called a bioreactor to control the chemical and mechanical environment. However, the quality of bone tissue engineered products is currently inadequate for clinical use as the implant cannot bear weight. In an effort to improve the quality of the construct, hydrostatic pressure, the pressure in a fluid at equilibrium that is required to balance the force exerted by the weight of the fluid above, has been investigated as a mechanical stimulus for promoting extracellular matrix deposition and mineralisation within bone tissue. Thus far, little research has been performed into understanding the response of bone tissue cells to mechanical stimulation. In this thesis we investigate an in vitro bone tissue engineering experimental setup, whereby human mesenchymal stem cells are seeded within a collagen gel and cultured in a hydrostatic pressure bioreactor. In collaboration with experimentalists a suite of mathematical models of increasing complexity is developed and appropriate numerical methods are used to simulate these models. Each of the models investigates different aspects of the experimental setup, from focusing on global quantities of interest through to investigating their detailed local spatial distribution. The aim of this work is to increase understanding of the underlying physical processes which drive the growth and development of the construct, and identify which factors contribute to the highly heterogeneous spatial distribution of the mineralised extracellular matrix seen experimentally. The first model considered is a purely temporal model, where the evolution of cells, solid substrate, which accounts for the initial collagen scaffold and deposited extracellular matrix along with attendant mineralisation, and fluid in response to the applied pressure are examined. We demonstrate that including the history of the mechanical loading of cells is important in determining the quantity of deposited substrate. The second and third models extend this non-spatial model, and examine biochemically and biomechanically-induced spatial patterning separately. The first of these spatial models demonstrates that nutrient diffusion along with nutrient-dependent mass transfer terms qualitatively reproduces the heterogeneous spatial effects seen experimentally. The second multiphase model is used to investigate whether the magnitude of the shear stresses generated by fluid flow, can qualitatively explain the heterogeneous mineralisation seen in the experiments. Numerical simulations reveal that the spatial distribution of the fluid shear stress magnitude is highly heterogeneous, which could be related to the spatial heterogeneity in the mineralisation seen experimentally.
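A purely illustrative ODE sketch in the spirit of the first (temporal) model: cell growth plus matrix deposition whose rate depends on the accumulated history of hydrostatic loading, which is the feature the abstract highlights. The equations and parameters here are assumptions, not the model developed in the thesis.

    def simulate(days=28.0, dt=0.01,
                 r=0.5, c_max=1.0,        # cell growth rate and carrying capacity
                 k0=0.02, k1=0.08,        # baseline and stimulus-enhanced deposition rates
                 pressure_on=1.0):        # applied hydrostatic pressure (arbitrary units)
        c, s, history = 0.1, 0.0, 0.0     # cell density, substrate density, loading history
        for _ in range(int(days / dt)):
            stimulus = history / (1.0 + history)        # saturating memory of past loading
            dc = r * c * (1.0 - c / c_max)              # logistic cell growth
            ds = (k0 + k1 * stimulus) * c               # deposition enhanced by loading history
            c += dt * dc
            s += dt * ds
            history += dt * pressure_on                 # accumulate the applied loading
        return c, s

    print("with loading:   ", simulate(pressure_on=1.0))
    print("without loading:", simulate(pressure_on=0.0))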
APA, Harvard, Vancouver, ISO, and other styles