Dissertations / Theses on the topic 'Numerical computation and mathematical software'

Consult the top 34 dissertations / theses for your research on the topic 'Numerical computation and mathematical software.'

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bienvenu, Kirk Jr. "Underwater Acoustic Signal Analysis Toolkit." ScholarWorks@UNO, 2017. https://scholarworks.uno.edu/td/2398.

Full text
Abstract:
This project started early in the summer of 2016 when it became evident there was a need for an effective and efficient signal analysis toolkit for the Littoral Acoustic Demonstration Center Gulf Ecological Monitoring and Modeling (LADC-GEMM) Research Consortium. LADC-GEMM collected underwater acoustic data in the northern Gulf of Mexico during the summer of 2015 using Environmental Acoustic Recording Systems (EARS) buoys. Much of the visualization of data was handled through short scripts executed through terminal commands, each time requiring the data to be loaded into memory and parameters to be fed through arguments. The vision was to develop a graphical user interface (GUI) that would increase the productivity of manual signal analysis; it has since been expanded to perform several calculations autonomously for cataloging and metadata storage of whale clicks. Over the last year and a half, a working prototype has been developed in MathWorks MATLAB, an integrated development environment (IDE). The prototype is now highly modular and can accept new tools relatively quickly once their development is completed. The program has been named Banshee, as the mythical creatures are known to “wail”. This paper outlines the functionality of the GUI, explains the benefits of frequency analysis, and describes the physical models that facilitate these analytics and the mathematics behind them.
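The frequency analysis that motivates the toolkit can be illustrated with a short, self-contained Python sketch (not the Banshee/MATLAB code; the 192 kHz sample rate and the 30 kHz click frequency are illustrative assumptions, not values from the LADC-GEMM data):

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for an underwater "click": a short, exponentially
# decaying 30 kHz tone burst (sample rate and frequency are assumptions).
fs = 192_000                          # sample rate in Hz
t = np.arange(0, 0.01, 1 / fs)        # 10 ms window
click = np.exp(-2000 * t) * np.sin(2 * np.pi * 30_000 * t)

# Time-frequency picture of the click, the core object of manual analysis
f, tt, Sxx = signal.spectrogram(click, fs=fs, nperseg=256, noverlap=128)

# Frequency bin holding the most energy over the whole recording
peak_f = f[np.argmax(Sxx.max(axis=1))]
```

The spectrogram `Sxx` is the time-frequency picture an analyst would inspect in such a GUI; here the peak frequency recovers the 30 kHz tone.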
APA, Harvard, Vancouver, ISO, and other styles
2

Lesage, Pierre-Yves. "Numerical computation and software design." Thesis, Cranfield University, 1999. http://dspace.lib.cranfield.ac.uk/handle/1826/11134.

Full text
Abstract:
The development of simulation tools is becoming an important area in industry, recently fostered by tremendous improvements in computer hardware. Many physical problems can be simulated by modelling them with mathematical equations that are then solved numerically. This thesis is concerned with the development of a finite difference solver for time-dependent partial differential equations. The development involves a number of challenging requirements that the solver must meet: to have the capacity to solve conservation and non-conservation laws (using several numerical techniques), to be robust and efficient, and to have a modular and extensible design. Firstly, we focus on the architecture of the program and how an original design approach was used to carry out its development. A combination of Object-Oriented Design and Structured Design was adopted.
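A minimal sketch of the kind of solver the abstract describes, assuming the simplest possible case (an explicit finite-difference scheme for the 1-D heat equation with homogeneous Dirichlet boundaries); the thesis software is a far more general, modular framework:

```python
import numpy as np

# Explicit (FTCS) finite differences for the 1-D heat equation
#   u_t = alpha * u_xx  on [0, 1],  u(0) = u(1) = 0,
# with initial condition u(x, 0) = sin(pi x), whose exact solution
# decays as exp(-pi^2 * alpha * t) * sin(pi x).
alpha, nx, nt = 1.0, 51, 1000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha              # within the stability limit dt <= dx^2/(2*alpha)

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)

for _ in range(nt):
    u[1:-1] += dt * alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2

exact = np.exp(-np.pi**2 * alpha * nt * dt) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

The comparison against the known decaying-sine solution is the standard correctness check for such a scheme.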
4

Chang, Tyler Hunter. "Mathematical Software for Multiobjective Optimization Problems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98915.

Full text
Abstract:
In this thesis, two distinct problems in data-driven computational science are considered. The main problem of interest is the multiobjective optimization problem, where the tradeoff surface (called the Pareto front) between multiple conflicting objectives must be approximated in order to identify designs that balance real-world tradeoffs. In order to solve multiobjective optimization problems that are derived from computationally expensive blackbox functions, such as engineering design optimization problems, several methodologies are combined, including surrogate modeling, trust region methods, and adaptive weighting. The result is a numerical software package that finds approximately Pareto optimal solutions that are evenly distributed across the Pareto front, using minimal cost function evaluations. The second problem of interest is the closely related problem of multivariate interpolation, where an unknown response surface representing an underlying phenomenon is approximated by finding a function that exactly matches available data. To solve the interpolation problem, a novel algorithm is proposed for computing only a sparse subset of the elements in the Delaunay triangulation, as needed to compute the Delaunay interpolant. For high-dimensional data, this reduces the time and space complexity of Delaunay interpolation from exponential time to polynomial time in practice. For each of the above problems, both serial and parallel implementations are described. Additionally, both solutions are demonstrated on real-world problems in computer system performance modeling.
Doctor of Philosophy
Science and engineering are full of multiobjective tradeoff problems. For example, a portfolio manager may seek to build a financial portfolio with low risk, high return rates, and minimal transaction fees; an aircraft engineer may seek a design that maximizes lift, minimizes drag force, and minimizes aircraft weight; a chemist may seek a catalyst with low viscosity, low production costs, and high effective yield; or a computational scientist may seek to fit a numerical model that minimizes the fit error while also minimizing a regularization term that leverages domain knowledge. Often, these criteria are conflicting, meaning that improved performance by one criterion must be at the expense of decreased performance in another criterion. The solution to a multiobjective optimization problem allows decision makers to balance the inherent tradeoff between conflicting objectives. A related problem is the multivariate interpolation problem, where the goal is to predict the outcome of an event based on a database of past observations, while exactly matching all observations in that database. Multivariate interpolation problems are equally as prevalent and impactful as multiobjective optimization problems. For example, a pharmaceutical company may seek a prediction for the costs and effects of a proposed drug; an aerospace engineer may seek a prediction for the lift and drag of a new aircraft design; or a search engine may seek a prediction for the classification of an unlabeled image. Delaunay interpolation offers a unique solution to this problem, backed by decades of rigorous theory and analytical error bounds, but does not scale to high-dimensional "big data" problems. In this thesis, novel algorithms and software are proposed for solving both of these extremely difficult problems.
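The Delaunay interpolant discussed above can be sketched in low dimension with SciPy, which builds the full triangulation internally; that is exactly the exponentially expensive route the thesis avoids in high dimension, so this is only a 2-D illustration:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Scattered 2-D data sites and a known response to recover
rng = np.random.default_rng(0)
pts = rng.random((200, 2))
vals = np.sin(pts[:, 0]) + pts[:, 1] ** 2

# Piecewise-linear interpolant over the Delaunay triangulation of the
# sites (SciPy triangulates internally)
interp = LinearNDInterpolator(pts, vals)

query = np.array([[0.5, 0.5]])
approx = float(interp(query)[0])
truth = np.sin(0.5) + 0.25
```

At the data sites the interpolant reproduces the data exactly, which is the defining property of interpolation noted in the abstract.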
5

Moosbrugger, John C. "Numerical computation of metal/mold boundary heat flux in sand castings using a finite element enthalpy model." Thesis, Georgia Institute of Technology, 1985. http://hdl.handle.net/1853/16365.

Full text
6

Lawson, Jane. "Towards error control for the numerical solution of parabolic equations." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329947.

Full text
7

Yau, Shuk-Han Ada. "Numerical analysis of finite difference schemes in automatically generated mathematical modeling software." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35407.

Full text
Abstract:
Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994. Includes bibliographical references (leaves 64-65). By Shuk-Han Ada Yau.
8

Rebaza-Vasquez, Jorge. "Computation and continuation of equilibrium-to-periodic and periodic-to-periodic connections." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/28991.

Full text
9

Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.

Full text
Abstract:
The dissertation addresses the research topics in machine learning outlined below. We developed theory for traditional first-order algorithms from convex optimization and provide new insights into the nonconvex objective functions arising in machine learning. Based on this theoretical analysis, we designed and developed new algorithms to overcome the difficulty of nonconvex objectives and to accelerate convergence to the desired result. In this thesis, we answer two questions: (1) How should the step size for gradient descent with random initialization be designed? (2) Can we accelerate current convex optimization algorithms and extend them to nonconvex objectives? As an application, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods: Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this work, we derive high-resolution ODEs as more accurate surrogates for these two methods, in addition to Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms by translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term, referred to as the gradient correction, that is present in NAG-SC but not in the heavy-ball method, shedding light on why the former achieves acceleration while the latter does not.
Moreover, in this high-resolution ODE framework, NAG-C is shown to minimize the squared gradient norm at an inverse cubic rate, the sharpest known rate for NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the same accelerated convergence rates as NAG-C for minimizing convex functions.
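The distinction between NAG-SC and the heavy-ball method comes down to where the gradient is evaluated. A minimal sketch on a strongly convex quadratic (step size and momentum chosen by standard textbook formulas, an assumption of this example, not the thesis' high-resolution ODE analysis):

```python
import numpy as np

# f(x) = 0.5*mu*x1^2 + 0.5*L*x2^2: strongly convex with parameters mu, L
mu, L = 0.1, 1.0
grad = lambda x: np.array([mu * x[0], L * x[1]])

s = 1.0 / L                                            # step size
beta = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))   # momentum coefficient

def nag_sc(x0, iters=200):
    """Nesterov's method: the gradient is taken at the extrapolated point
    y, the source of the 'gradient correction' term in the high-resolution
    ODE."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x, x_prev = y - s * grad(y), x
    return x

def heavy_ball(x0, iters=200):
    """Polyak's method: same momentum, but the gradient is taken at x."""
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        x, x_prev = x + beta * (x - x_prev) - s * grad(x), x
    return x

x0 = np.array([1.0, 1.0])
```

Both iterations converge on this toy problem; the point of the high-resolution ODE analysis is that only the extrapolated-gradient variant is provably accelerated.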
10

Wijns, Christopher P. "Exploring conceptual geodynamic models : numerical method and application to tectonics and fluid flow." University of Western Australia. School of Earth and Geographical Sciences, 2005. http://theses.library.uwa.edu.au/adt-WU2005.0068.

Full text
Abstract:
Geodynamic modelling, via computer simulations, offers an easily controllable method for investigating the behaviour of an Earth system and providing feedback to conceptual models of geological evolution. However, most available computer codes have been developed for engineering or hydrological applications, where strains are small and post-failure deformation is not studied. Such codes cannot simultaneously model large deformation and porous fluid flow. To remedy this situation in the face of tectonic modelling, a numerical approach was developed to incorporate porous fluid flow into an existing high-deformation code called Ellipsis. The resulting software, with these twin capabilities, simulates the evolution of highly deformed tectonic regimes where fluid flow is important, such as in mineral provinces. A realistic description of deformation depends on the accurate characterisation of material properties and the laws governing material behaviour. Aside from the development of appropriate physics, it can be a difficult task to find a set of model parameters, including material properties and initial geometries, that can reproduce some conceptual target. In this context, an interactive system for the rapid exploration of model parameter space, and for the evaluation of all model results, replaces the traditional but time-consuming approach of finding a result via trial and error. The visualisation of all solutions in such a search of parameter space, through simple graphical tools, adds a new degree of understanding to the effects of variations in the parameters, the importance of each parameter in controlling a solution, and the degree of coverage of the parameter space. Two final applications of the software code and interactive parameter search illustrate the power of numerical modelling within the feedback loop to field observations. 
In the first example, vertical rheological contrasts between the upper and lower crust, most easily related to thermal profiles and mineralogy, exert a greater control over the mode of crustal extension than any other parameters. A weak lower crust promotes large fault spacing with high displacements, often overriding initial close fault spacing, to lead eventually to metamorphic core complex formation. In the second case, specifically tied to the history of compressional orogenies in northern Nevada, exploration of model parameters shows that the natural reactivation of early normal faults in the Proterozoic basement, regardless of basement topography or rheological contrasts, would explain the subsequent elevation and gravitationally-induced thrusting of sedimentary layers over the Carlin gold trend, providing pathways and ponding sites for mineral-bearing fluids.
11

Lux, Thomas Christian Hansen. "Interpolants, Error Bounds, and Mathematical Software for Modeling and Predicting Variability in Computer Systems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/100059.

Full text
Abstract:
Function approximation is an important problem. This work presents applications of interpolants to modeling random variables. Specifically, this work studies the prediction of distributions of random variables applied to computer system throughput variability. Existing approximation methods including multivariate adaptive regression splines, support vector regressors, multilayer perceptrons, Shepard variants, and the Delaunay mesh are investigated in the context of computer variability modeling. New methods of approximation using Box splines, Voronoi cells, and Delaunay for interpolating distributions of data with moderately high dimension are presented and compared with existing approaches. Novel theoretical error bounds are constructed for piecewise linear interpolants over functions with a Lipschitz continuous gradient. Finally, a mathematical software package that constructs monotone quintic spline interpolants for distribution approximation from data samples is proposed.
Doctor of Philosophy
It is common for scientists to collect data on something they are studying. Often scientists want to create a (predictive) model of that phenomenon based on the data, but the choice of how to model the data is a difficult one to answer. This work proposes methods for modeling data that operate under very few assumptions that are broadly applicable across science. Finally, a software package is proposed that would allow scientists to better understand the true distribution of their data given relatively few observations.
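A hedged sketch of the distribution-approximation idea: the thesis constructs monotone quintic splines, for which SciPy has no routine, so this example substitutes PCHIP, a monotone piecewise-cubic interpolant, applied to an empirical CDF:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Empirical CDF of 200 samples from a standard normal distribution
rng = np.random.default_rng(0)
samples = np.sort(rng.standard_normal(200))
ecdf = (np.arange(1, samples.size + 1) - 0.5) / samples.size

# Monotone piecewise-cubic interpolant of the empirical CDF (a lower-order
# stand-in for the monotone quintic splines of the thesis)
cdf = PchipInterpolator(samples, ecdf)

grid = np.linspace(samples[0], samples[-1], 500)
vals = cdf(grid)
monotone = bool(np.all(np.diff(vals) >= -1e-12))   # shape preservation
```

Monotonicity is the essential property here: an interpolated CDF that dips would not describe any probability distribution.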
12

Hohn, Jennifer Lynn. "Generalized Probabilistic Bowling Distributions." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/82.

Full text
Abstract:
Have you ever wondered if you are better than the average bowler? If so, there are a variety of ways to compute the average score of a bowling game, including methods that account for a bowler’s skill level. In this thesis, we discuss several different ways to generate bowling scores randomly. For each distribution, we give results for the expected value and standard deviation of each frame's score, the expected value of the game’s final score, and the correlation coefficient between the score of the first and second roll of a single frame. Furthermore, we shall generalize the results in each distribution for an n-frame game on m pins. Additionally, we shall generalize the number of possible games when bowling n frames on m pins. Then, we shall derive the frequency distribution of each frame’s scores and the arithmetic mean for n frames on m pins. Finally, to summarize the variety of distributions, we shall make tables that display the results obtained from each distribution used to model a particular bowler’s score. We evaluate the special case when bowling 10 frames on 10 pins, which represents a standard bowling game.
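The game mechanics underlying all of these distributions can be made concrete with a small Python sketch; the uniform-roll model below is a simplifying assumption of this example (real bowlers are not uniform over the standing pins), but the scoring function implements the standard 10-frame rules:

```python
import random

def score_game(rolls):
    """Score a 10-frame game from a flat list of pin counts per roll."""
    total, i = 0, 0
    for _ in range(10):
        if rolls[i] == 10:                      # strike: 10 + next two rolls
            total += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:     # spare: 10 + next roll
            total += 10 + rolls[i + 2]
            i += 2
        else:                                   # open frame
            total += rolls[i] + rolls[i + 1]
            i += 2
    return total

def random_game(rng):
    """One game where each roll fells a uniform number of standing pins."""
    rolls = []
    for frame in range(10):
        first = rng.randint(0, 10)
        rolls.append(first)
        if first < 10 or frame == 9:            # second (or first bonus) roll
            rolls.append(rng.randint(0, 10) if first == 10
                         else rng.randint(0, 10 - first))
        if frame == 9 and (first == 10 or first + rolls[-1] == 10):
            rolls.append(rng.randint(0, 10))    # final bonus roll, frame ten
    return rolls

# Monte Carlo estimate of the expected final score under this model
mean_score = sum(score_game(random_game(random.Random(i)))
                 for i in range(2000)) / 2000
```

Under this model, averaging a few thousand simulated games estimates the expected final score for that (hypothetical) bowler, one of the quantities the thesis derives analytically.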
13

Penzl, T. "Numerical solution of generalized Lyapunov equations." Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800893.

Full text
Abstract:
Two efficient methods for solving generalized Lyapunov equations and their implementations in FORTRAN 77 are presented. The first one is a generalization of the Bartels-Stewart method and the second is an extension of Hammarling's method to generalized Lyapunov equations. Our LAPACK-based subroutines are implemented in a quite flexible way. They can handle the transposed equations and provide scaling to avoid overflow in the solution. Moreover, the Bartels-Stewart subroutine offers the optional estimation of the separation and the reciprocal condition number. A brief description of both algorithms is given. The performance of the software is demonstrated by numerical experiments.
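For the standard (non-generalized) Lyapunov equation, the same Bartels-Stewart idea is available in SciPy and can serve as a reference point for the FORTRAN 77 routines described; the 2x2 stable matrix below is a toy assumption of this example:

```python
import numpy as np
from scipy import linalg

# Standard continuous-time Lyapunov equation  A X + X A^T = -Q, solved by
# SciPy with a Bartels-Stewart-type Schur-decomposition method.
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])         # stable: eigenvalues in the left half-plane
Q = np.eye(2)

X = linalg.solve_continuous_lyapunov(A, -Q)   # note SciPy's sign convention

residual = np.linalg.norm(A @ X + X @ A.T + Q)
```

With `A` stable and `Q` positive definite, the solution `X` is symmetric positive definite, which the residual and eigenvalue checks confirm.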
14

Heiter, Pascal Frederik [Verfasser]. "Curvature based criteria for slow invariant manifold computation: from differential geometry to numerical software implementations for model reduction in hydrocarbon combustion / Pascal Frederik Heiter." Ulm : Universität Ulm, 2017. http://d-nb.info/1131710533/34.

Full text
15

Astorino, Matteo. "Interaction Fluide-Structure dans le Système Cardiovasculaire. Analyse Numérique et Simulation." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2010. http://tel.archives-ouvertes.fr/tel-00845352.

Full text
Abstract:
In this thesis, we propose and analyze partitioned numerical methods for the simulation of fluid-structure interaction (FSI) phenomena in the cardiovascular system. In particular, we consider the mechanical interaction of blood with the walls of the large arteries, with cardiac valves, and with the myocardium. In partitioned FSI algorithms, the coupling between fluid and structure can be enforced implicitly, semi-implicitly, or explicitly. In the first part of this thesis, we carry out the convergence analysis of a semi-implicit projection algorithm. We then propose a new version of this scheme with better stability properties. The modification relies on a Robin-Robin coupling derived from a reinterpretation of the Nitsche formulation. In the second part, we address the simulation of cardiac valves. We propose a partitioned strategy that accounts for contact between several structures immersed in a fluid. We also explore the use of a recent post-processing technique, based on the notion of Lagrangian coherent structures, to qualitatively analyze the complex hemodynamics downstream of aortic valves. In the last part, we propose an original model of cardiac valves. This simplified model offers a compromise between classical 0D approaches and complex 3D fluid-structure interaction simulations. Various numerical simulations are presented to illustrate the efficiency and robustness of this model, which enables realistic simulations of cardiac hemodynamics at a moderate computational cost.
16

Skjerven, Brian M. "A parallel implementation of an agent-based brain tumor model." Link to electronic thesis, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-060507-172337/.

Full text
Abstract:
Thesis (M.S.), Worcester Polytechnic Institute. Keywords: visualization; numerical analysis; computational biology; scientific computation; high-performance computing. Includes bibliographical references (p. 19).
17

Burgos, Sylvestre Jean-Baptiste Louis. "The computation of Greeks with multilevel Monte Carlo." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6453a93b-9daf-4bfe-8c77-9cd6802f77dd.

Full text
Abstract:
In mathematical finance, the sensitivities of option prices to various market parameters, also known as the “Greeks”, reflect the exposure to different sources of risk. Computing these is essential to predict the impact of market moves on portfolios and to hedge them adequately. This is commonly done using Monte Carlo simulations. However, obtaining accurate estimates of the Greeks can be computationally costly. Multilevel Monte Carlo offers complexity improvements over standard Monte Carlo techniques. However the idea has never been used for the computation of Greeks. In this work we answer the following questions: can multilevel Monte Carlo be useful in this setting? If so, how can we construct efficient estimators? Finally, what computational savings can we expect from these new estimators? We develop multilevel Monte Carlo estimators for the Greeks of a range of options: European options with Lipschitz payoffs (e.g. call options), European options with discontinuous payoffs (e.g. digital options), Asian options, barrier options and lookback options. Special care is taken to construct efficient estimators for non-smooth and exotic payoffs. We obtain numerical results that demonstrate the computational benefits of our algorithms. We discuss the issues of convergence of pathwise sensitivities estimators. We show rigorously that the differentiation of common discretisation schemes for Ito processes does result in satisfactory estimators of the exact solutions’ sensitivities. We also prove that pathwise sensitivities estimators can be used under some regularity conditions to compute the Greeks of options whose underlying asset’s price is modelled as an Ito process. We present several important results on the moments of the solutions of stochastic differential equations and their discretisations as well as the principles of the so-called “extreme path analysis”.
We use these to develop a rigorous analysis of the complexity of the multilevel Monte Carlo Greeks estimators constructed earlier. The resulting complexity bounds appear to be sharp and prove that our multilevel algorithms are more efficient than those derived from standard Monte Carlo.
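A plain single-level Monte Carlo version of the pathwise delta discussed above (with no multilevel structure, which is the thesis' contribution) can be sketched for a European call under geometric Brownian motion; all market parameters below are illustrative assumptions:

```python
import math
import numpy as np

# Pathwise delta of a European call under GBM: dS = r*S*dt + sigma*S*dW.
# Differentiating the discounted payoff along each path gives the
# estimator  exp(-r*T) * 1{S_T > K} * S_T / S_0.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

rng = np.random.default_rng(0)
Z = rng.standard_normal(400_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)

delta_mc = np.mean(np.exp(-r * T) * (ST > K) * ST / S0)

# Closed-form Black-Scholes delta = Phi(d1), used as a check
d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
delta_bs = 0.5 * (1 + math.erf(d1 / math.sqrt(2)))
```

The pathwise estimator converges to the Black-Scholes delta Phi(d1); the multilevel estimators in the thesis reduce the cost of driving the Monte Carlo error below a given tolerance.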
18

Karimli, Nigar. "Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3114.

Full text
Abstract:
For this project, we use a modified version of a previously developed mathematical model, which describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals and confidence and prediction intervals for the state variables, we develop a parameter space reduction algorithm that results in better future response predictions for each individual patient. Moreover, another subset selection method, Structured Covariance Analysis, which considers the identifiability of parameters, is included in this work. Furthermore, to estimate parameters more efficiently and accurately, the standard-error-optimal (SE-optimal) design method is employed, which calculates optimal observation times at which clinical data should be collected. Finally, by combining different parameter subset selection methods with an optimal design problem, different cases for finding both optimal time points and optimal intervals have been investigated.
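Parameter estimation with standard errors can be sketched generically in Python; the exponential-decay model below is a hypothetical stand-in for the MMP/TIMP/ECM system, and the standard errors come from the estimated covariance of an ordinary least-squares fit rather than from the SE-optimal design machinery of the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical one-compartment decay model y = a * exp(-k * t); the "data"
# are synthetic observations with additive Gaussian noise.
def model(t, a, k):
    return a * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 40)
y = model(t, 2.0, 0.8) + 0.01 * rng.standard_normal(t.size)

popt, pcov = curve_fit(model, t, y, p0=(1.0, 1.0))
se = np.sqrt(np.diag(pcov))          # standard errors of the estimates (a, k)
```

SE-optimal design, as described in the abstract, then chooses the observation times `t` so that these standard errors are as small as possible.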
19

Myers, Jeremy. "Computational Fluid Dynamics in a Terminal Alveolated Bronchiole Duct with Expanding Walls: Proof-of-Concept in OpenFOAM." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/5011.

Full text
Abstract:
Mathematical Biology has found recent success applying Computational Fluid Dynamics (CFD) to model airflow in the human lung. Detailed modeling of flow patterns in the alveoli, where the oxygen-carbon dioxide gas exchange occurs, has provided data that is useful in treating illnesses and designing drug-delivery systems. Unfortunately, many CFD software packages have high licensing fees that are out of reach for independent researchers. This thesis uses three open-source software packages, Gmsh, OpenFOAM, and ParaView, to design a mesh, create a simulation, and visualize the results of an idealized terminal alveolar sac model. This model successfully demonstrates that OpenFOAM can be used to model airflow in the acinar region of the lung under biologically relevant conditions.
20

Kimeu, Joseph M. "Fractional Calculus: Definitions and Applications." TopSCHOLAR®, 2009. http://digitalcommons.wku.edu/theses/115.

Full text
21

Zhao, Yue. "Modelling avian influenza in bird-human systems : this thesis is presented in the partial fulfillment of the requirement for the degree of Masters of Information Science in Mathematics at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1145.

Full text
Abstract:
In 1997, the first human case of avian influenza infection was reported in Hong Kong. Since then, avian influenza has become more and more hazardous for both animal and human health. Scientists believed that it would not take long until the virus mutates to become contagious from human to human. In this thesis, we construct avian influenza models with possible mutation scenarios in bird-human systems. Possible control measures for humans are also introduced in the systems. We compare the analytical and numerical results and try to find the most efficient control measures to prevent the disease.
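A minimal bird-human transmission sketch can be integrated with SciPy; all rates below are hypothetical, and human-to-human spread, the mutation scenario studied in the thesis, is deliberately left out:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal coupling: an SI-with-removal epidemic in the bird population
# spills over to humans (no human-to-human transmission in this sketch).
beta_b, mu_b = 0.5, 0.1        # bird transmission / removal rates (hypothetical)
beta_bh, gamma_h = 0.05, 0.2   # bird-to-human spillover / human recovery rates

def rhs(t, y):
    Sb, Ib, Sh, Ih, Rh = y
    new_b = beta_b * Sb * Ib       # new bird infections
    spill = beta_bh * Sh * Ib      # spillover infections in humans
    return [-new_b,
            new_b - mu_b * Ib,
            -spill,
            spill - gamma_h * Ih,
            gamma_h * Ih]

y0 = [0.99, 0.01, 1.0, 0.0, 0.0]   # fractions of each population
sol = solve_ivp(rhs, (0.0, 100.0), y0, rtol=1e-8, atol=1e-10)
Sb, Ib, Sh, Ih, Rh = sol.y[:, -1]
```

The human compartments conserve total population by construction, a basic sanity check for any such compartmental model.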
22

Kwizera, Petero. "Matrix Singular Value Decomposition." UNF Digital Commons, 2010. http://digitalcommons.unf.edu/etd/381.

Full text
Abstract:
This thesis starts with the fundamentals of matrix theory and ends with applications of the matrix singular value decomposition (SVD). The background matrix theory coverage includes unitary and Hermitian matrices, and matrix norms and how they relate to the matrix SVD. The matrix condition number is discussed in relationship to the solution of linear equations. Some inequalities based on the trace of a matrix, polar matrix decomposition, unitaries and partial isometries are discussed. Among the SVD applications discussed are the method of least squares and image compression. Expansion of a matrix as a linear combination of rank-one partial isometries is applied to image compression by using reduced-rank matrix approximations to represent greyscale images. MATLAB results for approximations of JPEG and .bmp images are presented. The results indicate that images can be represented with reasonable resolution using low-rank matrix SVD approximations.
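The reduced-rank compression idea can be sketched with NumPy on a synthetic "image" whose exact rank is known; this mirrors the thesis' MATLAB experiments only in spirit:

```python
import numpy as np

# A synthetic 64x64 "greyscale image" built from two rank-one terms, so
# its exact rank is 2 and a rank-2 approximation reproduces it exactly.
x = np.linspace(0.0, 1.0, 64)
img = (np.outer(x, x)
       + 0.5 * np.outer(np.sin(4 * np.pi * x), np.cos(4 * np.pi * x)))

U, s, Vt = np.linalg.svd(img, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation in the Frobenius norm (Eckart-Young)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

err1 = np.linalg.norm(img - rank_k(1))
err2 = np.linalg.norm(img - rank_k(2))
```

Storing the rank-k factors costs k(2n + 1) numbers instead of n^2, which is the compression payoff once k is much smaller than n.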
23

Perez, Luis G. "Development of a Methodology that Couples Satellite Remote Sensing Measurements to Spatial-Temporal Distribution of Soil Moisture in the Vadose Zone of the Everglades National Park." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1663.

Full text
Abstract:
Spatial-temporal distribution of soil moisture in the vadose zone is an important aspect of the hydrological cycle that plays a fundamental role in water resources management, including modeling of water flow and mass transport. The vadose zone is a critical transfer and storage compartment, which controls the partitioning of energy and mass linked to surface runoff, evapotranspiration and infiltration. This dissertation focuses on integrating hydraulic characterization methods with remote sensing technologies to estimate the soil moisture distribution by modeling the spatial coverage of soil moisture in the horizontal and vertical dimensions with high temporal resolution. The methodology consists of using satellite images with an ultrafine 3-m resolution to estimate soil surface moisture content, which is used as a top boundary condition in the hydrologic model SWAP to simulate transport of water in the vadose zone. To demonstrate the methodology developed herein, a number of model simulations were performed to forecast a range of possible moisture distributions in the vadose zone of the Everglades National Park (ENP). Intensive field and laboratory experiments were necessary to prepare an area of interest (AOI) and characterize the soils, and a framework was developed on the ArcGIS platform for organizing and processing data, applying a simple sequential data approach in conjunction with SWAP. An error difference of 3.6% was achieved when comparing the radar backscatter coefficient (σ0) to surface Volumetric Water Content (VWC); this result was superior to the 6.1% obtained by Piles during a 2009 NASA SPAM campaign. A registration error (RMSE) of 4% was obtained between model and observations. These results confirmed the potential of SWAP to simulate transport of water in the vadose zone of the ENP. Future work in the ENP must incorporate preferential flow, given the great impact of macropores on water and solute transport through the vadose zone.
Among other recommendations, there is a need to develop procedures for measuring the ENP peat shrinkage characteristics due to changes in moisture content in support of the enhanced modeling of soil moisture distribution.
APA, Harvard, Vancouver, ISO, and other styles
24

Wojtacki, Kajetan Tomasz. "Coupling between transport, mechanical properties and degradation by dissolution of rock reservoir." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS153/document.

Full text
Abstract:
The aim of this thesis is to analyse the evolution of the effective mechanical and transport properties of a rock aquifer subjected to progressive chemical degradation due to CO2 dissolution. The proposed study focuses on long-term, far-field conditions, when degradation of the porous matrix can be assumed to be homogeneous at the sample scale. It is well known that the morphology of the pore network and solid skeleton defines important macroscopic properties of the rock (permeability, stiffness). Therefore, modelling of such a porous material should be based on morphological and statistical characterisation of the investigated rocks. First, in order to obtain statistically equivalent representations of real specimens, a reconstruction method inspired by the natural process of sandstone formation is adapted. The generated samples are then selected to satisfy morphological information extracted by analysing microtomography images of the natural rock sample. Secondly, a methodology to estimate the effective mechanical properties of the investigated material, based directly on binary images, is presented. Effective mechanical behaviour is obtained within the framework of periodic homogenization; however, due to the lack of geometrical periodicity, two different approaches are used: reflectional symmetry of the considered RVE, and a fixed-point method using an additional homogeneous layer spread over the considered geometry. The evolution of permeability is estimated in the classical way, using an upscaling method in the form of Darcy's law. Finally, the chemical dissolution of the material is tackled in a simplified way by performing morphological dilation of the pore phase. A detailed analysis of the evolution of chosen morphological descriptors, triggered by the modifications of the microstructures, is provided. The relation between morphological properties, permeability and elastic moduli is also provided. The methodology developed in this work could easily be applied to other heterogeneous materials.
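The dissolution step the abstract describes, treating chemical attack as a morphological dilation of the pore phase, can be illustrated with a toy NumPy sketch (2-D, random microstructure, periodic boundaries for simplicity; the thesis works with 3-D tomographic samples, so this is only a schematic illustration of the idea):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D binary microstructure: True = pore, False = solid.
pore = rng.random((64, 64)) < 0.10

def dilate(img):
    """One 4-neighbour morphological dilation of a binary image
    (periodic boundaries via np.roll, for simplicity)."""
    return (img
            | np.roll(img, 1, axis=0) | np.roll(img, -1, axis=0)
            | np.roll(img, 1, axis=1) | np.roll(img, -1, axis=1))

porosity_before = pore.mean()
# One simplified dissolution step: the pore phase grows into the solid.
pore_dissolved = dilate(pore)
porosity_after = pore_dissolved.mean()
```

Each dissolution step strictly contains the previous pore network, so porosity, and with it permeability, increases monotonically, which is the regime whose morphological descriptors the thesis tracks.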
APA, Harvard, Vancouver, ISO, and other styles
25

Sahama, Tony. "Some practical issues in the design and analysis of computer experiments." Thesis, Victoria University, Melbourne, 2003. https://eprints.qut.edu.au/60715/1/Sahama_2003compressed.pdf.

Full text
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct, so complex computer models or codes are studied in their place; this leads to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing technique in statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of the number of computer experiments and how they should be augmented is studied, and attention is given to the case when the response is a function over time.
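A standard space-filling design for choosing the input runs of a computer experiment is the Latin hypercube: each input range is split into as many strata as there are runs, and every stratum is sampled exactly once per input. A minimal sketch (the function name and sizes are illustrative, not taken from the thesis):

```python
import numpy as np

def latin_hypercube(n_runs, n_inputs, seed=None):
    """One random Latin hypercube design on [0, 1)^n_inputs."""
    rng = np.random.default_rng(seed)
    design = np.empty((n_runs, n_inputs))
    for j in range(n_inputs):
        # Each column: a random permutation of the n_runs strata,
        # jittered uniformly within each stratum.
        design[:, j] = (rng.permutation(n_runs) + rng.random(n_runs)) / n_runs
    return design

X = latin_hypercube(10, 3, seed=0)
```

By construction, projecting the design onto any single input axis gives exactly one point per stratum, which is what makes these designs attractive when only a small number of expensive code runs is affordable.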
APA, Harvard, Vancouver, ISO, and other styles
26

Weil, Jacques-Arthur. "Méthodes effectives en théorie de Galois différentielle et applications à l'intégrabilité de systèmes dynamiques." Habilitation à diriger des recherches, Université de Limoges, 2013. http://tel.archives-ouvertes.fr/tel-00933064.

Full text
Abstract:
My research focuses mainly on developing computer algebra methods for the constructive study of linear differential equations, particularly around differential Galois theory. This work ranges from developing the underlying theory to designing algorithms, including their implementation in Maple. These works share an experimental approach to mathematics, emphasizing the examination of the most relevant examples possible. The detailed study of cases arising from rational mechanics or theoretical physics in turn feeds the development of suitable mathematical theories. My work is organized around three interdependent themes: effective differential Galois theory, its applications to the integrability of Hamiltonian systems, and applications in theoretical physics.
APA, Harvard, Vancouver, ISO, and other styles
27

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract:
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development, I suggest updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
APA, Harvard, Vancouver, ISO, and other styles
28

Olivier, Géraldine. "Adaptation de maillage anisotrope par prescription de champ de métriques appliquée aux simulations instationnaires en géométrie mobile." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2011. http://tel.archives-ouvertes.fr/tel-00739406.

Full text
Abstract:
This thesis deals with time-dependent simulations involving fixed or moving geometries. Industry has growing expectations for this type of simulation, wishing to run such computations routinely in its research centers, which is clearly not the case today. This work attempts to partially meet that demand and aims in particular to improve the accuracy and computational efficiency of the algorithms currently used in this context. Anisotropic mesh adaptation by prescription of a metric field, a technique that has now reached a certain maturity, notably in its application to steady simulations, is a very promising avenue for improving time-evolving computations, but its extension to this setting is far from trivial. As for its use in moving-geometry simulations, only a few attempts can be found, and very few address realistic three-dimensional problems. This study presents several novel contributions to these questions, notably the extension of multi-scale, metric-based mesh adaptation to unsteady problems with fixed and moving geometries. Moreover, chiefly with a view to reducing computation time, an original strategy was adopted to perform computations involving moving meshes. In particular, this thesis demonstrates in practice that three-dimensional objects can be moved over large distances while keeping the number of mesh vertices constant, that is, by restricting the types of mesh-modification operations allowed. This yields a substantial saving in computation time, both for the mesh motion and for the numerical solution.
Furthermore, a new scheme is proposed that handles mesh connectivity changes consistently with the Arbitrary-Lagrangian-Eulerian description of the physical equations. Most of these new methods were applied to the simulation of compressible fluid flows around complex geometries in two and three space dimensions.
APA, Harvard, Vancouver, ISO, and other styles
29

(9876842), T. Janz. "Irrigation scheduling : a mathematical model for water movement in cropped soils incorporating sink term and evaporation front." Thesis, 1992. https://figshare.com/articles/thesis/Irrigation_scheduling_a_mathematical_model_for_water_movement_in_cropped_soils_incorporating_sink_term_and_evaporation_front/13430102.

Full text
Abstract:
The principles of using mathematical models to describe processes involved in the movement of water in soils are surveyed from the literature. Various models are considered within a classification system based on the degree of empiricism or mechanism of the approach. Empirical models are compared and contrasted with mechanistic models and the role of these models in agricultural practice is discussed. A new empirical mathematical model to describe the uptake of water by plant roots is developed through a sink term and combined with well established models including the Richards' equation to provide a paradigm for the movement of water throughout the soil/plant system. Methods of solution of the model are considered and a finite difference method is employed to provide a computer implementation of the solutions under a range of initial and boundary conditions. The computer simulation was found to be easily adapted to a variety of field situations. In particular, the introduction of the 'evaporation front' concept and its embodiment in the new sink term, provide insights into the criteria for scheduling irrigations, laying the basis for field verification and investigation. The use of this mathematical model for determining an optimal irrigation regime is discussed in relation to conventional scheduling methods.
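The kind of finite-difference treatment the abstract mentions can be sketched for a simplified moisture-content form of the flow equation with a root-uptake sink, dθ/dt = d/dz(D dθ/dz) − S(z). All coefficients, grid sizes, boundary conditions and the constant diffusivity below are toy assumptions for illustration, not the thesis's actual model:

```python
import numpy as np

nz, dz, dt = 50, 0.02, 10.0        # grid cells, cell size (m), time step (s)
D = 1e-7                           # constant soil-water diffusivity, m^2/s (toy)
theta = np.full(nz, 0.30)          # initial volumetric water content profile
S = np.zeros(nz)
S[:10] = 1e-7                      # root-uptake sink in the top zone, 1/s (toy)

for _ in range(500):               # explicit time stepping (stable: D*dt/dz^2 << 0.5)
    flux = -D * np.diff(theta) / dz                 # Darcian flux between cells
    theta[1:-1] += dt * (-np.diff(flux) / dz) - dt * S[1:-1]
    theta[0] = 0.35                                 # wet top boundary (irrigation)
    theta[-1] = theta[-2]                           # free drainage at the bottom
```

In the root zone the sink term draws the profile down while the wet surface boundary rewets the topmost cells, which is the kind of balance an irrigation-scheduling criterion would monitor.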
APA, Harvard, Vancouver, ISO, and other styles
30

Mefford, Tim. "A parallel numerical computation of nucleon scattering from nuclei, including full spin coupling and coulomb forces." Thesis, 1995. http://hdl.handle.net/1957/34619.

Full text
Abstract:
The microscopic, momentum space, optical potential description of spin 1/2 x 1/2 scattering is extended to include the coupling of the singlet-triplet spin channels and the exact handling of the Coulomb force. Computing performance in constructing the optical potential and in solving the coupled-channels Lippmann-Schwinger equations is enhanced by parallelization via the PVM library. Cross sections and spin observables are predicted for p-¹³C and p-³He elastic scattering at 500 MeV. The complete set of nucleon-trinucleon reactions is calculated to investigate the sensitivity of these reactions to charge-symmetry breaking effects.
Graduation date: 1996
APA, Harvard, Vancouver, ISO, and other styles
31

Goens, Jokisch Andres Wilhelm. "Improving Model-Based Software Synthesis: A Focus on Mathematical Structures." 2021. https://tud.qucosa.de/id/qucosa%3A74884.

Full text
Abstract:
Computer hardware keeps increasing in complexity. Software design needs to keep up with this. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation, as a basis for software design, in a family of methods called software synthesis. We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries or geometric properties to improve mapping methods in general. This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis on dataflow applications. We start with the problem of assessing these methods through benchmarking, and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking. We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, which are designed to be a deterministic, adaptive model with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also focus on programmability. The focus of the thesis is in the models and methods, but we evaluate them in diverse use-cases, generally centered around Cyber-Physical Systems. These include the 5G telecommunication standard, automotive and signal processing domains. We even go beyond embedded systems and discuss use-cases in GPU programming and microservice-based architectures.
APA, Harvard, Vancouver, ISO, and other styles
32

Márquez, Braconi Agustín Daniel. "Framework para aprendizaje activo." Bachelor's thesis, 2018. http://hdl.handle.net/11086/14276.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2018.
Many Machine Learning projects today require a large number of tagged data to train the algorithms. The labeling of them has a great cost both economic and time. A solution to this problem is Active Learning, an intelligent way to select which instances to label to maximize learning. To facilitate this task I propose to make a software framework that serves to deploy projects of this type. The developed framework was tested achieving excellent results, showing that from the same data set, if you select the instances to label intelligently you can achieve maximum performance with a considerably smaller number of examples.
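The core of intelligently selecting which instances to label can be sketched as a least-confidence query rule, one of several standard Active Learning strategies (the abstract does not specify which strategy the framework implements, so this is a generic illustration):

```python
import numpy as np

def least_confidence_query(probs, k):
    """Pick the k unlabeled instances whose top class probability is
    lowest, i.e. those the current model is least confident about."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy pool of predicted class probabilities from the current model.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],   # most uncertain -> queried first
                  [0.80, 0.20],
                  [0.60, 0.40]])
query = least_confidence_query(probs, 2)
```

In the loop the abstract describes, the queried instances would be sent to a human annotator, added to the labeled set, and the model retrained, repeating until the labeling budget is spent.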
Márquez Braconi, Agustín Daniel. Universidad Nacional de Córdoba. Facultad de Matemática, Astronomía, Física y Computación; Argentina.
APA, Harvard, Vancouver, ISO, and other styles
33

Ulerich, Rhys David. "Reducing turbulence- and transition-driven uncertainty in aerothermodynamic heating predictions for blunt-bodied reentry vehicles." Thesis, 2014. http://hdl.handle.net/2152/26886.

Full text
Abstract:
Turbulent boundary layers approximating those found on the NASA Orion Multi-Purpose Crew Vehicle (MPCV) thermal protection system during atmospheric reentry from the International Space Station have been studied by direct numerical simulation, with the ultimate goal of reducing aerothermodynamic heating prediction uncertainty. Simulations were performed using a new, well-verified, openly available Fourier/B-spline pseudospectral code called Suzerain equipped with a "slow growth" spatiotemporal homogenization approximation recently developed by Topalian et al. A first study aimed to reduce turbulence-driven heating prediction uncertainty by providing high-quality data suitable for calibrating Reynolds-averaged Navier-Stokes turbulence models to address the atypical boundary layer characteristics found in such reentry problems. The two data sets generated were Ma ≈ 0.9 and 1.15 homogenized boundary layers possessing Re_θ ≈ 382 and 531, respectively. Edge-to-wall temperature ratios, T_e/T_w, were close to 4.15, and wall blowing velocities, v_w^+ = v_w/u_τ, were about 8 × 10⁻³. The favorable pressure gradients had Pohlhausen parameters between 25 and 42. Skin friction coefficients around 6 × 10⁻³ and Nusselt numbers under 22 were observed. Near-wall vorticity fluctuations show qualitatively different profiles than observed by Spalart (J. Fluid Mech. 187 (1988)) or Guarini et al. (J. Fluid Mech. 414 (2000)). Small or negative displacement effects are evident. Uncertainty estimates and Favre-averaged equation budgets are provided. A second study aimed to reduce transition-driven uncertainty by determining where on the thermal protection system surface the boundary layer could sustain turbulence.
Local boundary layer conditions were extracted from a laminar flow solution over the MPCV which included the bow shock, aerothermochemistry, heat shield surface curvature, and ablation. That information, as a function of leeward distance from the stagnation point, was approximated by Re_θ, Ma_e, [mathematical equation], v_w^+, and T_e/T_w along with perfect gas assumptions. Homogenized turbulent boundary layers were initialized at those local conditions and evolved until either stationarity, implying the conditions could sustain turbulence, or relaminarization, implying the conditions could not. Fully turbulent fields relaminarized subject to conditions 4.134 m and 3.199 m leeward of the stagnation point. However, different initial conditions produced long-lived fluctuations at leeward position 2.299 m. Locations more than 1.389 m leeward of the stagnation point are predicted to sustain turbulence in this scenario.
APA, Harvard, Vancouver, ISO, and other styles
34

Guidoum, Arsalane. "Conception d'un Pro Logiciel Interactif sous R pour la Simulation de Processus de Diffusion." Phd thesis, 2012. http://tel.archives-ouvertes.fr/tel-00735806.

Full text
Abstract:
In this work, we propose a new package, Sim.DiffProc, for the simulation of diffusion processes, equipped with a graphical user interface (GUI), in the R language. Advances in computing tools (software and hardware) in recent years motivated this work. With this package, many difficult theoretical problems tied to the use of diffusion processes can be handled in practical research, such as the numerical simulation of trajectories of the solution of an SDE. This allows users in many different fields to employ it as a sophisticated tool for modeling their practical problems. The problem of pollutant dispersion in the presence of an attractive domain, treated in this work, is a good example: it shows the practical usefulness and importance of diffusion processes in modeling and simulating complex real situations. The density function of the random variable tau(c), the first passage time through the boundary of the domain of attraction, can be used to determine the concentration rate of pollutant particles inside the domain. The simulation studies and statistical analyses implemented with the Sim.DiffProc package prove efficient and high-performing compared with the theoretical results determined, explicitly or approximately, by the diffusion-process models considered.
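The kind of SDE trajectory simulation such a package provides can be sketched with a basic Euler-Maruyama scheme, here in Python rather than R (this is a generic method for dX = a(X)dt + b(X)dW, not the package's actual code; the drift and diffusion below, an Ornstein-Uhlenbeck-like pull toward an attractive point, are purely illustrative):

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, t_max, n_steps, seed=0):
    """Simulate one trajectory of dX = drift(X) dt + diffusion(X) dW
    with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# Toy attractive domain: the process is pulled back toward 0.
path = euler_maruyama(lambda x: -2.0 * x, lambda x: 0.5,
                      x0=1.0, t_max=5.0, n_steps=1000)
```

Repeating such simulations many times and recording when each trajectory first crosses the boundary of the domain gives an empirical estimate of the first-passage-time density that the abstract discusses.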
APA, Harvard, Vancouver, ISO, and other styles
