Dissertations / Theses on the topic 'Differential equations, Partial Data processing'

Consult the top 48 dissertations / theses for your research on the topic 'Differential equations, Partial Data processing.'


1

He, Chuan. "Numerical solutions of differential equations on FPGA-enhanced computers." College Station, Tex.: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1248.

2

Karasev, Peter A. "Feedback augmentation of PDE-based image segmentation algorithms using application-specific exogenous data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50257.

Abstract:
This thesis is divided into five chapters. The scope of problems considered is defined in Chapter I. Next, Chapter II provides background material on image processing with partial differential equations and a review of prior work in the field. Chapter III covers the medical imaging portion of the research; the key contribution is a control-based algorithm for interactive image segmentation. Applications of the feedback-augmented level set method to fracture reconstruction and surgical planning are shown. Problems in vision-based control are considered in Chapters IV and V: a method of improving performance in closed-loop target tracking using level set segmentation is developed, with unmanned aerial vehicle and next-generation missile guidance being the primary applications of interest. Throughout this thesis, the two application types are connected into a unified viewpoint of open-loop systems that are augmented by exogenous data.
3

Lazcano, Vanel. "Some problems in depth enhanced video processing." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/373917.

Abstract:
In this thesis we tackle two problems: the data interpolation problem in the context of depth computation, both for images and for videos, and the problem of estimating the apparent movement of objects in image sequences. The first problem deals with the completion of depth data in a region of an image or video where data are missing due to occlusions, unreliable data, damage or loss of data during acquisition. In this thesis we tackle it in two ways. First, we propose a non-local gradient-based energy which is able to complete planes locally. We consider this model as an extension of the bilateral filter to the gradient domain. We have successfully evaluated our model on the completion of synthetic depth images and of incomplete depth maps provided by a Kinect sensor. The second approach to the problem is an experimental study of the biased Absolutely Minimizing Lipschitz Extension (biased AMLE for short) for anisotropic interpolation of depth data into large empty regions without information. The AMLE operator is a cone interpolator, but the biased AMLE is an exponential cone interpolator, which makes it better adapted to depth maps of real scenes, which usually present soft convex or concave surfaces. Moreover, the biased AMLE operator is able to expand depth data into huge regions. By considering the image domain endowed with an anisotropic metric, the proposed method is able to take the underlying geometric information into account in order not to interpolate across the boundaries of objects at different depths. We have proposed a numerical model to compute the solution of the biased AMLE, based on eikonal operators, and we have extended this numerical model to video sequences. The second problem deals with the motion estimation of objects in a video sequence, known as optical flow computation. The optical flow problem is one of the most challenging problems in computer vision. Traditional models fail to estimate it in the presence of occlusions and non-uniform illumination. To tackle these problems we propose a variational model to jointly estimate optical flow and occlusion. Moreover, the proposed model is able to deal with the usual drawback of variational methods, namely fast displacements of objects in the scene that are larger than the object itself. The addition of a term balancing gradients and intensities increases the robustness of the proposed model to illumination changes, and the inclusion of supplementary matches given by exhaustive search at specific locations helps to follow large displacements.
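For orientation, the interpolation operators named above, in the standard form used in the AMLE literature (assumed here; the thesis works with an anisotropic, metric-dependent variant): the AMLE interpolant solves the infinity-Laplace equation on the hole, and the biased AMLE adds a drift weighted by a bias parameter β.

```latex
\Delta_{\infty} u := \frac{\langle D^{2}u \,\nabla u,\ \nabla u \rangle}{|\nabla u|^{2}} = 0
  \quad \text{in } \Omega, \qquad u|_{\partial\Omega} \text{ given (AMLE);}
\qquad
\Delta_{\infty} u + \beta\, |\nabla u| = 0 \quad \text{in } \Omega \quad \text{(biased AMLE).}
```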
4

Michel, Thomas. "Analyse mathématique et calibration de modèles de croissance tumorale." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0222/document.

Abstract:
In this thesis, we present several works on the study and the calibration of partial differential equation models for tumor growth. The first part is devoted to the mathematical study of a model for tumor drug resistance in the case of gastro-intestinal tumor (GIST) metastases to the liver. The model consists of a coupled system of partial differential equations and takes several treatments into account, such as an anti-angiogenic treatment; it is able to reproduce clinical data. We first present the proof of existence/uniqueness of the solution to this model. Then, we study the asymptotic behavior of the solution when a parameter of the model, describing the capacity of the tumor to evacuate the necrosis, goes to 0. In the second part of this thesis, we present the development of a model for tumor spheroid growth, together with its calibration against in vitro experimental data. The main objective of this work is to reproduce quantitatively the distribution of proliferative cells in a spheroid as a function of the concentration of nutrients. The modeling and calibration were carried out using experimental data consisting of the spatial distribution of proliferative cells in a tumor spheroid.
5

Sum, Kwok-wing Anthony. "Partial differential equation based methods in medical image processing." E-thesis, The University of Hong Kong, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38958624.

6

Ozmen, Neslihan. "Image Segmentation And Smoothing Via Partial Differential Equations." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610395/index.pdf.

Abstract:
In image processing, partial differential equation (PDE) based approaches have been used extensively in segmentation and smoothing applications. The Perona-Malik nonlinear diffusion model was the first PDE-based method used in image smoothing tasks. Afterwards, the classical Mumford-Shah model was developed to solve both image segmentation and smoothing problems; it is based on the minimization of an energy functional. It has numerous application areas such as edge detection, motion analysis, medical imagery and object tracking. The model finds a partition of an image by using a piecewise smooth representation of the image. Unfortunately, numerical procedures for minimizing the Mumford-Shah functional face difficulties because the problem is non-convex and has numerous local minima, so approximate approaches have been proposed. Two such methods are the Ambrosio-Tortorelli approximation and the Chan-Vese active contour method. Ambrosio and Tortorelli developed a practical numerical implementation of the Mumford-Shah model based on an elliptic approximation of the original functional. The Chan-Vese model is a piecewise constant generalization of the Mumford-Shah functional and is based on a level set formulation. Another widely used image segmentation technique is the "Active Contours" (Snakes) model, which is related to the Chan-Vese model. In this study, all these approaches are examined in detail. The mathematical and numerical analysis of these models is studied, and experiments are performed to compare their performance.
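For reference, the two central models named above, in their standard textbook forms (notation assumed): the Perona-Malik diffusion PDE with edge-stopping function g, and the Mumford-Shah energy over an image domain Ω with data f and edge set K.

```latex
\partial_{t} u = \operatorname{div}\!\big( g(|\nabla u|)\, \nabla u \big),
\qquad g(s) = \frac{1}{1 + s^{2}/\lambda^{2}};
\qquad
E(u, K) = \int_{\Omega \setminus K} |\nabla u|^{2}\, dx
        + \alpha \int_{\Omega} (u - f)^{2}\, dx
        + \beta\, \mathcal{H}^{1}(K).
```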
7

Sum, Kwok-wing Anthony, and 岑國榮. "Partial differential equation based methods in medical image processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38958624.

8

Kadhum, Nashat Ibrahim. "The spline approach to the numerical solution of parabolic partial differential equations." Thesis, Loughborough University, 1988. https://dspace.lboro.ac.uk/2134/6725.

Abstract:
This thesis is concerned with the numerical solution of partial differential equations. Initially, some definitions and mathematical background are given, accompanied by the basic theory of solving linear systems and other related topics, together with an introduction to splines, particularly cubic splines and their identities. The methods used to solve parabolic partial differential equations are surveyed and classified into explicit or implicit (direct and iterative) methods. We concentrate on the Alternating Direction Implicit (ADI), the Group Explicit (GE) and the Crank-Nicolson (C-N) methods. A new method, the Splines Group Explicit Iterative Method, is derived and a theoretical analysis is given. An optimum single parameter is found for a special case. Two criteria for the acceleration parameters are considered: the Peaceman-Rachford and the Wachspress criteria. The method is tested for different numbers of both parameters, and also with single parameters, i.e. when used as a direct method. The numerical results and the computational complexity analysis are compared with other methods and shown to be competitive. The method is shown to have good stability properties and achieves high accuracy in the numerical results. Another direct explicit method is developed from cubic splines: the Splines Group Explicit Method, which includes a parameter that can be chosen to give optimum results. Some analysis and the computational complexity of the method are given, with numerical results confirming the efficiency and compatibility of the method. Extensions to two-dimensional parabolic problems are given in a further chapter. In this thesis the Dirichlet, the Neumann and the periodic boundary conditions for linear parabolic partial differential equations are considered. The thesis concludes with some conclusions and suggestions for further work.
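For reference, the Crank-Nicolson scheme named above, written for the model heat equation u_t = u_xx on a uniform grid (standard form; the thesis's spline-based schemes operate in this same setting):

```latex
\frac{u_{j}^{n+1} - u_{j}^{n}}{\Delta t}
= \frac{1}{2\,\Delta x^{2}}
  \Big[ \big( u_{j+1}^{n+1} - 2u_{j}^{n+1} + u_{j-1}^{n+1} \big)
      + \big( u_{j+1}^{n} - 2u_{j}^{n} + u_{j-1}^{n} \big) \Big].
```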
9

Elyan, Eyad, and Hassan Ugail. "Reconstruction of 3D human facial images using partial differential equations." Academy Publisher, 2007. http://hdl.handle.net/10454/2644.

Abstract:
One of the challenging problems in geometric modeling and computer graphics is the construction of realistic human facial geometry. Such geometry is essential for a wide range of applications, such as 3D face recognition, virtual reality, facial expression simulation and computer-based plastic surgery. This paper addresses a method for the construction of the 3D geometry of human faces based on the use of elliptic Partial Differential Equations (PDEs). Here the geometry corresponding to a human face is treated as a set of surface patches, whereby each surface patch is represented using four boundary curves in 3-space that formulate the appropriate boundary conditions for the chosen PDE. These boundary curves are extracted automatically from 3D data of human faces obtained with a 3D scanner. The solution of the PDE generates a continuous single surface patch describing the geometry of the original scanned data. In this study, through a number of experimental verifications, we show the efficiency of the PDE-based method for 3D facial surface reconstruction from scan data. In addition, we show that our approach provides an efficient way of representing a face using a small set of parameters that could be utilized for efficient facial data storage and verification purposes.
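For context, the elliptic PDE most commonly used in this line of PDE-surface work is the Bloor-Wilson equation (an assumption here, since the abstract does not name the exact operator); it generates a patch X(u,v) from four boundary curves, with a smoothing parameter a weighting the two parametric directions:

```latex
\left( \frac{\partial^{2}}{\partial u^{2}} + a^{2}\, \frac{\partial^{2}}{\partial v^{2}} \right)^{\!2}
\mathbf{X}(u, v) = \mathbf{0},
\qquad \mathbf{X} \text{ and its derivatives prescribed on the patch boundary.}
```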
10

Dong, Bin. "Applications of variational models and partial differential equations in medical image and surface processing." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1872060431&sid=3&Fmt=2&clientId=1564&RQT=309&VName=PQD.

11

Brandvik, Tobias. "The implementation of PDE solvers on parallel processors." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609958.

12

Zhang, Chun Yang. "A second order ADI method for 2D parabolic equations with mixed derivative." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592940.

13

Manay, Siddharth. "Applications of anti-geometric diffusion of computer vision : thresholding, segmentation, and distance functions." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/33626.

14

Al-Aboodi, Maher. "Enhanced receiver architectures for processing multi GNSS signals in a single chain : based on partial differential equations mathematical model." Thesis, University of Buckingham, 2016. http://bear.buckingham.ac.uk/136/.

Abstract:
The focus of our research is on designing a new architecture (RF front-end and digital) for processing multiple GNSS signals in a single receiver chain. The motivation is to save on the overhead costs (size, processing time and power consumption) of implementing multiple signal receivers side-by-side on-board smartphones. This thesis documents the new multi-signal receiver architecture that we have designed. Based on this architecture, we have achieved and published eight novel contributions: six focus on multi-GNSS signal receivers, and the last two multiplex Bluetooth and GPS received signals in a single processing chain. We believe the innovative techniques achieved are a major contribution to the commercial world, especially that of smartphones: savings in both silicon size and processing time are highly beneficial for reducing costs, but more importantly for conserving battery energy. The first part of the work focuses on two GNSS signal detection front-end approaches designed to explore the availability of the L1 band of GPS, Galileo and GLONASS at an early stage, so that the receiver devotes appropriate resources to acquiring them. The first approach was based on folding the carrier frequencies of all three GNSS signals, with their harmonics, into the First Nyquist Zone (FNZ), as in the BandPass Sampling Receiver (BPSR) technique. Consequently, there is a unique power distribution of these folded signals, depending on which signals are actually present, that can be detected to alert the digital processing parts to acquire them. A Volterra series model is used to estimate the power present in the FNZ by extracting the kernels of these folded GNSS signals, if available. The second approach filters out the right-side lobe of the GLONASS signal and the left-side lobe of the Galileo signal prior to the folding process in our BPSR implementation; this filtering is important to enable non-overlapping folding of these two signals with the GPS signal in the FNZ. The simulation results show that adopting these two approaches can save much valuable acquisition processing time. Our Orthogonal BandPass Sampling Receiver and Orthogonal Complex BandPass Sampling Receiver are two methods designed to capture any two wireless signals simultaneously and use a single channel in the digital domain to process them, including tracking and decoding, concurrently. The novelty of the two receivers is centred on the Orthogonal Integrated Function (OIF), which continuously harmonises the two received signals to form a single orthogonal signal, allowing the tracking and decoding to be carried out by a single digital channel. These receivers employ a Hilbert transform to shift one of the input signals by 90 degrees; the BPSR technique is then used to fold the two received signals back to the same reference frequency in the FNZ. Results show that these methods also reduce the sampling frequency to a rate proportional to the maximum bandwidth, instead of the summation of bandwidths, of the input signals. Two combined GPS L1CA and L2C signal acquisition channels are designed based on the idea of the OIF, to improve the power consumption and implementation complexity of existing combination methods and to enhance acquisition sensitivity.

This is achieved by removing the Doppler frequency of the two signals; our methods add the in-phase component of the L2C signal to the in-phase component of the L1CA signal, which is then shifted by 90 degrees before being added to the remaining components of the two signals, resulting in an orthogonal form of the combined signals. This orthogonal signal is then fed to our developed version of the parallel-code-phase-search engine. Our simulation results illustrate that the acquisition sensitivity for these signals is successfully improved by 5.0 dB, which is necessary for acquiring weak signals in harsh environments. The last part of this work focuses on the tracking stage, specifically when multiplexing Bluetooth and L1CA GPS signals in a single channel based on the OIF concept, where the tracking channel can be shared between the two signals without losing lock or degrading performance. Two approaches are designed for integrating the two signals, based on a mathematical analysis of the main function of the tracking channel, which is the Phase-Locked Loop (PLL). A mathematical model consisting of a set of differential equations has been developed to evaluate the PLL when it is used to track and demodulate two signals simultaneously. The simulation results prove that the implementation of our approaches reduces the size and processing time by almost half.
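As a rough illustration of the folding step behind the BPSR technique described above: an RF carrier undersampled at rate f_s aliases into the First Nyquist Zone [0, f_s/2]. The sketch below is a hedged reconstruction; the sampling rate and the folding rule in its standard form are assumptions, not values from the thesis.

```python
import numpy as np

# Hedged sketch of bandpass-sampling frequency folding (BPSR): a carrier at
# f_c, undersampled at f_s, aliases ("folds") into the First Nyquist Zone
# [0, f_s/2]. The sampling rate below is illustrative only.

def folded_frequency(f_c, f_s):
    """Alias of a carrier f_c after undersampling at f_s (First Nyquist Zone)."""
    f = f_c % f_s                      # wrap into [0, f_s)
    return f if f <= f_s / 2 else f_s - f

f_s = 56.32e6                          # assumed undersampling rate, Hz
for name, f_c in [("GPS L1", 1575.42e6),
                  ("Galileo E1", 1575.42e6),
                  ("GLONASS L1 centre", 1602.0e6)]:
    print(f"{name}: {f_c/1e6:.2f} MHz folds to "
          f"{folded_frequency(f_c, f_s)/1e6:.2f} MHz")
```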
15

Butt, Muhammad Akmal. "Continuous and discrete approaches to morphological image analysis with applications : PDEs, curve evolution, and distance transforms." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/15465.

16

Ugail, Hassan, and Eyad Elyan. "Efficient 3D data representation for biometric applications." IOS Press, 2007. http://hdl.handle.net/10454/2683.

Abstract:
An important issue in many of today's biometric applications is the development of efficient and accurate techniques for representing related 3D data. Such data are often available through the digitization of complex geometric objects which are of importance to biometric applications. For example, in the area of 3D face recognition, a digital point cloud of data corresponding to a given face is usually provided by a 3D scanner. For efficient data storage, and for identification/authentication in a timely fashion, such data need to be represented using a few meaningful parameters or variables. Here we show how mathematical techniques based on Partial Differential Equations (PDEs) can be utilized to represent complex 3D data in an efficiently parameterized way. For example, in the case of a 3D face we show how it can be represented using PDEs whereby a handful of key facial parameters can be identified for efficient storage and verification.
17

Cereijo, Martinez Maria. "A new parallel technique for the solution of sparse nonlinear equations." FIU Digital Commons, 1994. http://digitalcommons.fiu.edu/etd/2097.

Abstract:
Solving nonlinear systems of equations is a central problem in numerical analysis, with enormous significance for science and engineering. A special case, sparse systems of equations, occurs frequently in various applications. Sparsity occurs in the analysis of many types of complex systems because of the local nature of the dependence or connectivity among system components. One such system which may be modeled by a nonlinear sparse set of equations is the power system load flow analysis. This is a mathematical study performed by electrical utilities to monitor the electrical power system. The data from system components are used to create a set of nonlinear equations. These equations are then solved to find the voltage profile of the power network. With these data, control and security of the power system are achieved. Solving problems of this type is very time consuming when the system is large. This dissertation proposes a highly parallel computer architecture for solving large sets of nonlinear sparse equations. The goal of this architecture is to reduce the processing time required to solve this type of problem. In particular, the load flow problem is analyzed and implemented on this architecture. For the FPL network, the speed is increased by a factor of about 2000.
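For orientation, a minimal serial sketch of the computational kernel such an architecture parallelises: Newton's method with a sparse Jacobian, as used in load flow analysis. This is a hedged illustration; the toy system is invented and nothing about the dissertation's hardware design is reproduced.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spsolve

# Serial baseline: Newton's method for a sparse nonlinear system F(x) = 0.
# Each iteration performs one sparse linear solve with the Jacobian J(x).

def newton_sparse(F, J, x0, tol=1e-10, max_iter=50):
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x += spsolve(csc_matrix(J(x)), -r)   # one sparse solve per step
    return x

# Toy 2x2 system, invented for demonstration: root at (1, 2)
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(newton_sparse(F, J, np.array([1.0, 1.0])))
```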
18

Mantzel, William. "Parametric estimation of randomly compressed functions." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49053.

Abstract:
Within the last decade, a new type of signal acquisition has emerged called compressive sensing that has proven especially useful in providing a recoverable representation of sparse signals. This thesis presents similar results for compressive parametric estimation. Here, signals known to lie on some unknown parameterized subspace may be recovered via randomized compressive measurements, provided the number of compressive measurements is a small factor above the product of the parametric dimension with the subspace dimension, with an additional logarithmic term. In addition to potential applications that simplify the acquisition hardware, there is also the potential to reduce the computational burden in other applications, and we explore one such application in depth in this thesis. Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. We introduce a technique that mitigates this computational workload by "compressing" these computations. Drawing on key concepts from the recently developed field of compressed sensing, we show how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. Then, the source can be located by performing a number of "short" correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide are presented which demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime, where using as few as two random backpropagations per frequency performs almost as well as traditional broadband MFP, but with the added benefit of generic applicability: the computationally intensive backpropagations may be computed offline independently of the received signals, and may be reused to locate any source within the search grid area. This thesis also introduces a round-robin approach for multi-source localization based on matched-field processing. Each new source location is estimated from the ambiguity function after nulling the current source location estimates from the data vector using a robust projection matrix. This projection matrix effectively minimizes mean-square energy near current source location estimates, subject to a rank constraint that prevents excessive interference with sources outside of these neighborhoods. Numerical simulations are presented for multiple sources transmitting through a generic Pekeris ocean waveguide that illustrate the performance of the proposed approach, which compares favorably against other previously published approaches. Furthermore, the efficacy with which randomized backpropagations may be incorporated for computational advantage (as in the case of compressive parametric estimation) is also presented.
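The following toy sketch illustrates the compressed-correlation idea described above, under stated assumptions: the Green's functions are random stand-ins rather than solutions of an ocean-acoustic model, and Phi plays the role of the randomly backpropagated receiver vectors computed offline.

```python
import numpy as np

# Toy compressed matched-field processing: correlate the data with grid
# Green's functions in an m-dimensional random compressed space instead of
# the full n-dimensional receiver space. All data here are synthetic.

rng = np.random.default_rng(0)
n, m, n_grid = 64, 8, 500              # receivers, compressed dim, grid points
G = rng.standard_normal((n_grid, n))   # row k: Green's function of grid point k
d = G[123] + 0.1 * rng.standard_normal(n)     # data from true source (k = 123)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random compression matrix
G_proxy = G @ Phi.T                    # low-dimensional proxy, built offline
ambiguity = np.abs(G_proxy @ (Phi @ d))          # "short" correlations
print("estimated source index:", int(np.argmax(ambiguity)))   # expect 123
```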
19

Lu, Xinyou. "Inversion of controlled-source audio-frequency magnetotelluric data." Thesis, University of Washington, 1999. http://hdl.handle.net/1773/6799.

20

Maxwell, David A. "Initial data for black holes and rough spacetimes." Thesis, University of Washington, 2004. http://hdl.handle.net/1773/5776.

21

Wei, Fajin. "Stochastic Infinity-Laplacian equation and One-Laplacian equation in image processing and mean curvature flows : finite and large time behaviours." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/7345.

Abstract:
In Part I, the existence of pathwise stationary solutions of the stochastic partial differential equation (SPDE) studied in the thesis is demonstrated. In Part II, a connection between a certain kind of state-constrained controlled Forward-Backward Stochastic Differential Equations (FBSDEs) and Hamilton-Jacobi-Bellman (HJB) equations is demonstrated. A special case provides a probabilistic representation of some geometric flows, including the mean curvature flow. Part II also includes a probabilistic proof of the finite-time existence of the mean curvature flow.
22

Ugail, Hassan. "3D facial data fitting using the biharmonic equation." ACTA Press, 2006. http://hdl.handle.net/10454/2684.

Abstract:
This paper discusses how a boundary-based surface fitting approach can be utilised to smoothly reconstruct a given human face where the scan data corresponding to the face are provided. In particular, the paper discusses how a solution to the biharmonic equation can be used to set up the corresponding boundary value problem, and shows how a compact explicit solution method can be utilised to solve the chosen biharmonic equation efficiently. Thus, given the raw scan data of a 3D face, we extract a series of profile curves from the data which can then be utilised as boundary conditions to solve the biharmonic equation. The resulting solution provides a continuous single surface patch describing the original face.
23

Hocking, Laird Robert. "Shell-based geometric image and video inpainting." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/281805.

Abstract:
The subject of this thesis is a class of fast inpainting methods (image or video) based on the idea of filling the inpainting domain in successive shells from its boundary inwards. Image pixels (or video voxels) are filled by assigning them a color equal to a weighted average of either their already filled neighbors (the "direct" form of the method) or those neighbors plus additional neighbors within the current shell (the "semi-implicit" form). In the direct form, pixels (voxels) in the current shell may be filled independently, but in the semi-implicit form they are filled simultaneously by solving a linear system. We focus in this thesis mainly on the image inpainting case, where the literature contains several methods corresponding to the direct form of the method; the semi-implicit form is introduced for the first time here. These methods effectively differ only in the order in which pixels (voxels) are filled, the weights used for averaging, and the neighborhood that is averaged over. All of them are very fast, but at the same time all of them leave undesirable artifacts such as "kinking" (bending) or blurring of extrapolated isophotes. This thesis has two main goals. First, we introduce new algorithms within this class, aimed at reducing or eliminating these artifacts, and targeting a specific application: the 3D conversion of images and film. The first part of this thesis introduces 3D conversion as well as Guidefill, a method in the above class adapted to the inpainting problems arising in 3D conversion. However, the second and more significant goal of this thesis is to study these algorithms as a class. In particular, we develop a mathematical theory aimed at understanding the origins of the artifacts mentioned. Through this, we seek to understand which artifacts can be eliminated (and how), and which artifacts are inevitable (and why). Most of the thesis is occupied with this second goal. Our theory is based on two separate limits: the first is a continuum limit, in which the pixel width h → 0 and the algorithm converges to a partial differential equation; the second is an asymptotic limit in which h is very small but non-zero. This latter limit, which is based on a connection to random walks, relates the inpainted solution to a type of discrete convolution. The former is useful for studying kinking artifacts, while the latter is useful for studying blur. Although all the theoretical work has been done in the context of image inpainting, experimental evidence is presented suggesting a simple generalization to video. Finally, in the last part of the thesis we explore shell-based video inpainting. In particular, we introduce spacetime transport, which is a natural generalization of the ideas of Guidefill and its predecessor, coherence transport, to three dimensions (two spatial dimensions plus one time dimension). Spacetime transport is shown to have much in common with shell-based image inpainting methods. In particular, kinking and blur artifacts persist, and the former may be alleviated in exactly the same way as in two dimensions. At the same time, spacetime transport is shown to be related to optical flow based video inpainting. In particular, a connection is derived between spacetime transport and a generalized Lucas-Kanade optical flow that does not distinguish between time and space.
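A minimal sketch of the "direct" shell-filling loop described above, under stated assumptions: uniform weights over the 8-neighbourhood and onion-peel fill order, whereas the methods studied in the thesis (coherence transport, Guidefill) differ precisely in those choices.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Direct shell-based fill: peel the inpainting domain shell by shell and set
# each shell pixel to the average of its already-known neighbours.

def shell_inpaint(img, mask):
    """img: 2D float array; mask: True where pixels are missing."""
    img, mask = img.copy(), mask.copy()
    while mask.any():
        shell = mask & ~binary_erosion(mask)       # boundary shell of the hole
        for i, j in zip(*np.where(shell)):
            ni, nj = np.meshgrid([i - 1, i, i + 1], [j - 1, j, j + 1],
                                 indexing="ij")
            ni = ni.clip(0, img.shape[0] - 1)
            nj = nj.clip(0, img.shape[1] - 1)
            known = ~mask[ni, nj]                  # already filled neighbours
            img[i, j] = img[ni, nj][known].mean()
        mask &= ~shell                             # this shell is now filled
    return img

# Example: fill a 20x20 hole cut out of a smooth ramp image
yy, xx = np.mgrid[0:64, 0:64]
image = (yy + xx) / 126.0
hole = np.zeros_like(image, dtype=bool)
hole[20:40, 20:40] = True
restored = shell_inpaint(np.where(hole, 0.0, image), hole)
```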
24

Bhikkaji, Bharath. "Model Reduction and Parameter Estimation for Diffusion Systems." Doctoral thesis, Uppsala: Acta Universitatis Upsaliensis, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-4252.

25

Kahelras, Mohamed. "Conception d'observateurs pour différentes classes de systèmes à retards non linéaires." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS005/document.

Abstract:
Time delay is a natural phenomenon present in most physical systems and engineering applications; thus, delay systems have been an active area of research in control engineering for more than 60 years. Observer design is one of the most important subjects that has been dealt with, due to the importance of observers in control engineering not only when sensing is insufficient but also when sensing reliability is needed. In this work, the main goal was to design observers for different classes of nonlinear delayed systems with an arbitrarily large delay, using different approaches. In the first part, the problem of observer design is addressed for a class of triangular nonlinear systems with a not necessarily small delay and sampled output measurements. Another major difficulty with this class of systems is the fact that the state matrix depends on the un-delayed output signal, which is not accessible to measurement. A new chain observer, composed of sub-observers in series, is designed to compensate for output sampling and arbitrarily large delays. In the second part of this work, another kind of triangular nonlinear delayed system is considered, where this time the delay is modeled as a first-order hyperbolic partial differential equation. The inverse backstepping transformation is invoked and a chain observer is developed to ensure effectiveness in the case of large delays. Finally, a new observer is designed for a class of nonlinear parabolic partial differential equations under point measurements, in the case of large delays. The observer is composed of several chained sub-observers, each compensating a fraction of the global delay. The stability analyses of the error systems are based on different Lyapunov-Krasovskii functionals, and various mathematical tools are used to prove the results. Simulation results are presented to confirm the accuracy of the theoretical results.
26

Coullon, Hélène. "Modélisation et implémentation de parallélisme implicite pour les simulations scientifiques basées sur des maillages." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE2029/document.

Abstract:
Parallel scientific computing is an expanding domain of computer science which increases the speed of calculations and offers a way to deal with heavier or more accurate computations. Thus the interest of scientific computing increases, with more precise results and bigger physical domains to study. In the particular case of scientific numerical simulations, solving partial differential equations (PDEs) is an especially heavy calculation and a perfect candidate for parallel computing. On the one hand, it is easier and easier to get access to very powerful parallel machines and clusters; on the other hand, parallel programming is hard to democratize, and most scientists are not able to use these machines. As a result, high-level programming models, frameworks, libraries, languages, etc. have been proposed to hide the technical details of parallel programming. However, in this "implicit parallelism" field, it is difficult to find the right abstraction level while keeping a low programming effort. This thesis proposes a model for writing implicit parallelism solutions for numerical simulations such as mesh-based PDE computations. This model is called "Structured Implicit Parallelism for scientific SIMulations" (SIPSim), and proposes an approach at the crossroads of existing solutions, taking advantage of each one. A first implementation of this model is proposed as a C++ library called SkelGIS, for two-dimensional Cartesian meshes. A second implementation of the model, an extension of SkelGIS, proposes an implicit parallelism solution for network simulations (which deal with simulations involving multiple physical phenomena), and is studied in detail. A performance analysis of both implementations is given on real-case simulations, and demonstrates that the SIPSim model can be implemented efficiently.
27

Fahlaoui, Tarik. "Réduction de modèles et apprentissage de solutions spatio-temporelles paramétrées à partir de données : application à des couplages EDP-EDO." Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2535.

Abstract:
In this thesis, an algorithm for learning an accurate reduced order model from data generated by a high-fidelity (HF) solver is proposed. To achieve this goal, we use both Dynamic Mode Decomposition (DMD) and Proper Orthogonal Decomposition (POD). Anomaly detection during the learning process can easily be done by performing an a posteriori spectral analysis on the learnt reduced order model. Several extensions are presented to make the method as general as possible; thus, we handle the case of coupled ODE/PDE systems and the case of second-order hyperbolic equations. The method is also extended to the case of switched control systems, where the switching rule is learnt by using an Artificial Neural Network (ANN). The learnt reduced order model allows the time evolution of the POD coefficients to be predicted. However, the POD coefficients have no interpretable meaning. To tackle this issue, we propose an interpretable reduction method using the Empirical Interpolation Method (EIM), whose low-dimensional representation consists of a restriction of the solution to selected points. This reduction method is then adapted to the case of third-order tensors and, combined with Kernel Ridge Regression (KRR), allows the solution manifold to be learnt in the case of parametrized PDEs. In this way, we can learn a parametrized reduced order model. The case of non-linear PDEs or noisy data is finally presented as an outlook.
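As a hedged illustration of the DMD step named above (sizes, the rank r and the travelling-wave toy data are assumptions; the thesis's full pipeline, including the POD/EIM and KRR machinery, is not reproduced):

```python
import numpy as np

# DMD sketch: from snapshot pairs (x_k, x_{k+1}) of a solution, fit a linear
# reduced operator through a truncated SVD of the snapshot matrix.

def dmd(X, Y, r):
    """X, Y: (n, m) snapshots with Y[:, k] the successor of X[:, k]."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T        # rank-r truncation
    A_tilde = Ur.conj().T @ Y @ Vr @ np.diag(1.0 / sr)   # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vr @ np.diag(1.0 / sr) @ W               # DMD modes
    return A_tilde, eigvals, modes

n, m, r = 128, 60, 2
x = np.linspace(0.0, 2.0 * np.pi, n)
snaps = np.array([np.sin(x - 0.1 * k) for k in range(m)]).T   # travelling wave
_, lam, _ = dmd(snaps[:, :-1], snaps[:, 1:], r)
print("eigenvalue magnitudes:", np.abs(lam))   # ~1: neutrally stable dynamics
```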
28

Antelo, Junior Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Abstract:
In non-destructive testing (NDT) with ultrasound, the signal obtained from a real data acquisition system may be contaminated by noise and the echoes may have sub-sample time delays. In some cases, these aspects may compromise the information obtained from a signal by an acquisition system. To deal with these situations, Time Delay Estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and to obtain more information about the data set. TDE techniques can be used for a number of purposes in defectoscopy, for example, for the accurate location of defects in parts, monitoring of corrosion rates, measurement of the thickness of a given material, and so on. Data reconstruction methods have a wide range of applications, such as NDT, medical imaging and telecommunications. In general, most time delay estimation techniques require a high-precision signal model; otherwise, the quality of the estimated location may be reduced. In this work, an alternating scheme is proposed that jointly estimates a reference echo and the time delays of several echoes from noisy measurements. In addition, by reinterpreting the techniques used from a probabilistic perspective, their functionality is extended through the joint application of a maximum likelihood estimator (MLE) and a maximum a posteriori (MAP) estimator. Finally, simulation results are presented to demonstrate the superiority of the proposed method over conventional methods.
29

Bringmann, Philipp. "Adaptive least-squares finite element method with optimal convergence rates." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22350.

Abstract:
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional consisting of the squared norms of the residuals of first-order systems of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh size, which seemingly prevents a reduction under mesh refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. Accordingly, a novel explicit residual-based a posteriori error estimator accomplishes the reduction property, and a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis presents the generalisation of these techniques to three linear model problems, namely the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
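For orientation, the least-squares functional in question, written for the Poisson model problem -Δu = f in its first-order (div) formulation, takes the standard form (notation assumed):

```latex
LS(f;\, \boldsymbol{\sigma}, v)
= \big\| f + \operatorname{div}\, \boldsymbol{\sigma} \big\|_{L^{2}(\Omega)}^{2}
+ \big\| \boldsymbol{\sigma} - \nabla v \big\|_{L^{2}(\Omega)}^{2},
```

minimised over σ in H(div, Ω) and v in H¹₀(Ω); its elementwise evaluation is the built-in a posteriori error estimator mentioned above.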
30

Ugail, Hassan. "3D data modelling and processing using partial differential equations." 2007. http://hdl.handle.net/10454/2672.

Abstract:
In this paper we discuss techniques for 3D data modelling and processing where the data are usually provided as point clouds arising from 3D scanning devices. The particular approach we adopt in modelling 3D data involves the use of Partial Differential Equations (PDEs). In particular, we show how the continuous and discrete versions of elliptic PDEs can be used for data modelling, and that using PDEs it is intuitively possible to model data corresponding to complex scenes. Furthermore, we show that the data can be stored in a compact format in the form of PDE boundary conditions. In order to demonstrate the methodology we utilise several examples of a practical nature.
31

Stals, Linda. "Parallel multigrid on unstructured grids using adaptive finite element methods." PhD thesis, 1995. http://hdl.handle.net/1885/138505.

32

Liu, Gi-Ren, and 劉聚仁. "Partial Differential Equations with Random Initial Data." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/96700796325659019971.

Abstract:
PhD dissertation, Graduate Institute of Mathematics, National Taiwan University, academic year 101 (2012-13).
In this thesis, we study the limiting distributions of linear systems of partial differential equations with subordinated Gaussian random initial data. When the initial data are non-random, the solutions of the linear systems are given by the convolution of the Green kernels with the initial data, so the evolution of the solutions is totally determined by their initial data. However, information regarding the initial data is obtained through some process of measurement, resulting in measurement error. In our work, we use second-order homogeneous random fields to model this measurement error and apply the spectral representation method to study the covariance matrix functions of the random solution vector fields. In view of the fact that the solution fields can be thought of as weighted sums of correlated random variables, we also consider the limiting distributions of the random solution fields from different viewpoints, including macroscopic and microscopic scales. When the random initial data are weakly dependent, our results can be thought of as a generalized central limit theorem. There are two contributions in the new results. The first is that the initial data are modeled by two cross-correlated subordinated Gaussian random fields; we use the method of Feynman diagrams to analyze the asymptotic behavior of the covariance matrix function of the random solution field induced by the random initial data. Second, the limit of the random solution vector field under the macroscopic/microscopic coordinate systems is represented by an L2-convergent series of mutually independent Gaussian random fields. We also study the limiting distributions of the solution vector field when its random initial data are long-range dependent. Compared to the previous case, the limiting law of the rescaled solution vector field is non-Gaussian and is represented by multiple Wiener integrals. In contrast to the existing mathematical literature, we find that there is a competition between the effects coming from the two components of the random initial data: one of the two components dominantly determines the structure of the limiting distribution of the random part of the solution vector field.
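A compact sketch of the setting described above, with notation assumed (the thesis treats systems of equations; the scalar case conveys the structure): the solution is the Green-kernel convolution of the initial datum, and the random initial datum is a subordinated Gaussian field,

```latex
u(t, x) = \int_{\mathbb{R}^{d}} G(t,\, x - y)\, u_{0}(y)\, dy,
\qquad u_{0}(y) = h\big( \xi(y) \big),
```

where ξ is a homogeneous Gaussian random field and h a subordinating function; the covariance matrix functions studied above then follow from the spectral representation of ξ.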
APA, Harvard, Vancouver, ISO, and other styles
33

Chen, Lujuan. "Parallel processing strategies for solving differential equations and approximation problems." Phd thesis, 1995. http://hdl.handle.net/1885/138070.

Full text
Abstract:
The need for effective parallel methods for solving problems in science and engineering is well recognized. Not only do we require algorithms which map well onto practical parallel machines, we also need methods which have well understood convergence and stability properties. In this thesis we devise techniques and parallel algorithms for solving differential equations and for data approximation, the two fundamental building blocks in scientific and engineering computing.
APA, Harvard, Vancouver, ISO, and other styles
34

Liu, Kun. "Discontinuous Galerkin Methods for Parabolic Partial Differential Equations with Random Input Data." Thesis, 2013. http://hdl.handle.net/1911/71989.

Full text
Abstract:
This thesis discusses and develops one approach to solve parabolic partial differential equations with random input data. The stochastic problem is first transformed into a parametrized one by using a finite dimensional noise assumption and the truncated Karhunen-Loeve expansion. The approach, the Monte Carlo discontinuous Galerkin (MCDG) method, randomly generates M realizations of the uncertain coefficients and approximates the expected value of the solution by averaging the M numerical solutions. This approach is applied to two numerical examples. The first example is a two-dimensional parabolic partial differential equation with a random convection term, and the second example is a benchmark problem coupling flow and transport equations. I first apply polynomial kernel principal component analysis of second order to generate M realizations of random permeability fields. These are used to obtain M realizations of the random convection term, computed by solving the flow equation. Using this approach, I solve the transport equation M times, once for each velocity realization. The MCDG solution spreads toward the whole domain from the initial location, and the contaminant does not leave the initial location completely as time elapses. The results show that the MCDG solution is realistic, because it takes the uncertainty in the velocity fields into consideration. In addition, to correct overshoot and undershoot in the solution caused by the high level of oscillation in the random velocity realizations, I solve the transport equation on meshes of finer resolution than that of the permeability, and use a slope limiter as well as lower and upper bound constraints to address this difficulty. Finally, future work is proposed.
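A minimal sketch of the Monte Carlo step is given below in Python, with a plain explicit finite-difference solver standing in for the discontinuous Galerkin discretization used in the thesis; the Karhunen-Loeve modes and all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
nx, nt, M = 64, 400, 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
T = 0.01
dt = T / nt   # small enough for explicit stability here

def kl_coefficient(xi):
    """Truncated Karhunen-Loeve expansion of a random diffusivity,
    a(x) = a0 + sum_k sqrt(lam_k) xi_k phi_k(x), clipped to stay positive."""
    a = 0.5 * np.ones_like(x)
    for k, z in enumerate(xi, start=1):
        a += (0.2 / k) * z * np.sin(k * np.pi * x)
    return np.clip(a, 0.05, None)

def solve_parabolic(a):
    """Explicit finite differences for u_t = (a u_x)_x, u = 0 on the boundary."""
    u = np.exp(-100.0 * (x - 0.5) ** 2)          # initial condition
    for _ in range(nt):
        flux = a[:-1] * np.diff(u) / dx          # a u_x at cell faces
        u[1:-1] += dt * np.diff(flux) / dx
    return u

# Monte Carlo: average the solution over M sampled coefficient realizations.
mean_u = np.zeros(nx)
for _ in range(M):
    mean_u += solve_parabolic(kl_coefficient(rng.standard_normal(4)))
mean_u /= M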
APA, Harvard, Vancouver, ISO, and other styles
35

Shipton, Jarrod Jay. "The application of non-linear partial differential equations for the removal of noise in audio signal processing." Thesis, 2017. https://hdl.handle.net/10539/24988.

Full text
Abstract:
A dissertation submitted in fulfilment of the degree of Master of Science in the Faculty of Science, School of Computer Science and Applied Mathematics, October 2017.
This work explores a new method of applying partial differential equations to audio signal processing, particularly that of noise removal. Two methods are explored and compared to the method of noise removal used in the free software Audacity(R). The first of these methods uses a non-linear variation of the diffusion equation in two dimensions, coupled with a non-linear sink/source term, in order to filter the imaginary and real components of an array of overlapping windows of the signal's Fourier transform. The second model is that of a non-linear diffusion function applied to the magnitude of the Fourier transform in order to estimate the noise power spectrum to be used in a spectral subtraction noise removal technique. The technique in this work features finite difference methods to approximate the solutions of each of the models.
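A rough Python sketch of the second model's pipeline (nonlinear diffusion of the spectrogram magnitude to estimate the noise floor, followed by spectral subtraction) might look as follows. It uses a standard Perona-Malik conduction function and illustrative parameters rather than the dissertation's exact formulation.

import numpy as np
from scipy.signal import stft, istft

def diffuse(mag, steps=20, tau=0.1, K=0.1):
    """Perona-Malik-style nonlinear diffusion of the spectrogram magnitude:
    flat regions smooth out while strong spectral edges are preserved, so
    the result approximates the slowly varying noise floor."""
    u = mag.copy()
    for _ in range(steps):
        gy, gx = np.gradient(u)
        g = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / K ** 2)  # conduction coefficient
        div_y, _ = np.gradient(g * gy)
        _, div_x = np.gradient(g * gx)
        u += tau * (div_y + div_x)
    return u

fs = 16000
t = np.arange(fs) / fs
noisy = (np.sin(2 * np.pi * 440 * t)
         + 0.3 * np.random.default_rng(1).standard_normal(fs))

f, frames, Z = stft(noisy, fs=fs, nperseg=512)
noise_floor = diffuse(np.abs(Z))
mag = np.maximum(np.abs(Z) - noise_floor, 0.0)   # spectral subtraction
_, denoised = istft(mag * np.exp(1j * np.angle(Z)), fs=fs)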
APA, Harvard, Vancouver, ISO, and other styles
36

Garza, David Marcelo. "Application of automatic differentiation to trajectory optimization via direct multiple shooting." Thesis, 2003. http://wwwlib.umi.com/cr/utexas/fullcit?p3119648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jugoo, Vikash R. "Computer analysis of equations using Mathematica." Thesis, 2001. http://hdl.handle.net/10413/3968.

Full text
Abstract:
In this thesis we analyse particular differential equations that arise in physical situations. This is achieved with the aid of the computer software package Mathematica. We first describe the basic features of Mathematica, highlighting its capabilities for performing mathematical calculations. Then we consider a first order Newtonian equation representing the trajectory of a particle around a spherical object. Mathematica is used to solve the Newtonian equation both analytically and numerically. Graphical plots of the trajectories of the planetary bodies Mercury, Earth and Jupiter are presented. We attempt a similar analysis for the corresponding relativistic equation governing the orbits of gravitational objects. Only numerical results are possible in this case. We also perform a perturbative analysis of the relativistic equation and determine the amount of perihelion shift. The second equation considered is the Emden-Fowler equation of order two, which arises in many physical problems, including certain inhomogeneous cosmological applications. The analytical features of this equation are investigated using Mathematica and the Lie analysis of differential equations. Different cases of the related autonomous form of the Emden-Fowler equation are investigated and graphically represented. Thereafter, we generate a number of profiles of the energy density and the pressure for a particular solution, which demonstrates that a numerical approach to studying inhomogeneity in cosmological models in general relativity is feasible.
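The thesis performs its computations in Mathematica; an equivalent numerical experiment for the Newtonian trajectory equation could be set up in Python as follows, with normalized units and an illustrative initial state.

import numpy as np
from scipy.integrate import solve_ivp

def newtonian_orbit(t, y, mu=1.0):
    """Planar two-body motion: y = (x, y, vx, vy), central mass at the origin."""
    rx, ry, vx, vy = y
    r3 = (rx ** 2 + ry ** 2) ** 1.5
    return [vx, vy, -mu * rx / r3, -mu * ry / r3]

# Mildly eccentric orbit in normalized units (GM = 1).
sol = solve_ivp(newtonian_orbit, (0.0, 20.0), [1.0, 0.0, 0.0, 1.1],
                t_eval=np.linspace(0.0, 20.0, 1000), rtol=1e-9)
x_traj, y_traj = sol.y[0], sol.y[1]   # ready for plotting the trajectory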
Thesis (M.Sc.)-University of Natal, Durban, 2001.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Hairong. "Sparse array representations and some selected array operations on GPUs." Thesis, 2014. http://hdl.handle.net/10539/15329.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science. Johannesburg, 2014.
A multi-dimensional data model provides a good conceptual view of the data in data warehousing and On-Line Analytical Processing (OLAP). A typical representation of such a data model is a multi-dimensional array, which is well suited when the array is dense. If the array is sparse, i.e., has a small number of non-zero elements relative to the product of the cardinalities of the dimensions, using a multi-dimensional array to represent the data set requires extremely large memory space while the actual data elements occupy a relatively small fraction of the space. Existing storage schemes for Multi-Dimensional Sparse Arrays (MDSAs) of higher dimensions k (k > 2) focus on optimizing the storage utilization and offer little flexibility in data access efficiency; most efficient storage schemes for sparse arrays are limited to matrices, that is, arrays in 2 dimensions. In this dissertation, we introduce four storage schemes for MDSAs that handle the sparsity of the array with two primary goals: reducing the storage overhead and maintaining efficient data element access. These schemes, together with a well known method referred to as the Bit Encoded Sparse Storage (BESS), were evaluated and compared on four basic array operations, namely construction of a scheme, large scale random element access, sub-array retrieval and multi-dimensional aggregation. The four proposed storage schemes, together with the evaluation results, are:
i.) the extended compressed row storage (xCRS), which extends the CRS method for sparse matrix storage to sparse arrays of higher dimensions and achieves the best data element access efficiency among the methods compared;
ii.) the bit encoded xCRS (BxCRS), which optimizes the storage utilization of xCRS by applying data compression with run length encoding, while maintaining its data access efficiency;
iii.) a hybrid approach (Hybrid), which provides the best control of the balance between storage utilization and data manipulation efficiency by combining xCRS and BESS;
iv.) the PATRICIA trie compressed storage (PTCS), which uses a PATRICIA trie to store the valid non-zero array elements; PTCS supports efficient data access and has the unique property of supporting update operations conveniently.
BESS performed best for multi-dimensional aggregation, closely followed by the other schemes. We also addressed the problem of accelerating some selected array operations using General Purpose Computing on Graphics Processing Units (GPGPU). The experimental results showed different levels of speed up, ranging from 2 to over 20 times, on large scale random element access and sub-array retrieval. In particular, we utilized GPUs for the computation of the cube operator, a special case of multi-dimensional aggregation, using BESS; this resulted in a 5 to 8 times speed up compared with our CPU-only implementation. The main contributions of this dissertation are the development, implementation and evaluation of four efficient schemes for storing multi-dimensional sparse arrays, as well as utilizing the massive parallelism of GPUs for some data warehousing operations.
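The xCRS, BxCRS, Hybrid and PTCS schemes are specific to the dissertation. As a point of reference only, a minimal dictionary-of-keys multi-dimensional sparse array supporting the same basic operations (construction, random element access, aggregation) might look like this sketch; the class and method names are illustrative.

from collections import defaultdict

class SparseMDSA:
    """Dictionary-of-keys storage for a k-dimensional sparse array:
    only non-zero elements are stored, keyed by their index tuple."""
    def __init__(self, shape):
        self.shape = shape
        self.data = {}

    def __setitem__(self, idx, value):
        if value != 0:
            self.data[idx] = value
        else:
            self.data.pop(idx, None)

    def __getitem__(self, idx):            # random element access: O(1)
        return self.data.get(idx, 0)

    def aggregate(self, axis):
        """Sum out one dimension (a basic OLAP-style aggregation)."""
        out = defaultdict(float)
        for idx, v in self.data.items():
            out[idx[:axis] + idx[axis + 1:]] += v
        return out

a = SparseMDSA((100, 100, 100))
a[3, 7, 42] = 2.5
a[3, 9, 42] = 1.5
print(a[3, 7, 42], a.aggregate(axis=1)[(3, 42)])   # 2.5 4.0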
APA, Harvard, Vancouver, ISO, and other styles
39

Webster, Clayton G. "Sparse grid stochastic collocation techniques for the numerical solution of partial differential equations with random input data." 2007. http://etd.lib.fsu.edu/theses/available/etd-03302007-154630.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2007.
Advisor: Max Gunzburger, Florida State University, College of Arts and Sciences, Dept. of Mathematics and School of Computational Science. Title and description from dissertation home page (viewed July 5, 2007). Document formatted into pages; contains xv, 160 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
40

Chen, Andy Bowei. "Application of quantitative analysis in treatment of osteoporosis and osteoarthritis." Thesis, 2013. http://hdl.handle.net/1805/3662.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
As our population ages, treating bone and joint ailments is becoming increasingly important. Both osteoporosis, a bone disease characterized by a decreased density of mineral in bone, and osteoarthritis, a joint disease characterized by the degeneration of cartilage on the ends of bones, are major causes of decreased mobility and increased pain. To combat these diseases, many treatments are offered, including drugs and exercise, and much biomedical research is being conducted. However, how can we get the most out of the research we perform and the treatments we do have? One approach is through computational analysis and mathematical modeling. In this thesis, quantitative methods of analysis are applied in different ways to two systems: osteoporosis and osteoarthritis. A mouse model simulating osteoporosis is treated with salubrinal and knee loading. The bone and cell data are used to formulate a system of differential equations modeling the response of bone to each treatment. Using Particle Swarm Optimization, optimal treatment regimens are found, including a consideration of budgetary constraints. Additionally, an in vitro model of osteoarthritis in chondrocytes receives RNA silencing of Lrp5. Microarray analysis of gene expression is used to further elucidate the mode of regulation of ADAMTS5, an aggrecanase associated with cartilage degradation, by Lrp5, including the development of a mathematical model. The mathematical model of osteoporosis reveals a quick response to salubrinal and a delayed but substantial response to knee loading. Consideration of cost effectiveness showed that as budgetary constraints increased, treatment did not start until later. The quantitative analysis of ADAMTS5 regulation suggested the involvement of IL1B and p38 MAPK. This research demonstrates the application of quantitative methods to increase the usefulness of biomedical and biomolecular research into treatments and signaling pathways. Further work using these techniques can help uncover a bigger picture of osteoarthritis's mode of action and ideal treatment regimens for osteoporosis.
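Particle Swarm Optimization itself is a standard technique; a compact Python sketch is given below with a stand-in objective, since the thesis's actual objective is its differential-equation model of bone response to salubrinal and knee loading.

import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Standard particle swarm optimization: particles track their own
    best position and are pulled toward the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Illustrative stand-in objective (not the thesis's bone-response model).
best, fbest = pso(lambda p: np.sum(p ** 2), dim=3)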
APA, Harvard, Vancouver, ISO, and other styles
41

Zimmermann, Aleksandra [Verfasser]. "Renormalized solutions for nonlinear partial differential equations with variable exponents and L1-data / vorgelegt von Aleksandra Zimmermann geb. Zmorzynska." 2010. http://d-nb.info/100773745X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

"Overlapping domain decomposition methods for some nonlinear PDEs." 2013. http://library.cuhk.edu.hk/record=b5884471.

Full text
Abstract:
Yan, Kan.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2013.
Includes bibliographical references (leaves 64-[66]).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Abstracts also in Chinese.
APA, Harvard, Vancouver, ISO, and other styles
43

Nagy, Oliver. "Full wave models in linear time invariant signal processing." Phd thesis, 2011. http://hdl.handle.net/1885/150278.

Full text
Abstract:
Full wave models have assumed a niche role in signal processing due to their complexity, limitation to (mostly) empty space, often absent sources, lack of intuition, and insufficient integration with the theory of signal processing in general. This thesis removes these shortcomings and derives closed form solutions for source generated wave fields in the presence of arbitrarily shaped objects. This is made possible by including absorption effects, substantially augmenting the eigenfunction expansion method, and combining both with linear system theory. The result is a generic closed form state space model and impulse response formula which reduces wave models to linear filters, both theoretically and practically, yet contains only elementary functions. The reason for this is the explicit use of rectangular regions, which also furnishes a considerably simpler derivation of some existing results, such as dimensionality scaling laws, because the corresponding Fourier series naturally connects dimensionality with signal bandwidth. Furthermore, the thesis uses the Fourier series to link boundary value problems with information theory and shows how to store and retrieve information in a wave field, exemplified by an elastic rubber string. Despite simplifying some existing results, the main focus is on source generated wave fields and their connection to linear filters, as underlined with closed form solutions for waves in a wave guide, around the human head, in an urban street area, and more. These examples also emphasise how the results from this thesis allow everyone with a signal processing - but not necessarily a Partial Differential Equation (PDE) - background to use full wave models, because these have been reduced to linear filters with closed form impulse and frequency responses. This makes full wave models less intimidating to approach, adds intuition, and reduces many complex wave problems to simpler ones, as shown for the source localisation problem in highly reverberant environments. The thesis also extends the reach of signal processing beyond standard wave problems to include linearised water models, bio-heat transfer, chemical diffusion, and more in the same fashion.
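The general shape of such a closed-form impulse response on a rectangular region can be sketched as follows; the mode shapes, damping model and indexing here are illustrative, and the thesis's derivation with absorption effects is more general:

% Modal impulse response on a rectangle [0, L_x] x [0, L_y] with
% Dirichlet walls, light damping alpha, and wave speed c:
h(t, \mathbf{x}, \mathbf{x}_0)
  = \sum_{m,n \ge 1} \phi_{mn}(\mathbf{x})\,\phi_{mn}(\mathbf{x}_0)\,
    e^{-\alpha t}\,\frac{\sin(\omega_{mn} t)}{\omega_{mn}},
\qquad
\phi_{mn}(\mathbf{x}) = \frac{2}{\sqrt{L_x L_y}}
  \sin\frac{m\pi x}{L_x}\sin\frac{n\pi y}{L_y},
\quad
\omega_{mn} = c\pi\sqrt{\tfrac{m^2}{L_x^2} + \tfrac{n^2}{L_y^2}} .

Truncating the double sum turns the wave model into an ordinary finite impulse response, which is the sense in which such fields reduce to linear filters.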
APA, Harvard, Vancouver, ISO, and other styles
44

Calder, Jeffrey. "Sobolev Gradient Flows and Image Processing." Thesis, 2010. http://hdl.handle.net/1974/5986.

Full text
Abstract:
In this thesis we study Sobolev gradient flows for Perona-Malik style energy functionals and generalizations thereof. We begin with first order isotropic flows, which are shown to be regularizations of the heat equation. We show that these flows are well-posed in the forward and reverse directions, which yields an effective linear sharpening algorithm. We furthermore establish a number of maximum principles for the forward flow and show that edges are preserved for a finite period of time. We then go on to study isotropic Sobolev gradient flows with respect to higher order Sobolev metrics. As the Sobolev order is increased, we observe an increasing reluctance to destroy fine details and texture. We then consider Sobolev gradient flows for non-linear anisotropic diffusion functionals of arbitrary order. We establish existence, uniqueness and continuous dependence on initial data for a broad class of such equations. The well-posedness of these new anisotropic gradient flows opens the door to a wide variety of sharpening and diffusion techniques which were previously impossible under L2 gradient descent. We show how one can easily use this framework to design an anisotropic sharpening algorithm which can sharpen image features while suppressing noise. We compare our sharpening algorithm to the well-known shock filter and show that Sobolev sharpening produces natural looking images without the "staircasing" artifacts that plague the shock filter.
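One common way to realize a first-order Sobolev gradient is to precondition the L2 gradient by the inverse of (I - lambda * Laplacian), which is diagonal in the Fourier domain. The following Python sketch assumes periodic boundaries and illustrative parameters; it is not the thesis's implementation.

import numpy as np

def sobolev_gradient(grad_l2, lam=1.0):
    """Turn an L2 gradient into an H1 (Sobolev) gradient by solving
    (I - lam * Laplacian) g = grad_l2, diagonal in the Fourier domain."""
    ny, nx = grad_l2.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    symbol = 1.0 + lam * (ky[:, None] ** 2 + kx[None, :] ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(grad_l2) / symbol))

# Descend the Dirichlet energy: its L2 gradient is -Laplacian(u), so the
# loop below is a smoothed, better-behaved version of heat flow.
rng = np.random.default_rng(2)
u = rng.standard_normal((64, 64))
for _ in range(50):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u -= 0.2 * sobolev_gradient(-lap)

Running the same loop with a negative step gives the reverse (sharpening) flow, which is the direction that is ill-posed under plain L2 descent but tamed by the Sobolev metric.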
Thesis (Master, Mathematics & Statistics) -- Queen's University, 2010-08-25.
APA, Harvard, Vancouver, ISO, and other styles
45

Mkolesia, Andrew Chikondi. "Algorithms for image segmentation in fermentation." 2011. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1000637.

Full text
Abstract:
M. Tech. Mathematical Technology.
The aim of this research project is to mathematically analyse froth patterns and to build a database of the images at different stages of the fermentation process, so that a decision-making procedure can be developed which enables a computer to react according to what has been observed. This would allow around-the-clock observation, which is not possible with humans. In addition, mechanised decision-making would minimize the errors usually associated with human actions. Different mathematical algorithms for image processing will be considered and compared. These algorithms have been designed for different image processing situations. In this dissertation the algorithms will be applied to froth images in particular and will be used to simulate the human eye for decision-making in the fermentation process. The preamble of the study will be to consider algorithms for the detection of edges and then to analyse these edges. MATLAB will be used to do the pre-processing of the images and to write and test any new algorithms designed for this project.
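As a concrete starting point for the edge-detection preamble mentioned above, a gradient-magnitude edge detector can be written in a few lines. The sketch below uses Sobel kernels in Python rather than the dissertation's MATLAB, with an illustrative threshold.

import numpy as np

def sobel_edges(img, thresh=0.25):
    """Boolean edge map from the Sobel gradient magnitude; `thresh`
    is a fraction of the maximum response (illustrative default)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

# For a grayscale froth frame `frame` (a 2-D array), `sobel_edges(frame)`
# marks candidate bubble boundaries for further analysis.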
APA, Harvard, Vancouver, ISO, and other styles
46

Kasaiezadeh, Mahabadi Seyed Alireza. "Development of New Global Optimization Algorithms Using Stochastic Level Set Method with Application in: Topology Optimization, Path Planning and Image Processing." Thesis, 2012. http://hdl.handle.net/10012/6803.

Full text
Abstract:
A unique mathematical tool is developed to deal with the global optimization of a set of engineering problems. These include image processing, mechanical topology optimization, and optimal path planning in a variational framework, as well as some benchmark problems in parameter optimization. The optimization tool in these applications is based on level set theory, by which an evolving contour converges toward the optimum solution. Depending upon the application, the objective function is defined, and then level set theory is used for optimization. Level set theory, as a member of the family of active contour methods, is an extension of the steepest descent method of conventional parameter optimization to the variational framework. It intrinsically suffers from trapping in local solutions, a common drawback of gradient based optimization methods. In this thesis, methods are developed to deal with this drawback of the level set approach. By investigating the current global optimization methods, one can conclude that these methods usually cannot be extended to the variational framework, or if they can, the computational costs become drastically expensive. To cope with this complexity, a global optimization algorithm is first developed in parameter space and compared with the existing methods. This method is called the "Spiral Bacterial Foraging Optimization" (SBFO) method because it is inspired by the aggregation process of a particular bacterium, Dictyostelium Discoideum. Regardless of the real phenomenon behind SBFO, it leads to new ideas for developing global optimization methods. According to these ideas, an effective global optimization method should have i) a stochastic operator, and/or ii) a multi-agent structure. These two properties are very common in the existing global optimization methods. To improve the computational time and costs, the algorithm may include gradient-based approaches to increase the convergence speed. This property is particularly available in SBFO, and it is the basis on which SBFO, as a multi-agent stochastic gradient based structure, can be extended to a multi-agent stochastic level set method. The variational set-up is formulated in three steps: i) a single stochastic level set method, called "Active Contours with Stochastic Fronts" (ACSF); ii) a multi-agent stochastic level set method (MSLSM); and iii) a stochastic level set method without gradients, such as the E-ARC algorithm. For image processing applications, the first two steps have been implemented and show significant improvement in the results. As expected, a multi agent structure is more accurate in terms of its ability to find the global solution, but it is much more computationally expensive. According to the results, if one uses an initial level set with enough holes in its topology, a single stochastic level set method can achieve almost the same level of accuracy as a multi-agent structure. Therefore, for a topology optimization problem, for which a high volume of calculation is required (at each iteration a finite element model must be solved), only ACSF with an initial guess with multiple holes is implemented.
In some applications, such as optimal path planning, objective functions are usually very complicated; finding a closed-form equation for the objective function and its gradient is therefore impossible or sometimes very computationally expensive. In these situations, level set theory and its extensions cannot be directly employed. As a result, the Evolving Arc algorithm, inspired by the "electric arc" in nature, is proposed. The results show that it can be a good solution for either unconstrained or constrained problems. Finally, a rigorous convergence analysis for SBFO and ACSF is presented, which is new among global optimization methods in both the parameter and variational frameworks.
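The two ingredients the abstract identifies, a stochastic operator and a multi-agent structure, can be illustrated in parameter space on top of plain gradient descent. The following Python sketch is schematic only; it is neither SBFO nor ACSF, and the Rastrigin test function and all parameters are illustrative.

import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def rastrigin_grad(x):
    return 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

def stochastic_multi_agent_descent(f, grad, dim, n_agents=20,
                                   iters=500, step=0.01, seed=0):
    """Many agents run gradient descent on the same landscape; annealed
    noise lets them escape local minima, and the best visited point wins."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4.0, 4.0, (n_agents, dim))
    best, best_f = x[0].copy(), f(x[0])
    for k in range(iters):
        sigma = 0.5 * (1.0 - k / iters)                 # annealed noise
        x -= step * np.apply_along_axis(grad, 1, x)     # gradient part
        x += sigma * np.sqrt(step) * rng.standard_normal(x.shape)
        vals = np.apply_along_axis(f, 1, x)
        i = int(np.argmin(vals))
        if vals[i] < best_f:
            best, best_f = x[i].copy(), float(vals[i])
    return best, best_f

best, best_f = stochastic_multi_agent_descent(rastrigin, rastrigin_grad, dim=2)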
APA, Harvard, Vancouver, ISO, and other styles
47

Goldani, Moghaddam Hassan. "Applications of Generic Interpolants In the Investigation and Visualization of Approximate Solutions of PDEs on Coarse Unstructured Meshes." Thesis, 2010. http://hdl.handle.net/1807/24757.

Full text
Abstract:
In scientific computing, it is very common to visualize the approximate solution obtained by a numerical PDE solver by drawing surface or contour plots of all or some components of the approximate solution. These plots are used to investigate the behavior of the solution and to display important properties or characteristics of the approximate solution. In this thesis, we consider techniques for drawing such contour plots for the solutions of two- and three-dimensional PDEs. We first present three fast contouring algorithms in two dimensions over an underlying unstructured mesh. Unlike standard contouring algorithms, our algorithms do not require a fine structured approximation. We assume that the underlying PDE solver generates approximations at some scattered data points in the domain of interest. We then generate a piecewise cubic polynomial interpolant (PCI) which approximates the solution of the PDE at off-mesh points, based on the DEI (Differential Equation Interpolant) approach. The DEI approach assumes that accurate approximations to the solution and first-order derivatives exist at a set of discrete mesh points. The extra information required to uniquely define the associated piecewise polynomial is determined by almost satisfying the PDE at a set of collocation points. In the process of generating contour plots, the PCI is used whenever we need an accurate approximation at a point inside the domain. The direct extension of both the DEI-based interpolant and the contouring algorithm to three dimensions is also investigated. The DEI-based interpolant we introduce for visualization can also be used to develop effective Adaptive Mesh Refinement (AMR) techniques and global error estimates. In particular, we introduce and investigate four AMR techniques along with a hybrid mesh refinement technique. Our interest is in investigating how well such a `generic' mesh selection strategy, based on properties of the problem alone, can perform compared with a special-purpose strategy designed for a specific PDE method. We also introduce an a posteriori global error estimator based on the solution of a companion PDE defined in terms of the associated PCI.
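For context, contouring scattered solver output on a coarse unstructured mesh with an off-mesh interpolant, rather than a fine structured grid, can be mocked up with matplotlib's triangulation tools. The sketch below uses a generic C1 cubic interpolant where the thesis builds its PDE-aware DEI-based one, and the scattered data are a stand-in.

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as mtri

# Scattered "PDE solver output": values known only at mesh points.
rng = np.random.default_rng(3)
pts = rng.uniform(-1.0, 1.0, (200, 2))
vals = np.exp(-4 * (pts[:, 0] ** 2 + pts[:, 1] ** 2))   # stand-in solution

tri = mtri.Triangulation(pts[:, 0], pts[:, 1])
# Piecewise-linear contours straight from the unstructured mesh; the
# thesis replaces this linear model with a PDE-aware piecewise cubic one.
plt.tricontour(tri, vals, levels=10)
plt.savefig("contours.png")

interp = mtri.CubicTriInterpolator(tri, vals)   # generic C1 interpolant
print(float(interp(0.1, 0.2)))                  # off-mesh evaluation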
APA, Harvard, Vancouver, ISO, and other styles
48

Soliman, Muller Mark. "Developing a Neural Signal Processor Using the Extended Analog Computer." Thesis, 2013. http://hdl.handle.net/1805/3452.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Neural signal processing to decode neural activity has been an active research area in the last few decades. The next generation of advanced multi-electrode neuroprosthetic devices aims to detect a multiplicity of channels from multiple electrodes, making the relatively time-critical processing problem massively parallel and pushing the computational demands beyond the limits of current embedded digital signal processing (DSP) techniques. To overcome these limitations, a new hybrid computational technique was explored, the Extended Analog Computer (EAC). The EAC is a digitally configurable analog computer that takes advantage of the intrinsic ability of manifolds to solve partial differential equations (PDEs). They are extremely fast, require little power, and have great potential for mobile computing applications. In this thesis, the EAC architecture and the mechanism of the formation of potential/current manifolds were derived and analyzed to capture the device's theoretical mode of operation. A new mode of operation, resistance mode, was developed, and a method was devised to sample temporal data and allow its use on the EAC. The method was validated by demonstrating the device solving linear differential equations and linear functions, and implementing arbitrary finite impulse response (FIR) and infinite impulse response (IIR) linear filters. These results were compared to conventional DSP results. A practical application to the neural computing task was further demonstrated by implementing a matched filter with the EAC simulator and the physical prototype to detect single fiber action potentials in multiunit data streams derived from recorded raw electroneurograms. Exclusion error (type 1 error) and inclusion error (type 2 error) were calculated to evaluate the detection rate of the matched filter implemented on the EAC. The detection rates were found to be statistically equivalent to those from DSP simulations, with exclusion and inclusion errors at 0% and 1%, respectively.
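The digital reference version of such a matched filter is easy to state: correlate the stream against a known spike template and threshold the output. The Python sketch below does exactly that; the template shape, spike amplitudes and threshold rule are illustrative, while the EAC performs the equivalent computation in analog hardware.

import numpy as np

rng = np.random.default_rng(4)
# Illustrative single fiber action potential template: a Gaussian derivative.
g = np.exp(-0.5 * ((np.arange(41) - 20) / 4.0) ** 2)
template = np.diff(g)

stream = 0.2 * rng.standard_normal(5000)     # multiunit background noise
for loc in (700, 2100, 3900):                # embed three known spikes
    stream[loc:loc + template.size] += 3.0 * template

# Matched filter: correlate the stream against the template, then
# threshold using a robust (median-based) estimate of the noise level.
score = np.correlate(stream, template, mode="same")
sigma = np.median(np.abs(score)) / 0.6745
detections = np.flatnonzero(score > 4.0 * sigma)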
APA, Harvard, Vancouver, ISO, and other styles