
Doctoral dissertations on the topic "Smoothing problems"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the 29 best doctoral dissertations on the topic "Smoothing problems".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Eichmann, Katrin. "Smoothing stochastic bang-bang problems". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16799.

Abstract:
Motivated by the problem of how to optimally execute a large stock position, this thesis considers a stochastic control problem with two special properties. First, the control problem has an exponential delay in the control variable, so the present value of the state process depends on the moving average of past control decisions. Second, the coefficients are assumed to be linear in the control variable. It is shown that a control problem with these properties is mathematically challenging: it becomes a stochastic control problem whose solution (if one exists) has a bang-bang nature. The resulting discontinuity of the optimal solution creates difficulties both in proving the existence of an optimal solution and in solving the problem with numerical methods. A sequence of stochastic control problems with state processes is constructed whose diffusion matrices are invertible and approximate the original degenerate diffusion matrix. The cost functionals of the sequence of control problems are convex approximations of the original linear cost functional. To prove the convergence of the solutions, the control problems are written in the form of forward-backward stochastic differential equations (FBSDEs). It is then shown that the solutions of the FBSDEs corresponding to the constructed sequence of control problems converge in law, at least along a subsequence. By assuming convexity of the coefficients, it is then possible to construct from this limit an admissible control process which, for an appropriate reference stochastic system, is optimal for the original stochastic control problem. In addition to proving the existence of an optimal (bang-bang) solution, we obtain a smooth approximation of the discontinuous optimal bang-bang solution, which can be used for the numerical solution of the problem. These results are then applied to the optimal execution problem in the form of numerical simulations.
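The thesis achieves the smooth approximation through convex regularization of the cost functional; as a much simpler illustration of the general idea (not the author's construction), a discontinuous bang-bang control can be approximated pointwise by a smooth sigmoid:

```python
import math

def bang_bang(x):
    """Discontinuous control: switches between the two admissible extremes."""
    return 1.0 if x >= 0 else -1.0

def smoothed_control(x, eps=0.01):
    """Smooth surrogate; approaches bang_bang(x) pointwise as eps -> 0 (x != 0)."""
    return math.tanh(x / eps)
```

The smaller `eps`, the sharper the transition at the switching point, mirroring how a sequence of regularized problems approximates the degenerate one.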
2

Herrick, David Richard Mark. "Wavelet methods for curve and surface estimation". Thesis, University of Bristol, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310601.

3

Xu, Song. "Non-interior path-following methods for complementarity problems /". Thesis, Connect to this title online; UW restricted, 1998. http://hdl.handle.net/1773/5793.

4

Lowe, Matthew. "Extended and Unscented Kalman Smoothing for Re-linearization of Nonlinear Problems with Applications". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/457.

Abstract:
The Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF) and Ensemble Kalman Filter (EnKF) are commonly implemented practical solutions to nonlinear state-space estimation problems, all based on the linear state-space estimator, the Kalman Filter. The UKF and EnKF are often cited as methods superior to the EKF with respect to error-based performance criteria, and the UKF in turn has the advantage over the EnKF of smaller computational complexity. In practice, however, the UKF often fails to live up to this expectation, with performance that does not surpass the EKF and estimates that are not as robust as the EnKF's. This work explores the geometry of alternative sigma point sets, which form the basis of the UKF, contributing several new sets along with novel methods used to generate them. In particular, completely novel systems of sigma points that preserve higher-order statistical moments are found and evaluated. Additionally, a new method for scaling and problem-specific tuning of sigma point sets is introduced, together with a discussion of why this is necessary, and a new way of thinking about UKF systems in relation to the other two Kalman Filter methods. An iterated UKF method is also introduced, similar to the smoothing iterates developed previously for the EKF. The performance of all of these methods is demonstrated on problem exemplars, with the improvements from the contributed methods highlighted.
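The sigma point sets mentioned above are the deterministic sample sets that the unscented transform propagates through the nonlinearity. A minimal sketch of the standard scaled sigma-point construction, restricted to diagonal covariances for brevity (the function name and this restriction are assumptions of the example, not the thesis's contributed sets):

```python
import math

def sigma_points_diag(mean, var, alpha=1.0, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for a diagonal covariance `var`."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    scale = math.sqrt(n + lam)
    pts = [list(mean)]                        # central point
    for i in range(n):
        d = scale * math.sqrt(var[i])         # spread along axis i
        up = list(mean); up[i] += d; pts.append(up)
        dn = list(mean); dn[i] -= d; pts.append(dn)
    w0 = lam / (n + lam)
    wi = 1.0 / (2.0 * (n + lam))
    w_mean = [w0] + [wi] * (2 * n)
    w_cov = [w0 + (1.0 - alpha ** 2 + beta)] + [wi] * (2 * n)
    return pts, w_mean, w_cov
```

The weighted sample mean and covariance of the 2n+1 points reproduce the input moments exactly; preserving higher-order moments as well is the property the contributed sets aim at.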
5

Eichmann, Katrin [Verfasser], Peter [Akademischer Betreuer] Imkeller, Ying [Akademischer Betreuer] Hu and Michael [Akademischer Betreuer] Kupper. "Smoothing stochastic bang-bang problems : with application to the optimal execution problem / Katrin Eichmann. Gutachter: Peter Imkeller ; Ying Hu ; Michael Kupper". Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://d-nb.info/1041284543/34.

6

Klann, Esther. "Regularization of linear ill-posed problems in two steps : combination of data smoothing and reconstruction methods". kostenfrei, 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=979913039.

7

Padoan, Simone. "Computational methods for complex problems in extreme value theory". Doctoral thesis, Università degli studi di Padova, 2008. http://hdl.handle.net/11577/3427194.

Abstract:
Rare events are part of the real world, and extreme environmental events inevitably can have a massive impact on everyday life. We are familiar, for example, with the consequences of and damage caused by hurricanes, floods and similar phenomena. Consequently, considerable attention is devoted to studying, understanding and predicting the nature of such phenomena and the problems they cause, not least because of the possible link between extreme climate events and global warming or climate change. The study of extreme events has thus become ever more important in terms of both probabilistic and statistical research. This thesis aims to provide statistical modelling and methods for making inferences about extreme events for two types of process: first, non-stationary univariate processes; second, stationary spatial processes. In each case the statistical aspects focus on model fitting and parameter estimation, with applications to the modelling of environmental processes including, in particular, non-stationary extreme temperature series and spatially recorded rainfall measures.
8

Rau, Christian, and rau@maths anu edu au. "Curve Estimation and Signal Discrimination in Spatial Problems". The Australian National University. School of Mathematical Sciences, 2003. http://thesis.anu.edu.au./public/adt-ANU20031215.163519.

Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally parametric, nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a 'tracking' approach. These topics are discussed in Chapters 2 and 3, with the pertinent background material given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered our main contribution.

Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics surrounding edge detection which originated in the fields of computer vision and signal processing. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners.

Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study of remotely sensed navigation radar data exemplifies the methodology of Chapter 6.
9

Yilmaz, Asim Egemen. "Finite Element Modeling Of Electromagnetic Scattering Problems Via Hexahedral Edge Elements". Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608587/index.pdf.

Abstract:
In this thesis, quadratic hexahedral edge elements have been applied to three-dimensional open-region electromagnetic scattering problems. For this purpose, a semi-automatic all-hexahedral mesh generation algorithm is developed and implemented. Material properties inside the elements and along the edges are also determined and prescribed during the mesh generation phase in order to be used in the solution phase. Based on the condition number quality metric, the generated mesh is optimized by means of the Particle Swarm Optimization (PSO) technique. A framework implementing hierarchical hexahedral edge elements is developed to investigate the performance of linear and quadratic hexahedral edge elements. Perfectly Matched Layers (PMLs), implemented by means of a complex coordinate transformation, are used for mesh truncation in the software. Sparse storage and efficient matrix ordering are used for the representation of the system of equations. Both direct and indirect sparse matrix solution methods are implemented and used. The performance of quadratic hexahedral edge elements is investigated in depth through the radar cross-sections of several curved or flat objects, with and without patches. Instead of the de facto standard linear element size of 0.1 wavelengths, a quadratic element size of 0.3-0.4 wavelengths was observed to be a potential new criterion for electromagnetic scattering and radiation problems.
10

Audiard, Corentin. "Problèmes aux limites dispersifs linéaires non homogènes, application au système d’Euler-Korteweg". Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10261/document.

Abstract:
The main aim of this thesis is to obtain well-posedness results for dispersive boundary value problems, especially with non-homogeneous boundary conditions. The approach chosen here is to adapt techniques from the classical theory of hyperbolic boundary value problems (for which we give a brief survey, and a slight generalization, in the first chapter). In chapter 3 we delimit a class of linear dispersive equations, and we obtain well-posedness results for the corresponding boundary value problems in chapter 4. The leading thread of this thesis is the Euler-Korteweg model. The boundary value problem for a linearized version is investigated in chapter 2, and the Kato smoothing effect is proved (also for the linearized model) in chapter 3. Finally, the numerical analysis of the model is carried out in chapter 5. To begin with, we use the previous abstract results to show a simple way of deriving the so-called transparent boundary conditions for the equations outlined in chapter 3; those conditions are then used to numerically solve the semi-linear Euler-Korteweg model. This allows us to observe the stability and instability of solitons, as well as a finite-time blow-up phenomenon.
11

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108923.

Abstract:
The main contribution of this thesis is the concept of Fenchel duality, with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, proving weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We then formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates. Furthermore, we show that this solution converges to the optimal solution of the primal problem as the accuracy is made arbitrarily small. Finally, the support vector regression task is shown to arise as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions, as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach to these types of problems.
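For orientation, the Fenchel–Rockafellar primal-dual scheme that such duality statements instantiate can be written in the standard textbook form (with proper convex f, g and a linear operator A; this is background, not a formula quoted from the thesis):

```latex
% Primal problem and its Fenchel dual
\inf_{x}\; f(x) + g(Ax)
\qquad\text{vs.}\qquad
\sup_{y}\; -f^{*}(-A^{*}y) - g^{*}(y)
% Weak duality always holds; strong duality and dual attainment
% follow under a suitable qualification condition.
```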
12

Kempthorne, Daryl Matthew. "The development of virtual leaf surface models for interactive agrichemical spray applications". Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/84525/12/84525%28thesis%29.pdf.

Abstract:
This project constructed virtual plant leaf surfaces from digitised data sets for use in droplet spray models. Digitisation techniques for obtaining data sets for cotton, chenopodium and wheat leaves are discussed and novel algorithms for the reconstruction of the leaves from these three plant species are developed. The reconstructed leaf surfaces are included into agricultural droplet spray models to investigate the effect of the nozzle and spray formulation combination on the proportion of spray retained by the plant. A numerical study of the post-impaction motion of large droplets that have formed on the leaf surface is also considered.
13

Sanja, Rapajić. "Iterativni postupci sa regularizacijom za rešavanje nelinearnih komplementarnih problema". Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2005. https://www.cris.uns.ac.rs/record.jsf?recordId=6022&source=NDLTD&language=en.

Abstract:
Iterative methods for nonlinear complementarity problems (NCP) are considered in this doctoral dissertation. NCP problems appear in many mathematical models from economics, engineering and optimization theory. Solving NCPs has attracted much attention in recent years. Among the many numerical methods for NCPs, we are interested in generalized Newton-type methods and Jacobian smoothing methods. Several new methods for NCPs are defined in this dissertation and their local or global convergence is proved. The theoretical results are tested on relevant numerical examples.
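One standard way to combine Newton-type and Jacobian-smoothing ideas is to reformulate the NCP through a smoothed Fischer–Burmeister function and drive the smoothing parameter to zero. A toy scalar sketch (the solver below, with its finite-difference derivative and halving schedule, is an illustrative assumption, not one of the dissertation's methods):

```python
import math

def fb_mu(a, b, mu):
    """Smoothed Fischer-Burmeister function; its zero set enforces
    a >= 0, b >= 0, a*b = mu**2, recovering complementarity as mu -> 0."""
    return math.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

def solve_scalar_ncp(F, x0=1.0, mu=1.0, iters=60):
    """Newton iteration on fb_mu(x, F(x)) = 0 for a scalar NCP:
    find x >= 0 with F(x) >= 0 and x * F(x) = 0."""
    x = x0
    for _ in range(iters):
        g = fb_mu(x, F(x), mu)
        h = 1e-6
        dg = (fb_mu(x + h, F(x + h), mu) - g) / h   # finite-difference derivative
        x -= g / dg
        mu = max(0.5 * mu, 1e-12)                    # shrink the smoothing parameter
    return x
```

For F(x) = x - 2 the complementary solution is x = 2 (where F vanishes), while for the everywhere-positive F(x) = x + 1 it is the boundary point x = 0.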
14

Chen, Jein-Shan. "Merit functions and nonsmooth functions for the second-order cone complementarity problem /". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5782.

15

Wu, Di. "Cauchy problem for the incompressible Navier-Stokes equation with an external force and Gevrey smoothing effect for the Prandtl equation". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC194/document.

Abstract:
This thesis deals with equations of fluid dynamics. We consider the following two models: the Navier-Stokes equation in R3 with an external force, and the Prandtl equation on the half plane. For the Navier-Stokes system, we focus on local-in-time existence, uniqueness, long-time behavior and a blow-up criterion. For the Prandtl equation on the half plane, we consider Gevrey regularity. This thesis consists of four chapters. In the first chapter, we introduce some background on equations of fluid dynamics and recall the physical meaning of the above two models as well as some well-known mathematical results. Next, we state our main results and motivations briefly. Finally, we mention some open problems. The second chapter is devoted to the Cauchy problem for the Navier-Stokes equation equipped with a small, rough external force in R3. We show local-in-time existence for this system for any initial data belonging to a critical Besov space with negative regularity. Moreover, we obtain three kinds of uniqueness results for the above solutions. Finally, we study the long-time behavior and stability of a priori global solutions. The third chapter deals with a blow-up criterion for the Navier-Stokes equation with a time-independent external force. We develop a profile decomposition for the forced Navier-Stokes equation. The decomposition enables us to connect the forced and the unforced equations, which transfers blow-up information from the unforced solution to the forced one. In Chapter 4, we study the Gevrey smoothing effect of the local-in-time solution to the Prandtl equation in the half plane. It is well known that the Prandtl boundary layer equation is unstable for general initial data, and is well-posed in Sobolev spaces for monotonic initial data. Under a monotonicity assumption on the tangential velocity of the outflow, we prove Gevrey regularity for the solution to the Prandtl equation in the half plane with initial data belonging to some Sobolev space.
16

Santos, Carlos Alberto Silva dos. "O problema de Cauchy para as equações KdV e mKdV". Universidade Federal de Alagoas, 2009. http://repositorio.ufal.br/handle/riufal/1040.

Abstract:
In this work we demonstrate that the Cauchy problem associated with the Korteweg-de Vries equation, denoted by KdV, and the modified Korteweg-de Vries equation, denoted by mKdV, with initial data in the Sobolev space H^s(R), is locally well-posed in H^s(R), with s > 3/4 for KdV and s ≥ 1/4 for mKdV, where the notion of well-posedness includes existence, uniqueness, the persistence property of the solution, and continuous dependence of the solution on the initial data. This result is based on the works of Kenig, Ponce and Vega. The technique used to obtain these results relies on the Banach fixed-point theorem combined with the smoothing effects of the group associated with the linear part.
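For reference, the equations and data space in question have the standard forms (well known in the literature; notation not quoted from the thesis):

```latex
% Korteweg-de Vries and modified Korteweg-de Vries Cauchy problems
\text{KdV:}\quad u_t + u_{xxx} + u\,u_x = 0, \qquad
\text{mKdV:}\quad u_t + u_{xxx} \pm u^{2} u_x = 0, \qquad
u(0,\cdot) = u_0 \in H^{s}(\mathbb{R}).
```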
Fundação de Amparo a Pesquisa do Estado de Alagoas
17

Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization". Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a complete splitting philosophy, in that the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel-sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy, reformulating the corresponding inclusion problem suitably so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold, and they reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
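As a toy illustration of the Douglas–Rachford splitting pattern mentioned above (a scalar instance with hand-coded proximal maps; the names and the one-dimensional problem are assumptions of this sketch, not an algorithm from the thesis), consider minimizing |x| + 0.5 (x - b)^2:

```python
def prox_abs(v, t=1.0):
    """Proximal map of t*|x| (soft-thresholding)."""
    return max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0)

def prox_quad(v, b, t=1.0):
    """Proximal map of t*0.5*(x - b)**2."""
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, iters=100):
    """Minimize |x| + 0.5*(x - b)**2 by Douglas-Rachford splitting."""
    z = 0.0
    for _ in range(iters):
        x = prox_abs(z)                        # resolvent of the |.| term
        z = z + prox_quad(2.0 * x - z, b) - x  # reflected resolvent of the quadratic
        # each prox is evaluated separately: the "complete splitting" idea
    return prox_abs(z)
```

For b = 3 the closed-form minimizer is the soft-thresholded value 2, which the iteration recovers; the scheme never needs the prox of the sum, only the two individual proximal maps.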
18

Nilsson, Per Johan Fredrik. "Planning semi-autonomous drone photo missions in Google Earth". Thesis, Mittuniversitetet, Avdelningen för data- och systemvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31473.

Abstract:
This report covers an investigation of the methods and algorithms required to plan and perform semi-autonomous photo missions on Apple iPad devices using data exported from Google Earth. Flight time was to be minimized, taking wind velocity and aircraft performance into account. Google Earth was used both to define what photos to take and to define the allowable mission area for the aircraft. A benchmark mission was created containing 30 photo operations in a 250 by 500 m area containing several no-fly areas. The report demonstrates that photos taken in Google Earth can be reproduced in reality with good visual resemblance. High-quality paths between all possible photo operation pairs in the benchmark mission could be found in seconds using the Theta* algorithm in a 3D grid representation with six-edge connectivity (up, down, north, south, east, west). Smoothing the path in a post-processing step was shown to further increase the quality of the path at a very low computational cost. An optimal route between the operations in the benchmark mission, using the paths found by Theta*, could be found in less than half a minute using a branch-and-bound algorithm. It was, however, also found that prematurely terminating the algorithm after five seconds yielded a route that was close enough to optimal not to warrant running the algorithm to completion.
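The line-of-sight shortcutting behind such post-processing path smoothing can be sketched on a 2D grid as follows (function names, the Bresenham traversal and the greedy strategy are assumptions of this example; the report's exact procedure may differ):

```python
def line_cells(a, b):
    """Integer grid cells visited by the segment a -> b (Bresenham)."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    cells = []
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy

def smooth_path(path, blocked):
    """Greedily replace waypoint chains by straight collision-free segments."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # back off until the direct segment from path[i] avoids blocked cells
        while j > i + 1 and any(c in blocked for c in line_cells(path[i], path[j])):
            j -= 1
        out.append(path[j])
        i = j
    return out
```

Each kept waypoint jumps as far ahead as the free line of sight allows, so collinear or needlessly bent segments collapse into straight ones at the cost of one visibility check per skipped waypoint.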
19

Cowling, Ann Margaret. "Some problems in kernel curve estimation". Phd thesis, 1995. http://hdl.handle.net/1885/138794.

20

Prvan, Tania. "Some problems in recursive estimation". Phd thesis, 1987. http://hdl.handle.net/1885/138515.

21

Neto, Diogo Mariano Simões. "Numerical Simulation of Frictional Contact Problems using Nagata Patches in Surface Smoothing". Doctoral thesis, 2014. http://hdl.handle.net/10316/26743.

Abstract:
Doctoral thesis in Mechanical Engineering, in the specialty of Production Technologies, presented to the Department of Mechanical Engineering of the Faculty of Sciences and Technology of the University of Coimbra
All movements in the world involve contact and friction, from walking to car driving. Contact mechanics has applications in many engineering problems, including the connection of structural members by bolts or screws, the design of gears and bearings, sheet metal or bulk forming, the rolling contact of car tyres, crash analysis of structures, as well as prosthetics in biomedical engineering. Due to the nonlinear and non-smooth nature of contact mechanics (the contact area is not known a priori), such problems are currently solved using the finite element method within the field of computational contact mechanics. However, most of the commercial finite element software packages presently available are not entirely capable of solving frictional contact problems, demanding efficient and robust methods. Therefore, the main objective of this study is the development of algorithms and numerical methods to apply in the numerical simulation of 3D frictional contact problems between bodies undergoing large deformations. The presented original developments are implemented in the in-house finite element code DD3IMP. The formulation of quasi-static frictional contact problems between bodies undergoing large deformations is first presented in the framework of continuum mechanics, following the classical scheme used in solid mechanics. The kinematic description of the deformable bodies adopts an updated Lagrangian formulation. The mechanical behaviour of the bodies is described by an elastoplastic constitutive law in conjunction with an associated flow rule, making it possible to model a wide variety of contact problems arising in industrial applications. The frictional contact between the bodies is established by means of two conditions: the principle of impenetrability and Coulomb's friction law, both imposed on the contact interface.
The augmented Lagrangian method is applied to solve the constrained incremental minimization problem resulting from the frictional contact inequalities, yielding a mixed functional involving both displacements and contact forces. The spatial discretization of the bodies is performed with isoparametric solid finite elements, while the discretization of the contact interface is carried out using the classical Node-to-Segment technique, preventing the slave nodes from penetrating into the master surface. The geometrical part of the contact elements, defined by a slave node and the closest master segment, is created by the contact search algorithm based on the selection of the closest point on the master surface, defined by the normal projection of the slave node. In the particular case of contact between a deformable body and a rigid obstacle, the master rigid surface can be described by the smooth parameterizations typically used in CAD models. However, in the general case of contact between deformable bodies, the spatial discretization of both bodies with low-order finite elements yields a piecewise bilinear representation of the master surface. This is the central source of difficulties in solving contact problems involving large sliding, since it leads to a discontinuous surface normal vector field. Thus, a surface smoothing procedure based on the Nagata patch interpolation is proposed to describe the master contact surfaces, which led to the development of the Node-to-Nagata contact element. The accuracy of the surface smoothing method using Nagata patches is evaluated by means of simple geometries. The nodal normal vectors required for the Nagata interpolation are evaluated from the CAD geometry in the case of rigid master surfaces, while in the case of deformable bodies they are approximated using the weighted average of the normal vectors of the neighbouring facets.
The residual vectors and tangent matrices of the contact elements are derived coherently with the surface smoothing approach, allowing a quadratic convergence rate to be obtained in the generalized Newton method used for solving the nonlinear system of equations. The developed surface smoothing method and corresponding contact elements are validated through standard numerical examples with known analytical or semi-analytical solutions. More advanced frictional contact problems are also studied, covering the contact of a deformable body with rigid obstacles and the contact between deformable bodies, including self-contact phenomena. The smoothing of the master surface improves the robustness of the computational methods and strongly reduces the non-physical oscillations in the contact force introduced by the traditional faceted description of the contact surface. The presented results are compared with numerical solutions obtained by other authors and with experimental results, demonstrating the accuracy and performance of the implemented algorithms for highly nonlinear problems.
Fundação para a Ciência e Tecnologia - SFRH/BD/69140/2010
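The core of the Nagata patch construction described above is a quadratic interpolation of a mesh edge from its endpoint positions and unit normals. The sketch below follows that general idea for a single edge (it is an illustration, not the thesis code): the quadratic coefficient is chosen so that the end tangents are orthogonal to the given normals, with the 2×2 system solved by Cramer's rule and no handling of the degenerate parallel-normal case.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def nagata_edge(x0, x1, n0, n1):
    """Quadratic curve c(t) = x0 + (d - c)t + c t^2 whose tangents at t=0 and
    t=1 are orthogonal to the unit normals n0 and n1, respectively."""
    d = tuple(b - a for a, b in zip(x0, x1))
    k = dot(n0, n1)
    # Seek c = a*n0 + b*n1 with n0.(d - c) = 0 and n1.(d + c) = 0,
    # i.e. the 2x2 system [[1, k], [k, 1]] [a, b] = [n0.d, -n1.d].
    det = 1.0 - k * k                      # assumes normals are not parallel
    r0, r1 = dot(n0, d), -dot(n1, d)
    a, b = (r0 - k * r1) / det, (r1 - k * r0) / det
    c = tuple(a * p + b * q for p, q in zip(n0, n1))
    return lambda t: tuple(p + (dd - cc) * t + cc * t * t
                           for p, dd, cc in zip(x0, d, c))

# Interpolate a quarter circle: endpoints on the unit circle, radial normals.
x0, x1 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
n0, n1 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
curve = nagata_edge(x0, x1, n0, n1)
print(curve(0.5))  # midpoint bulges outward, approximating the circular arc
```

Interpolating every edge this way, and blending the edge curves into a patch, is what recovers a continuous normal field over the faceted master surface.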
22

Jhong, Jhih-Syong, and 鍾智雄. "A Study of Gradient Smoothing Methods for Boundary Value Problems on Triangular Meshes". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/93662964677132066520.

23

Pina, Maria de Fátima Alves de. "Smoothing and Interpolation on the Essential Manifold". Doctoral thesis, 2020. http://hdl.handle.net/10316/95009.

Abstract:
Thesis within the Inter-University Doctoral Programme in Mathematics, presented to the Faculty of Sciences and Technology of the University of Coimbra
Interpolating data in non-Euclidean spaces plays an important role in different areas of knowledge. The main goal of this thesis is to present, in detail, two different approaches for solving interpolation problems on the Generalized Essential manifold Gk,n ×SO(n), consisting of the product of the Grassmann manifold of all k-dimensional subspaces of R^n and the Lie group of rotations in R^n. The first approach to be considered is a generalization to manifolds of the De Casteljau algorithm and the second is based on rolling motions. In order to achieve our objective, we first gather information of all the essential topics of Riemannian geometry and Lie theory necessary for a complete understanding of the geometry of the fundamental manifolds involved in this work, with particular emphasis on the Grassmann manifold and on the Normalized Essential manifold. To perform the De Casteljau algorithm in the manifold Gk,n×SO(n) we adapt a procedure already developed for connected and compact Lie groups and for spheres, and accomplish the implementation of that algorithm, first for the generation of geometric cubic polynomials in the Grassmann manifold Gk,n, and then extending it to generate cubic splines in the same manifold. New expressions for the velocity vector field along geometric cubic polynomials and for its covariant derivative are derived in order to obtain admissible curves that also fulfil appropriate boundary conditions. To solve the interpolation problem using the second approach, we propose an algorithm inspired in techniques that combine rolling/unrolling with unwrapping/wrapping, but accomplishing the objective using rolling motions only. Interpolating curves given in explicit form are obtained for the manifold Gk,n ×SO(n), which also prepares the ground for applications using the Normalized Essential manifold. The definition of rolling map is a crucial tool in this approach. 
We present a geometric interpretation of all the conditions present in that definition, including a refinement of the non-twist conditions which makes it possible to prove interesting properties of rolling and, consequently, simplifies the study of rolling motions. In particular, the non-twist conditions are rewritten in terms of parallel vector fields, allowing for a clear connection between rolling and parallel transport. When specializing to the rolling manifold Gk,n ×SO(n), the definition of rolling map is adjusted in order to avoid destroying the matrix structure of that manifold. We also address controllability issues for the rolling motion of the Grassmann manifold Gk,n. In parallel with a theoretical proof, we present a constructive proof of the controllability of the kinematic equations that describe the pure rolling motions of the Grassmann manifold Gk,n over the affine tangent space at a point. We make connections with other known approaches to generating interpolating curves in manifolds and point out some directions for future work.
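The manifold De Casteljau algorithm that the thesis generalizes can be illustrated in its simplest non-Euclidean setting, the unit sphere, where straight-line interpolation is replaced by geodesic interpolation (slerp). This is a generic sketch of the idea, not the thesis's implementation on the Grassmann manifold.

```python
import math

def slerp(p, q, t):
    """Geodesic interpolation between unit vectors p and q on the sphere."""
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(p, q))))
    theta = math.acos(d)
    if theta < 1e-12:
        return p
    w0 = math.sin((1 - t) * theta) / math.sin(theta)
    w1 = math.sin(t * theta) / math.sin(theta)
    return tuple(w0 * a + w1 * b for a, b in zip(p, q))

def de_casteljau(points, t):
    """Repeatedly slerp adjacent control points until one point remains."""
    while len(points) > 1:
        points = [slerp(points[i], points[i + 1], t)
                  for i in range(len(points) - 1)]
    return points[0]

# A geometric cubic on S^2 from four control points.
ctrl = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0)]
mid = de_casteljau(ctrl, 0.5)
print(mid)  # the curve point stays exactly on the unit sphere
```

Because each slerp stays on the sphere, every point of the resulting cubic does too; replacing slerp with the geodesics of Gk,n ×SO(n) gives the manifold version of the algorithm.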
24

Klann, Esther [Verfasser]. "Regularization of linear ill-posed problems in two steps : combination of data smoothing and reconstruction methods / von Esther Klann". 2005. http://d-nb.info/979913039/34.

25

Rau, Christian. "Curve Estimation and Signal Discrimination in Spatial Problems". Phd thesis, 2003. http://hdl.handle.net/1885/48023.

Abstract:
In many instances arising prominently, but not exclusively, in imaging problems, it is important to condense the salient information so as to obtain a low-dimensional approximant of the data. This thesis is concerned with two basic situations which call for such a dimension reduction. The first of these is the statistical recovery of smooth edges in regression and density surfaces. The edges are understood to be contiguous curves, although they are allowed to meander almost arbitrarily through the plane, and may even split at a finite number of points to yield an edge graph. A novel locally-parametric nonparametric method is proposed which enjoys the benefit of being relatively easy to implement via a `tracking' approach. These topics are discussed in Chapters 2 and 3, with pertaining background material being given in the Appendix. In Chapter 4 we construct concomitant confidence bands for this estimator, which have asymptotically correct coverage probability. The construction can be likened to only a few existing approaches, and may thus be considered as our main contribution.

Chapter 5 discusses numerical issues pertaining to the edge and confidence band estimators of Chapters 2-4. Connections are drawn to popular topics which originated in the fields of computer vision and signal processing, and which surround edge detection. These connections are exploited so as to obtain greater robustness of the likelihood estimator, such as in the presence of sharp corners.

Chapter 6 addresses a dimension reduction problem for spatial data where the ultimate objective of the analysis is the discrimination of these data into one of a few pre-specified groups. In the dimension reduction step, an instrumental role is played by the recently developed methodology of functional data analysis. Relatively standard non-linear image processing techniques, as well as wavelet shrinkage, are used prior to this step. A case study for remotely-sensed navigation radar data exemplifies the methodology of Chapter 6.
26

(5930024), Kshitij Mall. "Advancing Optimal Control Theory Using Trigonometry For Solving Complex Aerospace Problems". Thesis, 2019.

Abstract:
Optimal control theory (OCT) has existed since the 1950s. However, with the advent of modern computers, the design community delegated the task of solving optimal control problems (OCPs) largely to computationally intensive direct methods instead of methods that use OCT. Some recent work showed that solvers using OCT could leverage parallel computing resources for faster execution. The need for near real-time, high-quality solutions for OCPs has therefore renewed interest in OCT in the design community. However, certain challenges still exist that prohibit its use for solving complex practical aerospace problems, such as landing human-class payloads safely on Mars.

In order to advance OCT, this thesis introduces the Epsilon-Trig regularization method to solve bang-bang and singular control problems simply and efficiently. The Epsilon-Trig method resolves the issues pertaining to the traditional smoothing regularization method. The Epsilon-Trig regularization method was verified and validated using GPOPS-II on benchmark problems from the literature, including the Van der Pol oscillator, the boat problem, and the Goddard rocket problem.

This study also presents and develops the usage of trigonometry for incorporating control bounds and mixed state-control constraints into OCPs, and terms this technique Trigonometrization. The Trigonometrization technique was verified and validated on certain benchmark OCPs against results from the literature and GPOPS-II. Unlike traditional OCT, Trigonometrization converts the constrained OCP into a two-point boundary value problem rather than a multi-point boundary value problem, significantly reducing the computational effort required to formulate and solve it. This work uses Trigonometrization to solve some complex aerospace problems, including prompt global strike, noise minimization for general aviation, the shuttle re-entry problem, and the g-load constraint problem for an impactor. Future work for this thesis includes the development of the Trigonometrization technique for OCPs with pure state constraints.
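The idea behind trigonometric treatment of control bounds can be shown on a toy problem. This is a hedged sketch of the general principle, not the thesis's formulation: a bounded control u ∈ [u_min, u_max] is replaced by u = u_mid + u_rad·sin(w) with w unconstrained, so an ordinary unconstrained descent on w automatically respects the bounds.

```python
import math

def trig_control(w, u_min, u_max):
    # Map an unconstrained variable w to a control inside [u_min, u_max].
    mid, rad = (u_max + u_min) / 2.0, (u_max - u_min) / 2.0
    return mid + rad * math.sin(w)

# Toy problem: minimize (u - 2)^2 subject to u in [-1, 1].
# The constrained optimum sits on the bound, u = 1, reached as sin(w) -> 1.
def solve(w=0.1, lr=0.1, iters=2000):
    for _ in range(iters):
        u = trig_control(w, -1.0, 1.0)
        grad = 2.0 * (u - 2.0) * math.cos(w)  # chain rule through the sine map
        w -= lr * grad
    return trig_control(w, -1.0, 1.0)

u_opt = solve()
print(u_opt)  # approaches the bound u = 1 with no explicit constraint handling
```

Note how the optimality condition is satisfied at w = π/2, where cos(w) = 0: the sine map turns the active bound into an interior stationary point, which is the mechanism that removes the multi-point structure of bang-bang arcs.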
27

Heinrich, André. "Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration". Doctoral thesis, 2012. https://monarch.qucosa.de/id/qucosa%3A19869.

Abstract:
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application in the field of machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, proving weak and strong duality statements as well as necessary and sufficient optimality conditions for that primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The convenience of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of some sequences of proximal points closely related to the dual iterates. Furthermore, we show that the solution will indeed converge to the optimal solution of the primal problem for arbitrarily small accuracy. Finally, the support vector regression task is obtained as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions, as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
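Proximal points of the kind calculated in the thesis have closed forms for many common regularizers. As one standard example (not necessarily among the cases treated in the thesis), the proximal map of the l1-norm used in image restoration is componentwise soft-thresholding:

```python
def prox_l1(v, t):
    # prox_{t||.||_1}(v): componentwise soft-thresholding, i.e. the proximal
    # point of the l1 regularizer with parameter t > 0.
    return [max(abs(x) - t, 0.0) * (1.0 if x > 0 else -1.0) for x in v]

res = prox_l1([3.0, -0.5, 1.2], 1.0)
print(res)  # large entries are shrunk by t, small entries are set to zero
```

Closed forms like this are what make fast (proximal) gradient schemes on the regularized dual problem cheap per iteration.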
28

Karaman, Sadi. "Fixed point smoothing algorithm to the torpedo tracking problem". Thesis, 1986. http://hdl.handle.net/10945/21866.

29

Massey, John Sirles. "Surface shape regions as manifestations of a socio-economic phenomenon : a solution to the choropleth mapping problem". Thesis, 2012. http://hdl.handle.net/2440/84536.

Abstract:
A choropleth map is a cartographic document. It shows a geographic study area tessellated by a set of polygons that differ in shape and size. Each polygon is depicted by a uniform symbol representing the manifestation of some phenomenon. This thesis focuses on socio-economic phenomena. We want to delineate a set of socio-economic regions within a study area. These regions are used for decision making about the delivery of specific goods and services and/or the provision of specific community infrastructure. However, we have identified three fundamental weaknesses associated with the use of choropleth maps for socio-economic regionalisation. Therefore, as an alternative to the choropleth map, if we think explicitly in R³, then the best representation of the spatial distribution of a socio-economic phenomenon is a smooth surface. The socio-economic data we use are collected during a national census of population and are summarised for areas, i.e., polygons. To accommodate these data we have developed and applied a method for gridding and smoothing - termed regularisation - in order to build a smooth surface. We apply Green's theorem and use path integrals with much simplification to compute a smoothed datum for each intersection of a, say, 100 by 100 grid that describes a surface. Mathematically, surface shape is interpreted through the comparison of curvatures. Surface shape analysis involves the measurement of the Gaussian and mean curvatures at the internal intersections of the grid. Curvature measurement requires at least a twice differentiable function. We have invented such a function based on Lagrange interpolation. It is called a Lagrange polynomial in xy. Each internal intersection of the grid is the (2,2) element of a 3 x 3 matrix extracted from the grid. We compute a Lagrange polynomial in xy for each 3 x 3 matrix. Then we use this polynomial to measure the curvatures and classify the shape.
Contiguous grid intersections of the same shape class comprise a shape neighbourhoods region interpreted as a specific manifestation of a socio-economic phenomenon. Hence, we have the basis for describing the spatial distribution of the phenomenon. Three investigations into the construction of quadratic polynomials as alternative functions are described. Two of these quadratic polynomials are called `exact fit' in the sense that the polynomial returns the exact z-datum associated with each xy-pair used in its construction. Construction of a `best fit' quadratic polynomial based on least squares interpolation comprises the third investigation. We compare the four different types of polynomials and of these we choose the Lagrange polynomial in xy as most appropriate. Given a relatively high density grid, e.g., 250 by 250, regardless of the polynomial used the resulting maps of shape neighbourhoods regions are virtually identical. This surprising convergence in R² is explained. Is a map of shape neighbourhoods regions an accurate description of the spatial distribution of a socio-economic phenomenon? We effect an indirect evaluation of a known phenomenon represented by the spatial distribution of f(x,y) = sin x sin y. We compute the true map of shape neighbourhoods regions of this phenomenon. An approximate map of shape neighbourhoods regions is computed by sampling with 100 randomly generated polygons. Comparison implies that the approximate map is an accurate representation of the true map. This conclusion is supported strongly by the results of a study of a nonperiodic-nonrandom known phenomenon, based on a combination of exponential functions in x and y. This has a surface similar to that of a socio-economic phenomenon. We review selected geographic studies in which mathematical tools have been used for analytical purposes. Mathematical analysis is gaining broader acceptance in geography.
The innovative, high quality Surpop work of British geographers is described, and we comment on the strongly complementary nature of the research presented in this thesis to the Surpop work. We describe 18 future research directions and themes; suggestions are made on how each may be undertaken. Next, we summarise each of the ten results of the research presented in this thesis. The thesis concludes with a statement of the medium-term research directions of the researcher and his acknowledgements.
Thesis (M.Sc.(M&CS)) -- University of Adelaide, School of Mathematical Sciences, 2012
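The curvature-based shape classification described above rests on the Gaussian curvature K and mean curvature H of a graph surface z = f(x, y). A compact sketch using the standard formulas is given below; for simplicity the partial derivatives are supplied analytically rather than obtained from the thesis's Lagrange-polynomial fit.

```python
def surface_curvatures(fx, fy, fxx, fxy, fyy):
    """Gaussian (K) and mean (H) curvature of z = f(x, y) from its partials."""
    g = 1.0 + fx * fx + fy * fy
    K = (fxx * fyy - fxy * fxy) / (g * g)
    H = ((1 + fy * fy) * fxx - 2 * fx * fy * fxy + (1 + fx * fx) * fyy) / (2 * g ** 1.5)
    return K, H

def classify(K, H, eps=1e-9):
    # Coarse shape classes of the kind used to form shape neighbourhoods regions.
    if K > eps:
        return "peak/pit (elliptic)"
    if K < -eps:
        return "saddle (hyperbolic)"
    return "ridge/valley (parabolic)" if abs(H) > eps else "flat"

# Bowl z = x^2 + y^2 at the origin: fx = fy = fxy = 0, fxx = fyy = 2.
K, H = surface_curvatures(0.0, 0.0, 2.0, 0.0, 2.0)
print(K, H, classify(K, H))  # 4.0 2.0 peak/pit (elliptic)

# Saddle z = x^2 - y^2 at the origin.
K2, H2 = surface_curvatures(0.0, 0.0, 2.0, 0.0, -2.0)
print(classify(K2, H2))      # saddle (hyperbolic)
```

Applying this classification at every internal grid intersection, and grouping contiguous intersections of the same class, is exactly what produces the shape neighbourhoods regions of the abstract.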