Dissertations on the topic "Convergence order"

To see other types of publications on this topic, follow the link: Convergence order.

Format your source according to APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Convergence order".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, whenever these details are provided in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Van der Walt, Jan Harm. "Order convergence on Archimedean vector lattices and applications." Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-02062006-130754.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Liang, Jingwei. "Convergence rates of first-order operator splitting methods." Caen, 2016. http://www.theses.fr/2016CAEN2024.

Abstract:
This manuscript is concerned with the convergence analysis of first-order operator splitting methods that are ubiquitous in modern non-smooth optimization. It consists of three main theoretical advances on this class of methods, namely: global convergence rates, novel operator splitting schemes, and local linear convergence. First, we propose global (sub-linear) and local (linear) convergence rates for the inexact Krasnosel'skii-Mann iteration built from non-expansive operators, and its application to a variety of monotone operator splitting schemes. Then we design two novel multi-step inertial operator splitting algorithms, in both the convex and non-convex settings, and establish their global convergence. Finally, building on the key concept of partial smoothness, we present a unified and sharp local linear convergence analysis for the class of first-order proximal splitting methods for optimization. We show that for all these algorithms, under appropriate non-degeneracy conditions, the iterates generated by each of these methods (i) identify the involved partially smooth manifolds in finite time, and then (ii) enter a local linear convergence regime. The linear convergence rates are characterized precisely based on the structure of the optimization problem, that of the proximal splitting scheme, and the geometry of the identified active manifolds. Our theoretical findings are systematically illustrated on applications arising from inverse problems, signal/image processing, and machine learning.
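The Krasnosel'skii-Mann iteration analysed in the first contribution simply averages the current iterate with its image under a nonexpansive operator: x_{k+1} = (1-λ)x_k + λT(x_k). A minimal sketch of the exact iteration (the operator `math.cos` and all parameters are illustrative stand-ins, not taken from the thesis):

```python
import math

def km_iterate(T, x0, lam=0.5, tol=1e-12, max_iter=10_000):
    """Krasnosel'skii-Mann iteration x_{k+1} = (1 - lam) x_k + lam T(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - lam) * x + lam * T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cos is nonexpansive on R; the iteration converges to its unique fixed point
# (the Dottie number, approximately 0.739085).
fixed_point = km_iterate(math.cos, x0=1.0)
```

The thesis's rates concern the inexact version of this scheme, where T(x_k) is only evaluated approximately.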
3

Wang, Yuan. "Convergence and Boundedness of Probability-One Homotopies for Model Order Reduction." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30716.

Abstract:
The optimal model reduction problem is an inherently nonconvex problem and thus provides a nontrivial computational challenge. This study systematically examines the requirements of probability-one homotopy methods to guarantee global convergence. Homotopy algorithms for nonlinear systems of equations construct a continuous family of systems, and solve the given system by tracking the continuous curve of solutions to the family. The main emphasis is on guaranteeing transversality for several homotopy maps based upon the pseudogramian formulation of the optimal projection equations and variations based upon canonical forms. These results are essential to the probability-one homotopy approach by guaranteeing good numerical properties in the computational implementation of the homotopy algorithms.
Ph. D.
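Homotopy methods of the kind analysed above deform a trivially solvable problem continuously into the target one and track the resulting curve of solutions. A toy sketch of the idea for a scalar equation, using a Newton homotopy with a simple Newton corrector (the function, starting point, and step counts are illustrative, not from the dissertation):

```python
def homotopy_solve(F, dF, a, steps=100, newton_iters=5):
    """Track the Newton homotopy H(x, t) = F(x) - (1 - t) * F(a) from the
    trivial solution x = a at t = 0 to a root of F at t = 1."""
    Fa = F(a)
    x = a
    for k in range(1, steps + 1):
        t = k / steps
        target = (1 - t) * Fa
        for _ in range(newton_iters):   # corrector: solve F(x) = (1 - t) F(a)
            x -= (F(x) - target) / dF(x)
    return x

# Wallis's classic cubic x^3 - 2x - 5 = 0, tracked from the starting point a = 1.
root = homotopy_solve(lambda x: x**3 - 2 * x - 5, lambda x: 3 * x**2 - 2, a=1.0)
```

Probability-one methods strengthen this picture: transversality of the homotopy map guarantees that, for almost every starting point a, the tracked curve is smooth and reaches a solution.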
4

Davies, Peredur Glyn Cwyfan. "Identifying word-order convergence in the speech of Welsh-English bilinguals." Thesis, Bangor University, 2010. https://research.bangor.ac.uk/portal/en/theses/identifying-wordorder-convergence-in-the-speech-of-welshenglish-bilinguals(200be10a-4e1f-4b0f-ae56-f707bfce8556).html.

Abstract:
This thesis presents a study of the speech of Welsh-English bilinguals to determine the extent and manner of the structural influence of English on Welsh, specifically the phenomenon of convergence, which is described as the increase in frequency of use of a construction (e.g. word order) in one language due to the prevalence of that construction in another language with which its speakers are in contact. I take two approaches to measure convergence, using Welsh-English conversational data which were specially collected for a 40-hour corpus. First, I adapt the Matrix Language Frame model (Myers-Scotton 2002), usable to identify the language from which clause morphosyntax is sourced, to identify convergence. I propose the concept of a dichotomous Matrix Language, which is where there is conflicting evidence for which language provides clause structure. In testing the model on speech from six speakers, I find that, with few exceptions, Welsh is the source of the structure in the majority of clauses analysed. I interpret this to show that word-order convergence in these data is limited insofar as using the Matrix Language Frame model indicates. Second, I analyse the speech of 28 bilinguals for evidence of the deletion of the initial auxiliary verb in periphrastic constructions involving an auxiliary form of bod 'be' and a 2nd person singular pronominal subject ti. Auxiliary deletion (AD) in such clauses results in a clause-initial subject, which I compare to English SVO word-order. I find that AD in such contexts is very common in these data, and is also found in clauses with a different subject. Analysis of age variation in the data indicates that AD in Welsh has become more common in recent years. I propose that an increase to subject-initial clauses in Welsh may be a change in progress, which I interpret to be in part due to convergence to English.
5

Couchman, Benjamin Luke Streatfield. "On the convergence of higher-order finite element methods to weak solutions." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/115685.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 77-79).
The ability to handle discontinuities appropriately is essential when solving nonlinear hyperbolic partial differential equations (PDEs). Discrete solutions to the PDE must converge to weak solutions in order for the discontinuity propagation speed to be correct. As shown by the Lax-Wendroff theorem, one method to guarantee that convergence, if it occurs, will be to a weak solution is to use a discretely conservative scheme. However, discrete conservation is not a strict requirement for convergence to a weak solution. This suggests a hierarchy of discretizations, where discretely conservative schemes are a subset of the larger class of methods that converge to the weak solution. We show here that a range of finite element methods converge to the weak solution without using discrete conservation arguments. The effect of using quadrature rules to approximate integrals is also considered. In addition, we show that solutions using non-conservation working variables also converge to weak solutions.
by Benjamin Luke Streatfield Couchman.
S.M.
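The discrete-conservation mechanism behind the Lax-Wendroff theorem mentioned in the abstract is easy to see in a finite-volume setting: when the update is a difference of interface fluxes, the fluxes telescope and total mass is conserved exactly. A small sketch for Burgers' equation with a Lax-Friedrichs flux (grid size and CFL factor are illustrative; the thesis studies finite element methods, not this scheme):

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx):
    """One conservative finite-volume step for u_t + (u^2/2)_x = 0 (periodic):
    u_i <- u_i - (dt/dx) * (F_{i+1/2} - F_{i-1/2}) with a Lax-Friedrichs flux."""
    f = 0.5 * u**2
    up, fp = np.roll(u, -1), np.roll(f, -1)
    flux = 0.5 * (f + fp) - 0.5 * (dx / dt) * (up - u)   # right-interface flux
    return u - (dt / dx) * (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = 1.0 + 0.5 * np.sin(x)
mass0 = u.sum() * dx
for _ in range(100):
    u = lax_friedrichs_step(u, dt=0.4 * dx / np.abs(u).max(), dx=dx)
mass1 = u.sum() * dx   # telescoping fluxes: conserved up to roundoff
```

The thesis's point is that such telescoping is sufficient but not necessary: schemes outside this conservative form can still converge to weak solutions.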
6

Ghadimi, Euhanna. "Accelerating Convergence of Large-scale Optimization Algorithms." Doctoral thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162377.

Abstract:
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz continuous gradient. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost.
We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms tailor-made state-of-the-art algorithms. In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted a large interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that allows us to accelerate the performance even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.

QC 20150327
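Polyak's Heavy-ball method, whose global analysis is the thesis's first contribution, augments gradient descent with a momentum term that reuses the previous iterate. A small sketch on a strongly convex quadratic with the classical parameter choice for mu-strongly convex, L-smooth functions (the problem data are illustrative; the thesis's optimal network step-sizes are not reproduced here):

```python
import numpy as np

# Heavy-ball update: x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
A = np.diag([1.0, 10.0, 100.0])        # Hessian of f(x) = 0.5 x'Ax - b'x: mu = 1, L = 100
b = np.array([1.0, 2.0, 3.0])
mu, L = 1.0, 100.0
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu))**2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu)))**2

x_prev = x = np.zeros(3)
for _ in range(300):
    grad = A @ x - b
    x, x_prev = x - alpha * grad + beta * (x - x_prev), x

x_star = np.linalg.solve(A, b)          # exact minimizer, for comparison
```

On quadratics this tuning achieves the accelerated rate (sqrt(kappa)-1)/(sqrt(kappa)+1) per step, versus (kappa-1)/(kappa+1) for plain gradient descent.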

7

Kim, Taejong. "Mesh independent convergence of modified inexact Newton methods for second order nonlinear problems." Texas A&M University, 2003. http://hdl.handle.net/1969.1/3870.

Abstract:
In this dissertation, we consider modified inexact Newton methods applied to second order nonlinear problems. In the implementation of Newton's method applied to problems with a large number of degrees of freedom, it is often necessary to solve the linear Jacobian system iteratively. Although a general theory for the convergence of modified inexact Newton methods has been developed, its application to nonlinear problems arising from nonlinear PDEs is far from complete. The case where the nonlinear operator is a zeroth order perturbation of a fixed linear operator was considered in the paper by Brown et al. The goal of this dissertation is to show that one can develop modified inexact Newton methods which converge at a rate independent of the number of unknowns for problems with higher order nonlinearities. To do this, we are required, first, to set up the problem on a scale of Hilbert spaces, and second, to devise a special iterative technique which converges in a higher order Sobolev norm, i.e., in H^{1+alpha}(Omega) ∩ H^1_0(Omega) with 0 < alpha < 1/2. We show that the linear system solved in Newton's method can be replaced with one iterative step provided that the initial iterate is close enough. The closeness criterion can be taken independent of the mesh size. In addition, we obtain the same convergence rates for the method in the norm of H^1_0(Omega) using the discrete Sobolev inequalities.
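The basic trade-off behind modified inexact Newton methods can be sketched in a few lines: freeze an approximate Jacobian B ≈ F'(x0) so that every step is a cheap solve with the same matrix, and accept linear (rather than quadratic) convergence, provided the initial iterate is close enough. A toy finite-dimensional illustration (the system F and all parameters are invented for illustration, not the thesis's PDE setting):

```python
import numpy as np

def F(x):
    """A toy nonlinear system (illustrative only)."""
    return np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])

def modified_newton(x0, B, tol=1e-12, max_iter=200):
    """Newton-like iteration with a frozen approximate Jacobian B ~ F'(x0):
    each step costs one linear solve with the same matrix, and the iteration
    still converges (linearly) when x0 is close enough to the solution."""
    x = x0.copy()
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        x -= np.linalg.solve(B, r)
    return x

x0 = np.array([0.8, 0.8])
B = np.array([[2 * x0[0], 1.0], [1.0, 2 * x0[1]]])   # Jacobian at x0, frozen
sol = modified_newton(x0, B)   # converges to the symmetric root x = y = (sqrt(5)-1)/2
```

The thesis's mesh-independence result says, roughly, that the "close enough" radius for such schemes can be chosen uniformly in the discretization parameter.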
8

Butch, Nicholas Patrick. "The search for quantum criticality near the convergence of hidden order and ferromagnetism." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2008. http://wwwlib.umi.com/cr/ucsd/fullcit?p3307110.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2008.
Title from first page of PDF file (viewed July 3, 2008). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 139-149).
9

Bürger, Steven, and Bernd Hofmann. "About a deficit in low order convergence rates on the example of autoconvolution." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-130630.

Abstract:
We revisit in L2-spaces the autoconvolution equation x ∗ x = y with solutions which are real-valued or complex-valued functions x(t) defined on a finite real interval, say t ∈ [0,1]. Such operator equations of quadratic type occur in the physics of spectra, in optics, and in stochastics, often as part of a more complex task. Because of their weak nonlinearity, deautoconvolution problems are not seen as difficult and hence, wrongly, little attention is paid to them. In this paper, we use the example of autoconvolution to indicate a deficit in low order convergence rates for regularized solutions of nonlinear ill-posed operator equations F(x) = y with solutions x† in a Hilbert space setting. For the real-valued version of the deautoconvolution problem, which is locally ill-posed everywhere, the classical convergence rate theory developed for the Tikhonov regularization of nonlinear ill-posed problems reaches its limits if standard source conditions using the range of F′(x†)* fail. On the other hand, convergence rate results based on Hölder source conditions with small Hölder exponent and logarithmic source conditions, or on the method of approximate source conditions, are not applicable, since qualified nonlinearity conditions are required which cannot be shown for the autoconvolution case according to current knowledge. We also discuss the complex-valued version of autoconvolution with full data on [0,2] and see that ill-posedness must be expected if unbounded amplitude functions are admissible. As a new detail, we present situations of local well-posedness if the domain of the autoconvolution operator is restricted to complex L2-functions with a fixed and uniformly bounded modulus function.
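The weak nonlinearity of deautoconvolution can be seen in a toy discrete computation: with exact data y = x ∗ x, the coefficients of x can be peeled off recursively. Each step, however, divides by 2x(0), which is precisely where noise in y gets amplified and regularization of the kind analysed in the paper becomes necessary. A sketch with exact data (the grid and test function are illustrative, not from the paper):

```python
import numpy as np

n, h = 50, 1.0 / 50
t = (np.arange(n) + 0.5) * h
x_true = 1.0 + t**2
y = h * np.convolve(x_true, x_true)[:n]    # discrete autoconvolution data

# Layer-stripping recursion: y_k / h = 2 x_0 x_k + sum_{j=1}^{k-1} x_j x_{k-j}
x = np.zeros(n)
x[0] = np.sqrt(y[0] / h)                   # y_0 = h * x_0^2
for k in range(1, n):
    inner = np.dot(x[1:k], x[k - 1:0:-1])  # sum_{j=1}^{k-1} x_j x_{k-j}
    x[k] = (y[k] / h - inner) / (2 * x[0])
```

With noisy data this recursion degrades quickly, which is the discrete face of the local ill-posedness discussed above.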
10

Agbebaku, Dennis Ferdinand. "Solution of conservation laws via convergence space completion." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/27791.

Abstract:
It is well known that a classical solution of the initial value problem for a scalar conservation law may fail to exist on the whole domain of definition of the problem. For this reason, suitable generalized solutions of such problems, known as weak solutions, have been considered and studied extensively. However, weak solutions are not unique. In order to obtain a unique solution that is physically relevant, the vanishing viscosity method, amongst others, has been employed to single out a unique solution known as the entropy solution. In this thesis we present an alternative approach to the study of the entropy solution of conservation laws. The main novelty of our approach is that the theory of the entropy solution of a conservation law is presented in an operator theoretic setting. In this regard, the Order Completion Method for nonlinear PDEs, in the context of convergence vector spaces, is modified to obtain an operator equation which generalizes the initial value problem. This equation admits at most one solution, which may be represented as a Hausdorff continuous function. As a particular case, we apply our method to obtain the entropy solution of Burgers' equation.
Dissertation (MSc)--University of Pretoria, 2011.
Mathematics and Applied Mathematics
Unrestricted
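The vanishing viscosity idea mentioned above can be illustrated numerically: adding a small diffusion term eps·u_xx to Burgers' equation smooths the shock into a thin profile that still travels at the Rankine-Hugoniot speed, here (u_L + u_R)/2 = 0.5 for Riemann data u_L = 1, u_R = 0. A rough explicit finite-difference sketch (all grid parameters are illustrative, and this is not the thesis's convergence-space construction):

```python
import numpy as np

n = 400
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
eps = 5e-3                                 # small viscosity
dt = 0.2 * min(dx, dx**2 / (2 * eps))      # explicit stability restriction
u = np.where(x < 0.0, 1.0, 0.0)            # Riemann initial data

T, t = 0.5, 0.0
while t < T:
    f = 0.5 * u**2
    u_new = u.copy()                        # boundary values held fixed
    u_new[1:-1] = (u[1:-1]
                   - dt * (f[2:] - f[:-2]) / (2 * dx)
                   + eps * dt * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2)
    u = u_new
    t += dt

# locate the mid-height of the viscous shock profile; the entropy shock sits at x = 0.25
shock_pos = x[np.argmin(np.abs(u - 0.5))]
```

Letting eps shrink (with the mesh refined accordingly) recovers the entropy solution that the thesis characterizes through order completion instead.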
11

Hasse, Gunther Willy. "Convergence from chaos to order in capital projects using chaos attractors – an explorative study." Thesis, University of Pretoria, 2020. http://hdl.handle.net/2263/73060.

Abstract:
Successful capital projects help sustain society and accelerate socio-economic development through their inherent multiplier effect. The linear project management paradigm does not seem to stem either historical or current capital project cost overruns and failures. Accelerating societal change in terms of trends, megatrends, paradigm shifts, Black Swan events, and disruptive technologies requires capital projects to be executed in a volatile, uncertain, complex and ambiguous environment, which is expected to result in more chaos and failures of capital projects. This research contributes to the non-linear 'management by chaos' paradigm and develops and tests chaos theories and models for employment in capital projects. The objective of this research is to explore whether chaos attractors could cause local convergence (first research question) and overall convergence (second research question) from chaos to order in capital projects, and thereby contribute to reducing capital project cost overruns and failures. Using the grand chaos theory and literature references to chaos attractor metaphors as a starting point, six lower-level chaos theories and variance models were built for fixed-point attractors, fixed-point repellers, limit-cycle attractors, torus attractors, butterfly attractors and strange attractors. One lower-level theory and variance model was built for a landscape comprising the six chaos attractors. A randomness-chaos-complexity-order continuum model was derived from the literature to represent the context within which dynamic capital project behaviour unfolds. Assuming a constructivist research paradigm, a two-round qualitative explorative research strategy was employed with the capital project as the unit of analysis.
The Nominal Group Technique was employed in the first round of interviews with 12 experienced capital project managers to obtain grounded definitions and an understanding of the randomness-chaos-complexity-order continuum model and the concept of chaos attractors. Voice recordings from the interviews were transcribed and content analysis was done using the Atlas.ti software. Five capital project archetypes were identified by respondents. This was followed by a second round of deep individual interviews, using semi-structured questions, with 14 experienced capital project managers. Content analysis was used to confirm the archetypes and to test the transferability and chaos-to-order convergence effect of the six chaos metaphors, and of a landscape of all six, in the capital project domain. Evidence was found in terms of examples, characteristics, value statements and variance model scoring to suggest that local convergence from chaos to order in capital projects could occur as a result of the six individual chaos attractors, and that overall project convergence could occur as a result of a specific constellation of these six chaos attractors located across the capital project life cycle. Nine convergence-divergence archetypes were defined by respondents that describe the dynamic behaviour of different types of capital projects in the randomness-chaos-complexity-order continuum. It was also found that achieving capital project convergence from chaos towards an ordered project state, using chaos attractors, does not imply project success. However, an ordered project state could aid the minimisation of capital project cost overruns. "Chaos theory considers the convergence from chaos to order a natural phenomenon in capital projects that is brought about by the following six chaos attractors: fixed-point, repeller, limit-cycle, torus, butterfly and strange."
This exploratory research found evidence to support the existence of this grand theory and its associated mid-range and lower-level theories, but further research is required to validate the generalisation of these findings.
Thesis (PhD)--University of Pretoria, 2020.
Graduate School of Technology Management (GSTM)
PhD
Unrestricted
12

Khirirat, Sarit. "Randomized first-order methods for convex optimization : Improved convergence rate bounds and experimental evaluations." Thesis, KTH, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214697.

Abstract:
Huge-scale optimization problems appear in several applications ranging from machine learning over large data sets to distributed model predictive control. Classical optimization algorithms struggle to handle these large-scale computations, and recently a number of randomized first-order methods that are simple to implement and have small per-iteration cost have been proposed. However, optimal step size selections and corresponding convergence rates of many randomized first-order methods were still unknown. In this thesis, we hence derive convergence rate results for several randomized first-order methods for convex and strongly convex optimization problems, both with and without convex constraints. Furthermore, we have implemented these randomized first-order methods in MATLAB and evaluated their performance on l2-regularized least-squares support vector machine (SVM) classification problems. In addition, we have implemented randomized first-order projection methods for constrained convex optimization, derived associated convergence rate bounds, and evaluated the methods on l2-regularized least-squares SVM classification problems with Euclidean ball constraints on the weight vector. Based on the implementation experience, we finally discuss how data scaling/normalization and conditioning affect the convergence rates of randomized first-order methods.
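A representative member of the family of randomized first-order methods studied here is randomized coordinate descent: at each step a coordinate is picked at random and minimized over exactly, with a step governed by the coordinate-wise Lipschitz constant. A minimal sketch on an unconstrained least-squares problem (the problem data, iteration count, and seed are illustrative, not the thesis's SVM experiments):

```python
import numpy as np

# Randomized coordinate descent on f(x) = 0.5 * ||Ax - b||^2: pick a random
# coordinate i and take an exact minimizing step along it (step 1/L_i with
# L_i = ||A[:, i]||^2, the coordinate-wise Lipschitz constant).
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 8))
b = rng.standard_normal(60)
L = (A**2).sum(axis=0)

x = np.zeros(8)
for _ in range(8000):
    i = rng.integers(8)
    g_i = A[:, i] @ (A @ x - b)     # partial derivative along coordinate i
    x[i] -= g_i / L[i]

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)   # exact solution, for comparison
```

Each iteration touches a single column of A, which is what makes such methods attractive at huge scale.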
13

Barré, Mathieu. "Worst-case analysis of efficient first-order methods." Electronic Thesis or Diss., Université Paris sciences et lettres, 2021. http://www.theses.fr/2021UPSLE064.

Abstract:
Many modern applications rely on solving optimization problems (e.g., in computational biology, mechanics, finance), establishing optimization methods as crucial tools in many scientific fields. Providing guarantees on the (hopefully good) behavior of these methods is therefore of significant interest. A standard way of analyzing optimization algorithms consists in worst-case reasoning, that is, providing guarantees on the behavior of an algorithm (e.g., its convergence speed) that are independent of the function on which the algorithm is applied and true for every function in a particular class. This thesis aims at providing worst-case analyses of a few efficient first-order optimization methods. We start with the study of Anderson acceleration methods, for which we provide new explicit worst-case bounds guaranteeing precisely when acceleration occurs. We obtained these guarantees by providing upper bounds on a variation of the classical Chebyshev optimization problem on polynomials, which we believe to be of independent interest. Then, we extend the Performance Estimation Problem (PEP) framework, originally designed for principled analyses of fixed-step algorithms, to study first-order methods with adaptive parameters. This is illustrated in particular through worst-case analyses of the canonical gradient method with Polyak step sizes, which uses gradient norms and function values, and of an accelerated version of it. The approach is also presented on other standard adaptive algorithms. Finally, the last contribution of this thesis is to further develop the PEP methodology for analyzing first-order methods relying on inexact proximal computations. Using this framework, we produce algorithms with optimized worst-case guarantees and provide (numerical and analytical) worst-case bounds for some standard algorithms in the literature.
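The Polyak-step gradient method analysed in this thesis is adaptive in exactly the sense described: the step gamma_k = (f(x_k) - f_*)/||g_k||^2 is computed from function values and gradient norms, assuming the optimal value f_* is known. A minimal sketch on a quadratic where f_* = 0 by construction (the problem data and iteration count are illustrative, not from the thesis):

```python
import numpy as np

# Gradient descent with the Polyak step size gamma_k = (f(x_k) - f_*) / ||g_k||^2.
A = np.diag([1.0, 10.0, 100.0])
f = lambda z: 0.5 * z @ A @ z          # minimized at x = 0, so f_* = 0
f_star = 0.0

x = np.array([1.0, 1.0, 1.0])
for _ in range(20000):
    g = A @ x
    gamma = (f(x) - f_star) / (g @ g)  # Polyak's adaptive step
    x = x - gamma * g
```

A classical one-line argument shows each step strictly decreases the distance to the minimizer, and worst-case analyses of the kind developed in the thesis make the resulting rate precise.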
14

Hao, Zhaopeng. "High-order numerical methods for integral fractional Laplacian: algorithm and analysis." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-dissertations/612.

Abstract:
The fractional Laplacian is a promising mathematical tool due to its ability to capture anomalous diffusion and to model complex physical phenomena with long-range interaction, such as fractional quantum mechanics, image processing, and jump processes. One important application of the fractional Laplacian is a turbulence intermittency model based on the fractional Navier-Stokes equation, which is derived from Boltzmann's theory. However, the efficient computation of this model on bounded domains is challenging, as highly accurate and efficient numerical methods are not yet available. The bottleneck lies in the low accuracy and high computational cost of discretizing the fractional Laplacian operator. Although many state-of-the-art numerical methods have been proposed, and some progress has been made toward quasi-optimal complexity, several issues remain unresolved: i) due to the nonlocal nature of the fractional Laplacian, implementation is still complicated and the setup cost of the algorithms is high; for instance, as pointed out by Acosta et al. [AcostaBB17], "over 99% of the CPU time is devoted to the assembly routine" in the finite element method; ii) due to the intrinsic singularity of the fractional Laplacian, the convergence orders reported in the literature are still unsatisfactory for many applications, including turbulence intermittency simulations. To reduce the complexity and computational cost, we consider two numerical methods with quasi-linear complexity, a finite difference method and a spectral method, summarized as follows. We develop spectral Galerkin methods to accurately solve fractional advection-diffusion-reaction equations and apply them to the fractional Navier-Stokes equations. In spectral methods on a ball, the evaluation of the fractional Laplacian operator is straightforward thanks to the pseudo-eigen relation.
For general smooth computational domains, we propose spectral methods enriched by singular functions that characterize the inherent boundary singularity of the fractional Laplacian. We also develop a simple and easy-to-implement fractional centered difference approximation to the fractional Laplacian on a uniform mesh using generating functions. The weights (coefficients) of the fractional centered formula can be readily computed using the fast Fourier transform. Together with singularity subtraction, we propose high-order finite difference methods that require no graded mesh. With these results, it may be possible to solve fractional Navier-Stokes equations, fractional quantum Schrödinger equations, and stochastic fractional equations with high accuracy. All numerical simulations are accompanied by stability and convergence analysis.
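As an illustration of the generating-function idea mentioned in the abstract, the following minimal sketch (ours, not the thesis code; function names are hypothetical) recovers the fractional centered difference weights as Fourier coefficients of the generating function (2 sin(θ/2))^α via the FFT, and checks them against the known closed form of Ortigueira's fractional centered difference in terms of Gamma functions.

```python
import numpy as np
from scipy.special import gamma

def fcd_weights_fft(alpha, m, n_fft=4096):
    # Weights g_0..g_m of the fractional centered difference for (-Delta)^{alpha/2},
    # computed as Fourier coefficients of the generating function (2 sin(theta/2))^alpha.
    theta = 2.0 * np.pi * np.arange(n_fft) / n_fft
    samples = (2.0 * np.sin(theta / 2.0)) ** alpha
    coeffs = np.fft.ifft(samples).real  # real and symmetric: g_{-k} = g_k
    return coeffs[: m + 1]

def fcd_weights_exact(alpha, m):
    # Closed form: g_k = (-1)^k Gamma(alpha+1) / (Gamma(alpha/2 - k + 1) Gamma(alpha/2 + k + 1)).
    k = np.arange(m + 1)
    return (-1.0) ** k * gamma(alpha + 1) / (gamma(alpha / 2 - k + 1) * gamma(alpha / 2 + k + 1))
```

A quick sanity check: for α = 2 the weights reduce to the classical second-difference stencil (2, -1, 0, ...).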
15

Ameismeier, Tobias [Verfasser], and Helmut [Akademischer Betreuer] Abels. "Thin Vibrating Rods: Γ-Convergence, Large Time Existence and First Order Asymptotics / Tobias Ameismeier ; Betreuer: Helmut Abels". Regensburg : Universitätsbibliothek Regensburg, 2021. http://d-nb.info/1236401433/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Nyamayaro, Takura T. A. "On the design and implementation of a hybrid numerical method for singularly perturbed two-point boundary value problems." University of the Western Cape, 2014. http://hdl.handle.net/11394/4326.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Magister Scientiae - MSc
With the development of technology seen in the last few decades, numerous solvers have been developed to provide adequate solutions to the problems that model different aspects of science and engineering. Quite often, these solvers are tailor-made for specific classes of problems, so more of them must be developed to keep pace with the growing need for mathematical models that help in understanding the contemporary world. This thesis treats singularly perturbed two-point boundary value problems. The solution to this type of problem undergoes steep changes in narrow regions (called boundary or internal layer regions), rendering classical numerical procedures inappropriate. To this end, robust numerical methods such as finite difference methods, in particular fitted mesh and fitted operator methods, have been used extensively. While the former transform the continuous problem into a discrete one on a non-uniform mesh, the latter involve a special discretisation of the problem on a uniform mesh and are known to be more accurate. Both classes of methods are designed to accommodate the rapid change(s) in the solution. Quite often, finite difference methods on piecewise-uniform meshes (of Shishkin type) are adopted. However, methods based on such non-uniform meshes, though layer-resolving, are not easily extendable to higher dimensions. This work investigates the possibility of capitalising on the advantages of both fitted mesh and fitted operator methods. Theoretical results are confirmed by extensive numerical simulations.
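To make the fitted-mesh idea concrete, here is a minimal sketch (ours, not the thesis code) of an upwind finite difference scheme on a piecewise-uniform Shishkin mesh for the model problem -εu'' + u' = 1 with u(0) = u(1) = 0, whose solution has a boundary layer at x = 1. The transition point τ = min(1/2, 2ε ln N) is the standard Shishkin choice; the exact solution used for checking is u(x) = x - (e^{-(1-x)/ε} - e^{-1/ε}) / (1 - e^{-1/ε}).

```python
import numpy as np

def shishkin_mesh(n, eps):
    # Piecewise-uniform mesh: n/2 coarse cells on [0, 1 - tau], n/2 fine cells on [1 - tau, 1].
    tau = min(0.5, 2.0 * eps * np.log(n))
    coarse = np.linspace(0.0, 1.0 - tau, n // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, n // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

def solve_upwind(n, eps):
    # Upwind scheme for -eps u'' + u' = 1, u(0) = u(1) = 0, on a Shishkin mesh.
    x = shishkin_mesh(n, eps)
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)
    A[0, 0] = A[n, n] = 1.0
    b[0] = b[n] = 0.0
    for i in range(1, n):
        hl, hr = h[i - 1], h[i]
        # non-uniform central second difference plus backward (upwind) first difference
        A[i, i - 1] = -eps * 2.0 / (hl * (hl + hr)) - 1.0 / hl
        A[i, i]     =  eps * 2.0 / (hl * hr)        + 1.0 / hl
        A[i, i + 1] = -eps * 2.0 / (hr * (hl + hr))
    return x, np.linalg.solve(A, b)
```

The scheme is first-order layer-resolving (up to a logarithmic factor), so the maximum nodal error should shrink as N grows even for very small ε.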
17

Zhao, Qingrong. "Reduced-Order Robust Adaptive Controller Design and Convergence Analysis for Uncertain SISO Linear Systems with Noisy Output Measurements." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1194564628.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

BRITO, MARGARIDA. "Encadrement presque sur des statistiques d'ordre." Paris 6, 1987. http://www.theses.fr/1987PA066284.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Let k(n) be a non-decreasing sequence of positive integers. Under certain hypotheses on the sequence k(n), we determine almost surely optimal bounds for the k(n)-th order statistic of a sample of size n. We first address the case where k(n) is at most log(log(n)). Using approximations of the tails of the binomial distribution, obtained from the usual techniques of large deviation theory, we first determine sequences that bound the k(n)-th order statistic of a uniform sample from above or below in an optimal way. We then apply these results to actual probability distributions.
19

Munyakazi, Justin Bazimaziki. "Higher Order Numerical Methods for Singular Perturbation Problems." Thesis, Online Access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_6335_1277251056.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Valois, Isabela da Silva. "PATHS OF CONVERGENCE OF AGRICULTURAL INCOME IN BRAZIL - AN ANALYSIS FROM MARKOV PROCESS OF FIRST ORDER FOR THE PERIOD 1996 TO 2009." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=8110.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The Brazilian agricultural sector exhibited satisfactory economic dynamics in the period of stabilization following the Real Plan (1996-2009), during which agricultural output began a virtually uninterrupted upward trend. This performance suggests that the state economies are undergoing a process of catching up, whereby in the long run poorer economies would tend to reach the same level of economic growth (in terms of per capita agricultural GDP) as the richest economies, configuring a process of convergence to the steady state. Accordingly, this work analyzes the convergence of per capita agricultural income among the states of Brazil, verifying whether the dynamics of the agricultural sector contributed to the reduction of pre-existing interstate inequalities. To this end, a first-order Markov process was used. The results indicate backward movements of economies toward lower levels of per capita agricultural income, suggesting that the economies under review showed a trend of impoverishment despite the overall economic growth of the sector over the period. Among the factors that led these economies down a path of impoverishment, one can cite the emphasis of public policy on export crops, which are not grown in all of the country's federative units and which would strengthen the already developed state economies at the expense of those still developing, as well as the migration of agricultural labor toward the more developed production centers, causing the "Red Queen Effect", in which growth of agricultural GDP does not translate into growth of per capita income in the countryside.
However, the focus of this study is to identify the occurrence of convergence or divergence, without inferring the causes that triggered such movements; these factors leave room for new studies to investigate them, in order to provide tools for formulating agricultural policies aimed at minimizing or even reversing the causes of rural poverty.
O setor agropecuário brasileiro tem apresentado no período de pós-estabilização do Plano Real (1996-2009) uma dinâmica econômica satisfatória, em que o nível de produto agropecuário iniciou uma trajetória ascendente e praticamente ininterrupta de crescimento. Tal performance sugere que as economias estaduais estejam passando por um processo de catching up, em que no longo prazo existiria uma tendência das economias mais pobres alcançarem o mesmo nível de crescimento econômico (em termos de PIB per capita agropecuário) das economias mais ricas, configurando um processo de convergência no steady state. Com efeito, este trabalho busca analisar a convergência da renda agropecuária per capita entre os estados do Brasil, verificando se a dinâmica do setor agrícola teria contribuído para a redução das desigualdades interestaduais preexistentes. Para tal, fez-se uso do processo markoviano de primeira ordem. Os resultados apontaram a ocorrência de movimentos de retrocesso das economias para níveis de renda per capita agropecuária inferiores, indicando que as economias em análise apresentaram uma tendência de empobrecimento, apesar do crescimento econômico global do setor ao longo do período. Dentre os fatores que levariam tais economias a trilharem uma trajetória de empobrecimento, pode-se citar a ênfase das políticas públicas às culturas de exportação, não contempladas por todas as unidades federativas do País, o que resultaria no fortalecimento das economias estaduais já desenvolvidas, em detrimento das que se encontram em desenvolvimento; além dos movimentos migratórios da mão-de-obra agropecuária para os centros produtores agrícolas mais desenvolvidos, causando o "Efeito Rainha Vermelha", em que o crescimento do PIB agropecuário não se traduziria em crescimento das rendas per capita no campo.
Contudo, o foco deste estudo consiste na identificação da ocorrência do processo de convergência/divergência, sem inferir sobre as causas que levariam ao desencadeamento de tal movimento, já que tais fatores abrem espaço para novos estudos que busquem investigá-los, a fim de poder fornecer instrumentos de formulação de políticas públicas agropecuárias direcionadas à minimização ou mesmo reversão das causas que levam à pobreza no campo.
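The first-order Markov approach described above can be sketched as follows (a toy illustration with made-up income-class labels, not the author's data or code): estimate a transition matrix between income classes from observed state trajectories, then read the long-run tendency off its stationary distribution.

```python
import numpy as np

def transition_matrix(chains, k):
    # Maximum-likelihood estimate of a first-order Markov transition matrix
    # from observed sequences of income-class labels 0..k-1.
    counts = np.zeros((k, k))
    for chain in chains:
        for a, b in zip(chain[:-1], chain[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def stationary(P):
    # Left eigenvector of P for eigenvalue 1, normalized to a probability vector.
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()
```

A stationary distribution concentrated on the lower classes would indicate the kind of long-run impoverishment tendency the abstract reports.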
21

Riffaud, Sébastien. "Modèles réduits : convergence entre calcul et données pour la mécanique des fluides." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0334.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
L'objectif de cette thèse est de réduire significativement le coût de calcul associé aux simulations numériques gouvernées par des équations aux dérivées partielles. Dans ce but, nous considérons des modèles dits "réduits", dont la construction consiste typiquement en une phase d'apprentissage, au cours de laquelle des solutions haute-fidélité sont collectées pour définir un sous-espace d'approximation de faible dimension, et une étape de prédiction, qui exploite ensuite ce sous-espace d'approximation conduit par les données afin d'obtenir des simulations rapides voire en temps réel. La première contribution de cette thèse concerne la modélisation d'écoulements gazeux dans les régimes hydrodynamiques et raréfiés. Dans ce travail, nous développons une nouvelle approximation d'ordre réduite de l'équation de Boltzmann-BGK, basée sur la décomposition orthogonale aux valeurs propres dans la phase d'apprentissage et sur la méthode de Galerkin dans l'étape de prédiction. Nous évaluons la simulation d'écoulements instationnaires contenant des ondes de choc, des couches limites et des vortex en 1D et 2D. Les résultats démontrent la stabilité, la précision et le gain significatif des performances de calcul fourni par le modèle réduit par rapport au modèle haute-fidélité. Le second sujet de cette thèse porte sur les applications du problème de transport optimal pour la réduction de modèles. Nous proposons notamment d'employer la théorie du transport optimal afin d'analyser et d'enrichir la base de données contenant les solutions haute-fidélité utilisées pour l'entraînement du modèle réduit. Les tests de reproduction et de prédiction d'écoulements instationnaires, gouvernés par l'équation de Boltzmann-BGK en 1D, montrent l'amélioration de la précision et de la fiabilité du modèle réduit résultant de ces deux applications. 
Finalement, la dernière contribution de cette thèse concerne le développement d'une méthode de décomposition de domaine basée sur la méthode de Galerkin discontinue. Dans cette approche, le modèle haute-fidélité décrit la solution où un certain degré de précision est requis, tandis que le modèle réduit est employé dans le reste du domaine. La méthode de Galerkin discontinue pour le modèle réduit offre une manière simple de reconstruire la solution globale en raccordant les solutions locales à travers les flux numériques aux interfaces des cellules. La méthode proposée est évaluée pour des problèmes paramétriques gouvernés par les équations d'Euler en 1D et 2D. Les résultats démontrent la précision de la méthode proposée et la réduction significative du coût de calcul par rapport aux simulations haute-fidélité
The objective of this thesis is to significantly reduce the computational cost associated with numerical simulations governed by partial differential equations. For this purpose, we consider reduced-order models (ROMs), which typically consist of a training stage, in which high-fidelity solutions are collected to define a low-dimensional trial subspace, and a prediction stage, where this data-driven trial subspace is then exploited to achieve fast or real-time simulations. The first contribution of this thesis concerns the modeling of gas flows in both hydrodynamic and rarefied regimes. In this work, we develop a new reduced-order approximation of the Boltzmann-BGK equation, based on Proper Orthogonal Decomposition (POD) in the training stage and on the Galerkin method in the prediction stage. We investigate the simulation of unsteady flows containing shock waves, boundary layers and vortices in 1D and 2D. The results demonstrate the stability, accuracy and significant computational speedup factor delivered by the ROM with respect to the high-fidelity model. The second topic of this thesis deals with the optimal transport problem and its applications to model order reduction. In particular, we propose to use the optimal transport theory in order to analyze and enrich the training database containing the high-fidelity solution snapshots. Reproduction and prediction of unsteady flows, governed by the 1D Boltzmann-BGK equation, show the improvement of the accuracy and reliability of the ROM resulting from these two applications. Finally, the last contribution of this thesis concerns the development of a domain decomposition method based on the Discontinuous Galerkin method. In this approach, the ROM approximates the solution where a significant dimensionality reduction can be achieved while the high-fidelity model is employed elsewhere. 
The Discontinuous Galerkin method for the ROM offers a simple way to recover the global solution by linking local solutions through numerical fluxes at cell interfaces. The proposed method is evaluated for parametric problems governed by the quasi-1D and 2D Euler equations. The results demonstrate the accuracy of the proposed method and the significant reduction of the computational cost with respect to the high-fidelity model
22

Sanja, Lončar. "Negative Selection - An Absolute Measure of Arbitrary Algorithmic Order Execution." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104861&source=NDLTD&language=en.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Algorithmic trading is an automated process of order execution on electronic stock markets. It can be applied to a broad range of financial instruments, and it is characterized by significant investor control over the execution of orders, with the principal goal of finding the right balance between the cost and the risk of not (fully) executing an order. As the measurement of execution performance indicates whether best execution is achieved, a significant number of different benchmarks are used in practice. The most frequently used are price benchmarks, some of which are determined before trading (pre-trade benchmarks), some during the trading day (intraday benchmarks), and some after the trade (post-trade benchmarks). The two most dominant are VWAP and Arrival Price, which, along with other pre-trade price benchmarks, is known as the Implementation Shortfall (IS). We introduce Negative Selection as an a posteriori measure of execution algorithm performance. It is based on the concept of Optimal Placement, which represents the ideal order that could be executed in a given time window, where "ideal" means an order achieving the best execution price given market conditions during the time window. Negative Selection is defined as the difference between the vectors of the optimal and the executed orders, with the vectors defined as quantities of shares at specified price positions in the order book. It is equal to zero when the order is optimally executed; negative if the order is not (completely) filled, and positive if the order is executed but at an unfavorable price. Negative Selection is based on the idea of offering a new, alternative performance measure that enables us to find optimal trajectories and construct the optimal execution of an order. The first chapter of the thesis includes a list of notation and an overview of the definitions and theorems used throughout the thesis.
Chapters 2 and 3 follow with a theoretical overview of concepts related to market microstructure, basic information regarding benchmarks, and the theoretical background of algorithmic trading. Original results are presented in Chapters 4 and 5. Chapter 4 includes the construction of the optimal placement and the definition and properties of Negative Selection; the results regarding the properties of Negative Selection are given in [35]. Chapter 5 contains the theoretical background for stochastic optimization, a model of optimal execution formulated as a stochastic optimization problem with respect to Negative Selection, and original work on a nonmonotone line search method [31], while numerical results are presented in the final, sixth chapter.
Algoritamsko trgovanje je automatizovani proces izvršavanja naloga na elektronskim berzama. Može se primeniti na širok spektar finansijskih instrumenata kojima se trguje na berzi i karakteriše ga značajna kontrola investitora nad izvršavanjem njegovih naloga, pri čemu se teži nalaženju pravog balansa između troška i rizika u vezi sa izvršenjem naloga. S obzirom da se merenjem performansi izvršenja naloga određuje da li je postignuto najbolje izvršenje, u praksi postoji značajan broj različitih pokazatelja. Najčešće su to pokazatelji cena, neki od njih se određuju pre trgovanja (eng. Pre-trade), neki u toku trgovanja (eng. Intraday), a neki nakon trgovanja (eng. Post-trade). Dva najdominantnija pokazatelja cena su VWAP i Arrival Price koji je zajedno sa ostalim "pre-trade" pokazateljima cena poznat kao Implementation shortfall (IS). Pojam negativne selekcije se uvodi kao "post-trade" mera performansi algoritama izvršenja, polazeći od pojma optimalnog naloga, koji predstavlja idealni nalog koji se mogao izvršiti u datom vremenskom intervalu, pri čemu se pod pojmom "idealni" podrazumeva nalog kojim se postiže najbolja cena u tržišnim uslovima koji su vladali u toku tog vremenskog intervala. Negativna selekcija se definiše kao razlika vektora optimalnog i izvršenog naloga, pri čemu su vektori naloga definisani kao količine akcija na odgovarajućim pozicijama cena knjige naloga. Ona je jednaka nuli kada je nalog optimalno izvršen; negativna, ako nalog nije (u potpunosti) izvršen, a pozitivna ako je nalog izvršen, ali po nepovoljnoj ceni. Uvođenje mere negativne selekcije zasnovano je na ideji da se ponudi nova, alternativna, mera performansi i da se u odnosu na nju nađe optimalna trajektorija i konstruiše optimalno izvršenje naloga. U prvom poglavlju teze dati su lista notacija kao i pregled definicija i teorema neophodnih za izlaganje materije.
Poglavlja 2 i 3 bave se teorijskim pregledom pojmova i literature u vezi sa mikrostrukturom tržišta, pokazateljima trgovanja i algoritamskim trgovanjem. Originalni rezultati su predstavljeni u 4. i 5. poglavlju. Poglavlje 4 sadrži konstrukciju optimalnog naloga, definiciju i osobine negativne selekcije. Teorijski i praktični rezultati u vezi sa osobinama negativne selekcije dati su u [35]. Poglavlje 5 sadrži teorijske osnove stohastičke optimizacije, definiciju modela za optimalno izvršenje, kao i originalni rad u vezi sa metodom nemonotonog linijskog pretraživanja [31], dok 6. poglavlje sadrži empirijske rezultate.
23

Liu, Yating. "Optimal Quantization : Limit Theorem, Clustering and Simulation of the McKean-Vlasov Equation." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS215.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Cette thèse contient deux parties. Dans la première partie, on démontre deux théorèmes limites de la quantification optimale. Le premier théorème limite est la caractérisation de la convergence sous la distance de Wasserstein d’une suite de mesures de probabilité par la convergence simple des fonctions d’erreur de la quantification. Ces résultats sont établis en Rd et également dans un espace de Hilbert séparable. Le second théorème limite montre la vitesse de convergence des grilles optimales et la performance de quantification pour une suite de mesures de probabilité qui convergent sous la distance de Wasserstein, notamment la mesure empirique. La deuxième partie de cette thèse se concentre sur l’approximation et la simulation de l’équation de McKean-Vlasov. On commence cette partie par prouver, par la méthode de Feyel (voir Bouleau (1988)[Section 7]), l’existence et l’unicité d’une solution forte de l’équation de McKean-Vlasov dXt = b(t, Xt, μt)dt + σ(t, Xt, μt)dBt sous la condition que les fonctions de coefficient b et σ sont lipschitziennes. Ensuite, on établit la vitesse de convergence du schéma d’Euler théorique de l’équation de McKean-Vlasov et également les résultats de l’ordre convexe fonctionnel pour les équations de McKean-Vlasov avec b(t,x,μ) = αx+β, α,β ∈ R. Dans le dernier chapitre, on analyse l’erreur de la méthode de particule, de plusieurs schémas basés sur la quantification et d’un schéma hybride particule- quantification. À la fin, on illustre deux exemples de simulations: l’équation de Burgers (Bossy and Talay (1997)) en dimension 1 et le réseau de neurones de FitzHugh-Nagumo (Baladron et al. (2012)) en dimension 3
This thesis contains two parts. The first part addresses two limit theorems related to optimal quantization. The first limit theorem is the characterization of the convergence in the Wasserstein distance of probability measures by the pointwise convergence of Lp-quantization error functions on Rd and on a separable Hilbert space. The second limit theorem is the convergence rate of the optimal quantizer and the clustering performance for a probability measure sequence (μn)n∈N∗ on Rd converging in the Wasserstein distance, especially when the (μn)n∈N∗ are empirical measures with finite second moment but possibly unbounded support. The second part of this manuscript is devoted to the approximation and simulation of the McKean-Vlasov equation, including several quantization-based schemes and a hybrid particle-quantization scheme. We first give a proof of the existence and uniqueness of a strong solution of the McKean-Vlasov equation dXt = b(t, Xt, μt)dt + σ(t, Xt, μt)dBt under the Lipschitz coefficient condition by using Feyel's method (see Bouleau (1988)[Section 7]). Then, we establish the convergence rate of the "theoretical" Euler scheme and, as an application, we establish functional convex order results for scaled McKean-Vlasov equations with an affine drift. In the last chapter, we prove the convergence rate of the particle method, several quantization-based schemes and the hybrid scheme. Finally, we simulate two examples: the Burgers equation (Bossy and Talay (1997)) in a one-dimensional setting and the network of FitzHugh-Nagumo neurons (Baladron et al. (2012)) in dimension 3
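The particle method whose convergence rate is discussed above can be sketched as follows (a minimal illustration with a toy mean-field drift of our own choosing, not the thesis code): the law μt in dXt = b(t, Xt, μt)dt + σ(t, Xt, μt)dBt is replaced by the empirical measure of an interacting particle system, and each particle is advanced by an Euler step.

```python
import numpy as np

def particle_euler(b, sigma, x0, T, n_steps, n_particles, seed=0):
    # Euler scheme for the interacting particle system approximating
    # dX_t = b(t, X_t, mu_t) dt + sigma(t, X_t, mu_t) dB_t,
    # with mu_t replaced by the empirical measure of the particle cloud.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_particles, float(x0))
    for i in range(n_steps):
        t = i * dt
        dB = rng.normal(0.0, np.sqrt(dt), size=n_particles)
        X = X + b(t, X, X) * dt + sigma(t, X, X) * dB
    return X

# Toy mean-field drift: each particle is pulled toward the empirical mean.
b = lambda t, x, cloud: cloud.mean() - x
sigma = lambda t, x, cloud: 0.1 * np.ones_like(x)
```

With this drift the mean of the cloud is (up to Monte Carlo noise) conserved, which gives a cheap consistency check of the scheme.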
24

Dejan, Ćebić. "Optimalni višekoračni metodi NJutnovog tipa za nalaženje višestrukih korena nelinearne jednačine sa poznatom celobrojnom višestrukošću." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=105555&source=NDLTD&language=en.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Ova disertacija se bavi problemom određivanja višestrukih rešenja realnih nelinearnih jednačina kada je višestrukost unapred poznati prirodan broj. Teorijski se analiziraju i numerički testiraju red konvergencije i optimalnost nekih dobro poznatih metoda poput Liu-Čou metoda i Čou-Čen-Song metoda. Izvodi se i objašnjava zavisnost optimalnog reda konvergencije i parnosti/neparnosti višestrukosti rešenja. Takođe, konstruišu se dve nove familije postupaka osmog reda konvergencije. Razmatraju se nove familije dvokoračnih postupaka namenjene za rešavanje problema koje klasični metodi NJutnovog tipa ne mogu da reše.
This thesis deals with the problem of determining multiple roots of real nonlinear equations where the multiplicity is an integer known in advance. The convergence order and optimality properties of some well-known methods, such as the Liu-Zhou method and the Zhou-Chen-Song method, are theoretically analyzed and numerically tested. The dependence of the optimal convergence order on the parity (evenness/oddness) of the multiplicity is derived and explained. Further, two new efficient families of methods with optimal eighth convergence order are constructed. Furthermore, some new families of two-step methods are considered for solving problems where classical Newton-type methods fail.
25

Tain, Cyril. "Modelling of type II superconductors : implementation with FreeFEM." Electronic Thesis or Diss., Normandie, 2023. http://www.theses.fr/2023NORMIR40.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Nous présentons dans cette étude quatre modèles pour les supraconducteurs de type II: le modèle de London, le modèle de Ginzburg-Landau dépendant du temps (TDGL), le modèle de Ginzburg-Landau stationnaire et un modèle de type Abelian-Higgs. Pour le modèle de London nous avons étudié un problème à symétrie cylindrique. Nous avons établi une formulation hydrodynamique du modèle grâce à l'introduction d'une fonction courant. Le caractère bien posé du problème a été prouvé. Le champ magnétique extérieur a été calculé pour des domaines 2D et 3D. En 3D une méthode par éléments frontières a été implémentée en utilisant une fonctionalité récente de FreeFem. Pour le modèle TDGL deux codes fondés sur deux formulations variationnelles ont été implémentées et validées sur des cas tests classiques de la littérature en 2D et 3D. Pour le modèle GL stationnaire une méthode de gradient de Sobolev a été utilisée pour trouver l'état d'équilibre. Ces résultats ont été comparés avec ceux du modèle TDGL. Pour le modèle Abelian-Higgs un code Fortran différences finies en 1D a été développé et validé par la construction d'un système manufacturé. Ce modèle a été utilisé pour retrouver certaines propriétés de magnétisation des supraconducteurs
In this thesis we present four models for type II superconductors: the London model, the time dependent Ginzburg-Landau (TDGL) model, the steady state Ginzburg-Landau model and an Abelian-Higgs model. For the London model a problem with cylindrical symmetry was considered. A hydrodynamic formulation of the problem was established through the introduction of a stream function. Well-posedness of the problem was proved. The external magnetic field was computed for 2D and 3D domains. In 3D a boundary element method was implemented using a recent feature of FreeFem. For the TDGL model two codes based on two variational formulations were proposed and tested on classical benchmarks of the literature in 2D and 3D. In the steady state GL model a Sobolev gradient technique was used to find the equilibrium state. The results were compared with the ones given by the TDGL model. In the Abelian-Higgs model a 1D finite differences code written in Fortran was developed and tested with the construction of a manufactured system. The model was used to retrieve some of the properties of magnetization of superconductors
26

Davis, Clayton Paul. "Understanding and Improving Moment Method Scattering Solutions." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd620.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

Vergnaud, Alban. "Améliorations de la précision et de la modélisation de la tension de surface au sein de la méthode SPH, et simulations de cas d'amerrissage d'urgence d'helicoptères." Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0033.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
La méthode SPH (Smoothed Particle Hydrodynamics) est une méthode de simulation numérique Lagrangienne et sans maillage, utilisée dans de nombreux domaines de la physique et de l’ingénierie (astrophysique, mécanique des milieux solides, mécanique des milieux fluides, etc...). Dans le domaine de la mécanique des fluides, cette méthode est désormais utilisée dans de nombreux champs d’application (ingénierie navale, automobile, aéronautique, etc...), profitant en particulier de son caractère Lagrangien et de l'absence de connectivités pour simuler des écoulements complexes à surface libre avec de grandes déformations et de nombreuses reconnexions d’interfaces. Cependant, la méthode SPH souffre encore d’un certain manque de précision dû à son caractère Lagrangien et à la relative complexité des opérateurs utilisés. L’objectif général de cette thèse est de proposer plusieurs améliorations en vue d’augmenter la précision de la méthode SPH. Le premier axe de ce travail de recherche porte sur l’étude du désordre particulaire (ou "particle shifting" en anglais) afin de briser les structures Lagrangiennes classiquement observées en SPH et responsables d'une dégradation de la précision des simulations. En particulier, à l’aide d’une étude théorique portant notamment sur des propriétés de convergence et de consistance, une nouvelle loi de shifting est proposée. Un deuxième axe s'intéresse à l'étude d'un nouvel opérateur visqueux en proche paroi, pour un traitement surfacique des conditions aux limites. Le troisième axe de développement concerne la montée en ordre de la méthode SPH, et notamment dans le cas des schémas de type Riemann-SPH. Une nouvelle méthode de reconstruction, basée sur le schéma WENO (Weighted Essentially Non-Oscillatory) et des interpolations MLS (Moving Least Squares), des états gauche et droit des problèmes de Riemann est proposée. 
En complément de ces recherches, un nouveau modèle de tension de surface précis et robuste est proposé pour les écoulements monophasiques, permettant notamment une imposition de l’angle de contact au niveau de la ligne de contact. Enfin, dans le cadre du projet SARAH (increased SAfety and Robust certification for ditching of Aircraft and Helicopters ; European Unions Horizon 2020 Research and Innovation Programme Grant No. 724139), le dernier axe de cette thèse est consacré à la mise en place d’un modèle numérique permettant la simulation de cas d’amerrissage d’urgence d’hélicoptère. Ce modèle est validé grâce à la comparaison des résultats numériques avec ceux obtenus lors d’une campagne d’essais expérimentaux menée au bassin d'essais de l'Ecole Centrale de Nantes
Smoothed Particle Hydrodynamics (SPH) is a Lagrangian, meshless numerical method used in many branches of physics and engineering (astrophysics, solid mechanics, fluid mechanics, etc.). In fluid mechanics, the method is now used in many application fields (naval, automotive and aeronautic engineering, etc.), exploiting its meshless and Lagrangian features to simulate free-surface flows with complex shapes and numerous interface reconnections. However, the SPH method still suffers from a lack of precision due to its Lagrangian character and the relative complexity of the SPH operators. The objective of this thesis is to propose several improvements to increase the precision of the SPH method. The first part of this work focuses on a particle shifting technique aimed at breaking the Lagrangian structures inherently observed in SPH, which usually lead to a deterioration of the simulations. In particular, thanks to a theoretical study of consistency and convergence properties, a new shifting law is proposed. Secondly, a new viscous operator for near-body areas is proposed, based on a surface formulation of the boundary conditions. The third part concerns higher orders of convergence in the SPH method, in particular for Riemann-SPH schemes. A new reconstruction method, based on the WENO (Weighted Essentially Non-Oscillatory) scheme and MLS (Moving Least Squares) interpolations, is proposed for the left and right state reconstructions of the Riemann problems. Then, a new accurate and robust surface tension model for single-phase flows is proposed, notably allowing contact angles to be imposed at the contact line. Finally, as part of the SARAH project (increased SAfety and Robust certification for ditching of Aircraft and Helicopters; European Union's Horizon 2020 Research and Innovation Programme, Grant No. 724139), the last topic of this thesis is dedicated to the development of a numerical model for SPH simulations of emergency helicopter ditching. This model is validated through comparisons with experimental results from a test campaign conducted in the wave basin of Ecole Centrale de Nantes.
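The "higher orders of convergence" discussed above are typically verified numerically by measuring errors on successively refined discretisations. A minimal, method-agnostic sketch of that estimate (the error sequence below is hypothetical, chosen to mimic a second-order scheme):

```python
import math

def observed_order(e_coarse: float, e_fine: float, r: float = 2.0) -> float:
    """Estimate the observed order of convergence p from errors measured
    at two resolutions whose spacings differ by the factor r (h and h/r).

    If e(h) ~ C * h**p, then p = log(e_coarse / e_fine) / log(r)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical error sequence of a second-order scheme: e(h) = h**2
errors = [(0.1, 0.1**2), (0.05, 0.05**2), (0.025, 0.025**2)]
for (h1, e1), (h2, e2) in zip(errors, errors[1:]):
    p = observed_order(e1, e2, r=h1 / h2)
    print(f"h={h1} -> h={h2}: observed order ~ {p:.2f}")
```

The same two-resolution formula applies regardless of the discretisation (SPH, finite volumes, DG), which is why it recurs across the theses listed here.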
28

Durochat, Clément. "Méthode de type Galerkin discontinu en maillages multi-éléments (et non-conformes) pour la résolution numérique des équations de Maxwell instationnaires." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4005.

Abstract:
Cette thèse porte sur l’étude d’une méthode de type Galerkin discontinu en domaine temporel (GDDT), afin de résoudre numériquement les équations de Maxwell instationnaires sur des maillages hybrides tétraédriques/hexaédriques en 3D (triangulaires/quadrangulaires en 2D) et non-conformes, que l’on note méthode GDDT-PpQk. Comme dans différents travaux déjà réalisés sur plusieurs méthodes hybrides (par exemple des combinaisons entre des méthodes Volumes Finis et Différences Finies, Éléments Finis et Différences Finies, etc.), notre objectif principal est de mailler des objets ayant une géométrie complexe à l’aide de tétraèdres, pour obtenir une précision optimale, et de mailler le reste du domaine (le vide environnant) à l’aide d’hexaèdres impliquant un gain en terme de mémoire et de temps de calcul. Dans la méthode GDDT considérée, nous utilisons des schémas de discrétisation spatiale basés sur une interpolation polynomiale nodale, d’ordre arbitraire, pour approximer le champ électromagnétique. Nous utilisons un flux centré pour approcher les intégrales de surface et un schéma d’intégration en temps de type saute-mouton d’ordre deux ou d’ordre quatre. Après avoir introduit le contexte historique et physique des équations de Maxwell, nous présentons les étapes détaillées de la méthode GDDT-PpQk. Nous réalisons ensuite une analyse de stabilité L2 théorique, en montrant que cette méthode conserve une énergie discrète et en exhibant une condition suffisante de stabilité de type CFL sur le pas de temps, ainsi que l’analyse de convergence en h (théorique également), conduisant à un estimateur d’erreur a-priori. Ensuite, nous menons une étude numérique complète en 2D (ondes TMz), pour différents cas tests, des maillages hybrides et non-conformes, et pour des milieux de propagation homogènes ou hétérogènes. 
Nous faisons enfin de même pour la mise en oeuvre en 3D, avec des simulations réalistes, comme par exemple la propagation d’une onde électromagnétique dans un modèle hétérogène de tête humaine. Nous montrons alors la cohérence entre les résultats mathématiques et numériques de cette méthode GDDT-PpQk, ainsi que ses apports en termes de précision et de temps de calcul
This thesis is concerned with the study of a Discontinuous Galerkin Time-Domain (DGTD) method for the numerical resolution of the unsteady Maxwell equations on hybrid tetrahedral/hexahedral (triangular/quadrangular in 2D) and non-conforming meshes, denoted the DGTD-PpQk method. As in several studies of various hybrid time-domain methods (such as combinations of Finite Volume with Finite Difference methods, or Finite Element with Finite Difference, etc.), our general objective is to mesh objects with complex geometry with tetrahedra for high precision, and to mesh the surrounding space with hexahedral elements for simplicity and speed. In the discretization scheme of the DGTD method considered here, the electromagnetic field components are approximated by high-order nodal polynomials, using a centered approximation for the surface integrals. Time integration of the associated semi-discrete equations is achieved by a second- or fourth-order leap-frog scheme. After introducing the historical and physical context of the Maxwell equations, we present the details of the DGTD-PpQk method. We prove the L2 stability of this method by establishing the conservation of a discrete analog of the electromagnetic energy, and a sufficient CFL-like stability condition is exhibited. The theoretical convergence of the scheme is also studied, leading to an a priori error estimate that takes into account the hybrid nature of the mesh. Afterwards, we perform a complete numerical study in 2D (TMz waves), for several test problems, on hybrid and non-conforming meshes, and for homogeneous or heterogeneous media. We do the same for the 3D implementation, with more realistic simulations, for example the propagation in a heterogeneous human head model. We show the consistency between the mathematical and numerical results of this DGTD-PpQk method, and its benefits in terms of accuracy and CPU time.
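The second-order leap-frog time staggering mentioned above can be sketched on a 1D finite-difference (Yee-type) grid. This is only a generic illustration of the staggered update, not the DGTD-PpQk scheme itself; grid size, pulse shape and Courant number are our own choices:

```python
import numpy as np

def yee_step(Ez, Hy, courant):
    """One staggered leap-frog (Yee) update of the 1D Maxwell system
    dEz/dt = -dHy/dx, dHy/dt = -dEz/dx (normalised units).
    `courant` = c*dt/dx must be <= 1 for stability (CFL-like condition)."""
    # H lives on half-integer nodes; advance it from the E field first
    Hy[:-1] -= courant * (Ez[1:] - Ez[:-1])
    # then advance E from the freshly updated H field (interior nodes only)
    Ez[1:-1] -= courant * (Hy[1:-1] - Hy[:-2])
    return Ez, Hy

n = 200
Ez = np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)  # Gaussian pulse, H = 0
Hy = np.zeros(n)
for _ in range(100):
    Ez, Hy = yee_step(Ez, Hy, courant=0.5)
print(f"max |Ez| after 100 steps: {np.abs(Ez).max():.3f}")
```

With zero initial magnetic field, the pulse splits into two counter-propagating waves; the scheme stays bounded as long as the CFL-like condition holds, mirroring the discrete-energy-conservation argument of the thesis.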
29

Kulkarni, Shashank D. "Development and validation of a Method of Moments approach for modeling planar antenna structures." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-042007-151741/.

Abstract:
Dissertation (Ph.D.)--Worcester Polytechnic Institute.
Keywords: patch antennas; volume integral equation (VIE); method of moments (MoM); low order basis functions; convergence. Includes bibliographical references (leaves 169-186).
30

Sayi, Mbani T. "High Accuracy Fitted Operator Methods for Solving Interior Layer Problems." University of the Western Cape, 2020. http://hdl.handle.net/11394/7320.

Abstract:
Philosophiae Doctor - PhD
Fitted operator finite difference methods (FOFDMs) for singularly perturbed problems have been explored for the last three decades. The construction of these numerical schemes is based on introducing a fitting factor along with the diffusion coefficient, or on using principles of non-standard finite difference methods. The FOFDMs based on the latter idea are easy to construct and are extendible to partial differential equations (PDEs) and their systems. Noting this flexibility of the FOFDMs, this thesis deals with the extension of these methods to interior layer problems, something that was still outstanding. The idea is then extended to singularly perturbed time-dependent PDEs whose solutions possess interior layers. The second aspect of this work is to improve the accuracy of these approximation methods via techniques such as Richardson extrapolation. Having met these objectives, we then extended our approach to singularly perturbed two-point boundary value problems with variable diffusion coefficients and analogous time-dependent PDEs. Careful analyses, followed by extensive numerical simulations supporting the theoretical findings, are presented where necessary.
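Richardson extrapolation, mentioned above as the accuracy-boosting device, combines two approximations of known order to cancel the leading error term. A generic sketch (the central-difference example is ours, not taken from the thesis):

```python
import math

def richardson(a_h, a_h2, p, r=2.0):
    """Richardson extrapolation: combine approximations computed with steps
    h and h/r by a method of known order p to cancel the leading error term,
    gaining (at least) one extra order of accuracy."""
    return (r**p * a_h2 - a_h) / (r**p - 1.0)

# Illustration: central differences (order p = 2) for d/dx sin(x) at x = 1
def dcentral(x, h):
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

h = 0.1
base, fine = dcentral(1.0, h), dcentral(1.0, h / 2)
extrap = richardson(base, fine, p=2)
exact = math.cos(1.0)
print(abs(base - exact), abs(extrap - exact))  # extrapolated error is far smaller
```

The extrapolated value removes the O(h^2) term, leaving an O(h^4) residual, which is the mechanism the thesis exploits to raise the order of its fitted schemes.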
31

Miller, Kenyon Russell. "Convergent neural algorithms for pattern matching using high-order relational descriptions." Diss., Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/8219.

32

Saunders, Martin. "Measurement of low-order structure factors by Convergent Beam Electron Diffraction." Thesis, University of Bath, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359247.

Abstract:
This thesis describes the development and testing of a new technique for the measurement of structure factors based on the matching of theoretical calculations with experimental, energy-filtered zone-axis Convergent Beam Electron Diffraction (CBED) patterns. The sum-of-squares difference between a set of experimental diffraction intensities and a theoretical calculation is minimised by varying a set of low-order structure factors until a best fit is obtained. The basic theory required for the simulation of zone-axis CBED patterns is given. Additional theory is developed specifically for the pattern matching method in order to improve the efficiency of the matching calculation. This includes the development of analytic expressions for the gradient of the sum-of-squares with respect to each of the fitting parameters, and the addition of beams to the pattern calculation by second-order perturbation theory. The effects of random and systematic errors are considered by fitting to simulated `noisy' data. A wide range of potential systematic error effects are investigated and limits are found for errors in the accelerating voltage, Debye-Waller factor and lattice parameter which reduce systematic errors to acceptable levels. These tests also investigate the sensitivity of the method to structure factor variations, which gives an indication of how many structure factors can be measured. Finally, the method is applied to the measurement of low-order structure factors from experimental Si [110] zone-axis patterns. The results are compared to the best X-ray Pendellösung measurements available, and the bonding charge densities obtained from both the zone-axis and X-ray measurements are constructed
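The sum-of-squares matching with analytic gradients described above can be sketched generically; the one-parameter linear model below is a hypothetical stand-in for the CBED pattern calculation, not the thesis's fitting code:

```python
import numpy as np

# Fit a single parameter p by minimising the sum-of-squares difference
# S(p) = sum((data - model(p))**2) using its analytic gradient, echoing
# the gradient-based matching strategy described above.
x = np.linspace(0.0, 1.0, 50)
data = 3.0 * x                      # synthetic "experimental" intensities
model = lambda p: p * x             # one-parameter theoretical model
dmodel = x                          # d(model)/dp, known analytically

p, lr = 0.0, 0.01                   # initial guess and step size
for _ in range(500):
    residual = data - model(p)
    grad = -2.0 * np.dot(residual, dmodel)   # analytic dS/dp
    p -= lr * grad
print(f"fitted parameter: {p:.6f}")  # converges to 3
```

In the thesis the "parameters" are the low-order structure factors and the model is a full dynamical CBED simulation, but the analytic-gradient descent on the sum of squares has the same shape.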
33

Kouao, Serge Guy. "Incidence des facteurs institutionnels dans l’évolution de la structure financière des entreprises : cas d’entreprises françaises cotées à la bourse de Paris." Thesis, Bordeaux 4, 2011. http://www.theses.fr/2011BOR40032/document.

Abstract:
S’appuyant sur les théories du financement hiérarchique et du compromis, cette recherche se donne pour objectif de tester empiriquement la relation structure financière-institution. Ces deux notions partagent des caractéristiques communes favorisant leur association conceptuelle à travers le ratio d’endettement cible spécifiquement via le comportement de conservatisme financier des entreprises. Cela ouvre de nouvelles possibilités d’analyses de ladite relation, notamment, en mobilisant le néo-institutionnalisme. Un échantillon de 204 entreprises françaises cotées à la bourse de Paris, ayant des données complètes entre 1999 et 2007, a servi à entreprendre le volet empirique de l’étude. Les principaux résultats indiquent que l’ensemble des déterminants traditionnels de la structure financière, à l’exception de la taille, joue un rôle important dans la politique de financement de ces entreprises. Le niveau de corruption et la liquidité du marché boursier français (variables institutionnelles juridico-financières) n’influencent pas le choix du niveau d’endettement, mais jouent plutôt un rôle significatif dans le choix de la maturité de la dette. Par ailleurs, la structure financière de ces entreprises converge lentement mais sûrement vers son niveau cible
Based on the pecking order and trade-off theories, this research aims to test empirically the relationship between corporate capital structure and institutions. Both concepts share common characteristics that foster their conceptual association through the target debt ratio, specifically via the corporate behavior of financial conservatism. This opens new possibilities for analyzing that relationship, in particular by mobilizing the new institutionalist framework. A sample of 204 French companies listed on the Paris stock exchange, with complete data between 1999 and 2007, was used for the empirical part of the study. The main results indicate that all the traditional determinants of capital structure, except for size, play an important role in the financing policy of these companies. The level of corruption and the liquidity of the French stock market (legal and financial institutional variables) do not influence the choice of debt level, but rather play a significant role in the choice of debt maturity. In addition, the financial structure of these companies converges slowly but surely toward its target level.
34

Sciannandrone, Daniele. "Acceleration and higher order schemes of a characteristic solver for the solution of the neutron transport equation in 3D axial geometries." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112171/document.

Abstract:
Le sujet de ce travail de thèse est l’application de la méthode de caractéristiques longues (MOC) pour résoudre l’équation du transport des neutrons pour des géométries à trois dimensions extrudées. Les avantages du MOC sont sa précision et son adaptabilité, le point faible était la quantité de ressources de calcul requises. Ce problème est même plus important pour des géométries à trois dimensions ou le nombre d’inconnues du problème est de l’ordre de la centaine de millions pour des calculs d’assemblage.La première partie de la recherche a été dédiée au développement des techniques optimisées pour le traçage et la reconstruction à-la-volé des trajectoires. Ces méthodes profitent des régularités des géométries extrudées et ont permis une forte réduction de l’empreinte mémoire et une réduction des temps de calcul. La convergence du schéma itératif a été accélérée par un opérateur de transport dégradé (DPN) qui est utilisé pour initialiser les inconnues de l’algorithme itératif and pour la solution du problème synthétique au cours des itérations MOC. Les algorithmes pour la construction et la solution des opérateurs MOC et DPN ont été accélérés en utilisant des méthodes de parallélisation à mémoire partagée qui sont le plus adaptés pour des machines de bureau et pour des clusters de calcul. Une partie importante de cette recherche a été dédiée à l’implémentation des méthodes d’équilibrage la charge pour améliorer l’efficacité du parallélisme. La convergence des formules de quadrature pour des cas 3D extrudé a aussi été explorée. Certaines formules profitent de couts négligeables du traitement des directions azimutales et de la direction verticale pour accélérer l’algorithme. La validation de l’algorithme du MOC a été faite par des comparaisons avec une solution de référence calculée par un solveur Monte Carlo avec traitement continu de l’énergie. 
Pour cette comparaison on propose un couplage entre le MOC et la méthode des Sous-Groupes pour prendre en compte les effets des résonances des sections efficaces. Le calcul complet d’un assemblage de réacteur rapide avec interface fertile/fissile nécessite 2 heures d’exécution avec des erreurs de quelque pcm par rapport à la solution de référence.On propose aussi une approximation d’ordre supérieur du MOC basée sur une expansion axiale polynomiale du flux dans chaque maille. Cette méthode permet une réduction du nombre de mailles (et d’inconnues) tout en gardant la même précision.Toutes les méthodes développées dans ce travail de thèse ont été implémentées dans la version APOLLO3 du solveur de transport TDT
The topic of our research is the application of the Method of Long Characteristics (MOC) to solve the neutron transport equation in three-dimensional axial geometries. The strength of the MOC lies in its precision and versatility. As a drawback, it requires a large amount of computational resources. This problem is even more severe in three-dimensional geometries, in which the number of unknowns reaches the order of tens of billions for assembly-level calculations. The first part of the research has dealt with the development of optimized tracking and reconstruction techniques which take advantage of the regularities of three-dimensional axial geometries. These methods have allowed a strong reduction of the memory requirements and of the execution time of the MOC calculation. The convergence of the iterative scheme has been accelerated with a lower-order transport operator (DPN), which is used for the initialization of the solution and for solving the synthetic problem during MOC iterations. The algorithms for the construction and solution of the MOC and DPN operators have been accelerated by using shared-memory parallel paradigms, which are better suited to standard desktop workstations. An important part of this research has been devoted to the implementation of scheduling techniques to improve the parallel efficiency. The convergence of angular quadrature formulas for three-dimensional cases is also studied. Some of these formulas take advantage of the reduced computational cost of the treatment of planar directions and of the vertical direction to speed up the algorithm. The verification of the MOC solver has been done by comparing results with continuous-in-energy Monte Carlo calculations. For this purpose a coupling of the 3D MOC solver with the subgroup method is proposed, to take into account the effects of cross-section resonances. The full calculation of an FBR assembly requires about 2 hours of execution time, with differences of a few pcm with respect to the reference results. We also propose a higher-order scheme of the MOC solver, based on an axial polynomial expansion of the unknown within each mesh. This method allows a reduction of the number of meshes (and unknowns) while keeping the same precision. All the methods developed in this thesis have been implemented in the APOLLO3 version of the neutron transport solver TDT.
35

Hejazi, Hala Ahmad. "Finite volume methods for simulating anomalous transport." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/81751/1/Hala%20Ahmad_Hejazi_Thesis.pdf.

Abstract:
In this thesis a new approach for solving a certain class of anomalous diffusion equations was developed. The theory and algorithms arising from this work will pave the way for more efficient and more accurate solutions of these equations, with applications to science, health and industry. The method of finite volumes was applied to discretise the spatial derivatives, and this was shown to outperform existing methods in several key respects. The stability and convergence of the new method were rigorously established.
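The finite-volume flux balance underlying this approach can be illustrated on classical (non-fractional) diffusion; a minimal sketch under that simplifying assumption, not the thesis's anomalous-transport operator:

```python
import numpy as np

def fv_diffusion_step(u, dt, dx, kappa=1.0):
    """One explicit finite-volume step for u_t = kappa * u_xx on a 1D mesh.
    Each cell average changes by the net diffusive flux through its faces,
    F_{i+1/2} = -kappa * (u_{i+1} - u_i) / dx, with zero-flux boundaries.
    Stable for dt <= dx**2 / (2 * kappa)."""
    flux = -kappa * np.diff(u) / dx              # flux at interior faces
    flux = np.concatenate(([0.0], flux, [0.0]))  # no-flux boundary faces
    return u - dt / dx * np.diff(flux)

dx, dt = 0.01, 4e-5
u = np.zeros(100)
u[45:55] = 1.0                                   # initial block of "mass"
mass0 = u.sum() * dx
for _ in range(500):
    u = fv_diffusion_step(u, dt, dx)
print(abs(u.sum() * dx - mass0))                 # conservation, up to round-off
```

Exact discrete conservation of the cell averages is the property that makes finite volumes attractive for the transport problems studied in the thesis.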
36

Bernoussi, Benaissa. "Compacité et ordre convergence dans les espace des fonctions mesurables et de mesures." Perpignan, 1990. http://www.theses.fr/1990PERP0087.

37

Kratz, Marie. "Some contributions in probability and statistics of extremes." Habilitation à diriger des recherches, Université Panthéon-Sorbonne - Paris I, 2005. http://tel.archives-ouvertes.fr/tel-00239329.

38

Thiel, Alena. "Heterotemporal convergences : travelling significations of order and their adaptations in the claims-making strategies of Accra's Makola market traders." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=228600.

Abstract:
Studies of market trader activism in Africa routinely approach traders' claims-making practices from the perspective of the state's regime of signifying order, in relation to which opposition simply seeks to render itself "legible" (Scott 1998). In contrast, this dissertation contends that one must pay close attention to the multiple significations of order and disorder that exist in any social situation and which, through their continuous permeation, fuel transformations of normative plausibilities and, by extension, of the grounds for claims. Grounded in the theory of the social and political quality of time, I show how the idea of coeval temporalities sensitises observers to the multiple sources of significations of order and disorder, particularly with regard to subjects' relation to authority, and to their creative adaptation in the moment of temporal convergence. The central marketplace of Accra, the capital of Ghana, provides the context for this study. My empirical analysis of this social arena, which is closely connected to global flows of people, capital, consumer items and, inevitably, ideas, including those related to order and the associated grounds of entitlement, adds to this underappreciated theoretical strand an account of the actor-centred process of translation that engenders creative adaptations between converging coeval temporalities.
39

Sato, Fernando Massami. "Numerical experiments with stable versions of the Generalized Finite Element Method." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-16102017-101710/.

Abstract:
The Generalized Finite Element Method (GFEM) is essentially a partition of unity based method (PUM) that explores the Partition of Unity (PoU) concept to combine a set of functions chosen to efficiently approximate the solution locally. Despite its well-known advantages, the method may present some drawbacks. For instance, enlarging the approximation space through enrichment functions may introduce linear dependences into the solving system of equations, as well as the appearance of blending elements. To address these drawbacks, some improved versions of the GFEM have been developed. The Stable GFEM (SGFEM) is the first version considered here, in which the GFEM enrichment functions are modified. The Higher Order SGFEM proposes an additional modification for generating the shape functions attached to the enriched patch. This research aims to present and numerically test these new versions recently proposed for the GFEM. In addition to highlighting their main features, some aspects of the numerical integration used in the higher order SGFEM, in particular, are also addressed. Hence, a splitting rule for the quadrilateral element area, guided by the PoU definition itself, is described in detail. The examples chosen for the numerical experiments consist of 2-D panels whose geometries are favorable for exploring the advantages of each method. Essentially, singular functions with good properties for approximating the solution near corner points, and polynomial functions for approximating smooth solutions, are examined. Moreover, a comparison between the conventional FEM and the methods described herein is made, taking into consideration the scaled condition number and the rates of convergence of the relative errors in displacements. Finally, the numerical experiments show that the Higher Order SGFEM is the most robust and reliable among the versions of the GFEM tested.
O Método dos Elementos Finitos Generalizados (MEFG) é essencialmente baseado no método da partição da unidade, que explora o conceito de partição da unidade para compatibilizar um conjunto de funções escolhidas para localmente aproximar de forma eficiente a solução. Apesar de suas vantagens bem conhecidas, o método pode apresentar algumas desvantagens. Por exemplo, o aumento do espaço de aproximação por meio das funções de enriquecimento pode introduzir dependências lineares no sistema de equações resolvente, assim como o aparecimento de elementos de mistura. Para contornar as desvantagens apontadas acima, algumas versões aprimoradas do MEFG foram desenvolvidas. O MEFG Estável é uma primeira versão aqui considerada na qual as funções de enriquecimento do MEFG são modificadas. O MEFG Estável de ordem superior propõe uma modificação adicional para a geração das funções de forma atreladas ao espaço enriquecido. Esta pesquisa visa apresentar e testar numericamente essas novas versões do MEFG recentemente propostas. Além de destacar suas principais características, alguns aspectos sobre a integração numérica quando usado o MEFG Estável de ordem superior, em particular, são também abordados. Por exemplo, detalha-se uma regra de divisão da área do elemento quadrilateral, guiada pela própria definição de sua partição da unidade. Os exemplos escolhidos para os experimentos numéricos consistem em chapas com geometrias favoráveis para explorar as vantagens de cada método. Essencialmente, examinam-se funções singulares com boas propriedades de aproximar a solução nas vizinhanças de vértices de cantos, bem como funções polinomiais para aproximar soluções suaves. Ademais, uma comparação entre o MEF convencional e os métodos aqui descritos é feita levando-se em consideração o número de condição do sistema escalonado e as razões de convergência do erro relativo em deslocamento. 
Finalmente, os experimentos numéricos mostram que o MEFG Estável de ordem superior é a mais robusta e confiável entre as versões do MEFG testadas.
40

Paditz, Ludwig. "Über mittlere Abweichungen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-112977.

Abstract:
In diesem Artikel werden notwendige und hinreichende Bedingungen für die Gültigkeit von Grenzwertsätzen für mittlere Abweichungen untersucht. In der Terminilogie von J.V.LINNIK (1971) werden die x-Bereiche für mittlere Abweichungen gewöhnlich als "sehr enge" Zonen der integralen normalen Anziehung bezeichnet. Darüber hinaus werden die Restglieder untersucht, die in den asymptotischen Beziehungen auftreten. Die Ordnung der Konvergenzgeschwindigkeit wird angegeben. Frühere Ergebnisse einiger Autoren werden verallgemeinert. Abschließend werden einige Literaturhinweise angegeben
In this paper we study necessary and sufficient conditions for the validity of limit theorems on moderate deviations. In the terminology of Yu. V. Linnik (1971), the x-zones for moderate deviations are usually called "very narrow" zones of integral normal attraction. Moreover, we analyse the remainder terms appearing in the asymptotic relations. Information on the order of the rate of convergence is given. Earlier results by several authors are generalized. Finally, some references are given.
41

Bérard, Bergery Blandine. "Approximation du temps local et intégration par régularisation." Thesis, Nancy 1, 2007. http://www.theses.fr/2007NAN10058/document.

Abstract:
Cette thèse s'inscrit dans la théorie de l'intégration par régularisation de Russo et Vallois. La première partie est consacrée à l'approximation du temps local des semi-martingales continues. Si X est une diffusion réversible, on montre la convergence d'un premier schéma d'approximation vers le temps local de X, en probabilité uniformément sur les compacts. De ce premier schéma, on tire deux autres schémas d'approximation du temps local, l'un valable pour les semi-martingales continues, l'autre pour le mouvement Brownien standard. Dans le cas du mouvement Brownien, une vitesse de convergence dans L^2(Omega) et un résultat de convergence presque sûre sont établis. La deuxième partie de la thèse est consacrée à l'intégrale "forward" et à la variation quadratique généralisée, définies par des limites en probabilité de famille d'intégrales. Dans le cas Höldérien, la convergence presque sûre est établie. Enfin, on montre la convergence au second ordre pour une série de processus particuliers
This work takes place within the theory of integration by regularization of Russo and Vallois. The first part studies approximation schemes for the local time of continuous semimartingales. If X is a reversible diffusion, the convergence of a first approximation scheme to the local time of X is proven, in probability uniformly on compact sets. From this first scheme, two other approximation schemes for the local time are derived: one converges in the semimartingale case, the other in the Brownian case. Moreover, in the Brownian case, we estimate the rate of convergence in L^2(Omega) and a result of almost sure convergence is proven. The second part studies the forward integral and the generalized quadratic variation, which are defined by convergence of families of integrals, in probability uniformly on compact sets. In the case of Hölder processes, almost sure convergence is proven. Finally, second order convergence is studied for a number of particular processes.
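A classical occupation-time estimator of Brownian local time, of the kind such approximation schemes refine, can be sketched by Monte Carlo. The estimator below is the textbook occupation-density approximation, not one of the thesis's schemes, and `n`, `eps` and `seed` are illustrative choices:

```python
import numpy as np

def brownian_local_time_at_zero(t=1.0, n=200_000, eps=0.01, seed=0):
    """Occupation-density approximation of the Brownian local time at 0:
        L_t(0) ~ (1 / (2*eps)) * Lebesgue{ s <= t : |B_s| < eps }.
    The Brownian path is simulated on a grid of n steps of size t/n."""
    rng = np.random.default_rng(seed)
    dt = t / n
    b = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))      # Brownian path
    time_near_zero = dt * np.count_nonzero(np.abs(b) < eps)
    return time_near_zero / (2 * eps)

print(f"approximate local time at 0: {brownian_local_time_at_zero():.3f}")
```

As eps shrinks and n grows, the estimate converges to L_1(0), whose mean equals E|B_1| = sqrt(2/pi).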
42

Manou-Abi, Solym Mawaki. "Théorèmes limites et ordres stochastiques relatifs aux lois et processus stables." Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30025/document.

Abstract:
Cette thèse se compose de trois parties indépendantes, toutes en rapport avec les lois et processus stables. Dans un premier temps, nous établissons des théorèmes de convergence (principe d'invariance) vers des processus stables. Les objets considérés sont des fonctionnelles additives de carrés non intégrables d'une chaîne de Markov à temps discret. L'approche envisagée repose sur l'utilisation des coefficients de mélange pour les chaînes de Markov. Dans un second temps, nous obtenons des vitesses de convergence vers des lois stables dans le théorème central limite généralisé à l'aide des propriétés de la distance idéale de Zolotarev. La dernière partie est consacrée à l'étude des ordres stochastiques convexes ou inégalités de comparaison convexe entre des intégrales stochastiques dirigées par des processus stables. L'idée principale sur laquelle reposent les résultats consiste à adapter au contexte stable le calcul stochastique forward-backward
This PhD Thesis is composed of three independent parts about stable laws and processes. In the first part, we establish convergence theorems (invariance principle) to stable processes, for additive functionals of a discrete time Markov chain that are not assumed to be square-integrable. The method is based on the use of mixing coefficients for Markov chains. In the second part, we obtain some rates of convergence to stable laws in the generalized central limit theorem by means of the Zolotarev ideal probability metric. The last part of the thesis is devoted to the study of convex ordering or convex comparison inequalities between stochastic integrals driven by stable processes. The main idea of our results is based on the forward-backward stochastic calculus for the stable case
43

Champier, Sylvie. "Convergence de schémas numériques type Volumes finis pour la résolution d'équations hyperboliques." Saint-Etienne, 1992. http://www.theses.fr/1992STET4007.

Abstract:
The work presented in this thesis is a theoretical study of the convergence of numerical schemes used to solve linear and nonlinear hyperbolic equations. The approximation methods are of finite volume type, on irregular spatial meshes. We consider upwind schemes and Van Leer-type schemes (of almost first order in space). For each scheme, an estimate in the infinity norm is established for the approximate solution. In the case of rectangles, the scheme is total variation diminishing and, using compactness theorems, the approximate solution is shown to converge to the weak (entropy) solution of the problem in the space of locally integrable functions. This property of the scheme no longer holds in the case of triangles. It is nevertheless possible to obtain a weak estimate on a weighted total variation, which is sufficient to obtain convergence in the linear case. In the nonlinear case, we use the theory of measure-valued solutions introduced by DiPerna. We prove a general theorem on measure-valued solutions which establishes the convergence of the approximate solution, in the space of functions whose p-th power is locally integrable for every p greater than or equal to 1, to the entropy weak solution.
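The upwind finite-volume schemes and the total variation diminishing property discussed in this abstract admit a very small illustration for linear advection on a uniform periodic mesh. This is a generic sketch (uniform rather than irregular mesh, and the step data are our own choice):

```python
import numpy as np

def total_variation(u):
    """Periodic total variation: sum of |jumps|, including the wrap-around."""
    return np.abs(u - np.roll(u, 1)).sum()

def upwind_step(u, cfl):
    """First-order upwind finite-volume step for u_t + a u_x = 0 (a > 0)
    on a periodic 1D mesh; `cfl` = a*dt/dx must lie in [0, 1]."""
    return u - cfl * (u - np.roll(u, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)   # step initial data, TV = 2
tv0 = total_variation(u)
for _ in range(200):
    u = upwind_step(u, cfl=0.8)
    assert total_variation(u) <= tv0 + 1e-12  # TVD: TV never increases
print("total variation after 200 steps:", total_variation(u))
```

Each updated cell value is a convex combination of its neighbours when 0 <= cfl <= 1, which is exactly why the total variation cannot grow; this is the discrete property that the compactness arguments in the thesis build on.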
44

Jeschke, Anja [Verfasser], and Jörn [Akademischer Betreuer] Behrens. "Second Order Convergent Discontinuous Galerkin Projection Method for Dispersive Shallow Water Flows / Anja Jeschke ; Betreuer: Jörn Behrens." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2018. http://d-nb.info/1172880662/34.

45

Jeschke, Anja [Verfasser], and Jörn [Akademischer Betreuer] Behrens. "Second Order Convergent Discontinuous Galerkin Projection Method for Dispersive Shallow Water Flows / Anja Jeschke ; Betreuer: Jörn Behrens." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2018. http://nbn-resolving.de/urn:nbn:de:gbv:18-94463.

46

Martucci, Francesco. "L' ordre économique et monétaire de la Communauté européenne." Paris 1, 2007. http://www.theses.fr/2007PA010310.

Abstract:
The study defends the thesis that Economic and Monetary Union (EMU) allows a member state of the European Union to preserve, in a globalized market economy, the capacity to conduct an economic and monetary policy "collectively". EMU rests on a body of legal rules which, by grounding and framing the choice of economic and monetary policy, form the economic and monetary order of the European Community. The first part of the study is devoted to the distribution of competences and powers provided by the treaty in economic and monetary matters. While the Community is competent to conduct a monetary policy, economic policy remains national, the Community institutions nevertheless being endowed with powers of budgetary discipline and of coordination of economic policies. The economic and monetary order is thus founded on an institutional asymmetry that makes it difficult for the political authority to steer. The economic and monetary order is therefore steered more by substantive rules, which are the subject of the second part of the thesis. On the one hand, the rules of monetary and budgetary discipline promote the macroeconomic stability demanded de facto by the market. On the other hand, as an existential condition of the third stage of EMU, the rules on the convergence of economic policies constitute an instrument allowing the Community to steer the economic and monetary order so as to promote European macroeconomic action in favour of growth and employment.
47

Debroux, Noémie. "Mathematical modelling of image processing problems : theoretical studies and applications to joint registration and segmentation." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR02/document.

Abstract:
In this thesis, we study and jointly address several important image processing problems: registration, which aims at aligning two images through a deformation; image segmentation, whose goal is to find the edges delineating the objects inside an image; and image decomposition, closely related to image denoising, which attempts to partition an image into a smoother version of it called the cartoon and its complementary oscillatory part called the texture, using both local and nonlocal variational approaches. The close relations between these problems motivate joint models in which each task helps the others, overcoming some difficulties inherent to each problem taken in isolation. The first proposed model addresses the topology-preserving, segmentation-guided registration problem in a variational framework. A second joint segmentation and registration model is introduced, studied theoretically and numerically, and then tested on various numerical simulations. The last model presented in this work addresses a specific need expressed by the CEREMA (Centre of analysis and expertise on risks, environment, mobility and planning), namely automatic crack detection on images of bituminous surfaces. Due to the complexity of these images, a joint fine-structure decomposition and segmentation model is proposed to deal with this problem. It is then justified theoretically and numerically, and validated on the provided images
48

Berard, Bergery Blandine. "Approximation du temps local et intégration par régularisation." Phd thesis, Université Henri Poincaré - Nancy I, 2007. http://tel.archives-ouvertes.fr/tel-00181777.

Abstract:
This thesis falls within the theory of integration via regularization of Russo and Vallois. The first part is devoted to the approximation of the local time of continuous semimartingales. We show that if $X$ is a reversible diffusion, then $\frac{1}{\epsilon}\int_0^t \left( \mathbf{1}_{\{ y < X_{s+\epsilon}\}} - \mathbf{1}_{\{ y < X_{s}\}} \right) \left( X_{s+\epsilon}-X_{s} \right)ds$ converges to $L_t^y(X)$, in probability uniformly on compact sets, as $\epsilon \to 0$. From this first scheme, two further approximation schemes for the local time are derived, one valid for continuous semimartingales, the other for standard Brownian motion. In the Brownian case, a rate of convergence in $L^2(\Omega)$ and an almost sure convergence result are established. The second part of the thesis is devoted to the "forward" integral and the generalized quadratic variation, defined as limits in probability of families of integrals. In the Hölder case, almost sure convergence is established. Finally, second-order convergence is shown for a number of particular processes.
49

Paditz, Ludwig. "Über die Annäherung der Verteilungsfunktionen von Summen unabhängiger Zufallsgrößen gegen unbegrenzt teilbare Verteilungsfunktionen unter besonderer Beachtung der Verteilungsfunktion der standardisierten Normalverteilung." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-114206.

Abstract:
This work presents new contributions to basic research in the field of limit theorems of probability theory. Limit theorems for sums of independent random variables occupy an important place across the most diverse lines of research in probability theory and are today no longer of purely theoretical interest. The work presents results on newer problems in the summation theory of independent random variables, which first appeared in the literature in the 1950s and 1960s and have been studied with great interest in recent years. Internationally, two main directions have emerged in the theory of limit theorems: on the one hand, questions about the speed at which a sum's distribution function converges to a prescribed limit distribution function; on the other, questions about an error estimate for the limit distribution function in a finite summation process. Infinitely divisible limit distribution functions are considered first, and then the normal distribution is discussed specifically as the limit distribution. As characteristic quantities, moments, one-sided moments, and pseudo-moments are used. The error estimates are given both as uniform and as non-uniform remainder estimates, including a description of the absolute constants that occur. As proof methods, both the method of characteristic functions and direct methods (the convolution method) are further developed. For an error estimate given by Bikelis in 1965, it was possible for the first time to bound the absolute constant numerically, with C = 114.667. Furthermore, so-called limit theorems for moderate deviations are studied, and remainder estimates for them are derived here for the first time.
In recent years, the approach to proving limit theorems via the convolution of distribution functions has proved groundbreaking and has significantly shaped the development both of the theory of limit theorems for moderate and large deviations and of the study of non-uniform estimates in the central limit theorem. The convolution method is the main proof instrument of the present dissertation. With it, a number of new results were obtained, and in particular new numerical results by means of electronic data processing
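To give a concrete feel for such remainder estimates, here is a small Python check (my own illustration, not from the dissertation) of the classical uniform Berry-Esseen bound sup_x |F_n(x) - Phi(x)| <= C * E|X - mu|^3 / (sigma^3 * sqrt(n)), using the proven iid constant C = 0.4748 (Shevtsova) and exact binomial CDFs; the non-uniform constant C = 114.667 discussed above concerns the (1+|x|)^3-weighted estimate of Bikelis instead.

```python
import math

# Uniform Berry-Esseen check for sums of n fair Bernoulli variables,
# where the third-moment ratio E|X-mu|^3 / sigma^3 equals 1, so the
# bound reduces to 0.4748 / sqrt(n).

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_discrepancy(n):
    # Exact CDF of the standardized Binomial(n, 1/2) against the normal
    # CDF, evaluated on both sides of every jump point (the supremum of
    # |F_n - Phi| over a step function is attained at a jump).
    sigma = math.sqrt(n) / 2.0
    cdf = 0.0
    worst = 0.0
    for k in range(n + 1):
        z = (k - n / 2.0) / sigma
        p = math.comb(n, k) * 0.5 ** n
        worst = max(worst, abs(cdf - phi(z)))   # left limit at the jump
        cdf += p
        worst = max(worst, abs(cdf - phi(z)))   # value at the jump
    return worst

n = 100
d = sup_discrepancy(n)
assert 0.0 < d <= 0.4748 / math.sqrt(n)  # the Berry-Esseen bound holds
```

For n = 100 the actual discrepancy is close to the bound (roughly 0.04 against 0.047), which illustrates why sharpening these absolute constants, as in the dissertation, is a delicate numerical matter.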
50

ALMEIDA, NETO Miguel Santana de. "Os padrões ecomorfológicos apresentados pelas espécies da ordem Characiformes (actinopterygii) são relacionadas com suas adaptações ecológicas?" Universidade Federal Rural de Pernambuco, 2012. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5441.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
The morphological patterns shown by species result from interactions between their phenotype, genotype, and environment through adaptive and evolutionary processes. This study evaluates the role of phylogeny and feeding habits in the morphological patterns of species of the order Characiformes, testing the hypothesis that environmental pressure affects the morphological patterns presented by species, causing morphological divergence or convergence between them. The site selected for this research was the Curralinho pond, a marginal pond of the São Francisco River, located in the state of Pernambuco. The twelve most abundant species of the order Characiformes in the ecosystem were used, captured between March 2007 and February 2008. Through morphological data and diet analysis, we found that the segregation of the Characiformes into morphologically distinct groups reflected their dietary differences, with morphologically similar species tending to occupy the same trophic guild. The piscivores Acestrorhynchus britskii, Acestrorhynchus lacustris, Hoplias malabaricus, Serrasalmus brandtii, and Pygocentrus piraya showed morphological characteristics that allowed them to ingest relatively large prey. However, differences were observed in the swimming performance of these piscivores, indicating that those that swallow fish whole show greater swimming agility, needed to capture evasive prey that have adaptations to avoid and escape predators. The species Roeboides xenodon and Leporinus reinhardti also preferentially used fish in their diets; however, their morphology did not show the adaptations found in most of the piscivores. The lowest swimming speed was found among members of the insectivorous and omnivorous guilds, which use food items with lower mobility than fish, such as insects, or immobile items, such as plant matter.
This lower swimming efficiency does not seem to be a disadvantage for these species, which appear to preferentially occupy the margins of the water body, where resource availability for these fish is greater. The relationship between morphology and trophic ecology was confirmed by a Mantel test, which indicated that the ecological structure of the assemblage studied has probably been shaped by evolutionary adaptations for the use of particular resources. These adaptations become evident when the processes of adaptive convergence and divergence are observed.
