Theses on the topic "A-510"

To see other types of publications on this topic, follow this link: A-510.

Create a correct reference in APA, MLA, Chicago, Harvard, and several other styles.

Choose a source:

Consult the 50 best theses for your research on the topic "A-510".

Next to each source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Van Staden, Jason. « Identification and characterisation of a Cryptococcus laurentii Abo 510 Phytase ». Thesis, Stellenbosch : Stellenbosch University, 2004. http://hdl.handle.net/10019.1/49982.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2004.
ENGLISH ABSTRACT: Phosphorus is vital for growth of all life forms and is a fundamental component of nucleic acids, ATP and several other biological compounds. Oilseeds and cereal grains, two major constituents of the diet of animals, contain phytic acid, which is the main storage form of phosphorus in plant cells. Monogastric animals, such as poultry and pigs, are not capable of utilising the bound phosphorus in phytic acid since they do not produce phytase, the essential hydrolysing enzyme. Microbial phytase is therefore added to the animal feed to enhance the availability of phosphorus and thus minimise phosphorus pollution and phosphorus supplementation in diets. For a phytase to be effective in the poultry and swine industry, it needs to be able to release phytic acid phosphorus in the digestive tract, it must be thermostable to resist feed processing and must be inexpensive to produce. One approach for developing an efficient phytase for the animal feed industry is by identifying new phytases from microorganisms, plants and animals. In this study, 11 strains of the genus Cryptococcus were screened for phytase activity. Initially, a differential agar plate screening method was employed to determine whether any Cryptococcus species were able to express phytase, after which production was confirmed in different liquid media. Cryptococcus laurentii Abo 510 was identified as a strain with significant phytase activity. The C. laurentii Abo 510 strain showed clear zones on the differential-media agar plates and the production of phytase at high levels was observed when using wet cells grown in liquid media. The C. laurentii Abo 510 strain produced maximal phytase activity at a relatively high temperature (62°C) and in an acidic pH range (pH 5.0). This phytase also showed a broad substrate specificity that may assist in the release of other phosphate compounds captured in feedstuff.
Although the phytase did not require any metal ions for its activity, several metal ions caused inhibition of the phytase activity. The enzyme was stable when exposed to 70°C for up to 180 minutes with only 40% loss in activity. Phosphorus addition to the culture media and enzyme assay solution at concentrations exceeding 500 µM inhibited the phytase activity completely. Different carbon sources in the culture media also influenced the phytase activity. The enzyme was determined to be a cell wall-associated phytase with little intracellular activity.
AFRIKAANSE OPSOMMING: Lewende organismes benodig fosfaat vir groei en oorlewing en fosfaat vorm 'n fundamentele komponent van nukleïensure, ATP en verskeie ander biologiese verbindings. Veevoer bestaan meestal uit twee groot bestanddele, naamlik oliesade en graansoorte wat fitiensuur bevat. Fitiensuur is die vernaamste vorm waarin fosfaat in veevoer gestoor word. Enkelmaagdiere soos pluimvee en varke is nie in staat om die fosfaat van die fitiensuur te benut nie, aangesien hierdie diere nie die geskikte hidrolitiese ensiem, fitase, vir die vrystelling van fosfaat besit nie. 'n Mikrobiese fitase-ensiem word derhalwe by veevoer gevoeg om die fosfaatbeskikbaarheid te verhoog. Sodoende word fosfaatbesoedeling en fosfaataanvullings tot die dieet van diere ook verminder. Vir 'n fitase om effektief in die pluimvee en vark-industrie te wees, moet dit fosfaat vanaf fitiensuur in die spysverteringskanaal vrystel, dit moet behandeling by hoë temperature tydens die veevoervervaardiging oorleef en die ensiem moet goedkoop geproduseer kan word. Een van die benaderings om 'n effektiewe fitase vir die dierevoer-industrie te ontwikkel, is om nuwe fitases in mikro-organismes, plante of diere te identifiseer. In hierdie studie is die fitase-aktiwiteit van 11 stamme van die Cryptococcus genus bepaal. Die seleksie vir die produksie van fitase deur die verskillende Cryptococcus stamme was aanvanklik op differensiële agar plate gedoen en in verskillende vloeistofmedia bevestig. 'n Cryptococcus laurentii Abo 510-stam is geïdentifiseer as 'n goeie fitase produseerder. Die C. laurentii Abo 510-stam het helder sones op die differensiële media agar plate getoon en die produksie van hoë fitase-aktiwiteit is in nat selle waargeneem na opkweking in vloeistofmedia. Die C. laurentii Abo 510-ras produseer maksimum fitase-aktiwiteit by 'n redelike hoë temperatuur (62°C) en in 'n suur pH reeks (pH 5.0).
Die fitase het ook 'n wye substraatspesifisiteit wat tot die vrystelling van fosfaat vanaf ander komponente in die veevoer mag bydra. Die fitase het geen metaalione vir sy aktiwiteit benodig nie, maar sekere metaalione het die fitase-aktiwiteit onderdruk. Die ensiem was redelik stabiel by 70°C en het na 180 minute blootstelling slegs 'n 40% verlies in aktiwiteit getoon. Die byvoeging van fosfaat in die kultuurmedium en in die ensiem reaksiemengsel teen konsentrasies bo 500 µM, het die fitase aktiwiteit heeltemal onderdruk. Verskeie koolstofbronne het ook 'n effek op die optimale fitase-aktiwiteit getoon. Die fitase ensiem is met die selwand geassosieer en het baie min intrasellulêre aktiwiteit getoon.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Wallace, J. « Self avoiding walks on the square lattice ». Thesis, University of Reading, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.376195.

Full text
3

Pester, Cornelia. « A residual a posteriori error estimator for the eigenvalue problem for the Laplace-Beltrami operator ». Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601556.

Full text
Abstract:
The Laplace-Beltrami operator corresponds to the Laplace operator on curved surfaces. In this paper, we consider an eigenvalue problem for the Laplace-Beltrami operator on subdomains of the unit sphere in $\R^3$. We develop a residual a posteriori error estimator for the eigenpairs and derive a reliable estimate for the eigenvalues. A global parametrization of the spherical domains and a carefully chosen finite element discretization allow us to use an approach similar to the one for the two-dimensional case. In order to obtain results of the same quality as those for plane domains, weighted norms and an adapted Clément-type interpolation operator have to be introduced.
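For orientation, the class of problems treated here can be sketched as follows; this schematic form is a generic textbook summary, not quoted from the thesis:

```latex
% Dirichlet eigenvalue problem for the Laplace-Beltrami operator \Delta_S
% on a subdomain \omega of the unit sphere S^2 (illustrative sketch only):
\[
  -\Delta_S u = \lambda u \quad \text{in } \omega \subset S^2,
  \qquad u = 0 \quad \text{on } \partial\omega,
\]
% weak form underlying a finite element discretization:
\[
  \int_\omega \nabla_S u \cdot \nabla_S v \,\mathrm{d}s
  \;=\; \lambda \int_\omega u\,v \,\mathrm{d}s
  \qquad \text{for all } v \in H^1_0(\omega).
\]
```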
4

Grosman, Serguei. « The robustness of the hierarchical a posteriori error estimator for reaction-diffusion equation on anisotropic meshes ». Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601418.

Full text
Abstract:
Singularly perturbed reaction-diffusion problems exhibit in general solutions with anisotropic features, e.g. strong boundary and/or interior layers. This anisotropy is reflected in the discretization by using meshes with anisotropic elements. The quality of the numerical solution rests on the robustness of the a posteriori error estimator with respect to both the perturbation parameters of the problem and the anisotropy of the mesh. The simplest local error estimator from the implementation point of view is the so-called hierarchical error estimator. The reliability proof is usually based on two prerequisites: the saturation assumption and the strengthened Cauchy-Schwarz inequality. The proofs of these facts are extended in the present work to the case of the singularly perturbed reaction-diffusion equation and of meshes with anisotropic elements. It is shown that the constants in the corresponding estimates depend neither on the aspect ratio of the elements nor on the perturbation parameters. Utilizing the above arguments, the concluding reliability proof is provided, as well as the efficiency proof of the estimator, both independent of the aspect ratio and the perturbation parameters.
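As a hedged illustration of the problem class (a generic model, not the thesis's exact formulation), a singularly perturbed reaction-diffusion equation reads:

```latex
% Generic singularly perturbed reaction-diffusion model, 0 < \varepsilon \ll 1:
\[
  -\varepsilon^2 \Delta u + u = f \quad \text{in } \Omega,
  \qquad u = 0 \quad \text{on } \partial\Omega.
\]
% For small \varepsilon the solution typically develops boundary layers of
% width O(\varepsilon), which motivates meshes with highly anisotropic elements.
```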
5

Apel, Thomas, et Cornelia Pester. « Clément-type interpolation on spherical domains - interpolation error estimates and application to a posteriori error estimation ». Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601335.

Full text
Abstract:
In this paper, a mixed boundary value problem for the Laplace-Beltrami operator is considered for spherical domains in $R^3$, i.e. for domains on the unit sphere. These domains are parametrized by spherical coordinates $(\varphi, \theta)$, such that functions on the unit sphere are considered as functions in these coordinates. Careful investigation leads to the introduction of a proper finite element space corresponding to an isotropic triangulation of the underlying domain on the unit sphere. Error estimates are proven for a Clément-type interpolation operator, where appropriate weighted norms are used. The estimates are applied to the deduction of a reliable and efficient residual error estimator for the Laplace-Beltrami operator.
6

Grosman, Serguei. « Robust local problem error estimation for a singularly perturbed reaction-diffusion problem on anisotropic finite element meshes ». Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600475.

Full text
Abstract:
Singularly perturbed reaction-diffusion problems exhibit in general solutions with anisotropic features, e.g. strong boundary and/or interior layers. This anisotropy is reflected in the discretization by using meshes with anisotropic elements. The quality of the numerical solution rests on the robustness of the a posteriori error estimator with respect to both the perturbation parameters of the problem and the anisotropy of the mesh. An estimator that has been shown to be one of the most reliable for reaction-diffusion problems is the equilibrated residual method, together with its modification by Ainsworth and Babuška for singularly perturbed problems. However, even the modified method is not robust in the case of anisotropic meshes. The present work modifies the equilibrated residual method for anisotropic meshes. The resulting error estimator is equivalent to the equilibrated residual method in the case of isotropic meshes and is proved to be robust on anisotropic meshes as well. A numerical example confirms the theory.
7

Alegría, Varona Ciro. « Vicente Santuc : El topo en su laberinto. Introducción a un filosofar posible hoy, Lima : Universidad Antonio Ruiz de Montoya, 2005, 510 pp ». Pontificia Universidad Católica del Perú - Departamento de Humanidades, 2006. http://repositorio.pucp.edu.pe/index/handle/123456789/112771.

Full text
8

Dahmen, Wolfgang, Helmut Harbrecht et Reinhold Schneider. « Compression Techniques for Boundary Integral Equations - Optimal Complexity Estimates ». Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600464.

Full text
Abstract:
In this paper, matrix compression techniques in the context of wavelet Galerkin schemes for boundary integral equations are developed and analyzed that exhibit optimal complexity in the following sense. The fully discrete scheme produces approximate solutions within the discretization error accuracy offered by the underlying Galerkin method at a computational expense that is proven to stay proportional to the number of unknowns. Key issues are the second compression, which reduces the near-field complexity significantly, and an additional a posteriori compression. The latter is based on a general result concerning an optimal work balance, which applies, in particular, to the quadrature used to compute the compressed stiffness matrix with sufficient accuracy in linear time. The theoretical results are illustrated by a 3D example on a nontrivial domain.
9

Meznik, Ivan. « A Paper Accepted for the Proceedings but not Presented at the Conference ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-81198.

Full text
Abstract:
The concept of the limit of a function is undoubtedly the key to higher mathematics. In view of the very fine mathematical essence of the notion, mathematics educators continually deliberate over which didactic method to adopt in order to reach a relatively satisfactory level of understanding. The paper presents an approach based on hypotheses put forward with calculator support.
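The calculator-supported formation of a hypothesis about a limit can be imitated in a few lines of code. This sketch is an illustration of the general idea only, not code from the paper; the sample function sin(x)/x and the step sizes are assumptions:

```python
import math

def limit_table(f, a, steps=8):
    """Tabulate f at points approaching a from both sides, the way a
    student with a calculator would, to support a hypothesis about
    the limit of f(x) as x tends to a."""
    table = []
    for k in range(1, steps + 1):
        h = 10.0 ** (-k)
        table.append((a + h, f(a + h)))
        table.append((a - h, f(a - h)))
    return table

# Classic classroom example: the values of sin(x)/x near 0 suggest the limit 1.
for x, y in limit_table(lambda t: math.sin(t) / t, 0.0)[:4]:
    print(f"x = {x:+.4f}  f(x) = {y:.6f}")
```

The printed table does not prove anything, of course; it only supports the conjecture, which is exactly the didactic point the paper makes.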
10

Nkemzi, Boniface, et Bernd Heinrich. « Partial Fourier approximation of the Lamé equations in axisymmetric domains ». Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501145.

Full text
Abstract:
In this paper, we study the partial Fourier method for treating the Lamé equations in three-dimensional axisymmetric domains subjected to nonaxisymmetric loads. We consider the mixed boundary value problem of the linear theory of elasticity with the displacement u, the body force f \in (L_2)^3 and homogeneous Dirichlet and Neumann boundary conditions. The partial Fourier decomposition reduces, without any error, the three-dimensional boundary value problem to an infinite sequence of two-dimensional boundary value problems, whose solutions u_n (n = 0,1,2,...) are the Fourier coefficients of u. This process of dimension reduction is described, and appropriate function spaces are given to characterize the reduced problems in two dimensions. The trace properties of these spaces on the rotational axis and some properties of the Fourier coefficients u_n are proved, which are important for further numerical treatment, e.g. by the finite-element method. Moreover, generalized completeness relations are described for the variational equation, the stresses and the strains. The properties of the resulting system of two-dimensional problems are characterized. Particularly, a priori estimates of the Fourier coefficients u_n and of the error of the partial Fourier approximation are given.
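In cylindrical coordinates (r, φ, z), the decomposition described above takes the following generic shape; this is a standard textbook form given for orientation only, not the paper's exact notation:

```latex
% Partial Fourier decomposition of the displacement in the rotational angle:
\[
  u(r,\varphi,z) \;=\; \tfrac{1}{2}\,u_0(r,z)
  \;+\; \sum_{n=1}^{\infty} \Big( u_n^{c}(r,z)\cos n\varphi
  \;+\; u_n^{s}(r,z)\sin n\varphi \Big),
\]
% each Fourier coefficient solving a two-dimensional boundary value
% problem on the meridian (half-plane) domain.
```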
11

Zollman, Alan. « The Use of Graphic Organizers to Improve Student and Teachers Problem-Solving Skills and Abilities ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-83221.

Full text
12

Kunert, Gerd. « A posteriori error estimation for anisotropic tetrahedral and triangular finite element meshes ». Doctoral thesis, [S.l. : s.n.], 1999. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10324701.

Full text
13

Pester, Cornelia. « A posteriori error estimation for non-linear eigenvalue problems for differential operators of second order with focus on 3D vertex singularities ». Doctoral thesis, Logos Verlag Berlin, 2005. https://monarch.qucosa.de/id/qucosa%3A18520.

Full text
Abstract:
This thesis is concerned with the finite element analysis and the a posteriori error estimation for eigenvalue problems for general operator pencils on two-dimensional manifolds. A specific application of the presented theory is the computation of corner singularities. Engineers use the knowledge of the so-called singularity exponents to predict the onset and the propagation of cracks. All results of this thesis are explained for two model problems, the Laplace and the linear elasticity problem, and verified by numerous numerical results.
14

Abramovitz, Buma, Miryam Berezina, Abraham Berman et Ludmila Shvartsman. « Some Initiatives in Calculus Teaching ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-79286.

Full text
Abstract:
In our experience of teaching Calculus to engineering undergraduates we have had to grapple with many different problems. A major hurdle has been students’ inability to appreciate the importance of the theory. In their view the theoretical part of mathematics is separate from the computing part. In general, students also believe that they can pass their exams even though they do not have a real understanding of the theory behind the problems they are required to solve. In an effort to surmount these difficulties we tried to find ways to make students better understand the theoretical part of Calculus. This paper describes our experience of teaching Calculus. It reports on the continuation of our previous research.
15

Wurnig, Otto. « Solids of Revolution – from the Integration of a given Function to the Modelling of a Problem with the help of CAS and GeoGebra ». Proceedings of the tenth International Conference Models in Developing Mathematics Education. - Dresden : Hochschule für Technik und Wirtschaft, 2009. - S. 600 - 605, 2012. https://slub.qucosa.de/id/qucosa%3A777.

Full text
Abstract:
After the students in high school have learned to integrate a function, the calculation of the volume of a solid of revolution, like a rotated parabola, is taken as a good applied example. The next step is to calculate the volume of an object of reality which is interpreted as a solid of revolution of a given function f(x). The students do all these calculations in the same way and get the same result. Consequently the teachers can easily decide if a result is right or wrong. If the students have learned to work with a graphical or CAS calculator, they can calculate the volume of solids of revolution in reality by modelling a possible fitting function f(x). Every student has to decide which points of the curve that generates the solid of revolution can be taken and which function will suitably fit the curve. In Austrian high schools teachers use GeoGebra, a software package which allows you to insert photographs or scanned material into the geometry window as a background picture. In this case the student and the teacher can check whether the graph of the calculated function fits the generating curve in a useful way.
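The classroom computation described above rests on the disc method, V = π ∫ f(x)² dx, which is easy to check numerically. A minimal sketch under stated assumptions: the midpoint rule and the example curve y = √x are illustrative choices, not taken from the article:

```python
import math

def revolution_volume(f, a, b, n=100_000):
    """Disc method: volume of the solid obtained by rotating y = f(x),
    a <= x <= b, about the x-axis, computed with a midpoint Riemann sum
    of pi * f(x)**2."""
    h = (b - a) / n
    return math.pi * h * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n))

# Rotated parabola y = sqrt(x) on [0, 1]: exact volume is pi * 1/2.
print(revolution_volume(math.sqrt, 0.0, 1.0))  # close to pi/2 ≈ 1.5707963
```

A student fitting a function to a photographed object, as in the GeoGebra activity, would simply swap in the fitted f and the measured interval [a, b].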
16

Schieck, Matthias. « Facetten der Konvergenztheorie regularisierter Lösungen im Hilbertraum bei A-priori-Parameterwahl ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-201000375.

Full text
Abstract:
Die vorliegende Arbeit befasst sich mit der Konvergenztheorie für die regularisierten Lösungen inkorrekter inverser Probleme bei A-priori-Parameterwahl im Hilbertraum. Zunächst werden bekannte Konvergenzratenresultate basierend auf verallgemeinerten Quelldarstellungen systematisch zusammengetragen. Danach wird untersucht, was getan werden kann, wenn solche Quellbedingungen nicht erfüllt sind. Man gelangt zur Analysis von Abstandsfunktionen, mit deren Hilfe ebenfalls Konvergenzraten ermittelt werden können. Praktisch wird eine solche Abstandsfunktion anhand der Betrachtung einer Fredholmschen Integralgleichung 2. Art abgeschätzt. Schließlich werden die Zusammenhänge zwischen bedingter Stabilität, Stetigkeitsmodul und Konvergenzraten erörtert und durch ein Beispiel zur Laplace-Gleichung untermauert.
This dissertation deals with the convergence theory of regularized solutions of ill-posed inverse problems in Hilbert space with a priori parameter choice. First, well-known convergence rate results based on general source conditions are brought together systematically. Then it is studied what can be done if such source conditions are not fulfilled. One arrives at the analysis of distance functions. With their help, convergence rates can be determined, too. As an example, a distance function is calculated by solving a Fredholm integral equation of the second kind. Finally, the cross-connections between conditional stability, the modulus of continuity and convergence rates are treated, accompanied by an example concerning the Laplace equation.
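For context, the standard setting behind such rate results can be sketched as follows; this is textbook material on Tikhonov regularization, given for orientation and not taken from the thesis:

```latex
% Tikhonov regularization of A x = y with noisy data y^\delta, \|y^\delta - y\| \le \delta:
\[
  x_\alpha^\delta = (A^* A + \alpha I)^{-1} A^* y^\delta .
\]
% Under the source condition x^\dagger = (A^* A)^{\nu} w, 0 < \nu \le 1,
% the a priori choice \alpha \sim \delta^{2/(2\nu+1)} yields the rate
\[
  \|x_\alpha^\delta - x^\dagger\| = O\big(\delta^{2\nu/(2\nu+1)}\big).
\]
```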
17

Hellwig, Friederike. « Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods ». Doctoral thesis, Humboldt-Universität zu Berlin, 2019. http://dx.doi.org/10.18452/20034.

Full text
Abstract:
Die vorliegende Arbeit "Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods" beweist optimale Konvergenzraten für vier diskontinuierliche Petrov-Galerkin (dPG) Finite-Elemente-Methoden für das Poisson-Modell-Problem für genügend feine Anfangstriangulierung. Sie zeigt dazu die Äquivalenz dieser vier Methoden zu zwei anderen Klassen von Methoden, den reduzierten gemischten Methoden und den verallgemeinerten Least-Squares-Methoden. Die erste Klasse benutzt ein gemischtes System aus konformen Courant- und nichtkonformen Crouzeix-Raviart-Finite-Elemente-Funktionen. Die zweite Klasse verallgemeinert die Standard-Least-Squares-Methoden durch eine Mittelpunktsquadratur und Gewichtsfunktionen. Diese Arbeit verallgemeinert ein Resultat aus [Carstensen, Bringmann, Hellwig, Wriggers 2018], indem die vier dPG-Methoden simultan als Spezialfälle dieser zwei Klassen charakterisiert werden. Sie entwickelt alternative Fehlerschätzer für beide Methoden und beweist deren Zuverlässigkeit und Effizienz. Ein Hauptresultat der Arbeit ist der Beweis optimaler Konvergenzraten der adaptiven Methoden durch Beweis der Axiome aus [Carstensen, Feischl, Page, Praetorius 2014]. Daraus folgen dann insbesondere die optimalen Konvergenzraten der vier dPG-Methoden. Numerische Experimente bestätigen diese optimalen Konvergenzraten für beide Klassen von Methoden. Außerdem ergänzen sie die Theorie durch ausführliche Vergleiche beider Methoden untereinander und mit den äquivalenten dPG-Methoden.
The thesis "Adaptive Discontinuous Petrov-Galerkin Finite-Element-Methods" proves optimal convergence rates for four lowest-order discontinuous Petrov-Galerkin methods for the Poisson model problem for a sufficiently small initial mesh-size in two different ways by equivalences to two other non-standard classes of finite element methods, the reduced mixed and the weighted Least-Squares method. The first is a mixed system of equations with first-order conforming Courant and nonconforming Crouzeix-Raviart functions. The second is a generalized Least-Squares formulation with a midpoint quadrature rule and weight functions. The thesis generalizes a result on the primal discontinuous Petrov-Galerkin method from [Carstensen, Bringmann, Hellwig, Wriggers 2018] and characterizes all four discontinuous Petrov-Galerkin methods simultaneously as particular instances of these methods. It establishes alternative reliable and efficient error estimators for both methods. A main accomplishment of this thesis is the proof of optimal convergence rates of the adaptive schemes in the axiomatic framework [Carstensen, Feischl, Page, Praetorius 2014]. The optimal convergence rates of the four discontinuous Petrov-Galerkin methods then follow as special cases from this rate-optimality. Numerical experiments verify the optimal convergence rates of both types of methods for different choices of parameters. Moreover, they complement the theory by a thorough comparison of both methods among each other and with their equivalent discontinuous Petrov-Galerkin schemes.
18

Herranz, Sotoca Javier. « Some Digital Signature Schemes with Collective Signers ». Doctoral thesis, Universitat Politècnica de Catalunya, 2005. http://hdl.handle.net/10803/7016.

Full text
Abstract:
Digital signatures are one of the most important consequences of the appearance of public key cryptography, in 1976. These schemes provide authentication, integrity and non-repudiation to digital communications.
Some extensions or variations of the concept of digital signature have been introduced, and many specific realizations of these new types of signature schemes have been proposed.
In this thesis, we deal with the basic definitions and required security properties of traditional signature schemes and two of its extensions: distributed signature schemes and ring signature schemes. We review the state of the art in these two topics; then we propose and analyze new specific schemes for different scenarios.

Namely, we first study distributed signature schemes for general access structures, based on RSA; then we show that such schemes can be used to construct other cryptographic protocols: distributed key distribution schemes and metering schemes. With respect to ring signatures, we propose schemes for both a scenario where the keys are of the Discrete Logarithm type and a scenario where the public keys of users are inferred from their personal identities. Finally, we also propose some distributed ring signature schemes, a kind of scheme which combines the concepts of distributed signatures and ring signatures.

We formally prove the security of all these proposals, assuming that some mathematical problems are hard to solve. Specifically, we base the security of our schemes on the hardness of either the RSA problem, the Discrete Logarithm problem, or the Computational Diffie-Hellman problem.
19

Dakroub, Jad. « Analyse a posteriori d'algorithmes itératifs pour des problèmes non linéaires ». Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066259/document.

Full text
Abstract:
La résolution numérique de n’importe quelle discrétisation d’équations aux dérivées partielles non linéaires requiert le plus souvent un algorithme itératif. En général, la discrétisation des équations aux dérivées partielles donne lieu à des systèmes de grandes dimensions. Comme la résolution des grands systèmes est très coûteuse en terme de temps de calcul, une question importante se pose: afin d’obtenir une solution approchée de bonne qualité, quand est-ce qu’il faut arrêter l’itération afin d’éviter les itérations inutiles ? L’objectif de cette thèse est alors d’appliquer, à différentes équations, une méthode qui nous permet de diminuer le nombre d’itérations de la résolution des systèmes en gardant toujours une bonne précision de la méthode numérique. En d’autres termes, notre but est d’appliquer une nouvelle méthode qui fournira un gain remarquable en terme de temps de calcul. Tout d’abord, nous appliquons cette méthode pour un problème non linéaire modèle. Nous effectuons l’analyse a priori et a posteriori de la discrétisation par éléments finis de ce problème et nous proposons par la suite deux algorithmes de résolution itérative correspondants. Nous calculons les estimations d’erreur a posteriori de nos algorithmes itératifs proposés et nous présentons ensuite quelques résultats d’expérience numériques afin de comparer ces deux algorithmes. Nous appliquerons de même cette approche pour les équations de Navier-Stokes. Nous proposons un schéma itératif et nous étudions la convergence et l’analyse a priori et a posteriori correspondantes. Finalement, nous présentons des simulations numériques montrant l’efficacité de notre méthode
The numerical resolution of any discretization of nonlinear PDEs most often requires an iterative algorithm. In general, the discretization of partial differential equations leads to large systems. As the resolution of large systems is very costly in terms of computation time, an important question arises. To obtain an approximate solution of good quality, when is it necessary to stop the iteration in order to avoid unnecessary iterations? A posteriori error indicators have been studied in recent years owing to their remarkable capacity to enhance both speed and accuracy in computing. This thesis deals with a posteriori error estimation for the finite element discretization of nonlinear problems. Our purpose is to apply a new method that allows us to reduce the number of iterations of the resolution system while keeping a good accuracy of the numerical method. In other words, our goal is to apply a new method that provides a remarkable gain in computation time. For a given nonlinear equation we propose a finite element discretization relying on the Galerkin method. We solve the discrete problem using two iterative methods involving some kind of linearization. For each of them, there are actually two sources of error, namely discretization and linearization. Balancing these two errors can be very important, since it avoids performing an excessive number of iterations. Our results lead to the construction of computable upper indicators for the full error. Similarly, we apply this approach to the Navier-Stokes equations. Several numerical tests are provided to evaluate the efficiency of our indicators.
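The balancing idea described above (iterate only while the linearization error still dominates the discretization error) can be sketched on a toy scalar problem. Everything below is a hypothetical illustration: the equation, the Newton linearization and the fixed mock indicator eta_disc are assumptions, not the thesis's actual setting:

```python
def newton_balanced(f, df, x0, eta_disc, max_it=50):
    """Newton iteration for f(x) = 0 that stops once the linearization
    indicator |f(x_k)| falls below the discretization indicator eta_disc:
    beyond that point, extra iterations cannot improve the overall error."""
    x = x0
    for k in range(max_it):
        residual = f(x)
        if abs(residual) <= eta_disc:
            return x, k              # balanced: stop early
        x -= residual / df(x)        # Newton update
    return x, max_it

# Toy problem x^2 - 2 = 0 with a coarse "discretization accuracy" of 1e-4:
root, iters = newton_balanced(lambda x: x * x - 2.0,
                              lambda x: 2.0 * x, x0=1.5, eta_disc=1e-4)
print(iters)  # stops after 2 updates instead of iterating to machine precision
```

In the thesis's setting the role of |f(x_k)| is played by a computable a posteriori indicator of the linearization error, and eta_disc by the discretization indicator, but the stopping logic is the same.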
20

Mohan-Ram, Vivekanand. « Modelling Geometric Concepts Via Pop-Up Engineering ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80693.

Full text
Abstract:
The main purpose of this workshop is to focus upon a complementary approach to the study of, and the investigation into, concepts related to the Geometry-Space Strand. It ought to benefit educators, especially those who prepare teachers for primary/elementary schools. Participants in this workshop will initially learn the skills needed in Pop-Up Engineering to produce ‘hole’ 3-D paper models which illustrate some particular geometric concepts. The process of the construction of these models allows for building imagery, testing predictions, arousing and satisfying curiosity, connecting to geometric concepts and, most of all, motivating and holding interest. It is envisaged that this approach to the teaching and learning of geometric concepts will provide grounds for discussion, enrichment, exploration, clarification and ownership of ideas, and cross-curriculum integration. It has the potential to reduce the apparent difficulty students experience with the study of geometric concepts.
21

Hoffmann, R., et R. Klein. « Adjusting the Mathematics Curriculum Into the 21st Century ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-82570.

Full text
22

Apel, T., et F. Milde. « Realization and comparison of various mesh refinement strategies near edges ». Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800531.

Full text
Abstract:
This paper is concerned with mesh refinement techniques for treating elliptic boundary value problems in domains with re-entrant edges and corners, and focuses on numerical experiments. After a section about the model problem and discretization strategies, their realization in the experimental code FEMPS3D is described. For two representative examples the numerically determined error norms are recorded, and various mesh refinement strategies are compared.
Styles APA, Harvard, Vancouver, ISO, etc.
23

Hofmann, B., et O. Scherzer. « Local Ill-Posedness and Source Conditions of Operator Equations in Hilbert Spaces ». Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199800957.

Texte intégral
Résumé :
The characterization of the local ill-posedness and the local degree of nonlinearity are of particular importance for the stable solution of nonlinear ill-posed problems. We present assertions concerning the interdependence between the ill-posedness of the nonlinear problem and its linearization. Moreover, we show that the concept of the degree of nonlinearity combined with source conditions can be used to characterize the local ill-posedness and to derive a posteriori estimates for nonlinear ill-posed problems. A posteriori estimates are widely used in finite element and multigrid methods for the solution of nonlinear partial differential equations, but these techniques are in general not applicable to inverse and ill-posed problems. Additionally, we show for the well-known Landweber method and the iteratively regularized Gauss-Newton method that they satisfy a posteriori estimates under source conditions; this can be used to prove convergence rate results.
Styles APA, Harvard, Vancouver, ISO, etc.
24

Thomas, Kerry J. « Teaching Mathematical Modelling to Tomorrow's Mathematicians or, You too can make a million dollars predicting football results ». Turning dreams into reality : transformations and paradigm shifts in mathematics education. - Grahamstown : Rhodes University, 2011. - S. 334 - 339, 2012. https://slub.qucosa.de/id/qucosa%3A1949.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
25

Cusi, Annalisa. « Analyzing the effects of a linguistic approach to the teaching of algebra : students tell “stories of development” revealing new competencies and conceptions ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-79613.

Texte intégral
Résumé :
This work is part of a wide-ranging long-term project aimed at fostering students’ acquisition of symbol sense through teaching experiments on proof in elementary number theory (ENT). In this paper, in particular, we highlight the positive effects of our approach by analysing the written reflections that the students involved produced at the end of the project. These reflections testify to an increased level of awareness, developed by the students, of the role played by algebraic language as a tool for thinking, and to a positive evolution in their vision of algebra.
Styles APA, Harvard, Vancouver, ISO, etc.
26

Stöcker, Martin. « Globale Optimierungsverfahren, garantiert globale Lösungen und energieeffiziente Fahrzeuggetriebe ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-166805.

Texte intégral
Résumé :
The focus of this thesis is on methods for solving nonlinear optimization problems with the requirement of finding every global optimum with a guarantee and approximating it to an accuracy fixed in advance. Closely connected with this deterministic optimization is the computation of bounds on the range of a function over a given hyperrectangle. Various approaches, e.g. based on interval arithmetic, are presented and analyzed. In particular, methods for generating bounds for multivariate polynomials and rational functions using their representation in the Bernstein polynomial basis are developed further. The thesis then describes step by step the building blocks of a deterministic optimization method that uses the computed range bounds, and examines particular features of the optimization of polynomial problems in more detail. The analysis and treatment of a problem from the development process for vehicle transmissions shows how the approaches developed for solving nonlinear optimization problems can support the search for energy-efficient transmissions with an optimal structure. Contact the author: [Nachname] [.] [Vorname] [@] gmx [.] de
Styles APA, Harvard, Vancouver, ISO, etc.
27

Johannes, Jan. « Verallgemeinerte Maximum-Likelihood-Methoden und der selbstinformative Grenzwert ». Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=96644258X.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
28

Feld, Niels. « Faisceaux et modules de Milnor-Witt ». Thesis, Université Grenoble Alpes, 2021. http://www.theses.fr/2021GRALM001.

Texte intégral
Résumé :
On généralise la théorie des modules de cycles de Rost en utilisant la K-théorie de Milnor-Witt au lieu de la K-théorie de Milnor. On obtient un cadre (quadratique) pour étudier certains complexes de cycles et leurs groupes de (co)homologie. De plus, on démontre que le cœur de la catégorie homotopique stable de Morel-Voevodsky au-dessus d'un corps parfait (équipé de sa t-structure homotopique) est équivalent à la catégorie des modules de cycles de Milnor-Witt. Finalement, on explore une conjecture de Morel concernant les transferts de Bass-Tate définis sur la contraction d'un faisceau homotopique et démontre que la conjecture est vraie à coefficients rationnels. On étudie aussi les relations entre faisceaux homotopiques (contractés), faisceaux homotopiques avec transferts généralisés et MW-faisceaux homotopiques, et démontre une équivalence de catégories. Comme applications, on décrit l'image essentielle du foncteur canonique qui oublie les MW-transferts et utilise ces résultats pour discuter de la conjecture de conservativité en A1-homotopie due à Bachmann et Yakerson.
We generalize Rost's theory of cycle modules using Milnor-Witt K-theory instead of classical Milnor K-theory. We obtain a (quadratic) setting to study general cycle complexes and their (co)homology groups. Moreover, we prove that the heart of the Morel-Voevodsky stable homotopy category over a perfect field (equipped with its homotopy t-structure) is equivalent to the category of Milnor-Witt cycle modules. Finally, we explore a conjecture of Morel about the Bass-Tate transfers defined on the contraction of a homotopy sheaf and prove that the conjecture is true with rational coefficients. We also study the relations between (contracted) homotopy sheaves, sheaves with generalized transfers and MW-homotopy sheaves, and prove an equivalence of categories. As applications, we describe the essential image of the canonical functor that forgets MW-transfers and use these results to discuss the conservativity conjecture in A1-homotopy due to Bachmann and Yakerson.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Köhler, Karoline Sophie. « On efficient a posteriori error analysis for variational inequalities ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17635.

Texte intégral
Résumé :
Effiziente und zuverlässige a posteriori Fehlerabschätzungen sind eine Hauptzutat für die effiziente numerische Berechnung von Lösungen zu Variationsungleichungen durch die Finite-Elemente-Methode. Die vorliegende Arbeit untersucht zuverlässige und effiziente Fehlerabschätzungen für beliebige Finite-Elemente-Methoden und drei Variationsungleichungen, nämlich das Hindernisproblem, das Signorini-Problem und das Bingham-Problem in zwei Raumdimensionen. Die Fehlerabschätzungen hängen vom zum Problem gehörenden Lagrange-Multiplikator ab, der eine Verbindung zwischen der Variationsungleichung und dem zugehörigen linearen Problem darstellt. Effizienz und Zuverlässigkeit werden bezüglich eines totalen Fehlers gezeigt. Die Fehlerabschätzungen fordern minimale Regularität. Die Approximation der exakten Lösung erfüllt die Dirichlet-Randbedingungen, und die Approximation des Lagrange-Multiplikators ist nicht-positiv im Falle des Hindernis- und Signorini-Problems und hat Betrag kleiner gleich 1 für das Bingham-Problem. Dieses allgemeine Vorgehen ermöglicht das Einbinden nicht-exakter diskreter Lösungen, welche im Kontext dieser Ungleichungen auftreten. Aus dem Blickwinkel der Anwendungen sind Effizienz und Zuverlässigkeit in Bezug auf den Fehler der primalen Variablen in der Energienorm von großem Interesse. Solche Abschätzungen hängen von der Wahl eines effizienten diskreten Lagrange-Multiplikators ab. Im Falle des Hindernis- und Signorini-Problems werden positive Beispiele für drei Finite-Elemente-Methoden, die konforme Courant-Methode, die nicht-konforme Crouzeix-Raviart-Methode und die gemischte Raviart-Thomas-Methode niedrigster Ordnung, hergeleitet. Partielle Resultate liegen im Fall des Bingham-Problems vor. Numerische Experimente heben die theoretischen Ergebnisse hervor und zeigen Effizienz und Zuverlässigkeit. Die numerischen Tests legen nahe, dass der aus den Abschätzungen resultierende adaptive Algorithmus mit optimaler Konvergenzrate konvergiert.
Efficient and reliable a posteriori error estimates are a key ingredient for the efficient numerical computation of solutions of variational inequalities by the finite element method. This thesis studies such reliable and efficient error estimates for arbitrary finite element methods and three representative variational inequalities, namely the obstacle problem, the Signorini problem, and the Bingham problem in two space dimensions. The error estimates rely on a problem-connected Lagrange multiplier, which provides a connection between the variational inequality and the corresponding linear problem. Reliability and efficiency are shown with respect to some total error, under minimal regularity assumptions. The approximation to the exact solution satisfies the Dirichlet boundary conditions, and an approximation of the Lagrange multiplier is non-positive in the case of the obstacle and Signorini problems and has an absolute value smaller than 1 for the Bingham flow problem. These general assumptions allow for reliable and efficient a posteriori error analysis even in the presence of inexact solves, which naturally occur in the context of variational inequalities. From the point of view of applications, reliability and efficiency with respect to the error of the primal variable in the energy norm are of great interest. Such estimates depend on the efficient design of a discrete Lagrange multiplier. Affirmative examples of discrete Lagrange multipliers are presented for the obstacle and Signorini problems and three different first-order finite element methods, namely the conforming Courant, the non-conforming Crouzeix-Raviart, and the mixed Raviart-Thomas FEM. Partial results exist for the Bingham flow problem. Numerical experiments highlight the theoretical results and show efficiency and reliability. The numerical tests suggest that the resulting adaptive algorithms converge with optimal convergence rates.
Styles APA, Harvard, Vancouver, ISO, etc.
30

Merdon, Christian. « Aspects of guaranteed error control in computations for partial differential equations ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2013. http://dx.doi.org/10.18452/16818.

Texte intégral
Résumé :
Diese Arbeit behandelt garantierte Fehlerkontrolle für elliptische partielle Differentialgleichungen anhand des Poisson-Modellproblems, des Stokes-Problems und des Hindernisproblems. Hierzu werden garantierte obere Schranken für den Energiefehler zwischen exakter Lösung und diskreten Finite-Elemente-Approximationen erster Ordnung entwickelt. Ein verallgemeinerter Ansatz drückt den Energiefehler durch Dualnormen eines oder mehrerer Residuen aus. Hinzu kommen berechenbare Zusatzterme, wie Oszillationen der gegebenen Daten, mit expliziten Konstanten. Für die Abschätzung der Dualnormen der Residuen existieren viele verschiedene Techniken. Diese Arbeit beschäftigt sich vorrangig mit Equilibrierungsschätzern, basierend auf Raviart-Thomas-Elementen, welche effiziente garantierte obere Schranken ermöglichen. Diese Schätzer werden mit einem Postprocessing-Verfahren kombiniert, das deren Effizienz mit geringem zusätzlichen Rechenaufwand deutlich verbessert. Nichtkonforme Finite-Elemente-Methoden erzeugen zusätzlich ein Inkonsistenzresiduum, dessen Dualnorm mit Hilfe diverser konformer Approximationen abgeschätzt wird. Ein Nebenaspekt der Arbeit betrifft den expliziten residuen-basierten Fehlerschätzer, der für gewöhnlich optimale und leicht zu berechnende Verfeinerungsindikatoren für das adaptive Netzdesign liefert, aber nur schlechte garantierte obere Schranken. Eine neue Variante, die auf den equilibrierten Flüssen des Luce-Wohlmuth-Fehlerschätzers basiert, führt zu stark verbesserten Zuverlässigkeitskonstanten. Eine Vielzahl numerischer Experimente vergleicht alle implementierten Fehlerschätzer und zeigt, dass effiziente und garantierte Fehlerkontrolle in allen vorliegenden Modellproblemen möglich ist. Insbesondere zeigt ein Modellproblem, wie die Fehlerschätzer erweitert werden können, um auch auf Gebieten mit gekrümmten Rändern garantierte obere Schranken zu liefern.
This thesis studies guaranteed error control for elliptic partial differential equations on the basis of the Poisson model problem, the Stokes equations and the obstacle problem. The error control derives guaranteed upper bounds for the energy error between the exact solution and different finite element discretisations, namely conforming and nonconforming first-order approximations. The unified approach expresses the energy error by dual norms of one or more residuals plus computable extra terms, such as oscillations of the given data, with explicit constants. There exist various techniques for the estimation of the dual norms of such residuals. This thesis focuses on equilibration error estimators based on Raviart-Thomas finite elements, which permit efficient guaranteed upper bounds. The proposed postprocessing in this thesis considerably increases their efficiency at almost no additional computational costs. Nonconforming finite element methods also give rise to a nonconsistency residual that permits alternative treatment by conforming interpolations. A side aspect concerns the explicit residual-based error estimator that usually yields cheap and optimal refinement indicators for adaptive mesh refinement but not very sharp guaranteed upper bounds. A novel variant of the residual-based error estimator, based on the Luce-Wohlmuth equilibration design, leads to highly improved reliability constants. A large number of numerical experiments compares all implemented error estimators and provides evidence that efficient and guaranteed error control in the energy norm is indeed possible in all model problems under consideration. Particularly, one model problem demonstrates how to extend the error estimators for guaranteed error control on domains with curved boundary.
Styles APA, Harvard, Vancouver, ISO, etc.
31

Pönitz, Kornelia. « Finite-Elemente-Mortaring nach einer Methode von J. A. Nitsche für elliptische Randwertaufgaben ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601648.

Texte intégral
Résumé :
Many technical processes lead to boundary value problems with partial differential equations, which can be solved approximately by finite element methods. Special variants of these methods are finite element mortar methods. They allow working with meshes that do not match across subdomain interfaces, which can be advantageous for problems with complicated geometries, boundary layers, or jumping coefficients, as well as for time-dependent problems. Likewise, different discretization methods can be coupled across the individual subdomains. In this thesis, finite element mortaring based on a method of Nitsche is investigated for elliptic boundary value problems on two-dimensional polygonal domains. Of particular interest are non-regular solutions (u \in H^{1+\delta}(\Omega), \delta>0) with corner singularities for the Poisson equation and for the Lamé equation with mixed boundary conditions. Furthermore, singularly perturbed reaction-diffusion problems are considered, whose solutions exhibit, in addition to corner singularities, anisotropic behavior in boundary layers. For each of these three problem classes, Nitsche mortaring is presented. Some properties of the mortar discretization are stated, and a priori error estimates in an H^1-like norm as well as in the L_2 norm are derived. On locally refined triangular meshes, optimal orders of convergence can be shown even for solutions with corner singularities. For solutions with anisotropic behavior, anisotropic triangular meshes are used in addition. Here, too, the orders of convergence of classical finite element methods without mortaring are achieved. Numerical experiments illustrate the method and the convergence results.
Styles APA, Harvard, Vancouver, ISO, etc.
32

Grosman, Sergey. « Adaptivity in anisotropic finite element calculations ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200600815.

Texte intégral
Résumé :
When the finite element method is used to solve boundary value problems, the corresponding finite element mesh is appropriate if it reflects the behavior of the true solution. A posteriori error estimators are suited to constructing adequate meshes. They are useful for measuring the quality of an approximate solution and for designing adaptive solution algorithms. Singularly perturbed problems in general yield solutions with anisotropic features, e.g. strong boundary or interior layers. For such problems it is useful to use anisotropic meshes in order to reach the maximal order of convergence. Moreover, the quality of the numerical solution rests on the robustness of the a posteriori error estimation with respect to both the anisotropy of the mesh and the perturbation parameters. There exist different possibilities to measure the a posteriori error in the energy norm for the singularly perturbed reaction-diffusion equation. One of them is the equilibrated residual method, which is known to be robust as long as one solves the auxiliary local Neumann problems exactly on each element. We provide a basis for an approximate solution of the aforementioned auxiliary problem and show that this approximation does not affect the quality of the error estimation. Another approach that we develop for the a posteriori error estimation is the hierarchical error estimator. The robustness proof for this estimator involves several stages, including the strengthened Cauchy-Schwarz inequality and the error reduction property for the chosen space enrichment. In the rest of the work we deal with adaptive algorithms. We provide an overview of the existing methods for isotropic meshes and then generalize the ideas to the anisotropic case. For the resulting algorithm the error reduction estimates are proven for the Poisson equation and for the singularly perturbed reaction-diffusion equation. Convergence for the Poisson equation is also shown.
Numerical experiments for the equilibrated residual method, for the hierarchical error estimator and for the adaptive algorithm confirm the theory. The adaptive algorithm shows its potential by creating the anisotropic mesh for the problem with the boundary layer, starting from a very coarse isotropic mesh.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Wurnig, Otto. « Solids of Revolution – from the Integration of a given Function to the Modelling of a Problem with the help of CAS and GeoGebra ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-81160.

Texte intégral
Résumé :
After students in high school have learned to integrate a function, the calculation of the volume of a solid of revolution, such as a rotated parabola, is taken as a good applied example. The next step is to calculate the volume of a real object which is interpreted as a solid of revolution of a given function f(x). The students do all these calculations in the same way and get the same result; consequently the teachers can easily decide whether a result is right or wrong. If the students have learned to work with a graphical or CAS calculator, they can calculate the volume of real solids of revolution by modelling a suitably fitted function f(x). Every student has to decide which points of the curve that generates the solid of revolution can be taken and which function will suitably fit the curve. In Austrian high schools teachers use GeoGebra, software that allows photographs or scanned material to be inserted into the geometry window as a background picture. In this case the student and the teacher can check whether the graph of the calculated function fits the generating curve in a useful way.
Styles APA, Harvard, Vancouver, ISO, etc.
34

Reguly, Afonso. « Fragilidade a têmpera em aços SAE 5160 ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1999. http://hdl.handle.net/10183/119237.

Texte intégral
Résumé :
O presente trabalho centra-se no fenômeno denominado de Fragilidade à Têmpera. Este fenômeno, observado em aços de alto carbono temperados, leva a uma fratura intergranular e está associado a segregação de fósforo e formação de cementita nos contornos de grão. Durante este estudo dois aços com composição química típica dos aços comerciais SAE 5160, mas com dois diferentes níveis de fósforo, foram utilizados para avaliar a influência da Fragilidade à Têmpera na resistência ao Impacto Charpy a temperatura ambiente. Os diferentes tratamentos térmicos realizados incluíram têmpera e revenidos convencionais e tratamentos térmicos especiais realizados na Gleeble 1500. Com o tratamento térmico convencional microestruturas martensíticas e bainítica foram obtidas. Para os aços martensíticos as temperaturas de austenitização variaram entre 830 e 1100 °C com o revenido variando entre como temperado até 500 °C por uma hora. A Gleeble 1500 foi utilizada para estudar a influência de um curto ciclo de austenitização na Fragilidade à Têmpera. Os resultados indicaram um melhor desempenho do aço de baixo P para todas as condições de tratamento térmico utilizadas. Os ciclos curtos de austenitização indicaram que a fragilidade à têmpera não pode ser evitada por este procedimento. Baseado na análise dos experimentos realizados com o aço SAE 5160 são apresentadas sugestões para minimizar os efeitos da Fragilidade à Têmpera em aços de alto carbono temperados e revenidos.
The present work is centered on the quench embrittlement phenomenon. This phenomenon, observed in hardened high carbon steels, leads to intergranular fracture and is associated with phosphorus segregation and cementite formation at grain boundaries. During this study, two steels with compositions typical of commercial SAE 5160 steel, but with two different levels of phosphorus, were used to evaluate the influence of quench embrittlement on room temperature impact toughness. The heat treatments included conventional quench and temper treatments and special heat treatments processed on the Gleeble 1500. With conventional heat treatment, martensitic and bainitic microstructures were obtained. For the martensitic microstructures, austenitizing temperatures ranged from 830 to 1100 °C with tempering temperatures from as-quenched to 500 °C. The Gleeble 1500 was used to study the influence of short-time heat treatment on quench embrittlement. The low-P steel exhibited better performance than the high-P steel under all heat treatment conditions. Results of the short-holding-time treatment indicated that quench embrittlement cannot be avoided by this procedure, and similar behavior between the conventional and short-holding-time treatments was observed. Based on an analysis of the experiments performed with the SAE 5160 steel, suggestions to minimize the effects of quench embrittlement in high carbon quenched and tempered steels are discussed.
Styles APA, Harvard, Vancouver, ISO, etc.
35

Polujan, Alexandr [Verfasser], et Alexander [Gutachter] Pott. « Boolean and vectorial functions : a design-theoretic point of view / Alexandr Polujan ; Gutachter : Alexander Pott ». Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2021. http://d-nb.info/1239811446/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Fromenteau, Clément. « Sur le champ de Teichmüller des surfaces de Hopf ». Thesis, Angers, 2017. http://www.theses.fr/2017ANGE0094.

Texte intégral
Résumé :
Le but de cette thèse est d’utiliser le langage des champs pour étudier les espaces de Teichmüller dans un cadre analytique. L’approche classique de Kodaira et Spencer dans la théorie des déformations infinitésimales et le théorème d’existence de déformation verselle de Kuranishi ne permettent pas en effet une étude de l’espace de Teichmüller comme objet analytique global. La théorie des champs est particulièrement adaptée à l’étude des espaces de modules et des quotients, mais elle n’a été essentiellement employée que dans un cadre algébrique. Après la présentation de son adaptation au cadre analytique, on s’attachera à l’utiliser sur l’exemple simple mais néanmoins pertinent des surfaces de Hopf, premier exemple de variété compacte complexe non kählérienne, dont l’espace de Teichmüller n’est ni séparé, ni une orbifold. On donne en particulier deux atlas concrets de ce champ et on calcule certains groupes d’homotopie et de cohomologie. Enfin, on donne des applications aux classes d’homotopie de déformations de surfaces de Hopf.
The goal of this PhD thesis is to use stacks to study Teichmüller spaces in an analytic framework. Kodaira-Spencer theory of infinitesimal deformations is not enough to describe the Teichmüller space as a global analytic object. Stack theory is very well adapted to studying moduli spaces and quotients; however, it has essentially been developed in an algebraic context. We adapt this theory to an analytic framework and use it on the simple but interesting example of Hopf surfaces. In particular, we give two concrete atlases of the Teichmüller stack of Hopf surfaces. We compute some of its homotopy groups and homology groups. Finally, we give some applications to homotopy classes of deformations of Hopf surfaces.
Styles APA, Harvard, Vancouver, ISO, etc.
37

Hou, Zhanyuan. « A class of functional differential equations of mixed type ». Thesis, London Metropolitan University, 1994. http://repository.londonmet.ac.uk/3300/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Rickett, Lydia. « A mathematical analysis of digestive processes in a model stomach ». Thesis, University of East Anglia, 2013. https://ueaeprints.uea.ac.uk/48042/.

Texte intégral
Résumé :
It is of great medical interest to gain a better understanding of digestion in the human stomach, not least because of the relevance to nutrient and drug delivery. The Institute of Food Research has developed the Dynamic Gastric Model, a physical, in vitro model stomach capable of re-creating the physiological conditions experienced in vivo. The aim of this thesis is to examine mathematically digestion in the main body (top section) of the Dynamic Gastric Model, where gentle wall movements and gastric secretions result in the outside layer of the digesta “sloughing off”, before passing into the bottom section for further processing. By considering a simplified, local description of the flow close to the wall, we may gain an insight into the mechanisms behind this behaviour. This description focuses on the mixing of two layers of creeping fluid through temporal instability of the perturbed fluid interface. Some attention is also paid to a more general study of the surrounding flow field. Linear, two-fluid flow next to a prescribed, sinusoidally moving wall is found to be stable in all cases. Studies of thin film flow next to such a wall suggest that the same may be true of the nonlinear case, although in the case of an inclined wall wave steepening is found to occur for early times. A linear instability is found for small wavenumber disturbances when the wall is modelled as an elastic beam or when we include a scalar material field that acts to alter the surface tension at the interface. An examination of Navier–Stokes flow of a single fluid through a diverging channel (representing a small strip through the centre of the main body) reveals that the flow loses symmetry at a lower Reynolds number than flow through a channel of uniform width. Our results are interpreted in terms of the Dynamic Gastric Model.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Rabus, Hella. « On the quasi-optimal convergence of adaptive nonconforming finite element methods in three examples ». Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2014. http://dx.doi.org/10.18452/16970.

Texte intégral
Résumé :
Eine Vielzahl von Anwendungen in der numerischen Simulation der Strömungsdynamik und der Festkörpermechanik begründen die Entwicklung von zuverlässigen und effizienten Algorithmen für nicht-standard Methoden der Finite-Elemente-Methode (FEM). Um Freiheitsgrade zu sparen, wird in jedem Durchlauf des adaptiven Algorithmus lediglich ein Teil der Gebiete verfeinert. Einige Gebiete bleiben daher möglicherweise verhältnismäßig grob. Die Analyse der Konvergenz und vor allem die der Optimalität benötigt daher über die a priori Fehleranalyse hinausgehende Argumente. Etablierte adaptive Algorithmen beruhen auf collective marking, d.h. die zu verfeinernden Gebiete werden auf Basis eines Gesamtfehlerschätzers markiert. Bei adaptiven Algorithmen mit separate marking wird der Gesamtfehlerschätzer in einen Volumenterm und in einen Fehlerschätzerterm aufgespalten. Da der Volumenterm unabhängig von der diskreten Lösung ist, kann einer schlechten Datenapproximation durch eine lokal tiefe Verfeinerung begegnet werden. Bei hinreichender Datenapproximation wird das Gitter dagegen bezüglich des neuen Fehlerschätzerterms wie üblich level-orientiert verfeinert. Die numerischen Experimente dieser Arbeit liefern deutliche Indizien der quasi-optimalen Konvergenz für den in dieser Arbeit untersuchten adaptiven Algorithmus, der auf separate marking beruht. Der Parameter, der die Verbesserung der Datenapproximation sicherstellt, ist frei wählbar. Dadurch ist es erstmals möglich, eine ausreichende und gleichzeitig optimale Approximation der Daten innerhalb weniger Durchläufe zu erzwingen. Diese Arbeit ermöglicht es, Standardargumente auch für die Konvergenzanalyse von Algorithmen mit separate marking zu verwenden. Dadurch gelingt es Quasi-Optimalität des vorgestellten Algorithmus gemäß einer generellen Vorgehensweise für die drei Beispiele, dem Poisson Modellproblem, dem reinen Verschiebungsproblem der linearen Elastizität und dem Stokes Problem, zu zeigen.
Various applications in computational fluid dynamics and solid mechanics motivate the development of reliable and efficient adaptive algorithms for nonstandard finite element methods (FEMs). To reduce the number of degrees of freedom, in adaptive algorithms only a selection of finite element domains is marked for refinement on each level. Since some element domains may stay relatively coarse, even the analysis of convergence and, more importantly, the analysis of optimality require new arguments beyond an a priori error analysis. In adaptive algorithms based on collective marking, a (total) error estimator is used as a refinement indicator. For separate marking strategies, the (total) error estimator is split into a volume term and an error estimator term, which estimates the error. Since the volume term is independent of the discrete solution, a poor data approximation may be remedied by a possibly high degree of local mesh refinement. Otherwise, a standard level-oriented mesh refinement based on an error estimator term is performed. This observation results in a natural adaptive algorithm based on separate marking, which is analysed in this thesis. The results of the numerical experiments displayed in this thesis provide strong evidence for the quasi-optimality of the presented adaptive algorithm based on separate marking for all three model problems. Furthermore, its flexibility (in particular the free steering parameter for data approximation) allows a sufficient and optimal data approximation within just a few levels of the adaptive scheme. This thesis adapts standard arguments for optimal convergence to adaptive algorithms based on separate marking with a possibly high degree of local mesh refinement, and proves quasi-optimality following a general methodology for three model problems, i.e., the Poisson model problem, the pure displacement problem in linear elasticity and the Stokes equations.
40

Mohamed, Mabruka. « A numerical study of the complex Lorenz system as a dynamical model ». Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/12136/.

Full text
Abstract:
A nonlinear dynamical system is a mathematical model for a portion of the physical world in which continuously varying components interact with each other. Such systems are complex, and it is difficult to predict how they will react to changes in the driving parameters and initial conditions. This dissertation is concerned with the numerical aspects of controlling the complex Lorenz system, which models the production of the magnetic field in sunspots. The study pursues this aim by applying several different approaches to the system.
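As a minimal illustration (not taken from the dissertation), one standard form of the complex Lorenz equations, with x and y complex, z real, and complex parameters r and a, can be integrated with a hand-rolled RK4 step; the parameter values below are purely illustrative.

```python
import numpy as np

def complex_lorenz_rhs(state, sigma=2.0, b=0.8, r=2.0 + 1.0j, a=1.0 - 0.3j):
    # one standard form of the complex Lorenz equations:
    #   x' = -sigma x + sigma y
    #   y' = (r - z) x - a y
    #   z' = -b z + Re(conj(x) y)
    x, y, z = state
    dx = -sigma * x + sigma * y
    dy = (r - z) * x - a * y
    dz = -b * z + (x.conjugate() * y).real
    return np.array([dx, dy, dz], dtype=complex)

def rk4_step(f, state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(state0, dt=1e-3, steps=5000):
    state = np.array(state0, dtype=complex)
    traj = [state]
    for _ in range(steps):
        state = rk4_step(complex_lorenz_rhs, state, dt)
        traj.append(state)
    return np.array(traj)

traj = integrate([0.1 + 0.1j, 0.1, 0.1])
```

A built-in consistency check of this form of the system is that z, stored as a complex number for convenience, must stay real along the trajectory.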
41

Jalal, L. « A calculation of a half-integral weight multiplier system on SU(2,1) ». Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/766292/.

Full text
Abstract:
In this thesis, we construct a half-integral weight multiplier system on the group SU(2,1). In order to do so, we first find a formula for a 2-cocycle representing the double cover of SU(2,1)(k), where k is a local field. For each non-archimedean local field k, we describe how the cocycle splits on a compact open subgroup. The multiplier system is then expressed in terms of the product of the local splittings at each prime.
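The double cover constructed in the thesis is far beyond a quick sketch, but the underlying mechanism, a group extension encoded by a 2-cocycle with values in {+1, -1}, can be illustrated on the smallest possible example. The following toy example (my own, not from the thesis) checks the cocycle identity for the 2-cocycle on Z/2 that represents the nontrivial double cover Z/4 → Z/2.

```python
from itertools import product

G = (0, 1)  # the base group Z/2, written additively

def c(a, b):
    # a normalised 2-cocycle on Z/2 with values in {+1, -1}; this choice
    # represents the nontrivial double cover Z/4 -> Z/2
    return -1 if (a == 1 and b == 1) else 1

# cocycle identity: c(g, h) * c(g+h, k) == c(g, h+k) * c(h, k)
for g, h, k in product(G, repeat=3):
    assert c(g, h) * c((g + h) % 2, k) == c(g, (h + k) % 2) * c(h, k)

def mult(p, q):
    # multiplication in the extension: (s, g) * (t, h) = (s*t*c(g, h), g + h)
    (s, g), (t, h) = p, q
    return (s * t * c(g, h), (g + h) % 2)

def order(p, identity=(1, 0)):
    q, n = p, 1
    while q != identity:
        q, n = mult(q, p), n + 1
    return n

# the lift (1, 1) has order 4, so the cover is the cyclic group Z/4,
# not the split extension Z/2 x Z/2: the cocycle is nontrivial
lift_order = order((1, 1))
```

A splitting of the cocycle on a subgroup, as in the thesis's local computation, corresponds to rewriting c as a coboundary there, which for this toy cocycle is impossible on all of Z/2.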
42

Houghton, C. J. « A summary of multimonopoles ». Thesis, University of Cambridge, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604260.

Full text
Abstract:
For the most part, the thesis is about how finite symmetry groups are used to study Bogomolny-Prasad-Sommerfield multimonopoles. The Nahm equations corresponding to 7-monopoles and to 5-monopoles are generally intractable. However, the Nahm equations corresponding to an icosahedrally invariant 7-monopole and to an octahedrally invariant 5-monopole are tractable, and these equations are calculated and solved. From a solution of the Nahm equations, the monopole fields can be computed numerically. The Donaldson and Jarvis rational maps describe monopole moduli spaces and are used here to study symmetric multimonopoles. Geodesics of symmetric monopoles are found using the Donaldson rational maps. The twisted line scattering geodesics of monopoles with rotary-reflection symmetries are studied in this way. Using the Jarvis rational map it is possible to determine precisely which symmetric monopoles exist. A symmetric monopole corresponds to a symmetric rational map, so the question of which symmetric monopoles exist reduces to the question of which symmetric rational maps exist, a question answered using elementary group representation theory. It is found that there is a geodesic of tetrahedral 4-monopoles. The Nahm equations are solved for these monopoles. There is a two-dimensional space of D2 3-monopoles. The D2 3-monopole Nahm equations are complex Euler-Poinsot equations and their solutions are known. There are geodesics in this space that are identical to the 2-monopole right-angle scattering geodesics. The 3-monopole twisted line scattering geodesics also lie in this space. Knowing the Nahm data for these monopoles allows the fields to be computed numerically. It is discovered that there are monopoles along the twisted line scattering geodesics with anti-zeros of the Higgs field. Many different hyperKähler manifolds are monopole moduli spaces, and these manifolds always admit an isometric SO(3) action.
In the thesis, new hyperKähler manifolds are derived from monopole moduli spaces by fixing monopoles. These fixed monopole spaces do not admit an SO(3) action.
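The (real) Euler-Poinsot equations mentioned in the abstract describe a free rigid body, I1 w1' = (I2 - I3) w2 w3 and cyclic permutations, and are integrable precisely because they conserve the kinetic energy and the squared angular momentum. A quick numerical sketch (with illustrative moments of inertia, not Nahm data) verifies both invariants under RK4 integration.

```python
import numpy as np

I = np.array([1.0, 2.0, 3.0])  # illustrative principal moments of inertia

def euler_rhs(w):
    # Euler-Poinsot equations: I1 w1' = (I2 - I3) w2 w3, and cyclic permutations
    w1, w2, w3 = w
    return np.array([(I[1] - I[2]) * w2 * w3 / I[0],
                     (I[2] - I[0]) * w3 * w1 / I[1],
                     (I[0] - I[1]) * w1 * w2 / I[2]])

def rk4(f, w, dt, steps):
    traj = [w]
    for _ in range(steps):
        k1 = f(w)
        k2 = f(w + 0.5 * dt * k1)
        k3 = f(w + 0.5 * dt * k2)
        k4 = f(w + dt * k3)
        w = w + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(w)
    return np.array(traj)

def energy(w):
    # kinetic energy (1/2) * sum I_i w_i^2, conserved along solutions
    return 0.5 * np.sum(I * w ** 2, axis=-1)

def ang_mom_sq(w):
    # squared angular momentum sum (I_i w_i)^2, also conserved
    return np.sum((I * w) ** 2, axis=-1)

traj = rk4(euler_rhs, np.array([1.0, 0.2, 0.1]), 1e-3, 20000)
```

The intersection of the level sets of the two invariants is the curve the solution traces, which is what makes the corresponding Nahm flows explicitly solvable.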
43

Walsh, Toby. « A theory of abstraction ». Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/20276.

Full text
Abstract:
Abstraction is the process of mapping one representation of a problem onto a simpler, more abstract representation; the abstract solution can then be used to guide the search for a solution to the original, more complex problem. By providing global control of the search, abstraction can greatly improve our problem-solving ability. Unfortunately, the use of abstraction has in general lacked sound theoretical foundations, causing many problems. This thesis therefore proposes a general-purpose theory of abstraction. We use this theory to classify the various types of abstraction, to investigate their formal properties, to analyse and criticise previous work on abstraction, to find methods for building abstractions automatically, and to explore how to use abstractions.
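One concrete instance of "the abstract solution guides the search for the original solution" is the use of abstraction-derived heuristics in search: solve an abstracted (here, projected) version of a shortest-path problem exactly, then use the abstract distances as an admissible heuristic for A* on the original problem. The grid, walls and projection below are my own toy example, not the thesis's formalism.

```python
import heapq

# Concrete problem: shortest path on a grid with a wall (unit-cost moves).
W, H = 8, 6
walls = {(3, y) for y in range(1, 5)}  # vertical barrier, open at y = 0 and 5

def neighbours(n):
    x, y = n
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        m = (x + dx, y + dy)
        if 0 <= m[0] < W and 0 <= m[1] < H and m not in walls:
            yield m

def dijkstra(starts, nbrs):
    # exact shortest-path distances from a set of start nodes
    dist = {s: 0 for s in starts}
    pq = [(0, s) for s in starts]
    heapq.heapify(pq)
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist.get(n, float("inf")):
            continue
        for m in nbrs(n):
            if d + 1 < dist.get(m, float("inf")):
                dist[m] = d + 1
                heapq.heappush(pq, (d + 1, m))
    return dist

# Abstraction: forget the y coordinate. Abstract states are columns x,
# adjacent when some concrete horizontal move connects them.
def abs_neighbours(ax):
    for m in (ax - 1, ax + 1):
        if 0 <= m < W and any((ax, y) not in walls and (m, y) not in walls
                              for y in range(H)):
            yield m

goal = (7, 2)
abs_dist = dijkstra({goal[0]}, abs_neighbours)
h = lambda n: abs_dist.get(n[0], float("inf"))  # admissible heuristic

def astar(start, goal, h):
    g = {start: 0}
    pq = [(h(start), 0, start)]
    while pq:
        _, d, n = heapq.heappop(pq)
        if n == goal:
            return d
        if d > g.get(n, float("inf")):
            continue
        for m in neighbours(n):
            if d + 1 < g.get(m, float("inf")):
                g[m] = d + 1
                heapq.heappush(pq, (d + 1 + h(m), d + 1, m))
    return None

cost = astar((0, 3), goal, h)
```

Because the abstraction is a graph homomorphism, abstract distances never overestimate concrete ones, so A* keeps optimality while the abstract solution prunes the search.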
44

Nadim, A. J. « A periodic monogenic resolution ». Thesis, University College London (University of London), 2015. http://discovery.ucl.ac.uk/1470758/.

Full text
Abstract:
In this thesis we examine the R(2)-D(2) problem for CW complexes with fundamental group isomorphic to the non-abelian general affine group J = GA(1, F5) of order 20. In particular, since J is finite of cohomological period 8, it admits a periodic free resolution by finitely generated modules. We devote our efforts towards showing the existence of a diagonal free resolution of Z over Λ = Z[J] of period 8 via a neat decomposition of the syzygies Ω_n(Z) (classes of modules stably isomorphic to certain kernels), which occur in a decomposed form in the constituent infinite monogenic resolutions. A traditional strategy for constructing a diagonal resolution would entail considering extensions of all possible submodules after extensive trial and error. However, we sidestep this method by following a more conceptual line, namely studying the kernels K(i) of the generating maps Λ → θ_i of the row submodules θ_i ⊂ T_4(Z, 5). We describe these kernels as distinct extensions of indecomposable modules and prove that each K(i) is in fact congruent to a quotient module of Λ. Moreover, we examine the monogenicity of certain K(i) and show how they relate to the Ω_n(Z). A diagonal resolution would significantly simplify group cohomology calculations H^n(J; Z) ≅ Ext^n_Λ(Z, Z) with coefficients in Z. Moreover, detailed knowledge of the free resolution is an essential step towards solving the R(2)-D(2) problem positively for J. The D(2)-problem asks whether a three-dimensional CW complex X is homotopy equivalent to a two-dimensional CW complex provided H_3(X̃; Z) = H^3(X; B) = 0 for all coefficient systems B. Johnson proved that this problem is equivalent to a purely algebraic problem he called the R(2)-problem.
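For a flavour of how a periodic free resolution computes group cohomology, consider the classical 2-periodic resolution of Z over Z[C_m] for a cyclic group C_m (a much simpler group than J = GA(1, F5), which has period 8 and is not cyclic). The maps alternate between multiplication by t - 1 and by the norm element N = 1 + t + ... + t^(m-1); the sketch below verifies the complex property and the exactness ranks in the regular representation.

```python
import numpy as np

m = 5  # the cyclic group C_m; illustrative only (J itself is not cyclic)

# regular representation of the generator t: a cyclic permutation matrix
T = np.roll(np.eye(m, dtype=int), 1, axis=1)

d_odd = T - np.eye(m, dtype=int)                              # mult. by t - 1
d_even = sum(np.linalg.matrix_power(T, j) for j in range(m))  # the norm N

# ... -> Z[C_m] --N--> Z[C_m] --(t-1)--> Z[C_m] --aug--> Z is 2-periodic;
# consecutive maps compose to zero:
complex_ok = bool(np.all(d_odd @ d_even == 0) and np.all(d_even @ d_odd == 0))

# exactness in the middle: rank(t - 1) + rank(N) = m
ranks = (int(np.linalg.matrix_rank(d_odd)), int(np.linalg.matrix_rank(d_even)))

# Applying Hom(-, Z) turns t - 1 into multiplication by 0 and N into
# multiplication by m, so the cochain complex is Z -0-> Z -m-> Z -0-> ...
# and H^0 = Z, H^odd = 0, H^even = Z/m: cohomology is 2-periodic.
```

This is the cyclic-group analogue of the period-8 phenomenon for J: the periodicity of the resolution forces the periodicity of H^n.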
45

Harrison, Elizabeth. « Optimising power flow in a volatile electrical grid using a message passing algorithm ». Thesis, Aston University, 2017. http://publications.aston.ac.uk/37487/.

Full text
Abstract:
Current methods of optimal power flow were not designed to handle increasing levels of volatility in electrical networks. This thesis suggests that a message-passing-based approach could be useful for managing power distribution in electricity networks. It demonstrates the adaptability of message passing algorithms and validates their capabilities in addressing scenarios with inherent fluctuations, in minimising load shedding and generation costs, and in limiting voltages. The results are promising, but more work is needed before the method can be applied to real networks.
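The thesis's algorithm is not reproduced here, but the core primitive, min-sum message passing, is easy to illustrate: on a tree-structured factor graph (here a chain of buses) a forward sweep of minimisations followed by a backtrack yields the exact minimum-cost configuration. The injection levels, demand vector and quadratic costs below are invented for illustration.

```python
import numpy as np

# Toy chain of buses; each bus picks a discrete net power injection.
levels = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
demand = np.array([1.0, -1.0, 2.0, 0.0])
n = len(demand)

def node_cost(i, p):
    # generation cost plus a quadratic load-shedding penalty (illustrative)
    return 0.5 * p ** 2 + 4.0 * (p - demand[i]) ** 2

def edge_cost(p, q):
    # coupling between adjacent buses (stands in for line constraints)
    return 2.0 * (p + q) ** 2

def min_sum_chain():
    # forward min-sum pass: m[s] = best cost of buses 0..i with bus i in state s
    L = len(levels)
    m = np.array([node_cost(0, p) for p in levels])
    back = []
    for i in range(1, n):
        M = m[:, None] + np.array([[edge_cost(levels[a], levels[b])
                                    for b in range(L)] for a in range(L)])
        back.append(M.argmin(axis=0))          # best predecessor per state
        m = M.min(axis=0) + np.array([node_cost(i, p) for p in levels])
    # backtrack the optimal configuration
    s = [int(m.argmin())]
    for bp in reversed(back):
        s.append(int(bp[s[-1]]))
    return float(m.min()), [float(levels[j]) for j in reversed(s)]

best, config = min_sum_chain()
```

On a chain this is just the Viterbi recursion; on loopy networks, as in real grids, the same messages are iterated and exactness is no longer guaranteed, which is where the research effort lies.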
46

Jain, Akash. « A universal framework for hydrodynamics ». Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12707/.

Full text
Abstract:
In this thesis, we present a universal framework for hydrodynamics starting from the fundamental considerations of symmetries and the second law of thermodynamics, while allowing for additional gapless modes in the low-energy spectrum. Examples of such fluids include superfluids and fluids with surfaces. Typically, additional dynamical modes in hydrodynamics must be supplied with their own equations of motion by hand, such as the Josephson equation for superfluids and the Young-Laplace equation for fluid surfaces. However, we argue that these equations can be derived within the hydrodynamic framework by a careful off-shell generalisation of the second law. This potentially provides a universal framework for a large class of hydrodynamic theories, based on their underlying symmetries and gapless modes. Motivated by this newfound universality, we present an all-order analysis of the second law of thermodynamics and propose a classification scheme for the allowed hydrodynamic transport, including arbitrary gapless modes, an independent spin current, and background torsion. In the second half of the thesis, we look at the construction of null fluids, which offer a new viewpoint on Galilean fluids. These are essentially fluids coupled to spacetime backgrounds carrying a covariantly constant null isometry, but with additional constraints imposed on the background gauge field and affine connection to reproduce the correct Galilean degrees of freedom. We discuss the Galilean version of quantum anomalies and their effect on hydrodynamics. Finally, following our relativistic discussion, we allow for arbitrary gapless modes in Galilean hydrodynamics and present a classification scheme for second-law-abiding hydrodynamic transport at all orders in the derivative expansion.
We apply these abstract ideas to review the theory of ordinary relativistic/Galilean hydrodynamics and provide novel constructions for relativistic/Galilean (non-Abelian) superfluid dynamics and surface transport. We also comment on possible applications to the theory of magnetohydrodynamics.
47

Fry, H. M. « A study of droplet deformation ». Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1306708/.

Full text
Abstract:
In both engineering and medical applications it is often useful to know the conditions under which adhering liquid droplets appear, deform and interact with surrounding fluids, in order either to remove or to create them. Examples include the de-wetting of aircraft surfaces and the process of injecting glue into the bloodstream in the treatment of aneurysms. The particular types of models discussed here are based on droplets whose density is large compared with that of the surrounding fluid. Using this ratio as a small parameter, the Navier-Stokes equations may be simplified, and in view of the nature of the interfacial boundary conditions the droplet may be considered as solid to leading order at any given time step, over a certain time scale. In the first part of the thesis, we study an example of an initially semicircular droplet adhering to a wall at low-to-medium Reynolds numbers (along with simpler test problems). We numerically determine unsteady solutions in both the surrounding fluid and the droplet, coupling them together to obtain a model of the droplet deformation. Analysis within the droplet leads to the identification of two temporal stages, and the effect on large-time velocities is discussed. The second part of the thesis applies a similar approach to a surface-mounted droplet completely contained within the boundary layer of an external fluid at high Reynolds numbers. The two-fluid interface in this regime is analysed using a lubrication approximation within the viscous sublayer of a triple-deck structure. Finally, the lubrication approximation is abandoned and we present a fully nonlinear solution in air over any obstacle shape, as well as a two-way interacting model of droplet deformation, capable of simulating the free surface of the droplet as it becomes severely distorted.
48

Iskauskas, Andrew. « A study of noncommutative instantons ». Thesis, Durham University, 2015. http://etheses.dur.ac.uk/11106/.

Full text
Abstract:
We consider the properties and behaviour of two U(2) noncommutative instantons: solutions of the noncommutative-deformed ADHM equations which arise from five-dimensional U(2) Yang-Mills theory. The ADHM construction allows us to find all such solutions, which form a moduli space of allowed configurations. We derive the metric for this space and consider the dynamics of the instantons on it using the Manton approximation. We examine the reduction of this system to lower-dimensional soliton theories, and finally consider the effect of adding a Higgs field to the SYM theory, resulting in a potential on the instanton moduli space.
49

Hill, Tony. « Mellin and Wiener-Hopf operators in a non-classical boundary value problem describing a Levy process ». Thesis, King's College London (University of London), 2017. https://kclpure.kcl.ac.uk/portal/en/theses/mellin-and-wienerhopf-operators-in-a-nonclassical-boundary-value-problem-describing-a-levy-process(2d1c9b67-cfe9-4eb8-9fda-edc5ac828995).html.

Full text
Abstract:
This research into non-classical boundary value problems is motivated by the study of stochastic processes, restricted to a domain, that can have discontinuous trajectories. We demonstrate that the singularities, for example delta functions, that might be expected at the boundary can be mitigated, using current probability theory, by what amounts to the inclusion of a carefully chosen potential. To make this general problem more tractable, we consider a particular operator A, chosen to be the generator of a certain stable Levy process restricted to the positive half-line. We are able to represent A as a (hyper-)singular integral and, using this representation and other methods, deduce simple conditions for its boundedness between Bessel potential spaces. Moreover, from energy estimates, we prove that, under certain conditions, A has a trivial kernel. A central feature of this research is our use of Mellin operators to deal with the leading singular terms that combine, and cancel, at the boundary. Indeed, after considerable analysis, the problem is reformulated in the context of an algebra of multiplication, Wiener-Hopf and Mellin operators acting on a Lebesgue space. The resulting generalised symbol is examined, and it turns out that a certain transcendental equation, involving gamma and trigonometric functions with complex arguments, plays a pivotal role. Following detailed consideration of this transcendental equation, we are able to determine when our operator is Fredholm and, in that case, calculate its index. Finally, combining information on the kernel with the Fredholm index, we establish precise conditions for the invertibility of A.
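The gamma functions in such generalised symbols enter through Mellin transforms; recall that the Mellin transform M[f](s) = ∫_0^∞ x^(s-1) f(x) dx of f(x) = e^(-x) is Γ(s). A crude numerical check on a log-spaced grid (real s only; the cutoffs and grid size are illustrative choices, not from the thesis):

```python
import math
import numpy as np

def mellin(f, s, x_min=1e-10, x_max=60.0, n=200_000):
    # trapezoidal rule for the Mellin integral on a logarithmically spaced
    # grid; accuracy is limited by the crude cutoffs chosen here
    x = np.exp(np.linspace(np.log(x_min), np.log(x_max), n))
    y = x ** (s - 1) * f(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# M[e^{-x}](s) = Gamma(s) for Re s > 0; check at a few real points
vals = {s: mellin(lambda x: np.exp(-x), s) for s in (0.5, 1.0, 2.5, 4.0)}
```

The same mechanism, with complex s, is what produces the gamma factors in the Mellin symbol of boundary-singular operators.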
50

Al-Ghafli, Ahmed Ali M. « Mathematical and numerical analysis of a pair of coupled Cahn-Hilliard equations with a logarithmic potential ». Thesis, Durham University, 2010. http://etheses.dur.ac.uk/475/.

Full text
Abstract:
Mathematical and numerical analysis is undertaken for a pair of coupled Cahn-Hilliard equations with a logarithmic potential and homogeneous Neumann boundary conditions. This pair of coupled equations arises in a phase separation model of a thin film of binary liquid mixture. Global existence and uniqueness of a weak solution to the problem is proved using the Faedo-Galerkin method. Higher regularity results for the weak solution are established under further regularity requirements on the initial data. Further, continuous dependence on the initial data is presented. Numerically, semi-discrete and fully discrete piecewise linear finite element approximations to the continuous problem are proposed, for which existence, uniqueness and various stability estimates of the approximate solutions are proved. Semi-discrete and fully discrete error bounds are derived, where the time discretisation error is optimal. An iterative method for solving the resulting nonlinear algebraic system is introduced, and a linear stability analysis in one space dimension is studied. Finally, numerical experiments illustrating some of the theoretical results are performed in one and two space dimensions.
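A single (uncoupled) Cahn-Hilliard equation already shows the structure of such schemes. The sketch below, which is illustrative and not the thesis's finite element method, uses a semi-implicit Fourier spectral step in 1D, with the smooth polynomial potential W(u) = (u^2 - 1)^2/4 standing in for the logarithmic one and periodic rather than Neumann boundary conditions; note that the scheme conserves mass (the zero Fourier mode) exactly.

```python
import numpy as np

# 1D Cahn-Hilliard  u_t = (W'(u))_xx - eps^2 u_xxxx  with W'(u) = u^3 - u,
# via a semi-implicit Fourier spectral scheme: nonlinear term explicit,
# stiff biharmonic term implicit. Periodic domain [0, 2*pi).
N, eps, dt, steps = 128, 0.05, 1e-5, 2000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)  # integer wavenumbers

u = 0.1 * np.cos(x) + 0.05 * np.cos(2.0 * x)  # smooth initial perturbation
mass0 = u.mean()

for _ in range(steps):
    nhat = np.fft.fft(u ** 3 - u)
    # (1 + dt eps^2 k^4) u^{n+1}_hat = u^n_hat - dt k^2 N(u^n)_hat
    uhat = (np.fft.fft(u) - dt * k ** 2 * nhat) / (1.0 + dt * eps ** 2 * k ** 4)
    u = np.fft.ifft(uhat).real
```

Treating only the linear biharmonic term implicitly keeps each step a diagonal solve in Fourier space, the spectral analogue of the linearised iterations analysed for the finite element system in the thesis.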