
Dissertations on the topic "Iterators"


Consult the top 50 dissertations for research on the topic "Iterators".


You can also download the full text of a publication as a PDF and read its online abstract whenever the relevant parameters are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Shen, Jiasi. „RIFL : a language with filtered iterators“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101587.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 143-146).
RIFL is a new programming language that enables developers to write only common-case code to robustly process structured inputs. RIFL eliminates the need to manually handle errors with a new control structure, filtered iterators. A filtered iterator treats inputs as collections of input units, iterates over the units, uses the program itself to filter out unanticipated units, and atomically updates program state for each unit. Filtered iterators can greatly simplify the development of robust programs. We formally define filtered iterators in RIFL. The semantics of filtered iterators ensure that each input unit affects program execution atomically. Our benchmarks show that using filtered iterators reduces an average of 41.7% lines of code, or 58.5% conditional clauses and 33.4% unconditional computation, from fully manual implementations.
by Jiasi Shen.
S.M.
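Purely as an illustration of the control structure described above (my sketch, not code from the thesis; all names are hypothetical), the filtered-iterator idea can be mimicked in Python: iterate over the input units, let exceptions raised by the common-case code filter out unanticipated units, and commit each unit's effect on the program state atomically.

```python
import copy

def filtered_iterate(units, process, state):
    """Sketch of a filtered iterator: run common-case code on each input
    unit; units that raise are filtered out, and program state is updated
    atomically (all or nothing) per unit."""
    for unit in units:
        tentative = copy.deepcopy(state)  # work on a copy of the state
        try:
            process(unit, tentative)      # common-case code, no error handling
        except Exception:
            continue                      # unanticipated unit: filter it out
        state.clear()
        state.update(tentative)           # commit this unit's effect atomically
    return state

def parse_pair(line, state):
    key, value = line.split("=")          # raises on malformed input
    state[key] = int(value)               # raises if the value is not an integer

print(filtered_iterate(["a=1", "garbage", "b=2"], parse_pair, {}))
# -> {'a': 1, 'b': 2}
```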
2

Manilov, Stanislav Zapryanov. „Analysis and transformation of legacy code“. Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/29612.

Abstract:
Hardware evolves faster than software. While a hardware system might need replacement every one to five years, the average lifespan of a software system is a decade, with some instances living up to several decades. Inevitably, code outlives the platform it was developed for and may become legacy: development of the software stops, but maintenance has to continue to keep up with the evolving ecosystem. No new features are added, but the software is still used to fulfil its original purpose. Even in the cases where it is still functional (which discourages its replacement), legacy code is inefficient, costly to maintain, and a risk to security. This thesis proposes methods to leverage the expertise put in the development of legacy code and to extend its useful lifespan, rather than to throw it away. A novel methodology is proposed, for automatically exploiting platform specific optimisations when retargeting a program to another platform. The key idea is to leverage the optimisation information embedded in vector processing intrinsic functions. The performance of the resulting code is shown to be close to the performance of manually retargeted programs, however with the human labour removed. Building on top of that, the question of discovering optimisation information when there are no hints in the form of intrinsics or annotations is investigated. This thesis postulates that such information can potentially be extracted from profiling the data flow during executions of the program. A context-aware data dependence profiling system is described, detailing previously overlooked aspects in related research. The system is shown to be essential in surpassing the information that can be inferred statically, in particular about loop iterators. Loop iterators are the controlling part of a loop. This thesis describes and evaluates a system for extracting the loop iterators in a program. It is found to significantly outperform previously known techniques and further increases the amount of information about the structure of a program that is available to a compiler. Combining this system with data dependence profiling improves its results even more. Loop iterator recognition enables other code modernising techniques, like source code rejuvenation and commutativity analysis. The former increases the use of idiomatic code and as a result increases the maintainability of the program. The latter can potentially drive parallelisation and thus dramatically improve runtime performance.
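As a small illustration of the terminology (my example, not from the thesis): the induction variable i below is the loop iterator, the controlling part of the loop, and recognizing it is what lets a modernizing tool rejuvenate the explicit loop into the idiomatic form.

```python
data = [3, 1, 4, 1, 5]

# Legacy-style loop: `i` is the loop iterator (the controlling part of the loop).
total = 0
i = 0
while i < len(data):
    total += data[i]
    i += 1

# Rejuvenated, idiomatic form once the iterator `i` has been recognized.
total = sum(data)
```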
3

Denis, Xavier. „Deductive verification of Rust programs“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG101.

Abstract:
Rust is a programming language introduced in 2015, which provides the programmer with safety features regarding the use of memory. The goal of this thesis is the development of a deductive verification tool for the Rust language, by leveraging the specificities of its type system, in order to simplify memory aliasing management, among other things. Such a verification approach ensures the absence of errors during the execution of the considered programs, as well as their compliance with a formal specification of the expected functional behavior. The theoretical foundation of the approach proposed in this thesis is to use a notion of prophecy that interprets the mutable borrows in the Rust language as a current value and a future value of this borrow. The Coq proof assistant was used to formalize this prophetic encoding and prove the correctness of the associated proof obligation generation. Furthermore, the approach has been implemented in a verification software for Rust that automates the generation of proof obligations and relies on external solvers to validate these obligations. In order to support Rust iterators, an extension has been developed to manipulate closures, as well as a verification technique for iterators and combinators. The implementation has been experimentally evaluated on relevant algorithm and data structure examples. It has also been validated through a significant case study: the verification of a satisfiability modulo theories (SMT) solver
4

Galbraith, Steven Douglas. „Iterations of elliptic curves“. Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/28620.

5

Lodin, Viktor, and Magnus Olovsson. „Prestanda- och beteendeanalys av parallella köer med iterator“. Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-17770.

Abstract:
In modern hardware development there is a strong focus on producing processors with more and more cores. Software therefore also has to evolve to make the best possible use of all this parallel potential. A large part of this is being able to share data between several parallel processes, which is achieved with concurrent collection data types. A common operation on a collection is to iterate over it. The goal of this study was to analyse the performance and behaviour of a number of well-known algorithms for iterating over the queue data collection. How different conditions affect an iterator's performance was also evaluated; examples of such conditions are the number of worker threads operating on the queue, the initial size of the queue, and different pinning strategies. The initial size describes how many elements are in the queue when the experiments start, and the pinning strategy describes which core each thread is bound to. Some iterator algorithms guarantee that the state they return is an atomic snapshot of the queue, that is, a picture of what the queue looked like at some fixed point in time; because of this, measuring the cost of obtaining this guarantee was also a goal. In addition, the performance of the enqueue and dequeue operations of each queue was tested to obtain an overall picture of its performance. To measure performance, a benchmark program was implemented. It provides an interface for all queues to implement and tests the performance of any queue through this interface. The program runs microbenchmarks that measure the performance of each individual queue operation. The way the queues are stressed in these benchmarks is not realistic for how a queue would be used in production; instead, performance is measured under the highest possible load, which makes it easiest to compare the different queues. The study tested the performance of four queues with iterators; the experiments were carried out in C# with .NET 4.5 in a Windows environment. The concurrent queue in the .NET library was one of the queues tested, partly because it is interesting to see how well Microsoft has optimised it, and partly to obtain a baseline for comparison with the other queues. The Michael and Scott queue was also tested, with two different iterators added: Scan and Return, and Double Collect. Finally, a concurrent queue derived with universal methods for constructing concurrent data objects from sequential ones, based on the immutable queue in the .NET library, was tested; an immutable queue is a queue that cannot be modified after initialisation. The benchmark results show that the Michael and Scott queue with the Scan and Return iterator is the fastest at iteration, with the Double Collect iterator second. The fastest enqueue and dequeue operations are found in the .NET library's concurrent queue. The queue built on the immutable queue turns out to be the slowest at iteration in most cases, and the slowest for the enqueue and dequeue operations in all cases. We measure the cost of the atomic-snapshot guarantee as the difference between the Scan and Return and Double Collect iterators, since these are the two fastest iterators and Scan and Return does not give the guarantee while Double Collect does. This cost turns out to be relatively large: Scan and Return performs up to three times as fast as Double Collect. With the results of this study, developers can now make well-informed choices about which queue and iterator algorithm to use to optimise their systems. This is probably most important when developing larger systems, but can also be useful for smaller ones.
Program: Systemarkitekturutbildningen
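To make the two iterator families mentioned above concrete, here is a deliberately simplified sketch (mine, not code from the thesis): it assumes a hypothetical singly linked node interface (queue.head, node.next, node.value) and ignores the memory-management and versioning details a real lock-free implementation needs. A double-collect iterator reads the queue twice and only returns when both reads agree, which it treats as an atomic snapshot; a scan-and-return iterator returns a single read without that guarantee.

```python
def collect(queue):
    """Single pass over the queue's nodes (hypothetical node interface)."""
    items, node = [], queue.head
    while node is not None:
        items.append(node.value)
        node = node.next
    return items

def double_collect_iterator(queue):
    """Repeat two consecutive collects until they agree; the agreed-upon
    list is then returned as an atomic snapshot (under the simplifying
    assumption that identical collects imply no concurrent modification)."""
    while True:
        first = collect(queue)
        second = collect(queue)
        if first == second:
            return iter(second)

def scan_and_return_iterator(queue):
    """Single collect, no snapshot guarantee: cheaper, but the returned
    state may mix the effects of concurrent enqueues and dequeues."""
    return iter(collect(queue))
```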
6

Shlapunov, Alexander. „On Iterations of double layer potentials“. Universität Potsdam, 2000. http://opus.kobv.de/ubp/volltexte/2008/2568/.

Abstract:
We prove the existence of the H^p(D)-limit of iterations of double layer potentials constructed with the use of a Hodge parametrix on a smooth compact manifold X, D being an open connected subset of X. This limit gives us an orthogonal projection from the Sobolev space H^p(D) to a closed subspace of H^p(D)-solutions of an elliptic operator P of order p ≥ 1. Using this result we obtain formulae for Sobolev solutions to the equation Pu = f in D whenever these solutions exist. This representation involves the sum of a series whose terms are iterations of double layer potentials. A similar regularization is also constructed for a P-Neumann problem in D.
7

Schrader, U. „Convergence of Asynchronous Jacobi-Newton-Iterations“. Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801324.

Abstract:
Asynchronous iterations often converge under different conditions than their synchronous counterparts. In this paper we study the global convergence of Jacobi-Newton-like methods for nonlinear equations F(x) = 0. It is a known fact that the synchronous algorithm converges monotonically if F is a convex M-function and the starting values x^0 and y^0 meet the condition F(x^0) ≤ 0 ≤ F(y^0). In the paper it is shown which modifications are necessary to guarantee a similar convergence behavior for an asynchronous computation.
8

Torshage, Axel. „Linear Functional Equations and Convergence of Iterates“. Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-56450.

Abstract:
The subject of this work is functional equations, with a direction towards linear functional equations. The first part describes function sets where iterates of the functions converge to a fixed point. In the second part the convergence property is used to provide solutions to linear functional equations by defining solutions as infinite sums. Furthermore, this work contains some transforms to linear form, examples of functions that belong to different classes, and corresponding linear functional equations. We use Mathematica to generate solutions and to solve equations iteratively.
9

MONTAGNAC, MARC. „Controle dynamique d'algorithmes iteratifs de resolution de systemes lineaires“. Paris 6, 1999. http://www.theses.fr/1999PA066351.

Abstract:
This thesis presents the solution of linear systems by Krylov-type methods using discrete stochastic arithmetic. This arithmetic makes it possible to take into account the propagation of rounding errors through a probabilistic approach and to estimate the numerical accuracy of the variables within an algorithm. Our objective is to make these methods more robust and efficient by means of dynamic computational strategies while performing a numerical validation of the results. For Lanczos-type methods, BiCGStab and CGS, convergence is often erratic and the presence of divisions by zero is problematic. Look-ahead methods for handling these situations, known as breakdowns, provide a solution to the latter phenomenon. In floating-point arithmetic, the criteria for detecting breakdowns are not well suited because of their a priori nature; with discrete stochastic arithmetic, they are chosen dynamically. For large systems, these situations are rare and it is preferable to restart the algorithm when the variables of the recurrence no longer have any significant digits. For methods with long recurrences of GMRES type, the main concern is to find a suitable restart frequency to ensure convergence in a reasonable time and to limit memory storage. Many a priori techniques exist without proving ideal. We propose a dynamic restart strategy and study its interest for the different orthogonalisation schemes. Numerical quality is all the more important in a parallel setting, since the number of operations performed is often higher than on sequential machines. We therefore describe the implementation of the interface between the CADNA and MPI libraries, which allows us to generalise our work to parallel MIMD architectures with SPMD programming.
10

Stumpo, Gordon. „Design Iterations Through Fusion of Additive and Subtractive Design“. Kent State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=kent1461602511.

11

DEO, BLANCHARD ANNE. „Sequence-specifications algebriques. Application a une semantique pour les iterateurs“. Paris 11, 1994. http://www.theses.fr/1994PA112244.

Abstract:
In current algebraic specification formalisms, the properties of functions are most of the time described recursively, which often leads to heavy and unclear specifications. The use of iterators can simplify specifications by increasing their clarity and concision. The purpose of this thesis is to add an iteration tool to algebraic specifications whose definition is as implicit as possible, while remaining in a first-order framework. Insofar as a specification often admits a large class of models, an iterator must be consistent with all the models and therefore cannot be defined by taking implementation choices into account (as is done in programming languages). This is why it is necessary to define a semantics specific to iterators. We define the formalism of sequence-specifications with predicates: it is a new formalism dedicated to the manipulation of lists of unbounded length. This formalism extends the expressive power of algebraic specifications and provides a very general specification framework that includes non-deterministic and polymorphic aspects. It satisfies initiality and structuring results, and admits a sound and complete calculus. We define the formalism of specifications with iterators on top of sequence-specifications and give a semantics specific to iterators. We prove that there exists a semantics-preserving translation of specifications with iterators into sequence-specifications with predicates.
12

Shlapunov, Alexander. „Iterations of self-adjoint operators and their applications to elliptic systems“. Universität Potsdam, 1999. http://opus.kobv.de/ubp/volltexte/2008/2540/.

Abstract:
Let H_0, H_1 be Hilbert spaces and L : H_0 -> H_1 be a linear bounded operator with ||L|| ≤ 1. Then L*L is a bounded linear self-adjoint non-negative operator in the Hilbert space H_0 and one can use the Neumann series Σ_{ν=0}^∞ (I − L*L)^ν L*f in order to study solvability of the operator equation Lu = f. In particular, applying this method to the ill-posed Cauchy problem for solutions to an elliptic system Pu = 0 of linear PDEs of order p with smooth coefficients we obtain solvability conditions and representation formulae for solutions of the problem in Hardy spaces whenever these solutions exist. For the Cauchy-Riemann system in C the summands of the Neumann series are iterations of the Cauchy type integral. We also obtain similar results 1) for the equation Pu = f in Sobolev spaces, 2) for the Dirichlet problem and 3) for the Neumann problem related to operator P*P if P is a homogeneous first order operator and its coefficients are constant. In these cases the representations involve sums of series whose terms are iterations of integro-differential operators, while the solvability conditions consist of convergence of the series together with trivial necessary conditions.
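As a reading aid (my addition, not part of the abstract): the partial sums of the Neumann series above obey a simple fixed-point recursion, so the "iterations" in question are successive applications of I − L*L,

```latex
u_N = \sum_{\nu=0}^{N} (I - L^{*}L)^{\nu} L^{*} f,
\qquad
u_{N+1} = (I - L^{*}L)\, u_N + L^{*} f,
```

so any limit u satisfies L*L u = L*f, the normal equation associated with Lu = f; convergence of the series is what encodes solvability.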
13

Chan, Yuen-fai, und 陳遠輝. „On exact algorithms for small-sample bootstrap iterations and their applications“. Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222298.

14

Hama, Muzafar. „Matrix eigenvalues : localization through subsets and triangularization with Newton-like iterations“. Saint-Etienne, 2009. http://www.theses.fr/2009STET4016.

Abstract:
In this thesis we develop two main subjects: first, the notion of the pseudospectrum of a complex square matrix, and then the application of the Newton-Kantorovich method and its Fixed Slope inexact variant to compute Schur and Gauss upper triangular similar forms of a given complex square matrix. The chapter on pseudospectra compiles and provides a synthesis of the principal results on this theme and contains some original contributions too. We also propose the use of the Newton-Kantorovich method and its Fixed Slope inexact variant to refine the QR and (L+I)U matrix factorizations needed to compute the triangular forms.
15

Ciavatti, Enrico. „Teorema del Limite Centrale e Legge del Logaritmo Iterato“. Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18248/.

Abstract:
This text aims to describe the behaviour of certain stochastic processes from different points of view; in particular, these points of view are martingales, the Central Limit Theorem and the Law of the Iterated Logarithm. Proofs are given of the two statements that give the thesis its title, as well as of the Strong Law of Large Numbers. An interpretation of the CLT and of the LIL is provided, and comparisons are made between these two theorems and the strong LLN. In addition, the text contains a statement with proof of the CLT for martingales, and a statement with proof of the Large Deviations Theorem. All the main statements are analysed from the point of view of a Bernoulli process, that is, a sequence of random variables that take the value 1 with probability p in (0,1) and the value 0 with probability 1-p. The first chapter is devoted to all the preliminary notions of probability and mathematical analysis needed to understand what follows.
16

Cuata, Cervantes Jonathan Eduardo. „Optimizing automotive electrical distribution systems design and development by reducing design iterations“. Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106239.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, System Design and Management Program, Engineering and Management Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 122-123).
The design and development (D&D) of electrical distribution systems (EDS) is a practice that has been performed in the automotive industry for more than 100 years. The amount of technology infusion in vehicles within this history impacts the design and development of electrical distribution systems in an exponential manner. The electrical architecture of a vehicle increases in complexity with every new product launched into the market. The number of interactions and interdependencies between design and development activities, and across functional groups, has been increasing as a consequence of the constant innovation in the vehicle electrical architecture. These interdependencies and interactions with design and development tasks and cross functional groups generate potential design iterations and rework loops that have direct impacts on the cost, scope and schedule of automotive projects. This research has a fundamental purpose, the review of the electrical distribution systems design and development process inside an automotive OEM through the use of (1) traditional and modern project management tools, (2) surveys and interviews inside the OEM EDS organization, and (3) a review of product development literature, in order to identify recommendations to reduce unplanned design iterations and rework generated by the nonlinear nature of automotive product development. While the analyses, summary and recommendations are specific to EDS product development, it is hoped that the use of both traditional and modern project management tools described in this thesis can serve as a model for those in other industries.
by Jonathan Eduardo Cuata Cervantes.
S.M. in Engineering and Management
17

Fanfakh, Ahmed Badri Muslim. „Energy consumption optimization of parallel applications with Iterations using CPU frequency scaling“. Thesis, Besançon, 2016. http://www.theses.fr/2016BESA2021/document.

Abstract:
In recent years, green computing has become an important topic in the supercomputing research domain. However, the computing platforms are still consuming more and more energy due to the increase in the number of nodes composing them. To minimize the operating costs of these platforms many techniques have been used. Dynamic voltage and frequency scaling (DVFS) is one of them. It can be used to reduce the power consumption of the CPU while computing, by lowering its frequency. However, lowering the frequency of a CPU may increase the execution time of the application running on that processor. Therefore, the frequency that gives the best trade-off between the energy consumption and the performance of an application must be selected. This thesis presents the algorithms developed to optimize the energy consumption and the performance of synchronous and asynchronous message passing applications with iterations running over clusters or grids. The energy consumption and performance models for each type of parallel application predict its execution time and energy consumption for any selected frequency according to the characteristics of both the application and the architecture executing this application. The contribution of this thesis can be divided into three parts: firstly, optimizing the trade-off between the energy consumption and the performance of message passing applications with synchronous iterations running over homogeneous clusters. Secondly, adapting the energy and performance models to heterogeneous platforms where each node can have different specifications such as computing power, energy consumption, available frequency gears, or network latency and bandwidth. The frequency scaling algorithm was also modified to suit the heterogeneity of the platform. Thirdly, the models and the frequency scaling algorithm were completely rethought to take into consideration the asynchronism in the communication and computation. All these models and algorithms were applied to message passing applications with iterations and evaluated over either the SimGrid simulator or the Grid'5000 platform. The experiments showed that the proposed algorithms are efficient and outperform existing methods such as the energy and delay product. They also introduce a small runtime overhead and work online without any training or profiling.
18

Quan, Zhi. „Computationally efficient multiuser and MIMO detection based on dichotomous coordinate descent iterations“. Thesis, University of York, 2009. http://etheses.whiterose.ac.uk/1628/.

Abstract:
The detection in multiuser (MUD) and multiple-input multiple-output (MIMO) systems can increase the spectral efficiency, and therefore is of great interest. Although multiuser and MIMO detection is mature in theory, the real-time implementation is still an open issue. Many suboptimal detection schemes have been proposed, possessing low computational load, but also having poorer detection performance compared to the optimal detector. Multiuser detection can be described as a solution of an optimisation problem; in most cases it is the quadratic optimisation problem. Unconstrained quadratic optimisation is known to result in decorrelating and MMSE multiuser detection, which cannot provide high detection performance. The optimal detection is equivalent to the solution of a constrained problem. However, such detection is too complex for practical systems. In this work, we propose several detectors which possess low complexity and high detection performance. These detectors are based on Dichotomous Coordinate Descent (DCD) iterations, which are multiplication and division free, and therefore are attractive for real-time implementation. We propose a box-constrained DCD algorithm, and apply it to multiuser detection. We also design an FPGA architecture of the box-constrained DCD detector and implement it in an FPGA. This design requires a very small area usage. The fixed-point implementation offers a constant throughput over the signal-to-noise ratio (SNR) and provides almost same detection performance as that of a floating-point implementation. We further exploit the box-constrained DCD algorithm and propose a complex-valued box-constrained DCD algorithm. A box-constrained MIMO detector based on the DCD algorithm shows a better detection performance than the MMSE detector. The proposed FPGA design requires a small area usage, which is significantly less than that required by known designs of the MMSE MIMO detector. Since the box-constrained DCD algorithm could not offer the optimal detection performance, while the sphere decoder encounters high complexity at low SNRs, we suggest a combination of the box-constrained DCD algorithm with the sphere decoder (fast branch and bound algorithm). The combined detection results in reduced complexity at low SNRs while retaining outstanding detection performance at all SNRs. As the box-constrained DCD algorithm is efficient for hardware implementation, we apply it to the nonstationary iterative Tikhonov regularization and propose a DCD-BTN detector. The DCD-BTN detector shows the detection performance very close to the optimal performance. It also shows the lowest complexity among the most advanced detectors. An architecture of the detector has been developed. This detector has been implemented on an FPGA platform. The design requires a small number of FPGA slices. Numerical results have shown that the fixed-point FPGA implementation and a floating-point implementation have similar detection performance. The DCD-BTN detector can only be applied in systems with BPSK modulation. Therefore, we also propose a multiple phase decoder (MPD), which is based on a phase descent search (PDS) algorithm. The PDS algorithm uses coordinate descent iterations, where coordinates are unknown symbol phases, while constraining the symbols to have a unit magnitude. The MPD is investigated in application to detection of M-PSK symbols in multiuser and MIMO systems. 
In the multiuser detection, the MPD is applied to highly loaded scenarios and numerical results show that it provides the near-optimal performance at low complexity. The MPD significantly outperforms such advanced detector as the semidefinite relaxation detector in both the detection performance and complexity. In MIMO systems, the MPD exhibits more favorable performance/complexity characteristics and can be considered as a promising alternative to the sphere decoder. The matrix inversion is required in many applications. The complexity of matrix inversion is too high and makes its implementation difficult. To overcome the problem, we propose an approach based on the DCD algorithm to simplify the matrix inversion. This approach obtains separately the individual columns of the inverse matrix and costs a very small number of slices, which is suitable for application, e.g. in MIMO-OFDM systems.
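To give a flavour of the kind of iteration the abstract refers to, here is a simplified sketch of cyclic DCD under my own assumptions (not the exact algorithms of the thesis): the solver works on a symmetric positive definite system H x = b with power-of-two step sizes, so a hardware version needs only additions and bit shifts, and a box constraint keeps the estimate inside the symbol bounds.

```python
import numpy as np

def dcd_solve(H, b, amplitude=1.0, n_bits=8, max_updates=100):
    """Simplified cyclic dichotomous coordinate descent (DCD) sketch for
    H x = b with H symmetric positive definite. The step size d is halved
    at each bit level; the clamp |x_n| <= amplitude mimics the box
    constraint used in detection."""
    N = len(b)
    x = np.zeros(N)
    r = b.astype(float)               # residual r = b - H x (x starts at 0)
    d = amplitude                     # current step size
    updates = 0
    for _ in range(n_bits):
        d /= 2.0
        improved = True
        while improved and updates < max_updates:
            improved = False
            for n in range(N):
                if abs(r[n]) > (d / 2.0) * H[n, n]:
                    s = np.sign(r[n])
                    if abs(x[n] + s * d) <= amplitude:   # stay inside the box
                        x[n] += s * d
                        r -= s * d * H[:, n]             # keep residual consistent
                        improved = True
                        updates += 1
    return x

# Usage sketch on a tiny system.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(dcd_solve(H, b), np.linalg.solve(H, b))
```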
19

Layton, Simon. „Fast multipole boundary element solutions with inexact Krylov iterations and relaxation strategies“. Thesis, Boston University, 2013. https://hdl.handle.net/2144/11115.

Abstract:
Thesis (Ph.D.)--Boston University
Boundary element methods (BEM) have been used for years to solve a multitude of engineering problems, ranging from bioelectrostatics to fluid flows over micro-electromechanical devices and deformations of cell membranes. Only requiring the discretization of a surface into panels rather than the entire domain, they effectively reduce the dimensionality of a problem by one. This reduction in dimensionality nevertheless comes at a cost. BEM requires the solution of a large, dense linear system with each matrix element formed of an integral between two panels, often performed using an iterative solver based on Krylov subspace methods. This requires the repeated calculation of a matrix-vector product that can be approximated using a hierarchical approximation known as the fast multipole method (FMM). While adding complexity, this reduces the order of the time-to-solution from O(cN^2) to O(cN), where c is some function of the condition number of the dense matrix. This thesis obtains algorithmic speedups for the solution of FMM-BEM systems by applying the mathematical theory behind inexact matrix-vector products to our solver, implementing a relaxation scheme to control the error incurred by the FMM in order to minimize the total time-to-solution. The theory is extensively verified for both Laplace equation and Stokes flow problems, with an investigation to determine how further problems may benefit from the addition of a relaxed solver. We also present experiments for the Stokes flow around both single and multiple red blood cells, an area of ongoing research, showing good speedups that would be applicable for any other code that chose to implement a similar relaxed solver. All of these results are obtained with an easy-to-use, extensible and open-source FMM-BEM code.
20

Wall, Gina. „Photographic dissemination : iterations of difference in the text of landscape and photographic writing“. Thesis, University of Dundee, 2011. https://discovery.dundee.ac.uk/en/studentTheses/ede09f8b-b9e3-4922-909d-7617c01f4d33.

Abstract:
This thesis challenges the notion that landscape can be seen or thought of as a picture, i.e. in terms of its modern definition and etymology. In questioning the modern definition of landscape, the thesis asks a number of specific questions: does the etymology of landscape yield any latent meanings which may be profitably explored? Can these be used as the basis for a new formulation of landscape, i.e. 'landtext' or landscape as text? The thesis goes on to consider what the implications of this are. Importantly, this thesis is practice based, which has entailed that the work is interdisciplinary in nature; the working method amounts to a dialogue between disciplines. The practice with which the thesis concerns itself is photography, and it has been a pivotal component of this research to consider photography in terms of Jacques Derrida's expanded field of writing. The photograph as a motif of the metaphysics of presence, a Barthesean emanation, is presented in relation to Derrida's grammatology, or generalised system of difference. Critically, this thesis asks: is photography a form of writing? If so, what are the consequences of this for the relation between photographic writing, or as it is termed here, photogrammatology, and landtext? The thesis explores whether intertextuality adequately describes the nexus of relations between each of the systems of difference. Due to the practice-led nature of the project, a significant consideration has been the implication of a relational, text-based understanding of practice for the viewer or reader in the gallery. To this end the thesis investigates relational aesthetics vis-à-vis text with a view to theorising photographic practice in a gallery setting in terms of a text which the reader enters. In addition, the role of light in the intertextual relation is considered, especially with respect to the articulation of difference.
21

Porte, Christian. „Le bilan etiologique des avortements spontanes precoces iteratifs : a propos d'une enquete multicentrique francaise“. Nancy 1, 1988. http://www.theses.fr/1988NAN11193.

22

Aknin, Patrice. „Algorithmes iteratifs pour l'inversion d'un modele non lineaire de multicapteur a courants de foucault“. Paris 11, 1990. http://www.theses.fr/1990PA112133.

Abstract:
This dissertation presents a multi-sensor structure that makes it possible to obtain one-dimensional images of metallic profiles. However, its lack of lateral resolution and its strong vertical non-linearity require a specific restoration treatment for the images it produces. To make this inversion possible, a model of the sensor is necessary. Thanks to an original approach, intermediate between a knowledge-based and a representation-based model, a local linear model is established. Its extension, based on physical considerations, leads to a complete non-linear model, and its final validation is obtained after a fine adjustment of its parameters. The last two chapters deal with parametric and non-parametric inversion algorithms. Regarding the parametric approach, two methods are presented. The first treats the sensor as a black box and leads to its complete identification from a set of (input parameters, observation) pairs. The second is an approximate-Hessian method and uses the non-linear model established previously. Regarding the non-parametric approach, generalized inversion methods are cited, but their poor behaviour with respect to noisy data led us to develop an original sequential method called the method with constraints on the equivalent operator. It uses a sequence of feedback coefficients chosen to obtain the best approximation of the minimum-variance linear estimator (the Wiener filter in the stationary case). The theoretical performance analysis of this sequential scheme is carried out for a linear operator, but we show that it applies to systems whose input-output relation is locally linear.
23

Juan, Huguet Jordi. „Iterates of differential operators and vector valued functions on non quasi analytic classes“. Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/9401.

Abstract:
In 1960, while studying regularity properties of the solutions of certain partial differential equations, Komatsu introduced certain classes of infinitely differentiable functions defined by growth estimates on the successive iterates of a partial differential operator. This line of research has remained very active up to the present through the work of many authors; we mention, among others, Bolley, Camus, Kotake, Langenbruch, Métivier, Narasimhan, Newberger, Rodino, Zanghirati and Zielezny. All of this literature involves the so-called problem of iterates, which consists, roughly speaking, in characterizing the functions of a certain class in terms of the behaviour of the iterates of a previously fixed operator. In the first part of this thesis we continue the research mentioned above in a more general setting: non-quasianalytic classes of ultradifferentiable functions in the sense of Braun, Meise and Taylor. The study of these non-quasianalytic classes is a very active research area because of its applications to the theory of partial differential operators; we highlight, among others, the work of Bonet, Braun, Domanski, Fernández, Frerick, Galbis, Taylor and Vogt. In Chapter 1 we introduce these classes and state the properties we use throughout the thesis. In Chapter 2 we define the non-quasianalytic classes with respect to the iterates of a partial differential operator P(D) and study their topological properties, such as completeness and nuclearity. In particular, we prove that these classes form a complete locally convex space if and only if the operator P(D) is hypoelliptic, and we show that in that case they are moreover a nuclear space. We then prove that these classes satisfy a Paley-Wiener type theorem. In Chapter 3 our aim is to obtain results on the problem of iterates in non-quasianalytic classes. We generalize several results of Newberger, Zielezny, Métivier and Komatsu, and we characterize when a non-quasianalytic class defined in terms of the iterates of an operator coincides with a non-quasianalytic class in the sense of Braun, Meise and Taylor. All previous research on spaces of functions defined by iterates of operators had focused on classes of Roumieu type; however, we show that the results of Chapters 2 and 3 also hold for classes of Beurling type. In 1990, Langenbruch and Voigt proved that every Fréchet space of distributions that is invariant under the action of a hypoelliptic operator is continuously included in C^∞. In Chapter 4 we introduce ultradifferential operators and investigate extensions of the result of Langenbruch and Voigt to the ultradifferentiable setting. The new concept of an (ω, P(D))-stable Fréchet space involves the iterates of P(D) through an equicontinuity condition and allows us to show the relation of this type of result to the problem of iterates. The second part of this thesis focuses on the study of vector-valued functions taking values in a locally convex space.
Juan Huguet, J. (2011). Iterates of differential operators and vector valued functions on non quasi analytic classes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/9401
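For orientation only (my addition, not part of the record above): in the classical Komatsu and Kotake-Narasimhan setting, the "problem of iterates" asks when a growth condition on the iterates of a fixed operator P(D) of order m characterizes a Gevrey class. A typical condition of this kind reads, for every compact set K and some constant C > 0,

```latex
\| P(D)^{j} u \|_{L^{2}(K)} \;\le\; C^{\, j+1} \bigl( (m j)! \bigr)^{s},
\qquad j = 0, 1, 2, \dots
```

and for elliptic P(D) this is equivalent to u lying in the Gevrey class of order s (s = 1 being the analytic case). The thesis above studies the analogous question with the Gevrey scale replaced by Braun-Meise-Taylor weight functions ω.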
24

Henk, Michael B. „A study on project iterations and the effects seen on system constraint and project duration“. Menomonie, WI : University of Wisconsin--Stout, 2007. http://www.uwstout.edu/lib/thesis/2007/2007henkm.pdf.

25

Steinhoff, Tim [Verfasser], and Wolfgang [Akademischer Betreuer] Mackens. „Approximate and projected natural level functions for Newton-type iterations / Tim Steinhoff. Betreuer: Wolfgang Mackens“. Hamburg-Harburg : Universitätsbibliothek der Technischen Universität Hamburg-Harburg, 2011. http://d-nb.info/1048455556/34.

26

Thummadi, B. Veeresh. „SOFTWARE DESIGN METHODOLOGIES, ROUTINES AND ITERATIONS: A MULTIPLE-CASE STUDY OF AGILE AND WATERFALL PROCESSES“. Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396363465.

27

Garcia, Ignacio de Mateo. „Iterative matrix-free computation of Hopf bifurcations as Neimark-Sacker points of fixed point iterations“. Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16478.

Abstract:
Classical methods for the direct computation of Hopf bifurcation points and other singularities rely on the evaluation and factorization of Jacobian matrices. In view of large-scale systems of the form x'(t) = f(x(t), α) for t > 0 arising from PDE discretizations, where x are the state variables, α are certain parameters and f is smooth with respect to x and α, a matrix-free scheme is developed based exclusively on Jacobian-vector products and other first and second derivative vectors to obtain the critical parameter α causing the loss of stability at the Hopf point. In the present work, a system of equations is defined to locate Hopf points iteratively, extending the system equations with a scalar test function φ based on a projection of the eigenspaces. Since the system f arises from a spatial discretization of an original set of PDEs, an error correction accounting for the different discretization procedures is presented. To satisfy the Hopf conditions, a single parameter is adjusted independently or simultaneously with the state vector in a deflated iteration step, thereby both locating the critical parameter and accelerating the convergence rate of the system. As a practical experiment, the algorithm is presented for the Hopf point of a brain cell represented by the FitzHugh-Nagumo model. It is shown how, for a critical current, the membrane potential exhibits a travelling wave typical of oscillatory behaviour.
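For background (my addition, not from the abstract): a Hopf point of the continuous system and a Neimark-Sacker point of an associated fixed-point iteration x_{k+1} = Φ(x_k, α) are characterized by analogous eigenvalue conditions, an imaginary pair for the Jacobian versus a complex pair on the unit circle for the iteration map:

```latex
f(x^{*}, \alpha^{*}) = 0, \qquad
\partial_x f(x^{*}, \alpha^{*}) \ \text{has eigenvalues}\ \pm i\,\omega_0 \ (\omega_0 \neq 0),
\qquad\text{versus}\qquad
\partial_x \Phi(x^{*}, \alpha^{*}) \ \text{has eigenvalues}\ e^{\pm i\theta_0},
```

in both cases together with a transversality condition that the eigenvalue pair crosses the imaginary axis (respectively the unit circle) with nonzero speed as α varies.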
28

Cuesta, Andrea Jordi. „Contribucions als agorismes de punt interior en mètodes iteratius per a sistemes d'equacions usant regularitzacions quadràtiques“. Doctoral thesis, Universitat Politècnica de Catalunya, 2009. http://hdl.handle.net/10803/30709.

Abstract:
Interior point methods for linear programming provide algorithms of polynomial complexity, which makes them very efficient for large-scale optimization. These algorithms use Newton's method to convert the optimality equations of the problem, which are nonlinear, into a system of linear equations, usually solved by sparse matrix factorizations. In those particular cases where the problem has a special structure, as in multicommodity network flow problems, this structure can be exploited to improve the efficiency of the algorithm. These network problems belong to the more general family of primal block-angular problems. The starting point of this thesis was an empirical fact: the observation of the better computational behaviour of a specialized interior point algorithm for block-angular problems when the objective function contains quadratic terms. This algorithm uses matrix factorizations to solve the part of the equations associated with the network and a preconditioned conjugate gradient method to solve the equations associated with the linking constraints. The original objective was therefore to look for some way of approximating a linear problem by a quadratic one so as to exploit the observed experimental fact without harming the convergence of the problem. Later, the initial approach was extended with the new objective of proving the convergence of the method, among other theoretical results. The theoretical framework used to formulate this idea mathematically has been the regularization of the logarithmic barrier function associated with the optimization problem, understood as the transformation of the original barrier function into another one that includes a variable quadratic perturbation term, which decreases progressively as the algorithm approaches the optimum. This quadratic term turns the original linear problem into a quadratic one, so that in the first iterations we take advantage of the empirical behaviour mentioned above and, as the algorithm progresses, the quadratic term becomes negligible and the quadratically regularized problem approaches the original linear problem. The regularized barrier turns out to be self-concordant, thus ensuring the convergence of the interior point method.
29

RADID, ATIKA. „Iterations booleennes et sur des ensembles de cardinal fini analyse numerique de modeles physiques de recuit“. Besançon, 2000. http://www.theses.fr/2000BESA2041.

Abstract:
This work consists of two parts. The first is devoted to iterations on sets of finite cardinality: asynchronous iterations, for which an extension of previously known results is obtained, in particular in the Boolean context; and explicit parallel schemes with grouped exchanges, which give rise to a complexity study within a model that includes interprocessor exchange costs. This last point is the occasion to present an unusual link with the very classical notion of irreducibility. The second part is dedicated to the study of various annealing models from physics. In the course of recasting one of the most classical of them as an ordinary differential equation, the numerical analysis of a method proposed by geologists, the equivalent-time method, is established. This mathematical study shows that the equivalent-time method can be interpreted as an inverse method for a very particular case of the continuation method. We are also interested in the numerical simulation of annealing models from physics in the form of the search for minima of activation energies.
30

Authié, Gérard. „Contribution a l'optimisation de flots dans les reseaux : un multiprocesseur experimental pour l'etude des iterations asynchrones“. Toulouse 3, 1987. http://www.theses.fr/1987TOU30210.

Abstract:
Search for an optimal flow in a packet-switched network. To this end, an analysis method based on mathematical programming is presented. Study of asynchronous iterations on a parallel machine.
31

SATO, Ken-ichi, Hiroshi HASEGAWA and Masakazu SATO. „Efficient Shared Protection Network Design Algorithm that Iterates Path Relocation with New Resource Utilization Metrics“. 電子情報通信学会, 2013. https://search.ieice.org/.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
32

Patel, Rahul. „Maximum Likelihood – Expectation Maximum Reconstruction with Limited Dataset for Emission Tomography“. Akron, OH : University of Akron, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1175781554.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S.)--University of Akron, Dept. of Biomedical Engineering, 2007.
"May, 2007." Title from electronic thesis title page (viewed 04/26/2009) Advisor, Dale Mugler; Co-Advisor, Anthony Passalaqua; Committee member, Daniel Sheffer; Department Chair, Daniel Sheffer; Dean of the College, George K. Haritos; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
33

Hermer, Neal [Verfasser], Russell [Akademischer Betreuer] Luke, Russell [Gutachter] Luke und Anja [Gutachter] Sturm. „Random Function Iterations for Stochastic Feasibility Problems / Neal Hermer ; Gutachter: Russell Luke, Anja Sturm ; Betreuer: Russell Luke“. Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2019. http://d-nb.info/1181427460/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
34

Vondrak, David A. „Adaptive selections of sample size and solver iterations in stochastic optimization with application to nonlinear commodity flow problems“. Thesis, Monterey, California. Naval Postgraduate School, 2009. http://hdl.handle.net/10945/4818.

Der volle Inhalt der Quelle
Annotation:
We present an algorithm to approximately solve certain stochastic nonlinear programs through sample average approximations. The sample sizes in these approximations are selected by approximately solving optimal control problems defined on a discrete-time dynamic system. The optimal-control problem seeks to minimize the computational effort required to reach a near-optimal objective value of the stochastic nonlinear program. Unknown control-problem parameters such as rate of convergence, computational effort per solver iteration, and optimal value of the program are estimated within a receding horizon framework as the algorithm progresses. The algorithm is illustrated with single-commodity and multi-commodity network flow models. Measured against the best alternative heuristic policy we consider for selecting sample sizes, the algorithm finds a near-optimal objective value on average up to 17% faster. Further, the optimal-control problem also leads to a 40% reduction in standard deviation of computing times over a set of independent runs of the algorithm on identical problem instances. When we modify the algorithm by selecting a policy heuristically in the first stage (only), we improve computing time, on average, by nearly 47% against the best heuristic policy considered, while reducing the standard deviation across the independent runs by 55%.
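A minimal sketch of a sample-average-approximation loop with an adaptively growing sample size (the names are hypothetical, and a simple stopping-and-doubling rule stands in for the receding-horizon optimal-control policy developed in the thesis):

def saa_adaptive(solve_saa, sample, n0=50, growth=2.0, tol=1e-3, max_rounds=10):
    """Solve a stochastic program by sample average approximation, enlarging the
    sample until the objective estimate stabilizes. `solve_saa(scenarios)` is a
    placeholder solver returning (solution, objective value)."""
    n, prev_obj, x = n0, None, None
    for _ in range(max_rounds):
        scenarios = sample(n)               # draw n scenarios
        x, obj = solve_saa(scenarios)       # solve the n-sample approximation
        if prev_obj is not None and abs(obj - prev_obj) <= tol * max(1.0, abs(obj)):
            break                           # objective estimate has stabilized
        prev_obj, n = obj, int(growth * n)  # otherwise enlarge the sample
    return x, n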
APA, Harvard, Vancouver, ISO und andere Zitierweisen
35

Vondrak, David A. „Adaptive selections of sample size and solver iterations in stochastic optimization with application to nonlinear commodity flow problems“. Monterey, Calif. : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/March/09Mar%5FVondrak.pdf.

Der volle Inhalt der Quelle
Annotation:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, March 2009.
Thesis Advisor(s): Royset, Johannes O. "March 2009." Description based on title screen as viewed on April 23, 2009. Author(s) subject terms: Nonlinear Stochastic Optimization, Optimal Control, Dynamic Programming, Network Commodity Flow, Sample Average Approximation, Projected Gradient Method. Includes bibliographical references (p. 37-38). Also available in print.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
36

Guenther, Marco. „Effizient Programmieren mit der C++ Standard Template Library“. Universitätsbibliothek Chemnitz, 2003. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200300381.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
37

Mazine, Alexandre. „La caractérisation de front d'onde dans un système de propagation à multi-illumination gérée par un modulateur spatial de lumière“. Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00001748.

Der volle Inhalt der Quelle
Annotation:
Phase characterization is the cornerstone of wavefront analysis. As this field of activity keeps broadening, it calls for new means of control that are more efficient, more capable and cheaper. The performance of the most widely used techniques in this domain relies largely on sophisticated optical equipment, whereas the use of free-propagation diffraction makes it possible to simplify the hardware as far as possible and to shift the burden onto intelligent data-processing algorithms. The objective of this work is to develop a multi-view wavefront analysis technique and to build a prototype setup capable of characterizing various phase maps. The core of the study is a method for recovering the phase shape of an unknown wave from a sequence of its diffraction patterns created with a spatial light modulator, following the principle of multiple illuminating waves. To this end, an iterative algorithm of the IFTA type (Iterative Fourier/Fresnel Transform Algorithm), called "multi-illumination", was implemented in two versions, assuming either coherent imaging conditions combined with free propagation, or a double Fresnel/Fourier propagation. The algorithm was verified both in numerical simulation and on an experimental optical setup driven by in-house control software. The results obtained demonstrate its reliable and particularly fast convergence. Keywords: wavefront analysis, phase retrieval, spatial light modulator, multi-view, iterative algorithms based on the discrete Fourier/Fresnel transform.
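For orientation, a minimal Gerchberg–Saxton-style loop, the single-measurement ancestor of the IFTA family described above (the multi-illumination algorithm of the thesis cycles over several measured diffraction patterns instead of one; propagation is modelled here by a plain FFT):

import numpy as np

def gerchberg_saxton(source_amp, target_amp, n_iter=100):
    """Estimate a phase map such that a field of known amplitude `source_amp`
    propagates to the measured far-field amplitude `target_amp`."""
    rng = np.random.default_rng(0)
    field = source_amp * np.exp(1j * 2 * np.pi * rng.random(source_amp.shape))
    for _ in range(n_iter):
        far = np.fft.fft2(field)                          # propagate to the measurement plane
        far = target_amp * np.exp(1j * np.angle(far))     # impose the measured amplitude
        near = np.fft.ifft2(far)                          # propagate back
        field = source_amp * np.exp(1j * np.angle(near))  # impose the known source amplitude
    return np.angle(field)                                # recovered phase map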
APA, Harvard, Vancouver, ISO und andere Zitierweisen
38

Meier, Daniel Steven. „Generative Modeling as a tool in Urban Riverfront Design; an exploration of Parametric Design in Landscape Architecture“. The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338355682.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
39

Hosking, Peter. „Policy Reform and Resistance: A Case Study of Police Pursuit Policy Change in Queensland, Australia“. Thesis, Griffith University, 2022. http://hdl.handle.net/10072/413694.

Der volle Inhalt der Quelle
Annotation:
Police high-speed vehicular pursuits are contentious due to lives lost and property damage resulting from unintended crashes. To reduce pursuit-related trauma, and potential litigation, police jurisdictions have introduced restrictive policies that limit when officers may engage in a pursuit. However, opponents of restrictive pursuit policies believe this results in reduced deterrence, increased criminal offending and dangerous driving practices. This thesis tested these assumptions using a single case study of Queensland, Australia, where the Queensland Police Service (QPS) implemented two restrictive pursuit policy iterations in 2007 and 2011. Five studies sought to establish the policies’ specific aims; whether they were achieved; if there was resistance to the policy reforms; and, if so, what were the rationales for such resistance. The major theoretical contribution of this thesis was to support the notion that Dent and Goldberg’s (1999a; 1999b) Loss Resistance Theory can help explain why police might resist policy reform. Loss Resistance Theory argues that change per se is not the root cause for resistance to performance altering policies, but stakeholders’ perceived losses in terms of their autonomy, status, and independent discretion, resulting from the policy change. Lipsky’s (2010) Discretionary Independence Theory applied to police officers acting as ‘street-level bureaucrats’ (Lipsky, 2010), provided an additional theoretical platform to test policy limitations on officers’ decision-making. Several other theories were derived from the literature and used to assist data collection and provide focus to the analysis. These included Classical Deterrence Theory, as derived from Hobbes (1651), Beccaria (1764/1872) and Bentham (1780/1988), that was tested relative to alterations in offending behaviour. Moore’s Public Value Theory (1995), that explains public acceptance of authority and coercion is judged against citizens’ expectations for justice, fairness, efficiency, and effectiveness, provided an opportunity to explore external policy acceptance and/or resistance. Cohen and Felson’s (1979) Routine Activity Theory proposes that for a crime to be successfully committed, the three necessary elements are a motivated offender, the availability of a suitable target, and the absence of a capable guardian. The theory was tested by analysis of crime patterns and offending behaviours where police guardianship may have been affected by the policy restrictions. The research began with study 1, a documentary archival search and analysis spanning the pre-policy years from 1989 to 2006, that sought to confirm the intent of Queensland’s restrictive pursuit policies. Applying a One Group Pretest/Posttest Design method, using official QPS data from 2003 to 2015, study 2 then explored changes in pursuit frequencies, and associated trauma, before and after each policy iteration. Study 3 used the same method to test changes in frequency and rate of selected offence categories. Study 4 analysed operational reports to identify policy noncompliance that may infer resistance. And, finally, study 5 analysed interview responses from fifteen operational police officers. Findings from study 1 reveal the primary intent of the policies was to reduce the number of deaths associated with police pursuits. Study 2 found that both restrictive policies reduced pursuit-related trauma, as intended. 
Crime classes tested in study 3 all showed reductions to varying degrees, except evasion offences, which increased exponentially. Early policy resistance was evident from the results of study 4 but diminished over time. The results of studies 4 & 5 found early resistance to the restrictive policies was predicated on officers’ fears of potential loss to their autonomy, independent decision-making capacity, and operational feasibility. This research established that restrictive police pursuit policies did not contribute to increases in the general road death toll due to any lack of road policing enforcement, as predicted in the literature. And, except for evasion offences, they did not facilitate increased crime where the use of a vehicle is either mandatory or desirable for the successful completion of the offence. With the passing of time, and the negation of pre-empted outcomes, resistance is now largely eliminated. However, police officers reportedly continue to resist applying the evasion offence policy requirements, while in their view prosecutors and magistrates fail to adhere to the relevant legislation. Future researchers may wish to test the findings in an alternative jurisdiction to establish if the results can be equally observed and replicated. However, the findings imply that police administrators contemplating policy reform should focus greater attention and resources on ongoing training investment before and after policy implementation. Their goal should be to ensure officers are thoroughly versed in the organization’s aims, so that policies may be fully embraced by operational and prosecutorial staff, while assuaging any perceived losses from the outset, particularly to officers’ status and authority.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Crim & Crim Justice
Arts, Education and Law
Full Text
APA, Harvard, Vancouver, ISO und andere Zitierweisen
40

Nel, Philip J. „Indigenous knowledge systems and language practice : interface of a knowledge discourse“. Journal for New Generation Sciences : Socio-constructive language practice : training in the South African context : Special Edition, Vol 6, Issue 3: Central University of Technology, Free State, Bloemfontein, 2008. http://hdl.handle.net/11462/516.

Der volle Inhalt der Quelle
Annotation:
Published Article
The paper seeks to engage constructively with the challenges and opportunities Indigenous Knowledge (IK) may offer disciplines in Language Practice. The approach will be contextualized in terms of the theoretical shift in knowledge production and use, as well as the current debate pertaining to the feasibility of the incorporation of IK into curricula. Specific attention will be rendered to topics of Africanizing scholarship, a performance model of knowledge, the socio-cultural embeddedness of language, and brief thoughts on the translation of the oral. These thematic issues are of particular importance to Language Practice, perceived here to be at the gateway between theory of language/communication and receiver communities.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
41

Arruda, Dilermano Honório de. „O uso da calculadora simples em sala de aula“. Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tde/2956.

Der volle Inhalt der Quelle
Annotation:
The aim of this study is to present a tool that everyone owns, that is easy to carry and that therefore has popular reach. The first sections present some elementary mathematical operations carried out with a simple calculator. We begin by studying integer division and how to find the remainder on the calculator, how to operate with fractions on a calculator and give the answer as a fraction, how to find the generating fraction of a repeating decimal, and how to extract the n-th root of a real number. For each of these operations we also take care to present its applications and, when necessary, to carry out a deeper mathematical study, as in the case of congruences modulo m for repeating decimals, or derivatives for n-th roots. In iterations with addition, for example, we work on the appropriate use of certain keys for a quick procedure in the study of arithmetic progressions, simple interest and first-degree polynomial functions. In the section on iterations with the product, we propose a study of geometric progressions, exponential functions and compound interest. Iterating with Maclaurin formulas, we will see how the simple calculator can be used to find values of the circular and hyperbolic trigonometric functions, as well as values of decimal and natural logarithms. This work shows the reader that mathematical knowledge can get more out of the simple calculator: it can answer questions whose calculations would otherwise only be set up, save time in computation and leave the remaining time for devising the solution strategy, among other advantages the reader will discover.
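A small sketch of two of the procedures discussed, written as code rather than calculator keystrokes (the keystroke sequences themselves are worked out in the dissertation):

def remainder(a, b):
    """Remainder of the integer division a / b using only the calculator operations:
    divide, drop the decimal part, multiply back and subtract."""
    q = int(a / b)          # on the calculator: a divided by b, keep only the integer part
    return a - q * b        # a minus q times b is the remainder

def maclaurin_sin(x, terms=6):
    """Approximate sin(x) from the first terms of its Maclaurin series,
    using only additions, multiplications and divisions."""
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))   # next odd-power term of the series
    return total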
I want to thank my friends, those whom time has separated but memories have not, and especially the 29 new friends the master's programme gave me: Amanda, Viviane, Leozão, Hélio, Flávio, Carlos André, Fabão, Rogério, Marquinhos, Junior César, Paulo César, Edmaldo, Haniel, Simão, Leandro, Kadu, Frederico, Kariton, Marcão, Welington. To those who went beyond friendship and became brothers: Túlio, Robison, Alan, Hugo César, César Pereira, Leo Alcântara, Maradona, Eduardo Vasconcelos and Mateus, I owe my gratitude. To those who read or listened carefully to the first versions, who gave opinions, corrected and criticized, my sincere thanks. I wish to thank my family more personally, for their unconditional support, for listening to me even when they understood nothing. To Dagma and Dalma for the daily lessons of strength and courage; to my wife Juliana, who managed to endure the writing of a book inside the house and believed I could write it. My greatest thanks go to my children: Priscila, for the hours on the phone listening to me; Pedro, Mariana and Gabriel, for correcting or rewriting my Portuguese, for the jokes, for the video-game football matches meant to relieve my tiredness, for making me go to the cinema or for doing nothing at all at the end of my working days. Without that, and without them, I would not have survived writing this.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
42

Suaysuwan, Noparat. „English language textbooks in Thailand 1960-1997 : constructing postwar, industrial and global iterations of Thai society through and for the child language learner /“. [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18722.pdf.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
43

Létourneau, Étienne. „Impact of algorithm, iterations, post-smoothing, count level and tracer distribution on single-frame positron emission tomography quantification using a generalized image space reconstruction algorithm“. Thesis, McGill University, 2012. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=110750.

Der volle Inhalt der Quelle
Annotation:
Positron Emission Tomography (PET) is a medical imaging technique tracing the functional processes inside a subject. One of the common applications of this device is to perform a subjective diagnostic from the images. However, quantitative imaging (QI) allows one to perform an objective analysis as well as providing extra information such as time activity curves (TAC) and visual details that the eye can't see. The aim of this work was to compare several reconstruction algorithms, such as ML-EM PSF, ISRA PSF and its related algorithms, and FBP for single-frame imaging, and to develop a robust analysis of quantitative performance as a function of the region of interest (ROI), the size of the ROI, the noise level, the activity distribution and the post-smoothing parameters. By simulating an acquisition using a 2-D digital axial brain phantom in Matlab, the techniques were compared from a quantitative point of view, supported by visual figures as explanatory tools, using the Mean Absolute Error (MAE) and the bias-variance relation. Results show that the performance of each algorithm depends mainly on the number of counts coming from the ROI and on the iteration/post-smoothing combination, which, when adequately chosen, allows nearly every algorithm to give similar quantitative results in most cases. Among the 10 analysed techniques, 3 distinguished themselves: ML-EM PSF, ISRA PSF with the smoothed expected data as weight, and FBP with adequate post-smoothing were the main contenders for achieving the lowest MAE. Keywords: Positron Emission Tomography, Maximum-Likelihood Expectation-Maximization, Image Space Reconstruction Algorithm, Filtered Backprojection, Mean Absolute Error, Quantitative Imaging.
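For reference, the basic ML-EM update on which this family of reconstruction algorithms builds (written here in generic emission-tomography notation, not specific to this thesis):

\[
\lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}} \sum_i a_{ij}\,\frac{y_i}{\sum_l a_{il}\,\lambda_l^{(k)}},
\]

where $y_i$ are the measured counts in detector bin $i$, $a_{ij}$ the system matrix elements and $\lambda_j^{(k)}$ the current image estimate; the PSF variants fold the detector response into $a_{ij}$.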
APA, Harvard, Vancouver, ISO und andere Zitierweisen
44

Barros, Carlos Eduardo Rosado de. „Sequências monótonas aplicadas a um problema de cauchy para um sistema de reação-difusão-convecção“. Universidade Federal de Goiás, 2015. http://repositorio.bc.ufg.br/tede/handle/tede/5543.

Der volle Inhalt der Quelle
Annotation:
Conselho Nacional de Pesquisa e Desenvolvimento Científico e Tecnológico - CNPq
In this work, based mainly on the articles [1], [2] and [7], we study a reaction-diffusion-convection system related to the propagation of a combustion front through a porous medium, which leads to a Cauchy problem. This problem is approached by the method of monotone iterations, which yields a unique time-global solution.
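Schematically, the method of monotone iterations builds, from an ordered pair of lower and upper solutions, two monotone sequences that squeeze the solution (generic form, not the precise operators used in the dissertation):

\[
\underline{u}^{(0)} \le \underline{u}^{(1)} \le \cdots \le \underline{u}^{(k)} \le \cdots \le \overline{u}^{(k)} \le \cdots \le \overline{u}^{(1)} \le \overline{u}^{(0)},
\]

where each iterate solves a linear problem of the type $u_t - D\,\Delta u + c\,u = c\,u^{(k)} + f(u^{(k)})$, with $c$ a constant dominating the Lipschitz behaviour of the nonlinearity $f$; under the hypotheses of the theory both sequences converge to the unique time-global solution of the Cauchy problem.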
APA, Harvard, Vancouver, ISO und andere Zitierweisen
45

Garay, Jose. „Asynchronous Optimized Schwarz Methods for Partial Differential Equations in Rectangular Domains“. Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/510451.

Der volle Inhalt der Quelle
Annotation:
Mathematics
Ph.D.
Asynchronous iterative algorithms are parallel iterative algorithms in which communications and iterations are not synchronized among processors. Thus, as soon as a processing unit finishes its own calculations, it starts the next cycle with the latest data received during a previous cycle, without waiting for any other processing unit to complete its own calculation. These algorithms increase the number of updates in some processors (as compared to the synchronous case) but suppress most idle times. This usually results in a reduction of the (execution) time to achieve convergence. Optimized Schwarz methods (OSM) are domain decomposition methods in which the transmission conditions between subdomains contain operators of the form $\partial/\partial \nu +\Lambda$, where $\partial/\partial \nu$ is the outward normal derivative and $\Lambda$ is an optimized local approximation of the global Steklov–Poincaré operator. There is more than one family of transmission conditions that can be used for a given partial differential equation (e.g., the $OO0$ and $OO2$ families), each of these families containing a particular approximation of the Steklov–Poincaré operator. These transmission conditions have some parameters that are tuned to obtain a fast convergence rate. Optimized Schwarz methods are fast in terms of iteration count and can be implemented asynchronously. In this thesis we analyze the convergence behavior of the synchronous and asynchronous implementation of OSM applied to solve partial differential equations with a shifted Laplacian operator in bounded rectangular domains. We analyze two cases. In the first case we have a shift that can be either positive, negative or zero, a one-way domain decomposition and transmission conditions of the $OO2$ family. In the second case we have Poisson's equation, a domain decomposition with cross-points and $OO0$ transmission conditions. In both cases we reformulate the equations defining the problem into a fixed point iteration that is suitable for our analysis, then derive convergence proofs and analyze how the convergence rate varies with the number of subdomains, the amount of overlap, and the values of the parameters introduced in the transmission conditions. Additionally, we find the optimal values of the parameters and present some numerical experiments for the second case illustrating our theoretical results. To our knowledge this is the first time that a convergence analysis of optimized Schwarz is presented for bounded subdomains with multiple subdomains and arbitrary overlap. The analysis presented in this thesis also applies to problems with more general domains which can be decomposed as a union of rectangles.
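In schematic form, the Robin-type transmission conditions referred to above couple neighbouring subdomains through updates such as (illustrative notation):

\[
\left(\frac{\partial}{\partial \nu_i} + \Lambda_i\right) u_i^{\,k+1} \;=\; \left(\frac{\partial}{\partial \nu_i} + \Lambda_i\right) u_j^{\,k} \qquad \text{on } \Gamma_{ij},
\]

where $u_i^{\,k}$ is the iterate on subdomain $\Omega_i$, $\Gamma_{ij}$ its interface with the neighbour $\Omega_j$, and $\Lambda_i$ a local ($OO0$ zeroth-order or $OO2$ second-order) approximation of the Steklov–Poincaré operator whose free parameters are tuned for fast convergence; in the asynchronous version the right-hand side uses whatever neighbour data has most recently arrived.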
Temple University--Theses
APA, Harvard, Vancouver, ISO und andere Zitierweisen
46

Mateo, Garcia Ignacio de [Verfasser], Andreas [Akademischer Betreuer] Griewank, Nicolas R. [Akademischer Betreuer] Gauger und Willy J. F. [Akademischer Betreuer] Govaerts. „Iterative matrix-free computation of Hopf bifurcations as Neimark-Sacker points of fixed point iterations / Ignacio de Mateo Garcia. Gutachter: Andreas Griewank ; Nicolas R. Gauger ; Willy J. F. Govaerts“. Berlin : Humboldt Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://d-nb.info/1020871148/34.

Der volle Inhalt der Quelle
APA, Harvard, Vancouver, ISO und andere Zitierweisen
47

Wang, Qianxue. „Création et évaluation statistique d'une nouvelle famille de générateurs pseudo-aléatoires chaotiques“. Thesis, Besançon, 2012. http://www.theses.fr/2012BESA2031.

Der volle Inhalt der Quelle
Annotation:
In this thesis, a new way to generate pseudorandom numbers is presented. The proposition is to mix two existing generators with discrete chaotic iterations that satisfy Devaney's definition of chaos. A rigorous framework is introduced, in which topological properties of the resulting generator are given, and two practical designs are presented and evaluated. It is shown that the statistical quality of the input generators can be greatly improved in this way, thus fulfilling up-to-date standards. Comparisons between these two designs and existing generators are investigated in detail. Among other things, it is established that the second design outperforms the first one, both in terms of performance and speed. In the first part of this manuscript, the iteration function embedded into the chaotic iterations is the vectorial Boolean negation. In the second part, we propose a method using graphs with strongly connected components as a selection criterion. We are thus able to change the iteration function without degrading the good properties of the associated generator. Simulation results and a basic security analysis are then presented to evaluate the randomness of this new family of pseudorandom generators. Finally, an illustration in the field of information hiding is presented, and the robustness of the resulting data hiding algorithm against attacks is evaluated.
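A minimal sketch of the general mixing idea (the state size, the two stand-in inner generators and the output rule are illustrative choices, not the exact designs evaluated in the thesis):

import random

def ci_step(state, gen_bits, gen_count, n_bits=32):
    """One step of a chaotic-iterations generator: `gen_count` chooses how many
    elementary iterations to chain, `gen_bits` chooses which component the
    vectorial Boolean negation is applied to at each of them."""
    for _ in range(gen_count()):
        i = gen_bits() % n_bits   # component selected by the first inner generator
        state ^= 1 << i           # negate that Boolean component of the state
    return state

# Example run with two ordinary generators standing in for the inputs to be mixed.
state = random.getrandbits(32)
outputs = []
for _ in range(4):
    state = ci_step(state, lambda: random.getrandbits(8), lambda: 1 + random.getrandbits(2))
    outputs.append(state)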
APA, Harvard, Vancouver, ISO und andere Zitierweisen
48

Gbikpi, benissan Tete guillaume. „Méthodes asynchrones de décomposition de domaine pour le calcul massivement parallèle“. Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC071/document.

Der volle Inhalt der Quelle
Annotation:
An important class of numerical methods features a scalability property well known as Amdahl's law, which constitutes the main limiting drawback of parallel computing, as it establishes an upper bound on the number of parallel processing units that can be used to speed a computation up. Extensive research activities are therefore conducted on both mathematical and computer science aspects to increase this bound, in order to be able to squeeze the most out of parallel machines. Domain decomposition methods introduce a natural and optimal approach to solve large numerical problems in a distributed way. They consist in dividing the geometrical domain on which an equation is defined, then iteratively processing each sub-domain separately, while ensuring the continuity of the solution and of its derivative across the junction interface between them. In the present work, we investigate the removal of the scalability bound by applying the theory of asynchronous iterations in various decomposition frameworks, both for space and time domains. We cover various aspects of the development of asynchronous iterative algorithms, from theoretical convergence analysis to effective parallel implementation. Efficient asynchronous domain decomposition methods are thus successfully designed, as well as a new communication library for the quick asynchronous experimentation of existing scientific applications.
APA, Harvard, Vancouver, ISO und andere Zitierweisen
49

Demir, Abdullah. „Form Finding And Structural Analysis Of Cables With Multiple Supports“. Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613609/index.pdf.

Der volle Inhalt der Quelle
Annotation:
Cables are highly nonlinear structural members under transverse loading. This nonlinearity is mainly due to the close relationship between the final geometry under transverse loads and the resulting stresses in its equilibrium state rather than the material properties. In practice, cables are usually used as isolated single-segment elements fixed at the ends. Various studies and solution procedures suggested by researchers are available in the literature for such isolated cables. However, not much work is available for continuous cables with multiple supports. In this study, a multi-segment continuous cable is defined as a cable fixed at the ends and supported by a number of stationary roller supports in between. The total cable length is assumed constant and the intermediate supports are assumed to be frictionless. Therefore, the critical issue is to find the distribution of the cable length among its segments in the final equilibrium state. Since the solution of single-segment cables is available, the additional condition to be satisfied for multi-segment continuous cables with multiple supports is stress continuity at the intermediate support locations where successive cable segments meet. A predictive/corrective iteration procedure is proposed for this purpose. The solution starts with an initially assumed distribution of the total cable length among the segments, and each segment is analyzed as an independent isolated single-segment cable. In general, the stress continuity between the cable segments will not be satisfied unless the assumed distribution of cable length is the correct distribution corresponding to the final equilibrium state. In the subsequent iterations the segment lengths are readjusted to eliminate the unbalanced tensions at segment junctions. The iterations are continued until stress continuity is satisfied at all junctions. Two alternative approaches are proposed for the segment length adjustments: the direct stiffness method and the tension distribution method. Both techniques have been implemented in a software program for the analysis of multi-segment continuous cables, and some sample problems are analyzed for verification. The results are satisfactory and compare well with those obtained by the commercial finite element program ANSYS.
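A schematic of the predictive/corrective loop described above (the single-segment solver and the simple relaxation rule are placeholders; the thesis develops two specific adjustment schemes, the direct stiffness and the tension distribution methods):

def balance_segments(solve_segment, n_seg, total_length, relax=0.5, tol=1e-6, max_iter=200):
    """Distribute `total_length` among `n_seg` segments so that cable tension is
    continuous at the frictionless intermediate supports. `solve_segment(i, L)` is a
    placeholder that analyses segment i as an isolated cable of length L and returns
    its end tensions (T_left, T_right)."""
    lengths = [total_length / n_seg] * n_seg                # predictor: equal segments
    for _ in range(max_iter):
        ends = [solve_segment(i, L) for i, L in enumerate(lengths)]
        # unbalanced tension at each interior support (junction of segments i and i+1)
        unbalance = [ends[i][1] - ends[i + 1][0] for i in range(n_seg - 1)]
        if max(abs(u) for u in unbalance) < tol:
            break
        for i, u in enumerate(unbalance):                   # corrector step
            dL = relax * u / (abs(ends[i][1]) + abs(ends[i + 1][0]) + 1e-12) * lengths[i]
            lengths[i] += dL                                # feeding length to the tighter segment
            lengths[i + 1] -= dL                            # lowers its tension; total length fixed
    return lengths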
APA, Harvard, Vancouver, ISO und andere Zitierweisen
50

Zou, Weiyao. „OPTIMIZATION OF ZONAL WAVEFRONT ESTIMATION AND CURVATURE MEASUREMENTS“. Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4134.

Der volle Inhalt der Quelle
Annotation:
Optical testing in adverse environments, ophthalmology and applications where characterization by curvature is leveraged all have a common goal: accurately estimate wavefront shape. This dissertation investigates wavefront sensing techniques as applied to optical testing based on gradient and curvature measurements. Wavefront sensing involves the ability to accurately estimate shape over any aperture geometry, which requires establishing a sampling grid and estimation scheme, quantifying estimation errors caused by measurement noise propagation, and designing an instrument with sufficient accuracy and sensitivity for the application. Starting with gradient-based wavefront sensing, a zonal least-squares wavefront estimation algorithm for any irregular pupil shape and size is presented, for which the normal matrix equation sets share a pre-defined matrix. A Gerchberg–Saxton iterative method is employed to reduce the deviation errors in the estimated wavefront caused by the pre-defined matrix across the discontinuous boundary. The results show that the RMS deviation error of the estimated wavefront from the original wavefront can be less than λ/130 to λ/150 (for λ = 632.8 nm) after about twelve iterations, and less than λ/100 after as few as four iterations. The presented approach to handling irregular pupil shapes applies equally well to wavefront estimation from curvature data. A defining characteristic for a wavefront estimation algorithm is its error propagation behavior. The error propagation coefficient can be formulated as a function of the eigenvalues of the wavefront estimation-related matrices, and such functions are established for each of the basic estimation geometries (i.e. Fried, Hudgin and Southwell) with a serial numbering scheme, where a square sampling grid array is sequentially indexed row by row. The results show that with the wavefront piston value fixed, odd-number grid sizes yield lower error propagation than even-number grid sizes for all geometries. The Fried geometry either allows sub-sized wavefront estimations within the testing domain or yields a two-rank-deficient estimation matrix over the full aperture; but the latter usually suffers from high error propagation and the waffle mode problem. The Hudgin geometry offers an error propagator between those of the Southwell and the Fried geometries. For both wavefront gradient-based and wavefront difference-based estimations, the Southwell geometry is shown to offer the lowest error propagation with the minimum-norm least-squares solution. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-number grid size. For curvature-based wavefront sensing, a concept for a differential Shack-Hartmann (DSH) curvature sensor is proposed. This curvature sensor is derived from the basic Shack-Hartmann sensor with the collimated beam split into three output channels, along each of which a lenslet array is located. Three Hartmann grid arrays are generated by the three lenslet arrays. Two of the lenslets shear in two perpendicular directions relative to the third one. By quantitatively comparing the Shack-Hartmann grid coordinates of the three channels, the differentials of the wavefront slope at each Shack-Hartmann grid point can be obtained, so the Laplacian curvatures and twist terms will be available.
The acquisition of the twist terms using a Hartmann-based sensor allows us to uniquely determine the principal curvatures and directions more accurately than prior methods. Measurement of local curvatures as opposed to slopes is unique because curvature is intrinsic to the wavefront under test, and it is an absolute as opposed to a relative measurement. A zonal least-squares-based wavefront estimation algorithm was developed to estimate the wavefront shape from the Laplacian curvature data, and validated. An implementation of the DSH curvature sensor is proposed and an experimental system for this implementation was initiated. The DSH curvature sensor shares the important features of both the Shack-Hartmann slope sensor and Roddier's curvature sensor. It is a two-dimensional parallel curvature sensor. Because it is a curvature sensor, it provides absolute measurements which are thus insensitive to vibrations, tip/tilts, and whole body movements. Because it is a two-dimensional sensor, it does not suffer from other sources of errors, such as scanning noise. Combined with sufficient sampling and a zonal wavefront estimation algorithm, both low and mid frequencies of the wavefront may be recovered. Notice that the DSH curvature sensor operates at the pupil of the system under test, therefore the difficulty associated with operation close to the caustic zone is avoided. Finally, the DSH-curvature-sensor-based wavefront estimation does not suffer from the 2π-ambiguity problem, so potentially both small and large aberrations may be measured.
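As a small illustration of the zonal least-squares idea (a simplified Hudgin-type slope geometry on a full square pupil, not the irregular-pupil or curvature-based estimators developed in the dissertation):

import numpy as np

def zonal_reconstruct(sx, sy, h=1.0):
    """Least-squares zonal wavefront estimate from x/y slope data on an n-by-n grid."""
    n = sx.shape[0]
    A, b = [], []
    for i in range(n):                       # (W[i, j+1] - W[i, j]) / h = sx[i, j]
        for j in range(n - 1):
            row = np.zeros(n * n)
            row[i * n + j + 1], row[i * n + j] = 1.0 / h, -1.0 / h
            A.append(row)
            b.append(sx[i, j])
    for i in range(n - 1):                   # (W[i+1, j] - W[i, j]) / h = sy[i, j]
        for j in range(n):
            row = np.zeros(n * n)
            row[(i + 1) * n + j], row[i * n + j] = 1.0 / h, -1.0 / h
            A.append(row)
            b.append(sy[i, j])
    w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    w = w.reshape(n, n)
    return w - w.mean()                      # fix the piston, which the slopes leave undetermined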
Ph.D.
Optics and Photonics
Optics and Photonics
Optics PhD
APA, Harvard, Vancouver, ISO und andere Zitierweisen
