Doctoral dissertations on the topic "Algorithmes à phases"
Create a correct reference in APA, MLA, Chicago, Harvard and many other styles
Consult the 50 best doctoral dissertations for your research on the topic "Algorithmes à phases".
An "Add to bibliography" button is available next to each work. Use it and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication in .pdf format and read its abstract online, if the corresponding parameters are available in the metadata.
Browse doctoral dissertations from a wide variety of disciplines and organise your bibliography correctly.
Baala, Oumaya. "Protocole de validation en deux phases : algorithme généralisé". Paris 8, 1996. http://www.theses.fr/1996PA081044.
Maggiochi, Paul. "Développement d'algorithmes de calcul d'équilibres entre phases pour la simulation des procédés chimiques". Toulouse, INPT, 1986. http://www.theses.fr/1986INPT026G.
Deredempt, Olivier. "Étude comparative des algorithmes de recherche des phases de la porteuse et de l'horloge de bits d'un signal à modulation de phase". Toulouse, INPT, 1993. http://www.theses.fr/1993INPT150H.
Przybylski, Anthony. "Méthode en deux phases pour la résolution exacte de problèmes d'optimisation combinatoire comportant plusieurs objectifs : nouveaux développements et application au problème d'affectation linéaire". Nantes, 2006. http://www.theses.fr/2006NANT2123.
The purpose of this work is the exact solution of multi-objective combinatorial optimisation problems with the two phase method. For this, we use the assignment problem as a support for our investigations. The two phase method is a general solving scheme popularized by Ulungu in 1993. The main idea of this method is to exploit the specific structure of combinatorial optimisation problems in a multi-objective context. It has been applied to a number of problems, with a limitation to the bi-objective case. We present improvements to this method and to its application to the bi-objective assignment problem. In particular, we propose improved upper bounds and the use of a ranking algorithm as the main routine in the second phase of the method. We then propose a generalisation of this method to the multi-objective case, done in two steps. For the first phase, we analyse the weight set decomposition in correspondence with the nondominated extreme points. This allows us to highlight a geometric notion of adjacency between these points and an optimality condition on their enumeration. The second phase consists in the definition and exploration of the area inside which enumerations are required to finalize the resolution of the problem. Our solution is based primarily on an appropriate description of this area, which allows it to be explored by analogy with the bi-objective case. It is therefore possible to reuse a strategy developed for that case. Experimental results on the three-objective assignment problem show the efficiency of the method.
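The first phase of the two phase method enumerates the supported nondominated points via weighted-sum scalarisations explored dichotomically. A minimal sketch of that scheme, with a brute-force minimiser over a finite set of cost vectors standing in for an exact assignment solver (names and data are illustrative, not the thesis's code):

```python
def weighted_sum_min(points, l1, l2):
    # Exact single-objective solver stand-in: brute force over a finite set.
    return min(points, key=lambda p: l1 * p[0] + l2 * p[1])

def supported_points(points):
    # Dichotomic phase 1: start from the two lexicographic-style optima and
    # recursively probe with the weight vector normal to each frontier segment.
    z1 = weighted_sum_min(points, 10**6, 1)   # best on objective 1
    z2 = weighted_sum_min(points, 1, 10**6)   # best on objective 2
    found = {z1, z2}

    def explore(a, b):
        l1, l2 = a[1] - b[1], b[0] - a[0]     # weights normal to segment [a, b]
        c = weighted_sum_min(points, l1, l2)
        if l1 * c[0] + l2 * c[1] < l1 * a[0] + l2 * a[1]:
            found.add(c)                       # new supported point below [a, b]
            explore(a, c)
            explore(c, b)

    explore(z1, z2)
    return sorted(found)

pts = [(1, 9), (2, 7), (4, 4), (7, 2), (9, 1), (5, 5), (8, 8)]
print(supported_points(pts))  # → [(1, 9), (2, 7), (4, 4), (7, 2), (9, 1)]
```

The second phase (ranking inside the remaining triangles) is where the thesis's actual contribution lies and is not sketched here.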
Vincent, Thomas. "Caractérisation des solutions efficaces et algorithmes d’énumération exacts pour l’optimisation multiobjectif en variables mixtes binaires". Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=c984a17c-6904-454d-9b3a-e63846e9fb9b.
The purpose of this work is the exact solution of multiple objective binary mixed integer linear programmes. The mixed nature of the variables implies significant differences from purely continuous or purely discrete programmes. We therefore propose to take these differences into account using a proper representation of the solution sets and a dedicated update procedure. These propositions allow us to adapt to the biobjective case two solution methods commonly used for combinatorial problems: the Branch & Bound algorithm and the two phase method. Several improvements are proposed, such as bound sets and visiting strategies. We introduce a new routine for the second phase of the two phase method that takes advantage of all the relevant features of the previously studied methods. In the 3-objective context, the solution set representation is extended by analogy with the biobjective case. The solution methods are extended and studied as well; in particular, the decomposition of the search area during the second phase is thoroughly described. The proposed software solution has been applied to a real-world problem: the evaluation of a vehicle choice policy, where the possible choices range from classical vehicles to electric vehicles powered by grid or solar power.
Mahamdi, Célia. "Multi-Consensus distribué : agrégation et révocabilité". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS426.
This thesis presents two contributions to the field of distributed systems: OMAHA and the f-Revoke Consensus. OMAHA (Opportunistic Message Aggregation for pHase-based Algorithms) is a message aggregation mechanism designed for phase-based algorithms. In cloud environments, multiple applications share the same infrastructure, making bandwidth a critical resource. A significant portion of traffic in data centers consists of small messages. Each message includes a header, leading to substantial bandwidth consumption. Several mechanisms have been proposed to address this issue, but few consider application-specific characteristics; most rely on aggregation at the network layer. OMAHA leverages the features of phase-based algorithms to aggregate messages intelligently and opportunistically. Many applications, such as Google Spanner and Zookeeper, depend on phase-based algorithms. They are often message-intensive but offer a key advantage: predictable communications. By anticipating future communications, OMAHA delays messages and groups them with others intended for the same process. This approach reduces the number of messages sent over the network, resulting in bandwidth savings. Our experiments show bandwidth savings of up to 30%, while limiting latency degradation to 5% for the well-known Paxos algorithm. In distributed systems, achieving consensus on an action or value is complex, especially when processes face constraints. Many systems, including multi-agent systems (such as autonomous vehicles and robotics) and resource allocation systems, need to respect these constraints while working towards a common goal. Unfortunately, traditional consensus algorithms often overlook these constraints, focusing only on the values proposed by the processes. A straightforward solution would be to gather all constraints, but due to asynchrony and potential failures, this is impossible.
To handle failures, some algorithms set a limit on the number of faults they can tolerate. This allows them to move forward without waiting for responses from every process. As a result, the final decision is made by a subset of processes known as the majority, which leads to the exclusion of the minority's constraints. To tackle this problem, we introduce the f-Revoke Consensus. This new approach enables the selection of a value that takes processes' constraints into account. It also allows the revocation of a majority decision if it violates the constraints of a minority process. Importantly, convergence is ensured because the number of revocations is limited by the size of the minority. We developed two adaptations of the Paxos algorithm to implement this new consensus.
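The OMAHA idea of delaying small messages and grouping them per destination can be caricatured in a few lines; this per-destination buffer with a batch-size trigger and a flush deadline is a hypothetical mini-version with invented names, not the mechanism evaluated in the thesis:

```python
class Aggregator:
    # Toy per-destination buffer: hold small messages until either
    # max_batch messages are queued or the flush deadline expires.
    def __init__(self, send, max_batch=4, max_delay=0.05):
        self.send, self.max_batch, self.max_delay = send, max_batch, max_delay
        self.buffers = {}   # destination -> (first-enqueue time, [messages])

    def post(self, dest, msg, now):
        t0, msgs = self.buffers.setdefault(dest, (now, []))
        msgs.append(msg)
        if len(msgs) >= self.max_batch:
            self.flush(dest)

    def tick(self, now):
        # Called periodically: flush buffers whose deadline has passed.
        for dest in [d for d, (t0, _) in self.buffers.items()
                     if now - t0 >= self.max_delay]:
            self.flush(dest)

    def flush(self, dest):
        t0, msgs = self.buffers.pop(dest)
        self.send(dest, msgs)   # one network packet instead of len(msgs)

sent = []
agg = Aggregator(lambda d, ms: sent.append((d, ms)), max_batch=3)
for i in range(5):
    agg.post("acceptor-1", f"prepare-{i}", now=0.0)
agg.tick(now=0.1)
print(sent)
```

The point of the phase-based setting is that the sender can *predict* it will soon have more messages for the same acceptor, which is what justifies delaying the first one.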
Hollette, Matthieu. "Modélisation de la propagation des ondes élastiques dans un milieu composite à microstructure 3D". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00840603.
Deroulers, Christophe. "Application de la mécanique statistique à trois problèmes hors d'équilibre : algorithmes, épidémies, milieux granulaires". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2006. http://tel.archives-ouvertes.fr/tel-00102083.
Ben, Sedrine Emna. "Machines à commutation de flux à grand nombre de phases : modèles comportementaux en mode dégradé et élaboration d'une stratégie de commande en vue de l'amélioration de la tolérance aux pannes". Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0047/document.
In this thesis, we study the behavior of a five-phase flux switching permanent magnet machine (five-phase FSPM machine) in healthy and faulty modes. First, a comparison of electromagnetic performance between this machine and an equivalent three-phase machine is carried out. These performances are calculated by a Finite Element (FE 2D) model and validated by experiments. Results showed the advantages of the five-phase machine: higher torque density, lower torque ripple, lower short-circuit current, and the ability to tolerate phase faults. The open-circuit fault tolerance of this five-phase FSPM is then studied. The behavior of the machine (average torque, torque ripple, copper losses and the neutral current) in the case of an open circuit on a single phase, and on two adjacent or non-adjacent phases, is presented. Reconfiguration methods to improve operation are then proposed: a minimal reconfiguration that yields a supply equivalent to that of a three-phase or four-phase machine; an analytical calculation of optimal currents that cancels both the neutral current and the torque ripple while preserving the average torque; and a reconfiguration computed by a genetic optimization algorithm, a non-deterministic method handling multiple objective functions and constraints. In this context, various combinations of objectives and constraints are proposed, and the optimal currents are injected into the 2D FE model of the machine to check whether performance improves. The analytical torque model used in the optimization algorithm is then revised to take into account the influence of the degraded mode. Different solutions on the Pareto front are analyzed and the electromagnetic performances are improved; this is verified by FE 2D calculations and followed by experimental validation. The impact of faults on the radial magnetic forces is also analyzed.
In the second part of this work, the tolerance of the five-phase FSPM machine to short-circuit faults is studied. First, steps for fault isolation are proposed. Thereafter, short-circuit currents, taking into account the impact of the machine reluctance, are calculated analytically and their effects on machine performance are analyzed. Reconfigurations are again computed by genetic algorithm optimization, and the new reference currents improve operation in degraded mode. All results are validated by FE 2D calculation and experimentally. In conclusion, comparisons between the machine's fault tolerance to phase openings and to short-circuits are performed. The results support conclusions regarding the operation of this machine in healthy and degraded modes, with and without correction. Analytical, numerical and experimental results showed the good efficiency of the proposed control in improving fault tolerance to phase openings and short-circuits.
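As an illustration of the kind of stochastic search used for the current reconfiguration, here is a tiny single-objective, real-coded genetic algorithm with an invented penalty objective playing the role of the zero neutral-current constraint (the thesis uses a multi-objective GA on the machine model; this sketch is only indicative):

```python
import random

def genetic_minimise(f, dim, pop_size=40, gens=60, sigma=0.3):
    # Tiny real-coded GA: tournament selection, blend crossover,
    # Gaussian mutation, elitist survival. Single objective for brevity.
    random.seed(1)
    pop = [[random.uniform(-2, 2) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = random.sample(pop, 2)   # binary tournament
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            w = random.random()
            children.append([w * x + (1 - w) * y + random.gauss(0, sigma)
                             for x, y in zip(p, q)])
        pop = sorted(pop + children, key=f)[:pop_size]   # keep the best
    return pop[0]

# Invented stand-in objective: a quadratic "ripple" term plus a penalty
# that mimics the sum-of-currents-equals-zero (neutral current) constraint.
f = lambda c: sum((x - i) ** 2 for i, x in enumerate(c)) + 10 * abs(sum(c))
best = genetic_minimise(f, dim=3)
print([round(x, 1) for x in best])
```

The penalised optimum of this toy objective is near (-1, 0, 1) with cost 3; the GA should land close to it after a few dozen generations.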
Lesieur, Thibault. "Factorisation matricielle et tensorielle par une approche issue de la physique statistique". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS345/document.
In this thesis we present results on low-rank matrix and tensor factorization. Matrices being such a ubiquitous mathematical object, a lot of machine learning can be mapped to a low-rank matrix factorization problem. It is for example one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. The results presented in this thesis have been included in previous work [LKZ 201]. The problem of low-rank matrix estimation becomes harder once one adds constraints, like for instance the positivity of one of the factors of the factorization. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors, and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model and vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results we study in detail the phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.
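The "commonly used spectral methods" the abstract compares against can be illustrated on a spiked symmetric matrix: power iteration extracts the leading eigenvector of a noisy rank-one matrix, whose overlap with the hidden spike is large when the signal-to-noise ratio is above the spectral threshold (an illustrative pure-Python sketch, not Low-RAMP):

```python
import math
import random

def power_iteration(Y, iters=200):
    # Leading eigenvector of a symmetric matrix by repeated multiplication.
    n = len(Y)
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        w = [sum(Y[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

random.seed(0)
n = 60
x = [random.choice([-1.0, 1.0]) for _ in range(n)]          # hidden spike
snr = 3.0
Y = [[snr / n * x[i] * x[j] + random.gauss(0, 1 / math.sqrt(n))
      for j in range(n)] for i in range(n)]
# Symmetrise the noise so the matrix has a real spectrum.
Y = [[(Y[i][j] + Y[j][i]) / 2 for j in range(n)] for i in range(n)]
v = power_iteration(Y)
overlap = abs(sum(v[i] * x[i] for i in range(n))) / math.sqrt(n)
print(round(overlap, 2))  # well above 0 at this SNR; near 0 below the threshold
```

Low-RAMP improves on this baseline by exploiting the prior on the factors; the spectral estimate only uses the top of the spectrum.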
Kreis, Adrien. "Optimisation multiobjectifs de systèmes dynamiques : application à la suspension de groupes motopropulseurs de véhicules automobiles en phase d'avant-projet". Valenciennes, 2001. https://ged.uphf.fr/nuxeo/site/esupversions/8591a683-68e4-4103-8942-6ee1042e7cc9.
Aubin, Benjamin. "Mean-field methods and algorithmic perspectives for high-dimensional machine learning". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP083.
At a time when the use of data has reached an unprecedented level, machine learning, and more specifically deep learning based on artificial neural networks, has been responsible for very important practical advances. Its use is now ubiquitous in many fields of application, from image classification and text mining to speech recognition and time-series prediction. However, the understanding of many algorithms used in practice is mainly empirical and their behavior remains difficult to analyze. These theoretical gaps raise many questions about their effectiveness and potential risks. Establishing theoretical foundations on which to base numerical observations has become one of the fundamental challenges of the scientific community. The main difficulty that arises in the analysis of most machine learning algorithms is to handle, analytically and numerically, a large number of interacting random variables. In this manuscript, we revisit an approach based on the tools of the statistical physics of disordered systems. Developed through a rich literature, these tools have been designed precisely to infer the macroscopic behavior of a large number of particles from their microscopic interactions. At the heart of this work, we capitalize on the deep connection between the replica method and message passing algorithms in order to shed light on the phase diagrams of various theoretical models, with an emphasis on the potential differences between statistical and algorithmic thresholds. We essentially focus on synthetic tasks and data generated in the teacher-student paradigm. In particular, we apply these mean-field methods to the Bayes-optimal analysis of committee machines, to the worst-case analysis of Rademacher generalization bounds for perceptrons, and to empirical risk minimization in the context of generalized linear models.
Finally, we develop a framework to analyze estimation models with structured prior information, produced for instance by generative models based on deep neural networks with random weights.
Kleyn, Werner Frederick. "Decoding algorithms for continuous phase modulation". Master's thesis, University of Cape Town, 2002. http://hdl.handle.net/11427/6984.
Continuous Phase Modulation (CPM) possesses characteristics that make it very attractive for many applications. Efficient non-linear power amplifiers can be used in the transmitters of constant-envelope CPM schemes. CPM also allows the use of simple limiters in the demodulator rather than linear receivers with gain control. These characteristics not only increase the life of the power source, but also improve circuit reliability, since less heat is generated. In some applications, such as satellite transmitters, where power and circuit failure are very expensive, CPM is the most attractive choice. Bandwidth efficiency, too, is very attractive, and improves as the order of the scheme increases (together with a reduction in modulation index). Still further improvement is obtained through pulse shaping, which normally results in partial-response schemes as opposed to full-response (CPFSK) schemes. The inherent memory or coding gain of CPM increases the minimum distance, which is a figure of merit for a scheme's error performance. The length of the inherent memory is the constraint length of the scheme. Successful extraction of this inherent memory results in improved power efficiency. By periodic variation of the modulation index, as in multi-h CPFSK, a subclass of CPM, the coding gain or inherent memory can be significantly improved. CPM demodulation is also less sensitive to fading channels than some other comparable systems. Well-known systems such as GSM digital mobile systems, DECT and Iridium all use some form of CPM to transport their information. These implementations are normally pulse-shaped FSK or MSK and are used for the reasons above, except that their receivers do not always exploit the inherent memory.
Unfortunately, though, when one wants to exploit the inherent memory of higher-level CPM schemes, all these attractive characteristics are offset by the complexity of the receiver structures, which increases exponentially as the order or constraint length is increased. Optimum receivers for binary CPFSK were first described by Osborne and Luntz [19] in 1974, and their research was later extended by Schonhoff [26] to include M-ary CPFSK. These receivers evaluate likelihood functions after observing the received signal for a certain number of symbol intervals, say N, then calculate a set of likelihood parameters on which a likelihood ratio test regarding the first symbol is based. These receivers are complex and impractical, but do provide valuable insight. This is called maximum likelihood sequence estimation (MLSE). Another way to do MLSE is to correlate all possible transmitted sequences (reference signals at the demodulator) over a period of N symbol intervals with the received sequence; the first symbol of the reference sequence with which the received sequence has the largest correlation is decoded as the most likely symbol. The number of reference sequences required at the receiver grows very fast as the observation period increases. Up to now, only the lowest-order CPM schemes have feasible optimal receiver structures. The only practical solution thus far for the MLSE of higher-order schemes is the use of software implementations, of which the Viterbi algorithm is the most popular. Through recursive or sequential processing of data per interval, the number of matched filters required can be reduced. However, for schemes beyond a certain order and constraint length, the Viterbi algorithm's consumption of computational resources reduces its feasibility. Research into CPM is focused mainly on the quest for simpler demodulators and decoders, or lower-order schemes with better coding gain.
In order to gain further insight into CPM, research is approached from different angles.
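The Viterbi algorithm mentioned above performs MLSE by keeping, per trellis state, only the best-metric survivor path. A generic sketch on an invented memory-1 modulation (the expected signal levels and data are illustrative, not a real CPM scheme):

```python
def viterbi(received, states, expected, trans):
    # states: trellis states; expected[(s, a)]: noiseless output when
    # symbol a is sent from state s; trans[(s, a)]: next state.
    # Returns the minimum-metric (maximum likelihood) symbol sequence.
    paths = {s: (0.0, []) for s in states}          # state -> (metric, survivor)
    for r in received:
        nxt = {}
        for s, (m, seq) in paths.items():
            for a in (0, 1):
                metric = m + (r - expected[(s, a)]) ** 2   # branch metric
                s2 = trans[(s, a)]
                if s2 not in nxt or metric < nxt[s2][0]:
                    nxt[s2] = (metric, seq + [a])          # keep best survivor
        paths = nxt
    return min(paths.values())[1]

# Toy memory-1 modulation: the output level depends on the current bit a
# and the previous bit s; the next state is simply the current bit.
states = (0, 1)
expected = {(s, a): (2 * a - 1) + 0.5 * (2 * s - 1) for s in states for a in (0, 1)}
trans = {(s, a): a for s in states for a in (0, 1)}
rx = [-1.4, 0.6, 1.4, -0.6, -1.5]            # noisy observations of 0,1,1,0,0
print(viterbi(rx, states, expected, trans))  # → [0, 1, 1, 0, 0]
```

For a real CPM scheme the state would encode the accumulated phase and the last L-1 symbols, and the branch metric would be a correlation against matched-filter outputs; the survivor-pruning structure is identical.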
Marsh, David Moyle. "Phased Array Digital Beamforming Algorithms and Applications". BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7499.
Orús, Lacort Román. "Entanglement, quantum phase transitions and quantum algorithms". Doctoral thesis, Universitat de Barcelona, 2006. http://hdl.handle.net/10803/482202.
From Feynman's pioneering ideas to the present day, quantum information and computation have evolved rapidly. While quantum mechanics was originally regarded essentially as a theoretical framework for explaining certain fundamental processes occurring in Nature, it was during the 1980s and 1990s that the intrinsically quantum behavior of the world we live in began to be seen as a tool for developing more powerful information technologies, based on the very principles of quantum physics. As Landauer said, information is physical, so it should come as no surprise that attempts were made to bring quantum mechanics and information theory together. Indeed, it soon became clear that the laws of quantum physics could be used to perform tasks inconceivable from a classical point of view. For example, the discovery of teleportation, superdense coding, quantum cryptography, Shor's factoring algorithm and Grover's search algorithm are some of the remarkable achievements that have attracted the attention of many people, inside and outside science. Quantum information is thus established as a genuinely multidisciplinary field, bringing together researchers from different branches of physics, mathematics and engineering. While in its origins quantum information benefited from the knowledge of other fields, today the tools developed within quantum information theory can in turn be used to study problems in other areas, such as many-body physics or quantum field theory. This is due to the detailed study that quantum information makes of quantum correlations, or quantum entanglement.
Any physical system described by the laws of quantum mechanics can therefore be considered from the perspective of quantum information theory through the theory of entanglement.
Ahmeda, Shubat Senoussi. "Adaptive target tracking algorithms for phased array radar". Thesis, University of Nottingham, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336953.
Varner, Christopher Champion. "DGPS carrier phase networks and partial derivative algorithms". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0027/NQ49546.pdf.
Farrell, C. T. "New algorithms for high accuracy phase shifting interferometry". Thesis, University of Aberdeen, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.592420.
Pełny tekst źródłaGreen, Roger James. "The use of Fourier transform methods in automatic fringe pattern analysis". Thesis, King's College London (University of London), 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307203.
Calmeyn, Timothy Joseph. "A design algorithm for continuous melt-phase polyester manufacturing processes optimal design, product sensitivity, and process flexibility". Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1175097000.
Bartee, Jon A. "Genetic algorithms as a tool for phased array radar design". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FBartee.pdf.
Oukssisse, Lahcen. "Etude comparative des algorithmes pour l'interferometrie holographique a decalage de phase". Université Louis Pasteur (Strasbourg) (1971-2008), 2000. http://www.theses.fr/2000STR13145.
Hautphenne, Sophie. "An algorithmic look at phase-controlled branching processes". Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210255.
Markovian binary trees are a particular class of branching processes in which the lifetime and reproduction epochs of an individual are controlled by a Markovian arrival process (MAP). Our objective is to develop numerical methods to answer several questions about Markovian binary trees. The issue of the extinction probability is the main question addressed in the thesis. We first assume independence between individuals. In this case, the extinction probability is the minimal nonnegative solution of a matrix fixed point equation which can generally not be solved analytically. In order to solve this equation, we develop a linear algorithm based on functional iterations, and a quadratic algorithm based on Newton's method, and we give their probabilistic interpretation in terms of the tree.
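For a scalar branching process, the fixed-point structure and the two algorithms (linearly convergent functional iteration vs. quadratically convergent Newton) can be sketched directly on the offspring generating function; the matrix equation in the thesis has the same shape (illustrative scalar example, not the thesis's matrix computation):

```python
def extinction_probability(pgf, dpgf, tol=1e-12):
    # Minimal nonnegative fixed point q = pgf(q) of the offspring generating
    # function, via functional iteration from 0 (linear convergence) and
    # Newton's method on pgf(s) - s = 0 (quadratic convergence).
    q = 0.0
    while abs(pgf(q) - q) > tol:
        q = pgf(q)                                  # functional iteration
    qn = 0.0
    for _ in range(50):
        qn -= (pgf(qn) - qn) / (dpgf(qn) - 1)       # Newton step
    return q, qn

# Binary-splitting example: an individual dies with prob. 1/4, survives
# alone with prob. 1/4, or splits in two with prob. 1/2:
# f(s) = 1/4 + s/4 + s^2/2, whose minimal fixed point is q = 1/2.
f = lambda s: 0.25 + 0.25 * s + 0.5 * s * s
df = lambda s: 0.25 + s
q, qn = extinction_probability(f, df)
print(round(q, 6), round(qn, 6))  # both converge to 1/2
```

Starting both iterations from 0 is what guarantees convergence to the *minimal* nonnegative solution rather than the trivial fixed point at 1.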
Next, we look at some transient features for a Markovian binary tree: the distribution of the population size at any given time, of the time until extinction and of the total progeny. These distributions are obtained using the Kolmogorov and the renewal approaches.
We illustrate the results mentioned above through an example where the Markovian binary tree serves as a model for female families in different countries, for which we use real data provided by the World Health Organization website.
Finally, we analyze the case where Markovian binary trees evolve under the external influence of a random environment or a catastrophe process. In this case, individuals do not behave independently of each other anymore, and the extinction probability may no longer be expressed as the solution of a fixed point equation, which makes the analysis more complicated. We approach the extinction probability, through the study of the population size distribution, by purely numerical methods of resolution of partial differential equations, and also by probabilistic methods imposing constraints on the external process or on the maximal population size.
Munu, Mbalu. "Tracking algorithms with variable update time for phased array radar". Thesis, University of Nottingham, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239456.
Faure, Cynthia. "Détection de ruptures et identification des causes ou des symptômes dans le fonctionnement des turboréacteurs durant les vols et les essais". Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01E059/document.
Analysing the multivariate time series created by sensors during a flight or a bench test represents a new challenge for aircraft engineers. Each time series can be decomposed univariately into a series of stabilised phases, well known to the expert, and transient phases that are rarely explored yet very informative when the engine is running. Our project aims at converting these time series into a succession of labels designating transient and stabilised phases in a bivariate context. This transformation of the data opens several perspectives: tracking similar behaviours or bivariate patterns seen during a flight, finding curves similar to a given curve, identifying atypical curves, detecting frequent or rare sequences of labels during a flight, discovering hidden multivariate structures, modelling a representative flight, and spotting unusual flights. This manuscript proposes a methodology to automatically identify transient and stabilised phases, cluster all engine transient phases, label multivariate time series, and analyse them. All algorithms are applied to real flight measurements, with validation of the results by expert knowledge.
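A crude version of the stabilised/transient labelling can be sketched with a sliding-window slope test on a univariate series (thresholds, labels and the signal are invented for illustration; the thesis's methodology is far more elaborate):

```python
def label_phases(series, window=5, slope_tol=0.05):
    # Label each sample "S" (stabilised) or "T" (transient) from the
    # mean absolute first difference over a centred sliding window.
    labels = []
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        diffs = [abs(series[j + 1] - series[j]) for j in range(lo, hi - 1)]
        mean_diff = sum(diffs) / len(diffs)
        labels.append("S" if mean_diff <= slope_tol else "T")
    return "".join(labels)

# Idle plateau, throttle ramp-up, then a higher stabilised regime.
signal = [0.0] * 8 + [0.5 * k for k in range(1, 9)] + [4.0] * 8
print(label_phases(signal))
```

The windowing smears the phase boundaries by a few samples, which is exactly why segmentation (change-point detection) rather than pointwise labelling is used in practice.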
Chen, Li. "Design of linear phase paraunitary filter banks and finite length signal processing /". Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18678233.
Uervirojnangkoorn, Monarin. "Genetic algorithms for phase determination in macromolecular crystallography". Lübeck : Zentrale Hochschulbibliothek Lübeck, 2013. http://d-nb.info/1036153274/34.
Pełny tekst źródłaAvallone, Niccolo. "Hydrogen dynamics in solids : quantum diffusion and plastic phase transition in hydrates under pressure". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS622.
Atomic-scale simulations of ammonia hydrates pose major theoretical and numerical challenges for several reasons. The description of disordered and/or frustrated systems requires large-scale simulations (several thousand atoms on nanosecond time scales). This makes it impossible to use ab initio methods to describe interatomic interactions. Moreover, the presence of hydrogen leads to a highly complex phase diagram. The specific properties of hydrogen bonds between water and ammonia molecules explain the plasticity, proton jumps produce ionic phases, and at high pressures the quantum behavior of protons is not negligible: the usual molecular dynamics approximation, which treats atomic nuclei as classical objects, is no longer valid. After a theoretical chapter on the simulation techniques used, the second chapter of this work deals with the problem of proton diffusion in a solid, taking nuclear quantum effects into account. Two main classes of molecular dynamics methods are compared: quantum bath methods (QTB/adQTB), based on the generalized Langevin equation, and methods derived from the quantum-mechanical path integral formalism ((T)RPMD). The aim is to determine which method would be the most accurate and numerically the least expensive for studying proton hopping and diffusion in ammonia hydrates. The (T)RPMD method appears to approximately meet this objective, while the QTB/adQTB methods considerably overestimate diffusion. However, their low computational cost does not completely exclude them from the study of the quantum properties of these systems. The third chapter presents a theoretical study of the crystal-plastic phase transition in ammonia hemihydrate, between 2 GPa and 10 GPa and between 300 K and 600 K. The experimental results show the appearance of plastic and disordered phases, although they do not provide a complete explanation of the mechanisms behind the phase transitions.
We mainly use classical molecular dynamics, coupled with force fields, to simulate 100,000 atoms on time scales of tens of nanoseconds. Our results correctly locate the phase transition and detect the change from a monoclinic crystal to a disordered molecular alloy with a bcc cell, which melts at very high temperatures. Furthermore, we can explain how the hydrogen bonding network evolves with temperature, and characterize the plastic phase in terms of the orientational disorder of the molecular dipoles. Finally, we have determined the molecular diffusion that occurs at and above the transition, enabling the formation of the water-ammonia alloy predicted by the experiments. Nuclear quantum effects have been tested with the adQTB and (T)RPMD methods, assessing which properties are most affected by the quantum nature of hydrogen atoms.
Andreu, Altava Ramon. "Calcul du profil optimal d'un aéronef dans les phases de descente et d'approche". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30026.
Pełny tekst źródła
The continued increase of air traffic, which doubles every 15 years, produces large economic benefits but poses environmental issues that put at risk the sustainable development of air transport. Other factors, such as jet fuel price volatility, the introduction of new environmental regulations and intense competition in the airline industry, have stimulated research on trajectory optimization and flight efficiency in recent years. The Flight Management System (FMS) is an onboard avionic system, standard in all transport aircraft, which is used by flight crews to manage the lateral and vertical flight plan. Since current avionic systems are limited in terms of computational capacity, the computations performed by their algorithms are usually based on conservative hypotheses. Thus, significant deviations may occur between FMS computations and the actual flight profile flown by the aircraft. The goal of this thesis is to develop an onboard function, which could be integrated in future Airbus cockpits, that computes optimal trajectories, readjusts the flight strategy according to the dynamic aircraft condition and minimizes operating costs. Flight energy management principles have been used to optimize aircraft trajectories in the descent and approach phases with respect to fuel consumption, greenhouse gas and noise emissions. The proposed function has been developed on the basis of dynamic programming techniques, in particular the A* algorithm. The algorithm minimizes a given objective function by generating the search space incrementally. The exploration of the search space yields the optimal profile linking the aircraft's current position to the runway threshold, independently of the current flight mode and aircraft energy condition. Results show 13% fuel savings and a 12% decrease in gas emissions compared with a best-in-class FMS.
Furthermore, the algorithm proposes a flight strategy to dissipate the excess of energy in situations where aircraft fly too high and/or too fast close to the destination runway. A preliminary operational evaluation of the computed trajectories has been conducted in flight simulators. These tests demonstrate that the computed trajectories can be tracked with current guidance modes, although new modes would be required to decrease the workload of flight crews. In conclusion, this work constitutes a solid foundation for the generation of real-time optimal trajectories in light of the automation of the descent and approach flight phases.
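The A*-based profile search described in this abstract can be illustrated with a minimal sketch. Everything below is invented for illustration: nodes are toy (distance-to-go, altitude) pairs, the cost values and heuristic are not the thesis's actual energy or fuel model.

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A*: expands the search space incrementally, guided by a heuristic."""
    open_heap = [(heuristic(start), 0.0, start)]
    parent, g_best, closed = {start: None}, {start: 0.0}, set()
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                     # reconstruct the optimal profile
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return list(reversed(path)), g
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                parent[nxt] = node
                heapq.heappush(open_heap, (ng + heuristic(nxt), ng, nxt))
    return None, float("inf")

# Toy "descent profile": each step moves 10 NM closer to the runway and may
# descend 0, 1000 or 2000 ft (clipped at ground level).
def neighbors(node):
    d, h = node
    return [] if d == 0 else [(d - 10, max(0, h - dh)) for dh in (0, 1000, 2000)]

def cost(a, b):
    # Invented fuel-like cost: level segments cost more than idle descents.
    return 2.0 if a[1] == b[1] else 1.0

def heuristic(node):
    return node[0] / 10  # admissible: every remaining step costs at least 1

path, total = a_star((40, 4000), (0, 0), neighbors, cost, heuristic)
```

With these toy costs the cheapest profile is a continuous 1000 ft/step descent, total cost 4.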
Topiwala, Diven. "The phase retrieval algorithm as a dynamical system". Thesis, De Montfort University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.400681.
Pełny tekst źródłaMaciel, Lucas da Silva. "A novel swarm-based algorithm for phase unwrapping". reponame:Repositório Institucional da UFSC, 2014. https://repositorio.ufsc.br/xmlui/handle/123456789/129595.
Pełny tekst źródła
The correct operation of underground pipelines transporting gas and oil depends on frequent and accurate monitoring of their stress states. Recent advances in residual stress measurement have applied optical methods in combination with stress relief to evaluate the stress field in the component. These methods require a phase unwrapping step to interpret the acquired data correctly, and phase unwrapping has been a challenge for many metrology applications. This work proposes an original approach to this problem, presenting the proposed algorithm along with results for several different images, compared against established methods. Light, behaving as a wave, obeys the superposition principle, which gives rise to the phenomenon of interference. This phenomenon can be exploited in various ways for the measurement of surfaces and geometric shapes. However, many of these applications, such as speckle interferometry and shearography, deliver the values of interest restricted to an interval from −π to π. An operation is therefore needed to recover the true values that produced the observed result; this operation is called phase unwrapping. For decades, many phase unwrapping techniques have been studied. They can be divided into two main categories: path-following methods and path-independent methods. Path-following methods apply a simple comparison equation, adding multiples of 2π across the whole image; they differ in the pixel paths they choose. For the result to be reliable, the path must avoid low-quality or corrupted pixels. Branch-cut techniques identify such pixels through residue theory and, by connecting residues of opposite signs, are able to trace reliable paths for phase unwrapping.
Quality-guided techniques assign scores to each pixel according to different quality criteria, excluding from the analysis those that fall below an arbitrary threshold. Path-independent techniques, such as minimum-norm methods, resemble optimization methods: they are iterative and search for a minimum of the difference between the derivatives of the proposed solution and the derivatives of the original image. These methods are considered very robust and reliable, but they also demand more processing time to reach the correct answer. In parallel with developments in phase unwrapping, scientists have developed computational techniques based on the behavior of social animals. The field of Swarm Intelligence is inspired by insects such as ants, bees and termites, and by other animals such as fish and birds. These animals have in common that they build organized systems out of simple elements, with no clear leadership. The foraging behavior of ants and bees and the collective movements of fish schools and bird flocks are the clearest examples of emergent behavior: behavior that, although not explicit in the description of the individual elements, arises from the interaction of many of them. Emergent behavior can be explained in terms of simple, independent agents, simple rules and decentralized operation. This phenomenon has inspired computer science for decades, and many computational solutions to mathematical or operational problems have been proposed based on the elegant solutions found in nature; optimization algorithms based on the behavior of ants and bees are examples. Little of this concept, however, has been applied to image processing.
Regarding the phase unwrapping problem specifically, no prior work proposing a Swarm Intelligence solution was found. The present work therefore proposes a solution based on these concepts. Because of the unpredictable nature of emergent behavior, the development of the proposed algorithm was unconventional. First, a test environment had to be developed in which the swarm could be observed in real time during its operation. Second, the algorithm was created iteratively until a satisfactory set of rules was found. A first solution was obtained by modeling the agents as finite state machines; this agent model was implemented with indirect communication through stigmergy and direct communication when needed. Although this method produced good results in terms of unwrapping quality, it still required a stopping criterion independent of the user. In creating this stopping criterion, new rules gave way to a completely different algorithm. This second solution models the agent through five simple rules that allow, among other things, the creation and deactivation of agents. Once all agents have been deactivated, the program ends and returns the unwrapped image. The first rule states that if one or more pixels in the agent's neighborhood can be unwrapped, one of them is chosen at random; the agent moves to the chosen pixel and gains one point of energy. If no workable pixels are available, an already-worked neighboring pixel is chosen at random, according to the second rule; the agent moves to it and loses one point of energy.
The third rule makes agents that find two neighboring pixels that have been worked but are mutually inconsistent mark those pixels as defective and deactivate themselves. The last two rules make agents with excess energy replicate and agents without energy deactivate. The expected behavior is that the agents distribute themselves efficiently over the image, making the most of the processing cycles. In addition, the rule that marks doubtful unwrappings prevents ambiguities from propagating across large regions of the image. The algorithm was tested under several conditions and compared with other established methods. The first results were generated by applying the swarm to synthetic, error-free images, which made it possible to evaluate how user-chosen parameters influence the swarm's behavior and the quality of the results; in particular, the impact of the energy parameters on swarm density was observed, and density in turn is important for correcting propagated ambiguities. Next, synthetic images with artificial errors were tested, and the results were compared with a quality-guided algorithm and a minimum-norm algorithm. The proposed algorithm proved highly capable of negotiating the difficulties in the images and produced reliable results; under certain conditions the results were even better than those of the other quality-guided algorithm. Finally, images from real metrology applications were tested: fringe projection, speckle interferometry and shearography. The results obtained by the Swarm Intelligence algorithm were very satisfactory, comparable to the most robust methods; moreover, for very noisy images the proposed algorithm outperformed the other quality-guided algorithm tested.
These results attest to the potential of the proposed method to deliver fast and reliable results. The work concludes with a brief summary of these results and a validation of the original objectives, confirming the success of the proposed method. Suggestions for future work are also listed, such as tests with new images and quality parameters, a parallel-processing implementation, and new Swarm-Intelligence-based approaches to this and similar problems.
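The energy rules summarized in the abstract above (unwrap a fresh neighbour and gain energy, wander over worked pixels and lose energy, replicate when energetic, deactivate when exhausted) can be sketched in one dimension. This toy omits rule 3 (marking inconsistent pixels), and every parameter below is invented for illustration, not taken from the thesis:

```python
import math, random

def wrap(x):
    """Wrap a phase value into (-pi, pi]."""
    return (x + math.pi) % (2 * math.pi) - math.pi

def swarm_unwrap_1d(wrapped, seed=0, spawn_energy=4):
    rng = random.Random(seed)
    n = len(wrapped)
    out = [None] * n
    out[0] = wrapped[0]                       # seed sample taken as reference
    agents = [{"pos": 0, "energy": 1}]
    while agents:                             # program ends when all agents are gone
        survivors = []
        for agent in agents:
            i = agent["pos"]
            fresh = [j for j in (i - 1, i + 1) if 0 <= j < n and out[j] is None]
            if fresh:                         # rule 1: unwrap a fresh neighbour, gain energy
                j = rng.choice(fresh)
                out[j] = out[i] + wrap(wrapped[j] - wrapped[i])
                agent["pos"], agent["energy"] = j, agent["energy"] + 1
            else:                             # rule 2: wander over worked pixels, lose energy
                agent["pos"] = rng.choice([j for j in (i - 1, i + 1) if 0 <= j < n])
                agent["energy"] -= 1
            if agent["energy"] >= spawn_energy:   # rule 4: replicate when energetic
                agent["energy"] -= 2
                survivors.append({"pos": agent["pos"], "energy": 2})
            if agent["energy"] > 0:               # rule 5: exhausted agents disappear
                survivors.append(agent)
        agents = survivors
    return out

true_phase = [0.35 * k for k in range(30)]    # smooth ramp spanning several cycles
unwrapped = swarm_unwrap_1d([wrap(p) for p in true_phase])
```

Because adjacent samples differ by less than π, the frontier agent recovers the ramp exactly while surplus agents decay and vanish, giving the self-terminating behaviour the rules aim for.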
Abstract : The proper functioning of underground oil and gas pipelines depends on the frequent and correct monitoring of stress states. Recent developments in residual stress measurement techniques have employed optical methods allied with stress relief in order to assess the underlying stress field. These optical methods require a phase unwrapping step to interpret the acquired data correctly. Phase unwrapping has posed a challenge for many optical metrology applications for decades and has seen the development of many different solutions. Over the past decades, the field of Swarm Intelligence, based on the behavior observed among ants, bees and other social insects, has been studied and many algorithms have been designed to perform a variety of computational tasks. Swarm Intelligence is commonly regarded as robust and fast, which are desirable features in a phase unwrapping algorithm. This work proposes a novel approach to phase unwrapping based on Swarm Intelligence, assessing its applicability, comparing it to existing methods and evaluating its potential for future developments. The proposed algorithm is thoroughly explained and the results for several different images are presented. These results show the great potential of the proposed method, which performs better than some established techniques in specific situations. This potential is assessed and suggestions for future advancements are given.
Bates, James S. "The Phase Gradient Autofocus Algorithm with Range Dependent Stripmap SAR". BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/68.
Pełny tekst źródłaCarlstedt, Tobias. "Algorithms for analysis of GSM phones’ modulation quality". Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17248.
Pełny tekst źródłaXie, Xinjun. "Absolute distance contouring and a phase unwrapping algorithm for phase maps with discontinuities". Thesis, Liverpool John Moores University, 1997. http://researchonline.ljmu.ac.uk/5572/.
Pełny tekst źródłaGalanis, Andreas. "Phase transitions in the complexity of counting". Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52211.
Pełny tekst źródłaDeng, Zhi-De. "Stochastic chaos and thermodynamic phase transitions : theory and Bayesian estimation algorithms". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41649.
Pełny tekst źródłaIncludes bibliographical references (p. 177-200).
The chaotic behavior of dynamical systems underlies the foundations of statistical mechanics through ergodic theory. This putative connection is made more concrete in Part I of this thesis, where we show how to quantify certain chaotic properties of a system that are of relevance to statistical mechanics and kinetic theory. We consider the motion of a particle trapped in a double-well potential coupled to a noisy environment. By use of the classic Langevin and Fokker-Planck equations, we investigate Kramers' escape rate problem. We show that there is a deep analogy between kinetic rate theory and stochastic chaos, for which we propose a novel definition. In Part II, we develop techniques based on Volterra series modeling and Bayesian non-linear filtering to distinguish between dynamic noise and measurement noise. We quantify how much of the system's ergodic behavior can be attributed to intrinsic deterministic dynamical properties vis-a-vis inevitable extrinsic noise perturbations.
by Zhi-De Deng.
M.Eng. and S.B.
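The Kramers escape-rate setting described in this abstract, a particle trapped in a double-well potential coupled to a noisy environment, can be sketched with a plain Euler-Maruyama integration of the overdamped Langevin equation. This is a toy with invented parameters, not the thesis's Volterra/Bayesian estimation machinery:

```python
import math, random

def simulate_overdamped_langevin(steps=50000, dt=1e-3, noise=1.2, seed=1):
    """Euler-Maruyama integration of dx = -V'(x) dt + sqrt(2*D) dW for the
    double-well V(x) = (x^2 - 1)^2; counts well-to-well escape events."""
    rng = random.Random(seed)
    x, side, crossings = -1.0, -1, 0          # start in the left well
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)      # -dV/dx
        x += force * dt + math.sqrt(2.0 * noise * dt) * rng.gauss(0.0, 1.0)
        if side < 0 and x > 1.0:              # reached the right minimum
            side, crossings = 1, crossings + 1
        elif side > 0 and x < -1.0:           # back to the left minimum
            side, crossings = -1, crossings + 1
    return crossings

escapes = simulate_overdamped_langevin()
```

With the noise strength comparable to the barrier height, escapes occur frequently; lowering `noise` makes the count fall off roughly as exp(-ΔV/D), which is the Kramers-rate behaviour the abstract refers to.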
Ito, Kei. "Study on high-precision numerical algorithms for multi-phase flow analyses". 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126506.
Pełny tekst źródłaBates, James S. "The phase gradient autofocus algorithm with range dependent stripmap SAR /". Diss., CLICK HERE for online access, 1998. http://contentdm.lib.byu.edu/ETD/image/etd3.pdf.
Pełny tekst źródła陳力 i Li Chen. "Design of linear phase paraunitary filter banks and finite length signal processing". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31235608.
Pełny tekst źródłaSadek, Ahmad, i Ruben Pozzi. "Iterative Reconstruction Algorithm for Phase-Contrast X-Ray Imaging". Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277802.
Pełny tekst źródła
Phase-contrast imaging is a new medical X-ray imaging technique developed to give better contrast than conventional radiography, particularly for objects with low attenuation coefficients such as soft tissue. This project used so-called propagation-based phase-contrast imaging, one of the simplest methods for achieving phase contrast, requiring no optical elements beyond those of a conventional setup; the method does, however, require more advanced image processing. Two of the main problems that commonly arise in phase-contrast imaging are reduced image quality after the essential reconstruction step, and the time consumed by the manual adjustments that must be made. In this project a simple method was implemented based on combining the iterative image reconstruction algorithm Simultaneous Iterative Reconstruction Technique (SIRT) with propagation-based phase-contrast imaging. The results were compared with another phase-retrieval method that is well known and widely used in this field, Paganin's method. The comparison showed higher resolution and reduced artifacts such as blurring. It was also noted that the developed method was less sensitive to the manual input of the attenuation-coefficient parameter; it did, however, prove more time-consuming than Paganin's method.
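As a rough illustration of the SIRT building block mentioned in this abstract, a plain dense-matrix version of the simultaneous update x ← x + λ C⁻¹ Aᵀ R⁻¹ (b − A x), where R and C hold the row and column sums of the system matrix, might look as follows. This is a toy sketch only; the thesis combines such an iteration with propagation-based phase retrieval, which is not reproduced here:

```python
def sirt(A, b, iterations=200, relax=1.0):
    """Plain SIRT for a small dense system (real CT uses sparse projectors)."""
    m, n = len(A), len(A[0])
    row_sum = [sum(A[i]) for i in range(m)]
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [0.0] * n
    for _ in range(iterations):
        # Row-normalized residual R^-1 (b - A x)
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(n))) / row_sum[i]
                 for i in range(m)]
        # Column-normalized backprojection C^-1 A^T resid
        for j in range(n):
            x[j] += relax * sum(A[i][j] * resid[i] for i in range(m)) / col_sum[j]
    return x

# Tiny 2-pixel "scan": three projections measuring sums of the pixels.
A = [[1.0, 1.0],
     [1.0, 0.0],
     [0.0, 1.0]]
x_true = [2.0, 3.0]
b = [sum(Ai[j] * x_true[j] for j in range(2)) for Ai in A]
x_rec = sirt(A, b)
```

For this consistent toy system the iteration contracts the error geometrically and recovers the two pixel values.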
Jattiem, Mogamad Shaheed. "An improved algorithm for phase-based voltage dip classification". Master's thesis, University of Cape Town, 2007. http://hdl.handle.net/11427/5201.
Pełny tekst źródłaIncludes bibliographical references (leaves 71-72)
In this thesis, a new phase-based algorithm is developed, which overcomes the shortcomings of the Bollen algorithms. The new algorithm computes the dip type based on the difference in phase angle between the measured voltages.
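As a simplified illustration of the raw quantities such a phase-based classifier works from, per-phase magnitudes and the phase-angle differences between the measured voltage phasors, one might write the following. The 0.9 pu dip threshold and all names are illustrative assumptions, not the thesis's actual decision logic:

```python
import cmath, math

def dip_signature(va, vb, vc, nominal=1.0):
    """Per-phase magnitudes (pu), pairwise phase-angle differences (degrees)
    and a simple per-phase dip flag for three measured voltage phasors."""
    phasors = [va, vb, vc]
    mags = [abs(v) / nominal for v in phasors]
    angles = [math.degrees(cmath.phase(v)) for v in phasors]
    diffs = [(angles[i] - angles[(i + 1) % 3]) % 360.0 for i in range(3)]
    dipped = [m < 0.9 for m in mags]          # illustrative 0.9 pu threshold
    return mags, diffs, dipped

# Balanced unit phasors 120 degrees apart, with a 40% dip on phase b.
va = cmath.rect(1.0, 0.0)
vb = cmath.rect(1.0, -2 * math.pi / 3)
vc = cmath.rect(1.0, 2 * math.pi / 3)
mags, diffs, dipped = dip_signature(va, 0.6 * vb, vc)
```

A magnitude-only dip without angle shift leaves all three phase differences at 120 degrees; a classifier of the kind described above distinguishes dip types by how those differences deviate from 120 degrees.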
Frazao, Rodrigo José Albuquerque. "PMU based situation awareness for smart distribution grids". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT061/document.
Pełny tekst źródła
Robust metering infrastructure based on classical digital measurements has been used to enable comprehensive power distribution network management; however, synchronized phasor measurements, also known as synchrophasors, are especially welcome to improve the overall framework capabilities. A synchrophasor is a phasor digitally computed from data samples using an absolute and accurate time source as reference. Since the absolute time source has sufficient accuracy to synchronize voltage and current measurements at geographically distant locations, it is possible to extract valuable information about the real operating status of the grid without full knowledge of its characteristics. Owing to this, applications of synchronized phasor measurements in wide-area management systems (WAMSs) have been achieved: angular separation, linear state estimation, islanding detection, oscillatory stability and disturbance location identification are some of the many applications that have been proposed. We could thus be led to believe that bringing the well-known benefits of synchronized measurements to electric distribution grids only requires placing conventional Phasor Measurement Units (PMUs) directly into the electric distribution environment. Unfortunately, this is not as simple as it seems. Electric power distribution systems and high-voltage power systems have different operational characteristics, hence PMUs or PMU-enabled IEDs dedicated to distribution systems should have different features from those devoted to high-voltage systems. Active distribution grids with shorter line lengths produce smaller angular apertures between adjacent busbars. In addition, high harmonic content and frequency deviation impose further challenges for estimating phasors.
Generally, frequency deviation is associated with high-voltage power systems; however, due to the interconnected nature of the overall power system, frequency deviation can propagate toward the distribution grid. The integration of multiple high-rate DERs with poor control capabilities can also impose local frequency drift. Advanced synchronized devices dedicated to a smart monitoring framework must overcome these challenges in order to push measurement accuracy beyond the levels stipulated by current standard requirements. This overall problem is treated and evaluated in the present thesis. Phasor estimation accuracy is directly related to the performance of the algorithm used for processing the incoming data, and robustness against pernicious effects that can degrade the quality of the estimates is highly desired. For this reason, three frequency-adaptive algorithms are presented, aiming to improve the phasor estimation process in active distribution grids. Several simulations using spurious and distorted signals are performed to evaluate their performance under static and/or dynamic conditions. Building on these accurate phasor estimates, four potential applications are presented that seek to increase situational awareness in the distribution environment.
Contributions are presented concerning the online Thévenin's equivalent (TE) circuit seen by the Point of Common Coupling (PCC) between DERs and the grid side, dynamic external equivalents and online three-phase voltage drop assessment in primary radial distribution grids, as well as the assessment of harmonic issues for improving the classical PH method (harmonic active power) to detect both the main source of harmonic pollution and the true power flow direction under frequency deviation. The issue of synchronized phasor measurements in electric power distribution systems is still underexplored and suspicions about its applicability are common; this thesis aims to provide propositions that contribute to the advent of phasor measurements in the electric distribution environment.
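For context, the textbook one-cycle DFT phasor estimator that frequency-adaptive algorithms improve upon can be sketched as follows. This is an illustrative baseline only, not one of the three algorithms proposed in the thesis; it is exact at nominal frequency and develops the errors under frequency deviation that the thesis addresses:

```python
import cmath, math

def dft_phasor(samples, samples_per_cycle):
    """One-cycle DFT estimate of the fundamental phasor from waveform samples,
    scaled so the magnitude is the RMS value and the angle is the phase."""
    N = samples_per_cycle
    acc = sum(samples[n] * cmath.exp(-2j * math.pi * n / N) for n in range(N))
    return (math.sqrt(2.0) / N) * acc

# Synthetic 1 pu waveform at exactly nominal frequency, 64 samples/cycle,
# phase 30 degrees: x[n] = sqrt(2) * cos(2*pi*n/N + theta).
N = 64
theta = math.radians(30.0)
wave = [math.sqrt(2.0) * math.cos(2 * math.pi * n / N + theta) for n in range(N)]
ph = dft_phasor(wave, N)
```

At nominal frequency the estimator returns exactly 1 pu at 30 degrees; when the grid frequency drifts, the observation window no longer spans an integer number of cycles and the estimate acquires magnitude and phase errors, which is what the frequency-adaptive variants correct.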
Kalvelage, Frank. "An algorithmic approach to erosion control in three phase contactors". Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409448.
Pełny tekst źródłaMonceau, Pascal. "TRANSITIONS DE PHASE EN DIMENSIONS FRACTALES". Habilitation à diriger des recherches, Université Paris-Diderot - Paris VII, 2004. http://tel.archives-ouvertes.fr/tel-00521313.
Pełny tekst źródłaPalani, Ananta. "Development of an optical system for dynamic evaluation of phase recovery algorithms". Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.708850.
Pełny tekst źródłaLo, Victor Lai-Xin. "Iterative projection algorithms and applications in x-ray crystallography". Thesis, University of Canterbury. Electrical and Computer Engineering, 2011. http://hdl.handle.net/10092/5476.
Pełny tekst źródłaLandon, Jonathan Charles. "Development of an Experimental Phased-Array Feed System and Algorithms for Radio Astronomy". BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2794.
Pełny tekst źródłaMiracle, Sarah. "The effects of bias on sampling algorithms and combinatorial objects". Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53526.
Pełny tekst źródłaMoreau, Gilles. "On the Solution Phase of Direct Methods for Sparse Linear Systems with Multiple Sparse Right-hand Sides". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN084/document.
Pełny tekst źródła
We consider direct methods to solve sparse linear systems AX = B, where A is a sparse matrix of size n x n with a symmetric structure and X and B are respectively the solution and right-hand side matrices of size n x nrhs. A is usually factorized and decomposed in the form LU, where L and U are respectively a lower and an upper triangular matrix. The solve phase is then applied through two triangular solves, known respectively as the forward and backward substitutions. For some applications, the very large number of right-hand sides (RHS) in B, nrhs >> 1, makes the solve phase the computational bottleneck. However, B is often sparse and its structure exhibits specific characteristics that may be efficiently exploited to reduce this cost. We propose in this thesis to study the impact of exploiting this structural sparsity during the solve phase, going from its theoretical aspects down to its actual implications on real-life applications. First, we investigate the asymptotic complexity, in the big-O sense, of the forward substitution when exploiting the RHS sparsity, in order to assess its efficiency when increasing the problem size. In particular, we study, on 2D and 3D regular problems, the asymptotic complexity both for traditional full-rank unstructured solvers and for the case when low-rank approximation is exploited. Next, we extend state-of-the-art algorithms on the exploitation of RHS sparsity, and also propose an original approach converging toward the optimal number of operations while preserving performance. Finally, we show the impact of the exploitation of sparsity in a real-life electromagnetism application in geophysics that requires the solution of sparse systems of linear equations with a large number of sparse right-hand sides.
We also adapt the parallel algorithms that were designed for the factorization to the solve phase. We validate and combine the previous improvements using the parallel solver MUMPS, conclude on the contributions of this thesis and give some perspectives.
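The kind of pruning studied here can be illustrated on a toy dense lower-triangular factor: for a right-hand side whose first nonzero is at row k, the forward substitution L y = b can skip all rows above k, since those components of y are necessarily zero. This is only a minimal sketch of the idea; real solvers such as MUMPS exploit the sparsity of L and of the RHS structure far more aggressively:

```python
def forward_subst_sparse_rhs(L, rhs_columns):
    """Forward substitution L y = b for many sparse right-hand sides.
    Each RHS is given as a {row: value} dict; the solve for a column starts
    at its first nonzero row, skipping the leading all-zero part."""
    n = len(L)
    results = []
    for nz in rhs_columns:
        first = min(nz)                    # rows above 'first' keep y = 0
        y = [0.0] * n
        for i in range(first, n):
            s = nz.get(i, 0.0) - sum(L[i][j] * y[j] for j in range(first, i))
            y[i] = s / L[i][i]
        results.append(y)
    return results

L = [[2.0, 0.0, 0.0, 0.0],
     [1.0, 1.0, 0.0, 0.0],
     [0.0, 3.0, 1.0, 0.0],
     [1.0, 0.0, 2.0, 2.0]]
# First RHS is dense from row 0; second starts at row 2, so half its rows are skipped.
ys = forward_subst_sparse_rhs(L, [{0: 2.0}, {2: 1.0, 3: 4.0}])
```

The second column costs only the work of rows 2 and 3; summed over many sparse columns, this skipping is precisely the source of the complexity savings the abstract discusses.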
Kantamneni, Sravya Mounika [Verfasser]. "Genetic Algorithm as a Computational Approach for Phase Improvement and Solving Protein Crystal Structures : Genetischer Algorithmus als rechnergestützter Ansatz zur Phasenverbesserung und Lösung von Proteinkristallstrukturen / Sravya Mounika Kantamneni". Hamburg : Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky, 2020. http://d-nb.info/1221135384/34.
Pełny tekst źródła