Dissertations / Theses on the topic 'Alternating direction methods of multipliers'

Consult the top 34 dissertations / theses for your research on the topic 'Alternating direction methods of multipliers.'


1

Selvatici, Elena. "Variational formulation for Granular Contact Dynamics simulation via the Alternating Direction Method of Multipliers." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Abstract:
The subject of this thesis is the development of software for the numerical analysis and dynamic simulation of granular flows. One of the major problems in the dynamic analysis of the mechanical behaviour of a granular medium is the enormous computational time required to reach a solution. Following studies that verified the effectiveness of the implicit formulation proposed by the Granular Contact Dynamics (GCD) approach, this thesis applies the Alternating Direction Method of Multipliers (ADMM), a parallelizable algorithm already validated in similar contexts, to optimize the solution process. The main part of the work consisted of implementing the program in the Python programming language. Particular attention was paid to computational efficiency, and each part of the program was designed to handle large-scale problems. To generate the starting conditions, we implemented algorithms both for importing and for generating a model, together with methods for introducing and managing static and kinematic boundary conditions. For the solution algorithm we revisited the mathematical model of GCD: solving the problem amounts to applying the principle of minimum total potential energy of the system. We then introduced the augmented Lagrangian: its minimization with respect to one primary variable at a time, holding the other unknowns fixed, constitutes the core of ADMM. The software has been integrated into mechpy, an open-source platform for the development of unconventional finite element formulations, and can handle both two-dimensional and three-dimensional models. The results are very promising: the simulation output has been compared with experimental results, and the close agreement validates the software's functionality.
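For readers unfamiliar with the method, the alternation described above (minimizing the augmented Lagrangian with respect to one block of variables while the others are held fixed) is the general ADMM template. Below is a minimal sketch on a textbook lasso problem, in Python since the thesis software is Python-based; the example is illustrative and not taken from the thesis code:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # The x-update is a ridge solve; factor the matrix once and reuse it.
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))          # minimize aug. Lagrangian in x
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
        u += x - z                             # scaled dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = admm_lasso(A, b, lam=0.1)
```

Each iteration costs one pre-factored linear solve plus an elementwise threshold, which is what makes ADMM attractive for large-scale and parallel settings.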
2

Gu, Yan. "Studies on Alternating Direction Method of Multipliers with Adaptive Proximal Terms for Convex Optimization Problems." Kyoto University, 2020. http://hdl.handle.net/2433/259758.

3

Fécamp, Vivien. "Recalage/Fusion d'images multimodales à l'aide de graphes d'ordres supérieurs." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC005/document.

Abstract:
The main objective of this thesis is the exploration of higher-order Markov Random Fields for image registration, specifically to encode knowledge of global transformations, such as rigid transformations, into the graph structure. Our main framework applies to 2D-2D or 3D-3D registration and uses a hierarchical grid-based Markov Random Field model whose hidden variables are the displacement vectors of the control points of the grid. We first present the construction of a graph that performs linear registration, meaning that affine, rigid, or similarity registration can be obtained with the same graph by changing only one potential. Our framework is thus modular with respect to both the sought transformation and the metric used. Inference is performed with Dual Decomposition, which handles the higher-order hyperedges and guarantees that the global optimum of the function is reached whenever the slaves agree. A similar structure is also used to perform 2D-3D registration. Second, we fuse this graph with another structure able to perform deformable registration. The resulting graph is more complex, and another optimization algorithm, the Alternating Direction Method of Multipliers (ADMM), is needed to obtain a better solution within reasonable time; it improves on Dual Decomposition by speeding up convergence. This framework solves linear and deformable registration simultaneously, which removes the potential bias introduced by the standard approach of consecutive registrations.
4

Tang, Shuhan. "Spectral Analysis Using Multitaper Whittle Methods with a Lasso Penalty." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586863604571678.

5

Al, Sarray Basad. "Estimation et choix de modèle pour les séries temporelles par optimisation convexe." Besançon, 2016. http://www.theses.fr/2016BESA2084.

Abstract:
Time series are defined as an ordered sequence of observations through time. The structure of a time series is represented as a sum of independent components; generally, these components are estimated independently of one another, each belonging to a particular category. Autoregressive moving-average (ARMA) models are used for time series modelling in a large number of applications such as signal processing, finance, medical imaging, radar, and communications. […] This study presents some machine learning and convex methods for ARMA model selection and estimation, based on the conversion between ARMA-AR models and ARMA state-space models. Some convex methods for time series decomposition and component analysis are also implemented and simulated. The results show the ability of convex methods to analyse and model a given series.
6

Ojha, Abhi. "Coupled Natural Gas and Electric Power Systems." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78666.

Abstract:
Decreasing gas prices and the pressing need for fast-responding electric power generators are currently transforming natural gas networks. The intermittent operation of gas-fired plants to balance wind generation introduces spatiotemporal fluctuations in gas demand. At the heart of modeling, monitoring, and control of gas networks is a set of nonlinear equations relating nodal gas injections and pressures to flows over pipelines. Given gas demands at all points of the network, the gas flow task aims at finding the rest of the physical quantities. For a tree network, the problem enjoys a closed-form solution; yet solving the equations for practical meshed networks is non-trivial. This problem is posed here as a feasibility problem involving quadratic equalities and inequalities, and is further relaxed to a convex semidefinite program (SDP) minimization. Drawing parallels to the power flow problem, the relaxation is shown to be exact if the cost function is judiciously designed using a representative set of network states. Numerical tests on a Belgian gas network corroborate the superiority of the novel method in recovering the actual gas network state over a Newton-Raphson solver. This thesis also considers the coupled infrastructures of natural gas and electric power systems. The gas and electric networks are coupled through gas-fired generators, which serve as shoulder and peaking plants for the electric power system. The optimal dispatch of coupled natural gas and electric power systems is posed as a relaxed convex minimization problem, which is solved using the feasible point pursuit (FPP) algorithm. For a decentralized solution, the alternating direction method of multipliers (ADMM) is used in collaboration with the FPP. Numerical experiments conducted on a Belgian gas network connected to the IEEE 14 bus benchmark system corroborate significant enhancements in computational efficiency compared with the centralized FPP-based approach.
Master of Science
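The nonlinear nodal equations described above can be illustrated with a damped Newton-Raphson solver on a toy three-node meshed network with Weymouth-type flow equations. This is a hypothetical sketch (unit pipe constants, a fixed slack pressure, made-up demands), not the Belgian network used in the thesis:

```python
import numpy as np

def weymouth_flow(pi_sq, pj_sq, K=1.0):
    """Signed Weymouth-type flow from node i to node j, given squared pressures."""
    d = pi_sq - pj_sq
    return np.sign(d) * np.sqrt(abs(d) / K)

def residual(y, p1_sq=100.0, d2=1.0, d3=4.0):
    """Nodal balance at the two non-slack nodes of a 3-node loop (pipes 1-2, 1-3, 2-3)."""
    p2_sq, p3_sq = y
    f12 = weymouth_flow(p1_sq, p2_sq)
    f13 = weymouth_flow(p1_sq, p3_sq)
    f23 = weymouth_flow(p2_sq, p3_sq)
    return np.array([f12 - f23 - d2,    # node 2: net inflow equals demand
                     f13 + f23 - d3])   # node 3: net inflow equals demand

def newton_solve(y0, tol=1e-9, max_iter=100, h=1e-7):
    y = np.array(y0, dtype=float)
    for _ in range(max_iter):
        r = residual(y)
        nr = np.linalg.norm(r)
        if nr < tol:
            break
        J = np.empty((2, 2))                 # finite-difference Jacobian
        for j in range(2):
            e = np.zeros(2)
            e[j] = h
            J[:, j] = (residual(y + e) - r) / h
        d = np.linalg.solve(J, -r)
        t = 1.0                              # damping: backtrack until residual drops
        while np.linalg.norm(residual(y + t * d)) >= nr and t > 1e-6:
            t *= 0.5
        y = y + t * d
    return y

sol = newton_solve([95.0, 90.0])   # squared pressures at nodes 2 and 3
```

The damping step matters here: the signed square-root flow law has a kink where two nodal pressures coincide, and an undamped Newton iteration can oscillate across it.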
7

Velay, Maxime. "Méthodes d’optimisation distribuée pour l’exploitation sécurisée des réseaux électriques interconnectés." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT063/document.

Abstract:
Our societies are more dependent on electricity than ever, so any disturbance in power transmission and delivery has a major economic and social impact. Reliability and security are therefore crucial for power system operators, in addition to minimizing operating costs. Moreover, transmission systems are interconnected to decrease the cost of operation and to improve security. One of the main challenges for transmission system operators (TSOs) is therefore to coordinate with interconnected systems, which raises scalability, interoperability, and privacy issues. This thesis is thus concerned with how TSOs can operate their networks in a decentralized way while coordinating with neighboring TSOs to find a cost-effective schedule that is globally secure. The main focus of this thesis is the security of power systems, which is why the evolution of the main characteristics of blackouts, which are failures of power system security, is studied over the period 2005-2016. The approach consists in determining the major characteristics of the incidents of the past 10 years in order to identify what should be taken into account to mitigate the risk of such incidents. This evolution is compared with the characteristics of blackouts before 2005. The study focuses on the pre-conditions that led to those blackouts and on the cascades, especially the role of cascade speed. Important features are extracted and later integrated into our work. An algorithm that solves the preventive Security-Constrained Optimal Power Flow (SCOPF) problem in a fully distributed manner is then developed. The preventive SCOPF problem adds constraints ensuring that, after the loss of any major device of the system, the new steady state reached through primary frequency control does not violate any constraint.
The developed algorithm uses a fine-grained decomposition and is implemented under the multi-agent system paradigm, based on two categories of agents: devices and buses. The agents are coordinated with the Alternating Direction Method of Multipliers (ADMM) in conjunction with a consensus problem. This decomposition provides autonomy and privacy to the different actors of the system, and its fine granularity yields good scalability with respect to problem size. The algorithm also has the advantage of being robust to any disturbance of the system, including the separation of the system into regions. Then, to account for the production uncertainty introduced by wind farm forecast errors, a two-step distributed approach is developed to solve the Chance-Constrained Optimal Power Flow (CCOPF) problem in a fully distributed manner. The wind farm forecast errors are modeled by independent Gaussian distributions, and mismatches with the initial schedules are assumed to be compensated by the primary frequency response of generators. The first step of the algorithm determines the sensitivity factors of the system needed to formulate the problem; its results are inputs to the second step, which solves the CCOPF. An extension of this formulation provides more flexibility by allowing wind farm curtailment. The algorithm relies on the same fine-grained decomposition, with the agents again coordinated by ADMM and a consensus problem. In conclusion, this two-step algorithm ensures the privacy and autonomy of the different system actors, and it is de facto parallel and suited to high-performance platforms.
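The agent coordination described above follows the global-consensus form of ADMM: each agent minimizes its own cost plus a quadratic coupling term, and the consensus variable is the average of the local copies. A one-variable sketch with quadratic agent costs, so the local updates are closed-form (illustrative only, not the thesis's SCOPF agents):

```python
import numpy as np

def consensus_admm(agents, rho=1.0, iters=300):
    """Consensus ADMM for min_x sum_i 0.5*a_i*(x - c_i)^2, one local copy per agent."""
    a = np.array([ai for ai, _ in agents], float)
    c = np.array([ci for _, ci in agents], float)
    x = np.zeros(len(a))   # local copies, one per agent
    u = np.zeros(len(a))   # scaled multipliers
    z = 0.0                # shared consensus variable
    for _ in range(iters):
        # local updates: each agent solves its own small problem (parallelizable)
        x = (a * c + rho * (z - u)) / (a + rho)
        # coordination step: the consensus variable is the average of the copies
        z = np.mean(x + u)
        # dual update pushes every local copy toward agreement
        u += x - z
    return z

# three agents; the optimum of the summed cost is the a-weighted mean of the c_i
z_star = consensus_admm([(1.0, 4.0), (2.0, 1.0), (4.0, 2.5)])
```

Only the quantity x_i + u_i leaves each agent during coordination, which is the source of the privacy property the abstract mentions.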
8

Guiducci, Martina. "Metodo delle Direzioni Alternate per la ricostruzione di immagini Poissoniane." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19501/.

Abstract:
The image reconstruction problem aims to remove blur from the acquired image and reduce noise, so as to obtain an image as close as possible to the exact object. Mathematically, the reconstruction amounts to minimizing an objective function with two terms: the Kullback-Leibler divergence, which measures the distance between the acquired and the reconstructed image, and an L1-norm regularization term encoding additional information about the solution. Since the Kullback-Leibler divergence involves a logarithm, when the image to be reconstructed contains many black pixels a non-negativity constraint must be imposed on the argument of the logarithm and on the solution. The minimization problem studied in this thesis is therefore a constrained optimization problem in which the solution is forced to be non-negative. The algorithm implemented for this purpose is the Alternating Direction Method of Multipliers (ADMM). ADMM in turn requires the Orthant-Wise Limited-memory Quasi-Newton (OWLQN) method, a quasi-Newton method designed for large-scale L1-regularized problems. To determine the search direction, OWLQN requires at each iteration the solution of a linear system whose coefficient matrix is the Hessian, and this way of computing the search direction does not exploit the structure of the Hessian. We therefore propose a modification of OWLQN that accounts for the specific structure of the problem, so that the Hessian can be inverted quickly in the Fourier domain, making the method more efficient. Finally, an experimental analysis was carried out on the test image, comparing the methods employed.
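The structure of this minimization (Kullback-Leibler fidelity, L1 penalty, non-negativity) can be sketched in the separable case, i.e. with the blur operator replaced by the identity, where the KL proximal step reduces to the positive root of a quadratic and no quasi-Newton inner solver is needed. An illustrative toy, not the thesis implementation:

```python
import numpy as np

def admm_kl_l1(y, lam=0.2, rho=1.0, iters=300):
    """ADMM for min_{x>=0} sum(x - y*log(x)) + lam*sum(x), split as x = z."""
    x = np.maximum(y.astype(float), 1e-3)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(iters):
        v = z - u
        # KL prox: solve rho*x^2 + (1 - rho*v)*x - y = 0, keep the positive root
        x = ((rho * v - 1.0) + np.sqrt((rho * v - 1.0) ** 2 + 4.0 * rho * y)) / (2.0 * rho)
        # prox of lam*|z| + indicator(z >= 0): one-sided soft-threshold
        z = np.maximum(x + u - lam / rho, 0.0)
        u += x - z
    return z

y = np.array([0.0, 1.0, 5.0, 20.0])   # Poisson-like counts
x_hat = admm_kl_l1(y)                 # componentwise optimum is y / (1 + lam)
```

With a true blur operator K in the data term, the x-subproblem is no longer separable, which is exactly where the thesis brings in OWLQN and the Fourier-domain Hessian inversion.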
9

Recupero, Giuseppe Antonio. "Un modello variazionale non convesso per il denoising di superfici." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23210/.

Abstract:
3D scanning of an object produces a triangulated mesh corrupted by noise. Denoising aims to remove the noise and reconstruct the original surface while preserving details such as edges, corners, and creases. We present a new nonconvex variational model for surface denoising. The cost function, which depends on the vertices, consists of an L2-norm fidelity term and a penalty term defined through a rescaled and reparametrized version of the Minimax Concave Penalty (MCP). The minimization problem is solved with the ADMM method, yielding an iterative scheme with three subproblems, two of which admit exact solutions while the third is solved approximately. Experimental results show the convergence of the algorithm and its effectiveness, both qualitatively and quantitatively, in removing noise while preserving the geometric features of the surface. The denoising process also produces more regular and uniform triangulations, reducing the number of overlapping or degenerate faces.
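The Minimax Concave Penalty named above has a simple closed form, and so does its proximal map ("firm thresholding"), which is what makes it convenient inside ADMM subproblems. The sketch below uses the standard one-parameter MCP, not the rescaled and reparametrized version of the thesis:

```python
import numpy as np

def mcp(t, lam=1.0, gamma=3.0):
    """Minimax Concave Penalty: quadratically tapered L1, constant past gamma*lam."""
    a = np.abs(t)
    return np.where(a <= gamma * lam,
                    lam * a - a ** 2 / (2.0 * gamma),
                    0.5 * gamma * lam ** 2)

def mcp_prox(w, lam=1.0, gamma=3.0):
    """Prox of MCP with unit step ('firm thresholding'); requires gamma > 1."""
    a = np.abs(w)
    return np.where(a <= lam, 0.0,
           np.where(a <= gamma * lam,
                    np.sign(w) * gamma * (a - lam) / (gamma - 1.0),
                    w))

p = mcp_prox(np.array([-5.0, -1.5, 0.5, 2.0, 4.0]))
# small inputs are zeroed, mid-range inputs shrunk, large inputs left untouched
```

Unlike the L1 soft-threshold, large coefficients pass through unchanged, which is why MCP preserves sharp features (edges, creases) that convex penalties tend to shrink.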
10

Scrivanti, Gabriele Luca Giovanni. "Nonsmooth Nonconvex Variational Reconstruction for Electrical Impedance Tomography." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20700/.

Abstract:
Electrical Impedance Tomography (EIT) is an imaging technique that aims to reconstruct the inner conductivity distribution of a medium from a set of voltages registered by electrodes positioned on the surface of the medium. The technique was first used in geological studies in 1930 and later applied to industrial procedures. The first clinical use of EIT dates back to 1987. In 2018, EIT was validated in tissue engineering as a noninvasive, label-free imaging and monitoring tool for cell distribution (cell growth, differentiation, and tissue formation) in 3D scaffolds. The EIT problem can be split into a forward and an inverse problem. The aim of forward EIT is to compute the measured voltages from a known conductivity distribution. Since the forward problem is characterized by a nonlinear mapping, called the Forward Operator, from the conductivity distribution to the measured voltages, inverse EIT consists of inverting the Forward Operator. This leads to an ill-posed problem which requires regularization, either in the model or in the numerical method applied to compute the solution. The inverse problem is modelled as a nonlinear least squares problem, in which one seeks to minimize the mismatch between the measured voltages and those generated by the reconstructed conductivity. Reconstruction techniques require the introduction of a regularization term which forces the reconstructed data to conform to certain prior information. In this dissertation, some state-of-the-art regularization methods are analyzed and compared via EIDORS, a software package specific to EIT problems. The aim is to reconstruct the variation in conductivity within a 2D section of a 3D scaffold. Furthermore, a variational formulation on a 2D mesh for a space-variant regularization is proposed, based on a combination of high-order and nonconvex operators, which respectively seek to recover piecewise inhomogeneous and piecewise linear regions.
11

Chen, Zhouye. "Reconstruction of enhanced ultrasound images from compressed measurements." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30222/document.

Abstract:
Interest in compressive sampling for ultrasound imaging has recently been extensively evaluated by several research teams. Across the different application setups, it has been shown that RF data may be reconstructed from a small number of measurements and/or using a reduced number of ultrasound pulse emissions. According to the compressive sampling model, the resolution of ultrasound images reconstructed from compressed measurements mainly depends on three aspects: the acquisition setup, i.e. the incoherence of the sampling matrix; the image regularization, i.e. the sparsity prior; and the optimization technique. We mainly focus on the last two aspects in this thesis. Nevertheless, RF image spatial resolution, contrast, and signal-to-noise ratio are affected by the limited bandwidth of the imaging transducer and by the physics of ultrasound wave propagation. To overcome these limitations, several deconvolution-based image processing techniques have been proposed to enhance ultrasound images. In this thesis, we first propose a novel framework for ultrasound imaging, named compressive deconvolution, which combines compressive sampling and deconvolution. Exploiting a unified formulation of the direct acquisition model, combining random projections and 2D convolution with a spatially invariant point spread function, the benefit of this framework is joint data volume reduction and image quality improvement. An optimization method based on the Alternating Direction Method of Multipliers is then proposed to invert the linear model, including two regularization terms expressing the sparsity of the RF images in a given basis and a generalized Gaussian statistical assumption on the tissue reflectivity functions. It is subsequently improved by a method based on the Simultaneous Direction Method of Multipliers. Both algorithms are evaluated on simulated and in vivo data.
Building on these regularization techniques, a novel approach based on alternating minimization is finally developed to jointly estimate the tissue reflectivity function and the point spread function. A preliminary investigation is made on simulated data.
12

Zhang, Mo. "Vers une méthode de restauration aveugle d’images hyperspectrales." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S132.

Abstract:
In this thesis we propose to develop a blind restoration method for single-component blurred and noisy images in which no prior knowledge is required. The manuscript is composed of three chapters. The first chapter reviews the state of the art: the optimization approaches for solving the restoration problem are discussed first, then the main restoration methods, called semi-blind because they require a minimum of a priori knowledge, are analysed; five of these methods are selected for evaluation. The second chapter is devoted to comparing the performance of the methods selected in the previous chapter. The main objective criteria for evaluating the quality of restored images are presented, and among them the L1 norm of the estimation error is selected. A comparative study conducted on a database of monochromatic images, artificially degraded by two blur functions with different support sizes and three noise levels, revealed the two most relevant methods. The first is based on a single-scale alternating approach in which the PSF and the image are estimated alternately. The second uses a multi-scale hybrid approach, which first estimates the PSF and a latent image in an alternating fashion and then, in a sequential step, restores the image. In the comparative study the advantage goes to the latter. The performance of these two methods serves as a reference against which the newly designed method is then compared. The third chapter deals with the developed method. We sought to make the hybrid approach retained in the previous chapter fully blind while improving the quality of both the estimated PSF and the restored image. The contributions cover several points.
A first series of improvements concerns the redefinition of the scales, the initialization of the latent image at each scale level, the evolution of the parameters used to select the relevant contours supporting the PSF estimation, and finally the definition of a blind stopping criterion. A second series of contributions concerns the blind estimation of the two regularization parameters involved, to avoid having to fix them empirically. Each parameter is associated with a separate cost function, one for the PSF estimation and one for the estimation of a latent image. In the sequential step that follows, we refine the support of the PSF estimated in the alternating step before exploiting it in the image restoration process. At this stage, the only a priori knowledge required is an upper bound on the support of the PSF. Evaluations performed on monochromatic and hyperspectral images, artificially degraded by several motion-type blurs with different support sizes, show a clear improvement in the restoration quality obtained by the newly designed method over the two best state-of-the-art methods retained.
APA, Harvard, Vancouver, ISO, and other styles
13

Boussaid, Haithem. "Efficient inference and learning in graphical models for multi-organ shape segmentation." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0002/document.

Full text
Abstract:
This thesis explores the use of discriminatively trained deformable contour models (DCMs) for shape-based segmentation in medical images. We make contributions on two fronts: the learning problem, where the model is trained from a set of annotated images, and the inference problem, whose aim is to segment an image given a model. We demonstrate the merit of our techniques on a large X-ray image segmentation benchmark, where we obtain systematic improvements in accuracy and speedups over the current state of the art. For learning, we formulate training of the DCM scoring function as large-margin structured prediction and construct a training objective that aims to give the highest score to the ground-truth contour configuration. We incorporate a loss function adapted to DCM-based structured prediction. In particular, we consider training with the Mean Contour Distance (MCD) performance measure; using this loss function during training amounts to scoring each candidate contour according to its Mean Contour Distance to the ground-truth configuration. Training DCMs using structured prediction with the standard zero-one loss already outperforms the current state-of-the-art method [Seghers et al. 2007] on the considered medical benchmark [Shiraishi et al. 2000, van Ginneken et al. 2006]. We demonstrate that training with the MCD structured loss further improves over the generic zero-one loss results by a statistically significant amount. For inference, we propose efficient solvers adapted to combinatorial problems with discretized spatial variables. Our contributions are threefold: first, we consider inference for loopy graphical models, making no assumption about the underlying graph topology. We use an efficient decomposition-coordination algorithm to solve the resulting optimization problem: we decompose the model's graph into a set of open, chain-structured graphs.
We employ the Alternating Direction Method of Multipliers (ADMM) to fix the potential inconsistencies of the individual solutions. Even though ADMM is an approximate inference scheme, we show empirically that our implementation delivers the exact solution for the considered examples. Second, we accelerate optimization of chain-structured graphical models by using the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] coupled with the pruning techniques developed in [Kokkinos 2011a]. We achieve an order-of-magnitude speedup on average over the state-of-the-art technique based on Dynamic Programming (DP) coupled with Generalized Distance Transforms (GDTs) [Felzenszwalb & Huttenlocher 2004]. Third, we incorporate the Hierarchical A* algorithm into the ADMM scheme to guarantee efficient optimization of the underlying chain-structured subproblems. The resulting algorithm is naturally adapted to solve the loss-augmented inference problem arising in structured prediction learning, and hence is used during both training and inference. In Appendix A, we consider the case of 3D data and develop an efficient method to find the mode of a 3D kernel density distribution. Our algorithm has guaranteed convergence to the global optimum, and scales logarithmically in the volume size by virtue of recursively subdividing the search space. We use this method to rapidly initialize 3D brain tumor segmentation, where we demonstrate substantial acceleration with respect to a standard mean-shift implementation. In Appendix B, we describe in more detail our extension of the Hierarchical A* search algorithm of [Felzenszwalb & Mcallester 2007] to inference on chain-structured graphs.
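The decomposition-coordination step the abstract describes, with ADMM reconciling the solutions of independent subproblems through a shared consensus variable, can be sketched on a toy problem. The quadratic objective, variable names, and parameters below are illustrative choices for exposition, not the thesis's actual chain subproblems:

```python
import numpy as np

def consensus_admm(targets, rho=1.0, iters=100):
    """Minimize sum_i 0.5*||x_i - a_i||^2 subject to x_i = z (consensus).

    Each local subproblem stands in for one chain-structured subgraph;
    the dual updates repair the inconsistencies between local solutions.
    """
    targets = [np.asarray(a, dtype=float) for a in targets]
    n = targets[0].shape[0]
    u = [np.zeros(n) for _ in targets]  # scaled dual variables
    z = np.zeros(n)                     # global consensus variable
    for _ in range(iters):
        # x-update: each local quadratic has a closed-form minimizer
        x = [(a + rho * (z - ui)) / (1.0 + rho) for a, ui in zip(targets, u)]
        # z-update: average the local solutions plus duals
        z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        # dual update: accumulate the remaining consensus violations
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

# For quadratic objectives the consensus solution is the mean of the targets.
z = consensus_admm([np.array([0.0, 2.0]), np.array([2.0, 4.0])])
```

Here ADMM converges to the exact minimizer, echoing the abstract's empirical observation that the scheme can deliver exact solutions despite being, in general, an approximate inference method.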
APA, Harvard, Vancouver, ISO, and other styles
14

Jeddi, Babak. "A coordinated energy management scheme in a residential neighborhood under given market framework." Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/200710/1/Babak_Jeddi_Thesis.pdf.

Full text
Abstract:
This thesis proposes a computationally efficient home energy management system that optimizes the electricity payment and improves occupant comfort by appropriately scheduling all devices in the home, incorporating solar panels, battery systems, thermostatically controlled appliances, and deferrable appliances. The thesis also develops a coordinated framework for operating multiple home energy management systems in a residential neighborhood based on the optimal and secure operation of the grid. The coordinated load scheduling framework enables customers to cooperate to optimize energy consumption at the neighborhood level and prevents any violation of the grid's operational constraints.
APA, Harvard, Vancouver, ISO, and other styles
15

Wang, Fan. "Alternating direction methods for image recovery." HKBU Institutional Repository, 2012. https://repository.hkbu.edu.hk/etd_ra/1406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Al-Wali, Azzam Ahmad. "Explicit alternating direction methods for problems in fluid dynamics." Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/6840.

Full text
Abstract:
Recently an iterative method was formulated employing a new splitting strategy for the solution of tridiagonal systems of difference equations. The method was successful in solving the systems of equations arising from one-dimensional initial boundary value problems, and a theoretical analysis proving the convergence of the method for systems whose constituent matrices are positive definite was presented by Evans and Sahimi [22]. The method, known as the Alternating Group Explicit (AGE) method, is referred to as AGE-1D. The explicit nature of the method means that its implementation on parallel machines is very promising. The method was also extended to solve systems arising from two- and three-dimensional initial boundary value problems, but the AGE-2D and AGE-3D algorithms proved too demanding in computational cost, which largely reduces the advantages of their parallel nature. In this thesis, further theoretical analyses and experimental studies are pursued to establish the convergence and suitability of the AGE-1D method for a wider class of systems arising from univariate and multivariate differential equations with symmetric and non-symmetric difference operators. The possibility of a Chebyshev acceleration of the AGE-1D algorithm is also considered. For two- and three-dimensional problems it is proposed to couple the AGE-1D algorithm with an ADI scheme or an ADI iterative method in what is called the Explicit Alternating Direction (EAD) method. It is then shown through experimental results that the EAD method retains the parallel features of the AGE method and moreover leads to savings of up to 83% in the computational cost of solving some of the model problems. The thesis also includes applications of the AGE-1D algorithm and the EAD method to some problems of fluid dynamics, such as the linearized shallow water equations and the Navier-Stokes equations for the flow in an idealized one-dimensional planetary boundary layer.
The thesis terminates with conclusions and suggestions for further work together with a comprehensive bibliography and an appendix containing some selected programs.
APA, Harvard, Vancouver, ISO, and other styles
17

Loomis, Christopher F. "Alternating Direction Implicit Method with Adaptive Grids for Modeling Chemotaxis in Dictyostelium discoideum." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5737.

Full text
Abstract:
Dictyostelium discoideum (Dd) is a model organism, studied for reasons from cell movement to chemotaxis to human disease control. Creating a computer model of the life cycle of Dd has garnered great interest, one part of which is the Aggregation Stage, where thousands of amoeba gather together to form a slug. Chemotaxis is the mechanism through which this is accomplished. This thesis develops two- and three-dimensional alternating direction implicit code which solves the diffusion equation on an adaptive grid. The calculated values for both two and three dimensions are checked against the actual solution and error results are provided. Comparisons are made between the coarse grid with refinement case and a fine grid without refinement case. Also, a non-negativity condition for two dimensions is derived to give a bound on the three major parameters: the diffusion coefficient and the spatial and time discretizations.
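The alternating direction implicit idea this thesis builds on, splitting one 2D diffusion time step into two one-dimensional implicit sweeps, each requiring only tridiagonal solves, can be sketched as follows. The grid size, parameter value, and helper names are illustrative choices, not the thesis code (which additionally handles adaptive grid refinement):

```python
import numpy as np

def tridiag_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    c_, d_ = np.zeros(n), np.zeros(n)
    c_[0], d_[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One ADI step for u_t = D*(u_xx + u_yy) on an n-by-n grid; r = D*dt/(2*dx^2).
    Homogeneous Dirichlet boundaries (u = 0 outside the interior grid)."""
    n = u.shape[0]
    a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
    half = np.zeros_like(u)
    for j in range(n):  # x-sweep: implicit in x, explicit in y
        up = u[:, j + 1] if j + 1 < n else 0.0
        dn = u[:, j - 1] if j - 1 >= 0 else 0.0
        half[:, j] = tridiag_solve(a, b, c, u[:, j] + r * (up - 2 * u[:, j] + dn))
    out = np.zeros_like(u)
    for i in range(n):  # y-sweep: implicit in y, explicit in x
        up = half[i + 1, :] if i + 1 < n else 0.0
        dn = half[i - 1, :] if i - 1 >= 0 else 0.0
        out[i, :] = tridiag_solve(a, b, c, half[i, :] + r * (up - 2 * half[i, :] + dn))
    return out

# A concentrated bump diffuses outward: the peak drops, values stay non-negative.
u0 = np.zeros((21, 21)); u0[10, 10] = 1.0
u1 = adi_step(u0, r=0.25)
```

With r = 0.25 the right-hand sides stay non-negative, which relates to the kind of non-negativity condition on the diffusion coefficient and discretization parameters that the thesis derives.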
APA, Harvard, Vancouver, ISO, and other styles
18

Ammanouil, Rita. "Contributions au démélange non-supervisé et non-linéaire de données hyperspectrales." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4079/document.

Full text
Abstract:
Spectral unmixing has been an active field of research since the earliest days of hyperspectral remote sensing. It is concerned with the case where various materials are found in the spatial extent of a pixel, resulting in a spectrum that is a mixture of the signatures of those materials. Unmixing then reduces to estimating the pure spectral signatures and their corresponding proportions in every pixel. In the hyperspectral unmixing jargon, the pure signatures are known as the endmembers and their proportions as the abundances. This thesis focuses on spectral unmixing of remotely sensed hyperspectral data. In particular, it is aimed at improving the accuracy of the extraction of compositional information from hyperspectral data. This is done through the development of new unmixing techniques in two main contexts, namely the unsupervised and the nonlinear case. In particular, we propose a new technique for blind unmixing, we incorporate spatial information in (linear and nonlinear) unmixing, and we finally propose a new nonlinear mixing model. More precisely, first, an unsupervised unmixing approach based on collaborative sparse regularization is proposed, where the library of endmember candidates is built from the observations themselves. This approach is then extended to take into account the presence of noise among the endmember candidates. Second, within the unsupervised unmixing framework, two graph-based regularizations are used to incorporate prior local and nonlocal contextual information. Next, within a supervised nonlinear unmixing framework, a new nonlinear mixing model based on vector-valued functions in reproducing kernel Hilbert space (RKHS) is proposed. The model allows one to consider different nonlinear functions at different bands, regularize the discrepancies between these functions, and account for neighboring nonlinear contributions.
Finally, the vector-valued kernel framework is used to promote spatial smoothness of the nonlinear part in a kernel-based nonlinear mixing model. Simulations on synthetic and real data show the effectiveness of all the proposed techniques.
APA, Harvard, Vancouver, ISO, and other styles
19

Shen, Sumin. "Contributions to Structured Variable Selection Towards Enhancing Model Interpretation and Computation Efficiency." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/96767.

Full text
Abstract:
The advances in data-collecting technologies provide great opportunities to access large-sample-size data sets with high dimensionality. Variable selection is an important procedure for extracting useful knowledge from such complex data. In many real-data applications, appropriate selection of variables facilitates model interpretation and computational efficiency, so it is important to incorporate domain knowledge of the underlying data generation mechanism to select key variables and improve model performance. However, general variable selection techniques, such as best subset selection and the Lasso, often do not take the underlying data generation mechanism into consideration. This thesis aims to develop statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computation efficiency. Specifically, it consists of three parts: an additive heredity model with coefficients incorporating multi-level data, a regularized dynamic generalized linear model with piecewise-constant functional coefficients, and a structured variable selection method within the best subset selection framework. In Chapter 2, an additive heredity model is proposed for analyzing mixture-of-mixtures (MoM) experiments. The MoM experiment differs from the classical mixture experiment in that the mixture component in MoM experiments, known as the major component, is made up of sub-components, known as the minor components. The proposed model considers an additive structure to inherently connect the major components with the minor components. To enable a meaningful interpretation of the estimated model, we apply the hierarchical and heredity principles by using the nonnegative garrote technique for model selection. The performance of the additive heredity model was compared to several conventional methods in both unconstrained and constrained MoM experiments.
The additive heredity model was then successfully applied to a real problem of optimizing the Pringles® potato crisp studied previously in the literature. In Chapter 3, we consider the dynamic effects of variables in generalized linear models such as logistic regression. This work is motivated by the engineering problem of varying effects of process variables on product quality caused by equipment degradation. To address this challenge, we propose a penalized dynamic regression model which is flexible enough to estimate the dynamic coefficient structure. The proposed method models the functional coefficient parameters as piecewise-constant functions. Specifically, under the penalized regression framework, the fused lasso penalty is adopted for detecting changes in the dynamic coefficients, and the group lasso penalty is applied to enable a sparse selection of variables. Moreover, an efficient parameter estimation algorithm is developed based on the alternating direction method of multipliers. The performance of the dynamic coefficient model is evaluated in numerical studies and three real-data examples. In Chapter 4, we develop a structured variable selection method within the best subset selection framework. In the literature, many techniques within the Lasso framework have been developed to address structured variable selection issues, but less attention has been paid to structured best subset selection problems. In this work, we propose a sparse ridge regression method to address structured variable selection issues. The key idea of the proposed method is to re-construct the regression matrix from the angle of experimental designs. We employ the estimation-maximization algorithm to formulate the best subset selection problem as an iterative linear integer optimization (LIO) problem, with the mixed integer optimization algorithm as the selection step.
We demonstrate the power of the proposed method in various structured variable selection problems. Moreover, the proposed method can be extended to ridge-penalized best subset selection problems. The performance of the proposed method is evaluated in numerical studies.
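The fused lasso penalty estimated via ADMM in Chapter 3 can be illustrated on a toy 1D signal. This is a minimal sketch of generic fused-lasso ADMM (a quadratic x-update, a soft-thresholding z-update, and a dual update), with illustrative sizes and parameters rather than the thesis's actual dynamic regression model:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fused_lasso_admm(y, lam, rho=1.0, iters=500):
    """Solve min_x 0.5*||x - y||^2 + lam*||D x||_1 with D = first differences.

    The l1 penalty on differences yields piecewise-constant estimates,
    the structure used for the dynamic coefficients in the abstract.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)     # (n-1) x n difference operator
    A = np.eye(n) + rho * D.T @ D      # x-update system matrix (fixed)
    z = np.zeros(n - 1); u = np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))  # quadratic step
        z = soft_threshold(D @ x + u, lam / rho)         # shrinkage step
        u = u + D @ x - z                                # dual ascent
    return x

# A two-level step signal: the estimate stays flat on each segment,
# with the jump shrunk slightly toward the middle by the penalty.
y = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])
x = fused_lasso_admm(y, lam=0.5)
```

For this noiseless step, the exact solution keeps the two flat segments and shifts their levels by lam/4 each (to 1.125 and 4.875), which the ADMM iterates approach closely.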
Doctor of Philosophy
The advances in data-collecting technologies provide great opportunities to access large-sample-size data sets with high dimensionality. Variable selection is an important procedure for extracting useful knowledge from such complex data. In many real-data applications, appropriate selection of variables facilitates model interpretation and computational efficiency, so it is important to incorporate domain knowledge of the underlying data generation mechanism to select key variables and improve model performance. However, general variable selection techniques often do not take the underlying data generation mechanism into consideration. This thesis aims to develop statistical modeling methodologies with a focus on structured variable selection towards better model interpretation and computation efficiency. The proposed approaches have been applied to real-world problems to demonstrate their model performance.
APA, Harvard, Vancouver, ISO, and other styles
20

Rouf, Hasan. "Unconditionally stable finite difference time domain methods for frequency dependent media." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/unconditionally-stable-finite-difference-time-domain-methods-for-frequency-dependent-media(50e4adf1-d1e4-4ad2-ab2d-70188fb8b7b6).html.

Full text
Abstract:
The efficiency of the conventional, explicit finite difference time domain (FDTD) method is constrained by the upper limit on the temporal discretization, imposed by the Courant–Friedrich–Lewy (CFL) stability condition. Therefore, there is growing interest in overcoming this limitation by employing unconditionally stable FDTD methods, for which the time-step and space-step can be chosen independently. The unconditionally stable Crank–Nicolson method has not been widely used in time-domain electromagnetics despite its high accuracy and low anisotropy, and there has been no work on the Crank–Nicolson FDTD (CN–FDTD) method for frequency-dependent media. In this thesis a new three-dimensional frequency-dependent CN–FDTD (FD–CN–FDTD) method is proposed. Frequency dependency of single-pole Debye materials is incorporated into the CN–FDTD method by means of an auxiliary differential formulation. In order to provide a convenient and straightforward algorithm, Mur's first-order absorbing boundary conditions are used in the FD–CN–FDTD method. Numerical tests confirm that the FD–CN–FDTD method is unconditionally stable beyond the CFL limit. The proposed method yields a sparse system of linear equations which can be solved by direct or iterative methods, but numerical experiments demonstrate that for large problems of practical importance iterative solvers are to be used. The FD–CN–FDTD sparse matrix is diagonally dominant when the time-step is near the CFL limit, but the diagonal dominance deteriorates as the time-step increases, making the solution time longer. Selection of the matrix solver to handle the FD–CN–FDTD sparse system is crucial to fully harness the advantages of using a larger time-step, because the computational costs associated with the solver must be kept as low as possible.
The two best-known iterative solvers, Bi-Conjugate Gradient Stabilised (BiCGStab) and Generalised Minimal Residual (GMRES), are extensively studied in terms of the number of iterations required for convergence, CPU time, and memory requirements. BiCGStab outperforms GMRES in every aspect. Many of these findings do not match the existing literature on the frequency-independent CN–FDTD method, and the possible reasons for this are pointed out. The proposed method is coded in Fortran, and the major implementation techniques of the serial code, as well as its parallel implementation in Open Multi-Processing (OpenMP), are presented. As an application, a simulation model of the human body is developed with the FD–CN–FDTD method, and numerical simulation of electromagnetic wave propagation inside the human head is shown. Finally, this thesis presents a new method modifying the frequency-dependent alternating direction implicit FDTD (FD–ADI–FDTD) method. Although the ADI–FDTD method provides a computationally affordable approximation of the CN–FDTD method, it exhibits a loss of accuracy with respect to the CN–FDTD method which may become severe for some practical applications. The modified FD–ADI–FDTD method can improve the accuracy of the normal FD–ADI–FDTD method without significantly increasing the computational costs.
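The solver comparison described above can be reproduced in miniature with standard sparse iterative solvers. The diagonally dominant tridiagonal test matrix below is an illustrative stand-in for the FD–CN–FDTD system, not the thesis's actual matrix, and default tolerances are used:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, gmres

# A strictly diagonally dominant sparse system, the regime in which the
# abstract reports the FD-CN-FDTD matrix sits near the CFL limit.
n = 200
A = diags([-1.0, 2.5, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# info == 0 signals convergence for both SciPy Krylov solvers.
x_bicg, info_bicg = bicgstab(A, b)
x_gmres, info_gmres = gmres(A, b)
```

Timing the two calls (e.g. with `timeit`) on larger, less dominant systems is one way to explore the iteration-count and CPU-time trade-offs the thesis measures.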
APA, Harvard, Vancouver, ISO, and other styles
21

Bagli, Maria Chiara. "ll metodo ADMM per la regolarizzazione con Variazione Totale Generalizzata." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20966/.

Full text
Abstract:
This thesis deals with the Alternating Direction Method of Multipliers (ADMM) for the image reconstruction problem with the Total Generalized Variation (TGV) functional. This functional is convex and depends on two positive real parameters whose values influence the quality of the reconstructed image. Part of the thesis work was devoted to parameter setting, in particular to the choice of the regularization parameter, of the two parameters on which the TGV functional depends, and of the parameters specific to the ADMM method. A comparative analysis was carried out between the ADMM method and the PDAL method originally proposed for TGV-based image reconstruction. Numerical results on several test problems show that the ADMM method performs better both in terms of reconstruction quality and in terms of computational efficiency. Finally, we studied how the ADMM-specific parameters influence the quality of the reconstructed image and proposed a range from which to choose their values.
APA, Harvard, Vancouver, ISO, and other styles
22

Kraakmo, Kristina. "Numerical Simulations for the Flow of Rocket Exhaust Through a Granular Medium." Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5968.

Full text
Abstract:
Physical lab experiments have shown that the pressure caused by an impinging jet on a granular bed has the potential to form craters. This poses a danger to landing success and nearby spacecraft for future rocket missions. Current numerical simulations of this process do not accurately reproduce experimental results. Our goal is to produce improved simulations that model the changes in pressure as gas flows through a porous medium more accurately and efficiently. We use a two-dimensional spatial model, the nonlinear Porous Medium Equation, derived from Darcy's law. An Alternating-Direction Implicit (ADI) temporal scheme is presented and implemented, which reduces our multidimensional problem to a series of one-dimensional problems. We take advantage of explicit approximations of the nonlinear terms using extrapolation formulas derived from Taylor series, which increases efficiency compared to other common methods. We couple our ADI temporal scheme with different spatial discretizations, including a second-order Finite Difference (FD) method, a fourth-order Orthogonal Spline Collocation (OSC) method, and an Nth-order Chebyshev Spectral method. Accuracy and runtime are compared among the three methods in a linear analogue of our problem. We see the best accuracy when using the ADI-Spectral method in the linear case, but discuss possibilities for increased efficiency using an ADI-OSC scheme. Nonlinear results are presented using the ADI-Spectral and ADI-FD methods.
M.S.
Masters
Mathematics
Sciences
Mathematical Science
23

Hossain, Mohammad Sahadet. "Numerical Methods for Model Reduction of Time-Varying Descriptor Systems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-74776.

Full text
Abstract:
This dissertation concerns the model reduction of linear periodic descriptor systems in both the continuous and discrete-time case. Mainly projection-based approaches are considered for model order reduction of linear periodic time-varying descriptor systems. A Krylov-based projection method is used for large continuous-time periodic descriptor systems, and a balancing-based projection technique is applied to large sparse discrete-time periodic descriptor systems to generate the reduced systems. For very large dimensional state space systems, both techniques produce large dimensional solutions. Hence, a recycling technique is used in the Krylov-based projection methods, which helps to compute low-rank solutions of the state space systems and also accelerates the computational convergence. The proposed model order reduction procedure is outlined in detail. The accuracy and suitability of the proposed method is demonstrated through examples of different orders. Model reduction techniques based on balanced truncation require the solution of matrix equations. For periodic time-varying descriptor systems, these matrix equations are projected generalized periodic Lyapunov equations, and the solutions are also time-varying. The cyclic lifted representation of periodic time-varying descriptor systems is considered in this dissertation, and the resulting lifted projected Lyapunov equations are solved to obtain the periodic reachability and observability Gramians of the original periodic systems. The main advantage of this solution technique is that the cyclic structure of the projected Lyapunov equations can handle time-varying dimensions as well as the singularity of the periodic matrix pairs very easily. One can also exploit the theory of time-invariant systems for the control of periodic ones, provided that the results achieved can be easily re-interpreted in the periodic framework.
Since the dimension of the cyclic lifted system becomes very high for large dimensional periodic systems, one needs to solve very large scale periodic Lyapunov equations, which also generate very large dimensional solutions. Hence iterative techniques, which are generalizations and modifications of the alternating directions implicit (ADI) method and the generalized Smith method, are implemented to obtain low-rank Cholesky factors of the solutions of the periodic Lyapunov equations. The application of the solvers in balancing-based model reduction of discrete-time periodic descriptor systems is also discussed. Numerical results are given to illustrate the efficiency and accuracy of the proposed methods.
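The generalized Smith method mentioned above builds on the basic Smith iteration for a discrete-time Lyapunov equation. The sketch below is a deliberately simplified stand-in (a single stable, time-invariant pair, not the periodic lifted setting of the dissertation):

```python
import numpy as np

def smith_lyapunov(A, B, iters=100):
    # Smith iteration for the discrete-time Lyapunov equation
    #   A X A^T - X + B B^T = 0,
    # via X_{k+1} = A X_k A^T + B B^T, convergent when rho(A) < 1.
    X = np.zeros((A.shape[0], A.shape[0]))
    W = B @ B.T
    for _ in range(iters):
        X = A @ X @ A.T + W
    return X

# Random stable system: rescale a random matrix to spectral radius 0.8.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = 0.8 * M / np.max(np.abs(np.linalg.eigvals(M)))
B = rng.standard_normal((5, 2))
X = smith_lyapunov(A, B)
residual = np.linalg.norm(A @ X @ A.T - X + B @ B.T)
```

The low-rank variants discussed in the dissertation accumulate Cholesky-type factors of `X` instead of the full Gramian, which is what makes the approach viable at large scale.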
24

Nuswantara, Wolfgang Xaverius Dorojatun Jalma, and Jalma. "Dictionary Learning by Alternating Direction Method of Multipliers." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/45259232160673563018.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
103
This thesis investigates the application of the alternating direction method of multipliers (ADMM) to dictionary learning for the noise removal problem. Dictionary learning (DL) is the process of acquiring a dictionary that can yield a sparse representation of a desired signal by learning from a training signal, instead of using a prespecified transformation basis. Prior dictionary learning algorithms include the method of optimal directions (MOD) and K-SVD. These methods have a main drawback in computational complexity. By contrast, ADMM, which we use to transform the complex problem into simple update steps, yields lower computational complexity. We further propose an inexact ADMM that can reduce the computation time for scenarios with large dictionary size. Simulation results show that the proposed methods can successfully train the dictionary and yield promising performance for the image noise removal problem. In particular, the proposed ADMM method can be around 30 times faster than K-SVD, while the inexact ADMM method can further reduce the computation time by around 40%.
25

Wu, Chia-Wei, and 吳家維. "Decentralized Frequency-Based Load Control by Alternating Direction Method of Multipliers." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/08787826934963527482.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electronic Engineering
104
In this paper, we consider a distributed load control problem for achieving supply-demand balance and frequency regulation in a power system. While most of the distributed load control schemes require the loads to exchange information through two-way communications, recent results have shown that it is possible to achieve fully distributed control by frequency-based imbalance estimation. However, the existing methods need the load disutility functions or the sum of them to be strongly convex. For a wider range of application scenarios, we propose a new load control algorithm based on the proximal Jacobi alternating direction method of multipliers (PJ-ADMM). The proposed algorithm works for arbitrary convex disutility functions. Moreover, we extend the PJ-ADMM based load control algorithm to an asynchronous setup. Asynchronous updates can spare the loads from strict synchronization and are particularly useful when the number of loads is large. Simulation results are presented to show that the proposed algorithms regulate the frequency to the nominal value well, in both synchronous and asynchronous scenarios and with and without imbalance estimation errors.
26

Singh, Aditya Vikram. "Theoretical and Algorithmic Aspects of Rigid Registration." Thesis, 2019. https://etd.iisc.ac.in/handle/2005/4727.

Full text
Abstract:
In this thesis, we consider the rigid registration problem, which arises in applications such as sensor network localization, multiview registration, and protein structure determination. The abstract setup for this problem is as follows. We are given a collection of N labelled points in d-dimensional Euclidean space. There are M observers, each of whom observes a subset of points and assigns coordinates to them in their local frame of reference. For each observer, we know which points they observe, and the (possibly noisy) local coordinates assigned to these points. Based on this information, we wish to infer the global coordinates of the N points. We investigate the following questions in this context: 1. Uniqueness: Suppose that the local coordinates are noiseless. In this case, we know that the true global coordinates are a solution of the problem. But is this the only solution? We use results from graph rigidity theory to give a necessary and sufficient condition for the problem to have a unique solution. In two-dimensional space, this leads to a particularly efficient connectivity-based test for uniqueness. 2. Tightness of a convex relaxation: In general, when the local coordinates are noisy, we use least-squares fitting to estimate the global coordinates. After a suitable reduction, this can be posed as a rank-constrained semidefinite program (REG-SDP). Dropping the rank constraint yields a convex relaxation, which has been empirically observed to solve REG-SDP when the noise is below a certain threshold. Motivated by an analysis of Bandeira et al. [1], we offer an explanation of this phenomenon by analyzing the Lagrange dual of the relaxed problem. 3. Convergence of an iterative solver: Instead of working with a convex relaxation, we can try directly solving REG-SDP by appropriately splitting the constraint set and formally applying the alternating direction method of multipliers (ADMM).
Empirically, this nonconvex ADMM algorithm has been demonstrated to perform well in the context of multiview registration. We analyze convergence of the iterates generated by this algorithm, and show how noise in the measurements affects convergence behavior.
27

Ahmed, Miraj S. K. "Multiview Registration Using Rank-Constrained Semidefinite Programming." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/4761.

Full text
Abstract:
We consider the problem of reconstructing a 3D surface from its multiview scans. Typically, the computational pipeline for this problem has two phases: (I) finding point-to-point correspondences between overlapping scans, and (II) registration of the scans based on the correspondences. The focus of this thesis is on phase II. In particular, we work with a global registration model, where the scans are registered in one shot using rotations and translations. We consider a least-squares formulation of global registration, where the variables are the transforms associated with the scans. The present novelty is that we reduce this intrinsically nonconvex problem to an optimization over the positive semidefinite cone, where the objective is linear but the constraints are nevertheless nonconvex. We propose to solve this using variable splitting and the alternating direction method of multipliers (ADMM). Due to the linear objective and the structure of the constraints, the ADMM sub-problems turn out to be projections with closed-form solutions. In particular, for m scans, the per-iteration cost is the partial eigendecomposition of a 3m × 3m matrix, and m − 1 singular value decompositions of 3 × 3 matrices. We empirically show that for appropriate parameter settings, the proposed solver has a large convergence basin and is stable under perturbations. This is in keeping with recent empirical results on the effectiveness of ADMM for nonconvex problems (though the convergence theory is still in its infancy). We use the proposed ADMM algorithm to align 3D scans, where we determine the pairwise correspondences (in phase I) using the standard ICP algorithm. We present results on simulated and real datasets to demonstrate the effectiveness of our method. A remarkable feature of our method is that it can tolerate a heavy amount of outliers in the correspondences.
In particular, our method has better noise robustness than existing methods, where by "noise" we mean both perturbations in measurements and correspondences. An interesting open problem in this regard is establishing convergence (optimality) of the ADMM iterations; this is not covered by existing results.
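The 3 × 3 SVD-based subproblem referred to above is the projection of an arbitrary matrix onto the rotation group. A minimal version of this standard construction (an illustrative sketch, not code from the thesis) is:

```python
import numpy as np

def project_to_rotation(M):
    # Nearest rotation matrix to M in the Frobenius norm:
    # M = U S V^T  ->  R = U diag(1, 1, det(U V^T)) V^T.
    # The det correction keeps R in SO(3) (rotation, not reflection).
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
R = project_to_rotation(M)
```

Because each such projection has a closed form, the per-iteration cost of the ADMM solver stays dominated by the single partial eigendecomposition.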
28

(10716096), Shreyansh Rakeshkuma Shethia. "Fast Tracking ADMM for Distributed Optimization and Convergence under Time-Varying Networks." Thesis, 2021.

Find full text
Abstract:
Due to advances in wireless communication, there has been an increase in the use of multi-agent systems to complete a given task. In various applications, multi-agent systems are required to solve an underlying optimization problem to obtain the best possible solution within a feasible region. Solving such multi-agent optimization problems in a distributed framework is preferable to centralized frameworks, as the former ensures scalability, robustness, and security. The distributed optimization problem becomes more challenging when the decision variables of the individual agents are coupled. In this thesis, a distributed optimization problem with coupled constraints is considered, where a network of agents aims to cooperatively minimize the sum of their local objective functions, subject to individual constraints. This problem setup is relevant to many practical applications like formation flying, sensor fusion, smart grids, etc. For practical scenarios, where agents can compute their local optimal solutions efficiently and fewer assumptions on the objective functions are required, Alternating Direction Method of Multipliers (ADMM)-based approaches are preferred over gradient-based approaches. For such a constraint-coupled problem, several distributed ADMM algorithms exist that guarantee convergence to optimality, but they do not provide a complete analysis of the rate of convergence. Thus, the primary goal of this work is to improve upon the convergence rate of the existing state-of-the-art Tracking-ADMM (TADMM) algorithm for solving the above distributed optimization problem. Moreover, the current analysis in the literature does not discuss convergence in the case of a time-varying communication network. The first part of the thesis focuses on improving the convergence rate of the Tracking-ADMM algorithm to solve the above distributed optimization problem more efficiently.
To this end, an upper bound on the convergence rate of the TADMM algorithm is derived in terms of the weight matrix of the network. To achieve faster convergence, the optimal weight matrix is computed using a semi-definite programming (SDP) formulation. The improved convergence rate of this Fast-TADMM (F-TADMM) is demonstrated on a simple yet illustrative coupled-constraint optimization problem. Then, the applicability of F-TADMM is demonstrated on the problem of distributed optimal control for trajectory generation of aircraft in formation flight. In the second part of the thesis, the convergence analysis of TADMM is extended to a time-varying communication network. The modified algorithm is named Time-Varying Tracking ADMM (TV-TADMM). Formal guarantees on asymptotic convergence are provided with the help of a control-systems analysis of a dynamical system using Lyapunov-like theory. The convergence of TV-TADMM is demonstrated on a simple yet illustrative coupled-constraint optimization problem with switching topology and is compared with the fixed-topology setting.
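How the network weight matrix drives convergence speed can be seen even in the simplest distributed primitive, linear consensus averaging. The sketch below is a toy analogue, not the TADMM algorithm itself; the Metropolis weights for a 4-agent path graph are an assumption for illustration:

```python
import numpy as np

def consensus(values, W, iters=100):
    # Linear consensus iteration x_{k+1} = W x_k. For a doubly stochastic,
    # connected weight matrix, every agent converges to the network average;
    # the rate is governed by the second-largest eigenvalue modulus of W.
    x = np.array(values, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Metropolis weights for the path graph 0-1-2-3: w_ij = 1/(1 + max(d_i, d_j))
# on edges, with the diagonal absorbing the remainder (doubly stochastic).
W = np.array([
    [2/3, 1/3, 0.0, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 1/3, 1/3, 1/3],
    [0.0, 0.0, 1/3, 2/3],
])
x = consensus([1.0, 2.0, 3.0, 4.0], W)
```

Optimizing the weight matrix (as the thesis does via an SDP) amounts to shrinking that second eigenvalue, which tightens the geometric convergence factor.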
29

Richter, Robin. "Cartoon-Residual Image Decompositions with Application in Fingerprint Recognition." Doctoral thesis, 2019. http://hdl.handle.net/21.11130/00-1735-0000-0005-12CB-2.

Full text
30

Su, Yu-Shun, and 蘇郁舜. "Application of Alternating Direction of Multiplier Method on dimensional reduction." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/5df46s.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Applied Mathematics
106
Dimensionality reduction helps us not only reduce the use of computer disk capacity, but also speed up machine learning algorithms. Most importantly, dimensionality reduction helps us visualize data, so that we can find its low-rank structure and the underlying information lurking behind it. Among all data visualization techniques, Principal Component Analysis (PCA) is the most popular method for uncovering linear low-rank structure. Maximum Variance Unfolding (MVU) and Locally Linear Embedding (LLE) are classical methods for dimensionality reduction of nonlinear structures. In this thesis, we use the neighbor-selection concept of MVU and LLE in the Alternating Direction Method of Multipliers (ADMM) and accomplish dimensionality reduction by maximizing the Frobenius norm of the data. The output of LLE can be used as an initial value for the ADMM iteration, improving on the distortion and lost distances produced by LLE to obtain a better result. In addition, we carry out a series of experiments to demonstrate performance, including swiss-roll and nine-dimensional models, as well as exploring how to choose a better value of the ADMM parameter beta.
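Of the methods compared above, PCA is the simplest to sketch. The snippet below is an illustrative stand-in (not the thesis code): it projects centered data onto its top-k principal directions via the SVD, and the example embeds exactly planar data so that pairwise distances are preserved.

```python
import numpy as np

def pca_embed(X, k=2):
    # Center the data and project onto the top-k right singular vectors,
    # i.e. the leading principal components.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# Planar data mapped isometrically into 10-D: PCA with k=2 recovers it
# up to rotation, so all pairwise distances survive exactly.
rng = np.random.default_rng(3)
P = rng.standard_normal((40, 2))
Q, _ = np.linalg.qr(rng.standard_normal((10, 2)))  # orthonormal 10x2 map
X = P @ Q.T
Y = pca_embed(X, k=2)
```

For curved structures like the swiss roll this linear projection fails, which is exactly the gap the MVU/LLE-style neighbor-based methods (and the thesis's ADMM variant) are meant to fill.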
31

Deng, Wei. "Recovering Data with Group Sparsity by Alternating Direction Methods." Thesis, 2012. http://hdl.handle.net/1911/64676.

Full text
Abstract:
Group sparsity reveals underlying sparsity patterns and contains rich structural information in data. Hence, exploiting group sparsity will facilitate more efficient techniques for recovering large and complicated data in applications such as compressive sensing, statistics, signal and image processing, machine learning and computer vision. This thesis develops efficient algorithms for solving a class of optimization problems with group sparse solutions, where arbitrary group configurations are allowed and the mixed L21-regularization is used to promote group sparsity. Such optimization problems can be quite challenging to solve due to the mixed-norm structure and possible grouping irregularities. We derive algorithms based on a variable splitting strategy and the alternating direction methodology. Extensive numerical results are presented to demonstrate the efficiency, stability and robustness of these algorithms, in comparison with the previously known state-of-the-art algorithms. We also extend the existing global convergence theory to allow more generality.
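The key proximal step behind alternating direction algorithms for the mixed L21-regularization described above is group soft-thresholding. A minimal version of this generic construction (an illustrative sketch, not the thesis's implementation) is:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    # Proximal operator of t * sum_g ||x_g||_2 (the mixed L2,1 norm):
    # shrink each group's Euclidean norm by t, zeroing out small groups
    # entirely -- this is what promotes sparsity at the group level.
    out = np.zeros_like(x)
    for g in groups:
        v = x[g]
        nrm = np.linalg.norm(v)
        if nrm > t:
            out[g] = (1.0 - t / nrm) * v
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
# Group [3, 4] has norm 5 and shrinks; group [0.1, -0.1] falls below t and dies.
y = group_soft_threshold(x, [[0, 1], [2, 3]], t=1.0)
```

Arbitrary (even overlapping) group configurations change how this step is embedded in the splitting, but the per-group shrinkage formula stays the same.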
32

Schomburg, Helen. "New Algorithms for Local and Global Fiber Tractography in Diffusion-Weighted Magnetic Resonance Imaging." Doctoral thesis, 2017. http://hdl.handle.net/11858/00-1735-0000-0023-3F8B-F.

Full text
33

Sathinarain, Melisha. "Numerical investigation of the parabolic mixed-derivative diffusion equation via alternating direction implicit methods." Thesis, 2013. http://hdl.handle.net/10539/13016.

Full text
Abstract:
A dissertation submitted to the Faculty of Science, University of the Witwatersrand, in fulfillment of the requirements for the degree of Master of Science, May 14, 2013.
In this dissertation, we investigate the parabolic mixed-derivative diffusion equation modeling the viscous and viscoelastic effects in a non-Newtonian viscoelastic fluid. The model is considered analytically using Fourier and Laplace transformations. The main focus of the dissertation, however, is the implementation of the Peaceman-Rachford Alternating Direction Implicit method. The one-dimensional parabolic mixed-derivative diffusion equation is extended to a two-dimensional analogue. To do this, the two-dimensional analogue is solved using a Crank-Nicolson method and implemented according to the Peaceman-Rachford ADI method. The behaviour of the solution of the viscoelastic fluid model is analysed by investigating the effects of inertia and diffusion as well as the viscous behaviour, subject to the viscosity and viscoelasticity parameters. The two-dimensional parabolic diffusion equation is then implemented with a high-order method to obtain more accurate solutions. An error analysis is carried out to show the accuracy differences between the numerical solutions of the general ADI and high-order compact methods. Each of the methods implemented in this dissertation is investigated via von Neumann stability analysis to prove stability under certain conditions.
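One Peaceman-Rachford ADI step can be sketched for the plain 2-D heat equation, a simplified analogue without the mixed-derivative term studied in the dissertation (the splitting into an x-implicit and a y-implicit half step is the same idea):

```python
import numpy as np

def adi_step(U, r):
    # One Peaceman-Rachford step for u_t = u_xx + u_yy on a square interior
    # grid with homogeneous Dirichlet boundaries; r = dt / (2*h^2).
    # Half step 1: implicit in x, explicit in y; half step 2: the reverse.
    n = U.shape[0]
    off = np.eye(n, k=1) + np.eye(n, k=-1)
    A = (1 + 2 * r) * np.eye(n) - r * off    # I - r*delta2 (implicit factor)
    B = (1 - 2 * r) * np.eye(n) + r * off    # I + r*delta2 (explicit factor)
    Uhalf = np.linalg.solve(A, U @ B)        # implicit along x, explicit along y
    return np.linalg.solve(A, (B @ Uhalf).T).T  # implicit along y, explicit along x

# A discrete sine mode is an eigenfunction of both sweeps, so a single ADI
# step scales it by a constant damping factor in (0, 1): stable even for
# a time step far beyond the explicit limit.
n = 31
v = np.sin(np.pi * np.arange(1, n + 1) / (n + 1))
U0 = np.outer(v, v)
U1 = adi_step(U0, r=1.0)
```

In practice the solves would use tridiagonal (Thomas) factorizations rather than dense `solve`; the dense form is kept here only for brevity.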
34

Yang, Song-ming, and 楊松銘. "Improved Accuracy for Alternating Direction Methods for Parabolic Equations Based on Mixed Finite Element Procedures." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/75141617022757458217.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
91
Classical alternating direction (AD) methods for parabolic equations, based on a standard implicit time-stepping procedure such as Crank-Nicolson, can have errors associated with the AD perturbations that are much larger than the errors associated with the underlying time-stepping procedure. We show that minor modifications of the AD procedures can virtually eliminate the perturbation errors at a minor additional computational cost. A mixed finite element method is applied in the spatial variables. As with finite difference and finite element methods in the spatial variables, the same accuracy in time is obtained. A convergence analysis is also given.