Dissertations / Theses on the topic 'Splitting algorithm'

To see the other types of publications on this topic, follow the link: Splitting algorithm.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Splitting algorithm.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Bot, Radu Ioan, Ernö Robert Csetnek, and Erika Nagy. "Solving systems of monotone inclusions via primal-dual splitting techniques." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108172.

Full text
Abstract:
In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each of the inclusions of the system are processed separately in each iteration: the single-valued operators are evaluated explicitly (forward steps), while the set-valued ones are processed via their resolvents (backward steps). In addition, most of the steps in the iterative scheme can be executed simultaneously, which makes the method applicable to a variety of convex minimization problems. The numerical performance of the proposed splitting algorithm is illustrated through applications in average consensus on colored networks and image classification via support vector machines.
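The forward and backward steps mentioned above are the two ingredients of the classical forward-backward splitting iteration. As a point of reference only (this is not the authors' coupled-system scheme), here is a minimal Python sketch on a toy l1-regularized least-squares problem, where the smooth term is handled by an explicit gradient (forward) step and the nonsmooth term through its resolvent, the soft-thresholding operator; the data are synthetic.

    import numpy as np

    def soft_threshold(v, t):
        # resolvent (proximal operator) of t * ||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def forward_backward(A, b, lam, n_iter=500):
        # solves min_x 0.5*||Ax - b||^2 + lam*||x||_1
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)                          # forward step: explicit gradient
            x = soft_threshold(x - step * grad, step * lam)   # backward step: resolvent
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[:5] = 1.0
    b = A @ x_true
    print(forward_backward(A, b, lam=0.1)[:8])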
APA, Harvard, Vancouver, ISO, and other styles
2

Salinger, David H. "A splitting algorithm for multistage stochastic programming with application to hydropower scheduling /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6749.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Full text
Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end, we are also making use of monotone operator theory as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notations and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem with some given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach affects the primal optimization problem directly by applying a single regularization to it and is capable of using variable smoothing parameters which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they are providing a complete splitting philosophy in that the resolvents, which arise in the iterative process, are only taken separately from each maximally monotone operator occurring in the problem description. We firstly analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem. Additionally, we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a common product Hilbert space strategy by reformulating the corresponding inclusion problem reasonably such that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with different numerical experiments where we intend to compare our methods against algorithms from the literature. The problems which arise in this part are manifold and they reflect the importance of this field of research as convex optimization problems appear in lots of applications of interest.
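As background for the Douglas-Rachford type methods mentioned in the abstract, the following is a hedged illustration of the plain Douglas-Rachford iteration on a toy basis-pursuit problem (minimize the l1 norm subject to an affine constraint); it only shows the resolvent-splitting structure, not the primal-dual variants developed in the thesis, and the problem data are synthetic.

    import numpy as np

    # Douglas-Rachford for: minimize ||x||_1 subject to M x = c.
    # x-update: resolvent of the l1 term (soft-thresholding with step gamma);
    # y-update: resolvent of the affine constraint (a projection).

    def prox_l1(v, gamma):
        return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

    def proj_affine(v, M, c):
        # projection onto {x : Mx = c}
        return v - M.T @ np.linalg.solve(M @ M.T, M @ v - c)

    def douglas_rachford(M, c, gamma=0.05, n_iter=1000):
        z = np.zeros(M.shape[1])
        for _ in range(n_iter):
            x = prox_l1(z, gamma)                  # resolvent of the first operator
            y = proj_affine(2 * x - z, M, c)       # resolvent of the second, at the reflected point
            z = z + y - x
        return prox_l1(z, gamma)

    rng = np.random.default_rng(1)
    M = rng.standard_normal((10, 30))
    x_sparse = rng.standard_normal(30) * (rng.random(30) < 0.2)
    c = M @ x_sparse
    x = douglas_rachford(M, c)
    print(round(float(np.linalg.norm(M @ x - c)), 4))  # constraint residual, should be close to 0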
APA, Harvard, Vancouver, ISO, and other styles
4

Birba, Delwende Eliane. "A Comparative study of data splitting algorithms for machine learning model selection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287194.

Full text
Abstract:
Data splitting is commonly used in machine learning to split data into a training, test, or validation set. This approach allows us to find the model hyper-parameters and also to estimate the generalization performance. In this research, we conducted a comparative analysis of different data partitioning algorithms on both real and simulated data. Our main objective was to address the question of how the choice of data splitting algorithm can improve the estimation of the generalization performance. The data splitting algorithms used in this study were variants of k-fold, Kennard-Stone, SPXY (sample set partitioning based on joint x-y distance), and random sampling. Each algorithm divided the data into two subsets: training and validation. The training set was used to fit the model and the validation set for its evaluation. We then analyzed the different data splitting algorithms based on the generalization performances estimated on the validation set and on an external test set. From the results, we noted that the important determinant of good generalization is the size of the dataset. For all the data sampling methods applied on small datasets, the gap between the performance estimated on the validation and test sets was significant. However, we noted that the gap was reduced when there was more data for training or validation. Too much or too little data in the training set can also lead to poor model performance, so it is important to have a reasonable balance between the training and validation set sizes. In our study, KS and SPXY were the splitting algorithms with the poorest model performance estimation. Indeed, these methods select the most representative samples to train the model, and poorly representative samples are left for model performance estimation.
Datapartitionering används vanligtvis i maskininlärning för att dela data i en tränings, test eller valideringsuppsättning. Detta tillvägagångssätt gör det möjligt för oss att hitta hyperparametrar för modellen och även uppskatta generaliseringsprestanda. I denna forskning genomförde vi en jämförande analys av olika datapartitionsalgoritmer på både verkliga och simulerade data. Vårt huvudmål var att undersöka frågan om hur valet avdatapartitioneringsalgoritm kan förbättra uppskattningen av generaliseringsprestanda. Datapartitioneringsalgoritmer som användes i denna studie var varianter av k-faldig korsvalidering, Kennard-Stone (KS), SPXY (partitionering baserat på gemensamt x-y-avstånd) och bootstrap-algoritm. Varje algoritm användes för att dela upp data i två olika datamängder: tränings- och valideringsdata. Vi analyserade sedan de olika datapartitioneringsalgoritmerna baserat på generaliseringsprestanda uppskattade från valideringen och den externa testuppsättningen. Från resultatet noterade vi att det avgörande för en bra generalisering är storleken på data. För alla datapartitioneringsalgoritmer som använts på små datamängder var klyftan mellan prestanda uppskattad på valideringen och testuppsättningen betydande. Vi noterade emellertid att gapet minskade när det fanns mer data för träning eller validering. För mycket eller för litet data i träningsuppsättningen kan också leda till dålig prestanda. Detta belyser vikten av att ha en korrekt balans mellan storlekarna på tränings- och valideringsmängderna. I vår studie var KS och SPXY de algoritmer med sämst prestanda. Dessa metoder väljer de mest representativa instanserna för att träna modellen, och icke-representativa instanser lämnas för uppskattning av modellprestanda.
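A minimal sketch of the Kennard-Stone selection rule discussed above may help the reader; SPXY follows the same scheme but augments the distance with the response y. This is an illustration written for this listing, not the code used in the thesis.

    import numpy as np

    def kennard_stone_split(X, n_train):
        # Kennard-Stone: start from the two most distant samples, then repeatedly
        # add the remaining sample farthest from its nearest already-selected sample.
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
        selected = list(np.unravel_index(np.argmax(d), d.shape))    # two most distant points
        remaining = [i for i in range(len(X)) if i not in selected]
        while len(selected) < n_train:
            nearest = d[np.ix_(remaining, selected)].min(axis=1)
            idx = remaining[int(np.argmax(nearest))]
            selected.append(idx)
            remaining.remove(idx)
        return selected, remaining   # training indices, validation indices

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 3))
    train_idx, val_idx = kennard_stone_split(X, n_train=35)
    print(len(train_idx), len(val_idx))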
APA, Harvard, Vancouver, ISO, and other styles
5

Nagurney, Anna, and Alexander Eydeland. "A Splitting Equilibration Algorithm for the Computation of Large-Scale Constrained Matrix Problems; Theoretical Analysis and Applications." Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5316.

Full text
Abstract:
In this paper we introduce a general parallelizable computational method for solving a wide spectrum of constrained matrix problems. The constrained matrix problem is a core problem in numerous applications in economics. These include the estimation of input/output tables, trade tables, and social/national accounts, and the projection of migration flows over space and time. The constrained matrix problem, so named by Bacharach, is to compute the best possible estimate X of an unknown matrix, given some information to constrain the solution set, and requiring either that the matrix X be a minimum distance from a given matrix, or that X be a functional form of another matrix. In real-world applications, the matrix X is often very large (several hundred to several thousand rows and columns), with the resulting constrained matrix problem larger still (with the number of variables on the order of the square of the number of rows/columns; typically, in the hundreds of thousands to millions). In the classical setting, the row and column totals are known and fixed, and the individual entries nonnegative. However, in certain applications, the row and column totals need not be known a priori, but must be estimated, as well. Furthermore, additional objective and subjective inputs are often incorporated within the model to better represent the application at hand. It is the solution of this broad class of large-scale constrained matrix problems in a timely fashion that we address in this paper. The constrained matrix problem has become a standard modelling tool among researchers and practitioners in economics. Therefore, the need for a unifying, robust, and efficient computational procedure for solving constrained matrix problems is of importance. Here we introduce an algorithm, the splitting equilibration algorithm, for computing the entire class of constrained matrix problems. This algorithm is not only theoretically justified, but also exploits both the underlying structure of these large-scale problems and the advantages offered by state-of-the-art computer architectures, while simultaneously enhancing the modelling flexibility. In particular, we utilize some recent results from variational inequality theory to construct a splitting equilibration algorithm which splits the spectrum of constrained matrix problems into series of row/column equilibrium subproblems. Each such constructed subproblem, due to its special structure, can, in turn, be solved simultaneously via exact equilibration in closed form. Thus each subproblem can be allocated to a distinct processor. We also present numerical results when the splitting equilibration algorithm is implemented in a serial, and then in a parallel environment. The algorithm is tested against another much-cited algorithm and applied to input/output tables, social accounting matrices, and migration tables. The computational results illustrate the efficacy of this approach.
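For readers unfamiliar with row/column equilibration, the toy sketch below shows the classical biproportional (RAS-type) alternation between row and column adjustments for a matrix with fixed row and column totals. It is only meant to convey the row/column subproblem structure; it is not the variational-inequality-based splitting equilibration algorithm of the paper, and the totals used are made up.

    import numpy as np

    def ras_scaling(X0, row_totals, col_totals, n_iter=200):
        # alternately rescale rows and columns so the totals are matched
        X = X0.astype(float).copy()
        for _ in range(n_iter):
            X *= (row_totals / X.sum(axis=1))[:, None]   # row equilibration step
            X *= (col_totals / X.sum(axis=0))[None, :]   # column equilibration step
        return X

    X0 = np.array([[2.0, 1.0, 3.0],
                   [4.0, 2.0, 1.0]])
    row_totals = np.array([10.0, 8.0])
    col_totals = np.array([6.0, 5.0, 7.0])
    X = ras_scaling(X0, row_totals, col_totals)
    print(np.round(X, 3), X.sum(axis=1), X.sum(axis=0))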
APA, Harvard, Vancouver, ISO, and other styles
6

Glaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.

Full text
Abstract:
Plusieurs stratégies sans liens apparents coexistent pour mettre en œuvre les algorithmes de résolution de problèmes d'inclusion monotone dans les espaces hilbertiens. Nous proposons un cadre synthétique permettant d'englober diverses approches algorithmiques pour la construction de point fixe, clarifions et généralisons leur théorie asymptotique, et concevons de nouveaux schémas itératifs pour l'analyse non linéaire et l'optimisation convexe. Notre méthodologie, qui est ancrée sur un modèle de compositions de quasicontractions moyennées, nous permet de faire avancer sur plusieurs fronts la théorie des algorithmes de point fixe et d'impacter leurs domaines d'applications. Des exemples numériques sont fournis dans le contexte de la restauration d'image, où nous proposons un nouveau point de vue pour la formulation des problèmes variationnels
Several apparently unrelated strategies coexist to implement algorithms for solving monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed point construction which makes it possible to capture various algorithmic approaches, clarify and generalize their asymptotic behavior, and design new iterative schemes for nonlinear analysis and convex optimization. Our methodology, which is anchored on an averaged quasinonexpansive operator composition model, allows us to advance the theory of fixed point algorithms on several fronts, and to impact their application fields. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems
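The averaged-operator model referred to above underlies the Krasnosel'skii-Mann iteration, sketched below on a toy two-set feasibility problem (a ball and a hyperplane); this is a generic illustration of the fixed-point construction, not the multilayer schemes with memory and variable metric developed in the thesis.

    import numpy as np

    def proj_ball(x, radius=1.0):
        # projection onto the closed Euclidean ball (a nonexpansive map)
        n = np.linalg.norm(x)
        return x if n <= radius else x * (radius / n)

    def proj_hyperplane(x, a, b):
        # projection onto the hyperplane {x : a.x = b}
        return x - ((a @ x - b) / (a @ a)) * a

    def krasnoselskii_mann(x0, a, b, lam=0.5, n_iter=200):
        # x_{n+1} = x_n + lam * (T x_n - x_n), with T a composition of two projections
        x = x0.copy()
        for _ in range(n_iter):
            Tx = proj_ball(proj_hyperplane(x, a, b))
            x = x + lam * (Tx - x)
        return x

    x = krasnoselskii_mann(np.array([3.0, -2.0]), a=np.array([1.0, 1.0]), b=1.0)
    print(np.round(x, 4))  # approximately a point of the intersection: in the unit ball with x[0] + x[1] = 1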
APA, Harvard, Vancouver, ISO, and other styles
7

Uygur, Ahmet Bilge. "A Non-iterative Pressure Based Algorithm For The Computation Of Reacting Radiating Flows." Phd thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608274/index.pdf.

Full text
Abstract:
A non-iterative pressure-based algorithm, which consists of splitting the solution of the momentum, energy and species equations into a sequence of predictor-corrector stages, was developed for the simulation of transient reacting radiating flows. A semi-discrete approach called the Method of Lines (MOL), which enables implicit time integration at all splitting stages, was used for the solution of the conservation equations. The solution of the elliptic pressure equation for the determination of the pressure field was performed by a multi-grid solver (MUDPACK package). Radiation calculations were carried out by coupling previously developed gray and non-gray radiation models with the algorithm. A first-order (global) reaction mechanism was employed to account for the chemistry. The predictions of the algorithm for the following test cases: i) non-isothermal turbulent pipe flow and ii) laminar methane-air diffusion flame, were benchmarked against experimental data and numerical solutions available in the literature, and the capability of the code to predict transient solutions was demonstrated on these test cases. Favorable agreement was obtained for both test cases. The effect of radiation and of the non-gray treatment of the radiative properties was investigated on the second test case. It was found that incorporation of radiation has a significant effect on the temperature and velocity fields, but its effect on the species predictions is limited. Executions with both radiation models revealed that the non-gray radiation model considered in the present study produces similar results to the gray model at a considerably higher computational cost. The algorithm developed was found to be an efficient and versatile tool for the time-dependent simulation of different flow scenarios and constitutes the initial step towards the computation of transient turbulent combustion.
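The splitting-into-stages idea can be illustrated, in a much simpler setting than the thesis, by a sequential operator-splitting step for a 1-D advection-diffusion-reaction model problem; the stages, grid and coefficients below are illustrative assumptions, and the sketch is not the pressure-based MOL algorithm described above.

    import numpy as np

    def advect(u, c, dx, dt):
        # upwind advection stage (periodic domain)
        return u - c * dt / dx * (u - np.roll(u, 1))

    def diffuse(u, nu, dx, dt):
        # explicit central-difference diffusion stage
        return u + nu * dt / dx**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1))

    def react(u, k, dt):
        # implicit first-order decay stage: solves u_t = -k u
        return u / (1.0 + k * dt)

    nx = 200
    dx, dt = 1.0 / nx, 2e-4
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.exp(-200 * (x - 0.3) ** 2)
    for _ in range(500):
        u = react(diffuse(advect(u, c=1.0, dx=dx, dt=dt), nu=1e-3, dx=dx, dt=dt), k=5.0, dt=dt)
    print(round(float(u.max()), 4))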
APA, Harvard, Vancouver, ISO, and other styles
8

Karmouche, El Mostafa. "Modélisation complémentaire des signaux non-stationnaires." Nancy 1, 1993. http://www.theses.fr/1993NAN10015.

Full text
Abstract:
The work presented in this thesis concerns the modelling of non-stationary signals. In a first part, the various methods for modelling and analysing such signals are reviewed. Two methods for detecting non-stationarities are then recalled and a new one is introduced. Based on the cross-correlation between the parameter vectors of the same AR model taken at two successive sampling instants, this method gives very good results. The second part presents non-parametric methods for the analysis of non-stationary signals. Built around the Wigner-Ville representation and its variants, this part highlights the limitations and interpretation difficulties of these methods. A third part is devoted to parametric methods, describing various recursive algorithms suited to locally stationary and non-stationary situations. Complementary modelling is the original contribution of this thesis. It consists in spectrally decomposing the initial signal into two complementary sub-signals by means of a low-pass filter and a high-pass filter, then modelling these two signals as AR models whose parameters are estimated with a version of the Kalman filter; finally, these two steps allow the synthesis of the global model of the original signal. This method best models signals exhibiting rapid changes over time
APA, Harvard, Vancouver, ISO, and other styles
9

Hiltunen, Sonja. "Proximal Splitting Algorithms for Disparity Estimation Using Convex Relaxation." Thesis, KTH, Ljud- och bildbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-55342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Darapuram, Rajasekhar Venkata. "AN INVESTIGATION OF FLUX-SPLITTING ALGORITHMS FOR CHEMICALLY REACTING FLOWS." MSSTATE, 2001. http://sun.library.msstate.edu/ETD-db/theses/available/etd-04102001-104000/.

Full text
Abstract:
This paper presents an investigation of seven different flux splitting algorithms for the discretization of inviscid fluxes, which are the primary source of the non-linear behavior (e.g. shocks, contact discontinuities). The aim of the present work is to enhance the accuracy and robustness of CHEM, a three-dimensional flow solver capable of simulating a wide range of flow conditions, including chemical non-equilibrium. Five different test cases are considered and thoroughly analyzed. The overall goal is to find a numerical scheme that can meet stringent specifications of efficiency, accuracy and robustness on the widest possible spectrum of flow conditions.
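As a minimal illustration of what a flux-vector splitting is, the sketch below applies a global Lax-Friedrichs splitting to the 1-D inviscid Burgers equation; the schemes compared in the thesis inside the CHEM solver are considerably more elaborate, so this is background only.

    import numpy as np

    # Flux f(u) = u^2/2 is split as f = f+ + f- with f+-(u) = 0.5*(f(u) +- a*u),
    # a >= max|f'(u)|, and each part is upwinded in its direction of propagation.

    def step(u, dx, dt):
        a = np.max(np.abs(u))                  # global wave-speed bound
        fp = 0.5 * (0.5 * u**2 + a * u)        # forward-moving part of the flux
        fm = 0.5 * (0.5 * u**2 - a * u)        # backward-moving part of the flux
        flux = fp + np.roll(fm, -1)            # interface flux F_{i+1/2}
        return u - dt / dx * (flux - np.roll(flux, 1))

    nx = 400
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
    dx = 1.0 / nx
    for _ in range(200):
        dt = 0.4 * dx / np.max(np.abs(u))      # CFL-limited time step
        u = step(u, dx, dt)
    print(round(float(u.min()), 3), round(float(u.max()), 3))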
APA, Harvard, Vancouver, ISO, and other styles
11

Dean, Thomas Anthony. "A subsolutions approach to the analysis and implementation of splitting algorithms in rare event simulation." View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Belouadah, H. "Scheduling and sequencing : Branch and bound based on job splitting and Lagrangian relaxation." Thesis, University of Southampton, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Laitrakun, Seksan. "Distributed detection and estimation with reliability-based splitting algorithms in random-access networks." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53008.

Full text
Abstract:
We design, analyze, and optimize distributed detection and estimation algorithms in a large, shared-channel, single-hop wireless sensor network (WSN). The fusion center (FC) is allocated a shared transmission channel to collect local decisions/estimates but cannot collect all of them because of limited energy, bandwidth, or time. We propose a strategy called reliability-based splitting algorithm that enables the FC to collect local decisions/estimates in descending order of their reliabilities through a shared collision channel. The algorithm divides the transmission channel into time frames and the sensor nodes into groups based on their observation reliabilities. Only nodes with a specified range of reliabilities compete for the channel using slotted ALOHA within each frame. Nodes with the most reliable decisions/estimates attempt transmission in the first frame; nodes with the next most reliable set of decisions/estimates attempt in the next frame; etc. The reliability-based splitting algorithm is applied in three scenarios: time-constrained distributed detection; sequential distributed detection; and time-constrained estimation. Performance measures of interest - including detection error probability, efficacy, asymptotic relative efficiency, and estimator variance - are derived. In addition, we propose and analyze algorithms that exploit information from the occurrence of collisions to improve the performance of both time-constrained distributed detection and sequential distributed detection.
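A toy simulation may clarify the reliability-based splitting idea: nodes are grouped into descending reliability bands and each band contends with slotted ALOHA in its own frame. The frame length, transmit probability and thresholds below are illustrative assumptions, not the optimized values derived in the dissertation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, slots_per_frame, p_tx = 200, 40, 0.05
    llr = rng.normal(loc=0.5, scale=1.0, size=n_nodes)     # local log-likelihood ratios (reliabilities)
    bands = [(2.0, np.inf), (1.0, 2.0), (0.0, 1.0)]        # descending reliability bands, one per frame

    collected = []
    for lo, hi in bands:
        pending = list(np.flatnonzero((np.abs(llr) >= lo) & (np.abs(llr) < hi)))
        for _ in range(slots_per_frame):                   # slotted ALOHA inside the frame
            tx = [i for i in pending if rng.random() < p_tx]
            if len(tx) == 1:                               # a slot succeeds only without collision
                collected.append(tx[0])
                pending.remove(tx[0])

    print(len(collected), "local decisions collected,",
          "mean |LLR| =", round(float(np.mean(np.abs(llr[collected]))), 2))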
APA, Harvard, Vancouver, ISO, and other styles
14

Silveti, Falls Antonio. "First-order noneuclidean splitting methods for large-scale optimization : deterministic and stochastic algorithms." Thesis, Normandie, 2021. http://www.theses.fr/2021NORMC204.

Full text
Abstract:
Dans ce travail, nous développons et examinons deux nouveaux algorithmes d'éclatement du premier ordre pour résoudre des problèmes d'optimisation composites à grande échelle dans des espaces à dimensions infinies. Ces problèmes sont au coeur de nombres de domaines scientifiques et d'ingénierie, en particulier la science des données et l'imagerie. Notre travail est axé sur l'assouplissement des hypothèses de régularité de Lipschitz généralement requises par les algorithmes de fractionnement du premier ordre en remplaçant l'énergie euclidienne par une divergence de Bregman. Ces développements permettent de résoudre des problèmes ayant une géométrie plus exotique que celle du cadre euclidien habituel. Un des algorithmes développés est l'hybridation de l'algorithme de gradient conditionnel, utilisant un oracle de minimisation linéaire à chaque itération, avec méthode du Lagrangien augmenté, permettant ainsi la prise en compte de contraintes affines. L'autre algorithme est un schéma d'éclatement primal-dual incorporant les divergences de Bregman pour le calcul des opérateurs proximaux associés. Pour ces deux algorithmes, nous montrons la convergence des valeurs Lagrangiennes, la convergence faible des itérés vers les solutions ainsi que les taux de convergence. En plus de ces nouveaux algorithmes déterministes, nous introduisons et étudions également leurs extensions stochastiques au travers d'un point de vue d'analyse de stablité aux perturbations. Nos résultats dans cette partie comprennent des résultats de convergence presque sûre pour les mêmes quantités que dans le cadre déterministe, avec des taux de convergence également. Enfin, nous abordons de nouveaux problèmes qui ne sont accessibles qu'à travers les hypothèses relâchées que nos algorithmes permettent. Nous démontrons l'efficacité numérique et illustrons nos résultats théoriques sur des problèmes comme la complétion de matrice parcimonieuse de rang faible, les problèmes inverses sur le simplexe, ou encore les problèmes inverses impliquant la distance de Wasserstein régularisée
In this work we develop and examine two novel first-order splitting algorithms for solving large-scale composite optimization problems in infinite-dimensional spaces. Such problems are ubiquitous in many areas of science and engineering, particularly in data science and imaging sciences. Our work is focused on relaxing the Lipschitz-smoothness assumptions generally required by first-order splitting algorithms by replacing the Euclidean energy with a Bregman divergence. These developments allow one to solve problems having more exotic geometry than that of the usual Euclidean setting. One algorithm is a hybridization of the conditional gradient algorithm, making use of a linear minimization oracle at each iteration, with an augmented Lagrangian algorithm, allowing for affine constraints. The other algorithm is a primal-dual splitting algorithm incorporating Bregman divergences for computing the associated proximal operators. For both of these algorithms, our analysis shows convergence of the Lagrangian values, subsequential weak convergence of the iterates to solutions, and rates of convergence. In addition to these novel deterministic algorithms, we also introduce and study the stochastic extensions of these algorithms through a perturbation perspective. Our results in this part include almost sure convergence results for all the same quantities as in the deterministic setting, with rates as well. Finally, we tackle new problems that are only accessible through the relaxed assumptions our algorithms allow. We demonstrate numerical efficiency and verify our theoretical results on problems like low-rank sparse matrix completion, inverse problems on the simplex, and entropically regularized Wasserstein inverse problems
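The non-Euclidean (Bregman) idea can be illustrated with entropic mirror descent on the probability simplex, where the Kullback-Leibler geometry turns the update into a multiplicative one; this sketch is background for the abstract's Bregman divergences and is not the conditional-gradient or primal-dual algorithms of the thesis. The data are synthetic.

    import numpy as np

    def mirror_descent_simplex(grad, x0, step, n_iter=2000):
        x = x0.copy()
        for _ in range(n_iter):
            x = x * np.exp(-step * grad(x))   # Bregman (entropy) "prox" of the linearized objective
            x /= x.sum()                      # renormalization keeps the iterate on the simplex
        return x

    # toy least-squares inverse problem restricted to the simplex: min_x 0.5*||Ax - b||^2
    rng = np.random.default_rng(0)
    A = rng.random((20, 5))
    x_true = np.array([0.6, 0.2, 0.1, 0.1, 0.0])    # a point of the simplex
    b = A @ x_true
    grad = lambda x: A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # conservative step size
    x = mirror_descent_simplex(grad, np.full(5, 0.2), step)
    print(np.round(x, 2))                           # approximately recovers x_true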
APA, Harvard, Vancouver, ISO, and other styles
15

Garrett, Joseph Lee. "A comparison of flux-splitting algorithms for the Euler equations with equilibrium air chemistry." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44636.

Full text
Abstract:

The use of flux-splitting techniques on the Euler equations is considered for high Mach number, high temperature flows in which the fluid is assumed to be inviscid air in equilibrium. Three different versions of real gas extensions to the Steger-Warming and Van Leer flux-vector splitting, and four different versions of real gas extensions to the Roe flux-difference splitting, are compared with regard to general applicability and ease of implementation in existing perfect gas algorithms. Test computations are performed for the M = 5, high temperature flow over a 10-degree wedge and the M = 24.5 flow over a blunt body. Although there were minor differences between the computed results for the three types of flux-splitting algorithms considered, little variation is observed between different versions of the same algorithm.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
16

Chouchane, Mathieu. "Optimisation spatio-temporelle d’efforts de recherche pour cibles manoeuvrantes et intelligentes." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4318.

Full text
Abstract:
Dans cette thèse, nous cherchons à répondre à une problématique formulée par la DGA Techniques navales pour surveiller une zone stratégique : planifier le déploiement spatial et temporel optimal d’un ensemble de capteurs de façon à maximiser les chances de détecter une cible mobile et intelligente. La cible est dite intelligente car elle est capable de détecter sous certaines conditions les menaces que représentent les capteurs et ainsi de réagir en adaptant son comportement. Les déploiements générés pouvant aussi avoir un coût élevé nous devons tenir compte de ce critère lorsque nous résolvons notre problématique. Il est important de noter que la résolution d’un problème de ce type requiert, selon les besoins, l’application d’une méthode d’optimisation mono-objectif voire multiobjectif. Jusqu’à présent, les travaux existants n’abordent pas la question du coût des déploiements proposés. De plus la plupart d’entre eux ne se concentrent que sur un seul aspect à la fois. Enfin, pour des raisons algorithmiques, les contraintes sont généralement discrétisées.Dans une première partie, nous présentons un algorithme qui permet de déterminer le déploiement spatio-temporel de capteurs le plus efficace sans tenir compte de son coût. Cette méthode est une application à l’optimisation de la méthode multiniveau généralisée.Dans la seconde partie, nous montrons d’abord que l’utilisation de la somme pondérée des deux critères permet d’obtenir des solutions sans augmenter le temps de calcul. Pour notre seconde approche, nous nous inspirons des algorithmes évolutionnaires d’optimisation multiobjectif et adaptons la méthode multiniveau généralisée à l’optimisation multiobjectif
In this work, we propose a solution to a problem issued by the DGA Techniques navales in order to survey a strategic area: determining the optimal spatio-temporal deployment of sensors that will maximize the detection probability of a mobile and smart target. The target is said to be smart because it is capable of detecting the threat of the sensors under certain conditions and then of adapting its behaviour to avoid it. The cost of a deployment is known to be very expensive and therefore it has to be taken into account. It is important to note that the wide spectrum of applications within this field of research also reflects the need for a highly complex theoretical framework based on stochastic mono or multi-objective optimisation. Until now, none of the existing works have dealt with the cost of the deployments. Moreover, the majority only treat one type of constraint at a time. Current works mostly rely on operational research algorithms which commonly model the constraints in both discrete space and time.In the first part, we present an algorithm which computes the most efficient spatio-temporal deployment of sensors, but without taking its cost into account. This optimisation method is based on an application of the generalised splitting method.In the second part, we first use a linear combination of the two criteria. For our second approach, we use the evolutionary multiobjective optimisation framework to adapt the generalised splitting method to multiobjective optimisation. Finally, we compare our results with the results of the NSGA-II algorithm
APA, Harvard, Vancouver, ISO, and other styles
17

Partimbene, Vincent. "Calcul haute performance pour la simulation d'interactions fluide-structure." Phd thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20524/1/PARTIMBENE_Vincent.pdf.

Full text
Abstract:
This thesis addresses the solution of fluid-structure interaction problems by an algorithm consisting of a coupling between two solvers: one for the fluid and one for the structure. To ensure consistency between the fluid and structure meshes, a finite-volume discretization of each domain is also considered. Because of the difficulties of decomposing the domain into subdomains, we consider for each environment a parallel multi-splitting algorithm, which corresponds to a unified presentation of subdomain methods with or without overlap. This method combines several contracting fixed-point mappings, and we show that, under appropriate assumptions, each fixed-point mapping is contractive in finite-dimensional spaces normed by Hilbertian and non-Hilbertian norms. Moreover, we show that such a study is valid for synchronous and, more generally, asynchronous parallel solution of the large linear systems arising from the discretization of fluid-structure interaction problems, and can be extended to the case where the displacement of the structure is subject to constraints. Furthermore, the convergence analysis of these asynchronous parallel multi-splitting methods can also be carried out by partial-ordering techniques related to the discrete maximum principle, both in the linear setting and in the one obtained when the displacements of the structure are subject to constraints. We perform parallel simulations for various fluid-structure test cases on different clusters, considering blocking and non-blocking communications. In the latter case we had to resolve an implementation difficulty, since an unrecoverable error occurred during execution; this difficulty was overcome by introducing a method ensuring the termination of all non-blocking communications before the mesh update. The performance of the parallel simulations is presented and analysed. Finally, we apply the methodology described above to various industrial fluid-structure interaction settings on unstructured meshes, which constitutes an additional difficulty.
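The multi-splitting fixed-point structure mentioned above can be sketched on a small linear system: two splittings A = M_l - N_l are iterated independently (as if on two processors) and recombined through weighting matrices summing to the identity. This is a generic illustration, not the asynchronous fluid-structure implementation of the thesis.

    import numpy as np

    n = 20
    A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    b = np.ones(n)

    D = np.diag(np.diag(A))
    M1 = D + np.tril(A, -1)                    # forward Gauss-Seidel splitting
    M2 = D + np.triu(A, 1)                     # backward Gauss-Seidel splitting
    E1 = np.diag((np.arange(n) < n // 2).astype(float))
    E2 = np.eye(n) - E1                        # weights: each splitting "owns" half the unknowns

    x = np.zeros(n)
    for _ in range(200):
        x1 = np.linalg.solve(M1, (M1 - A) @ x + b)   # local iterate of splitting 1
        x2 = np.linalg.solve(M2, (M2 - A) @ x + b)   # local iterate of splitting 2
        x = E1 @ x1 + E2 @ x2                        # weighted combination of the local iterates
    print(float(np.linalg.norm(A @ x - b)))          # residual should be near machine precision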
APA, Harvard, Vancouver, ISO, and other styles
18

Handler, Matthew Dane. "Development of stable operator splitting numerical algorithms for phase-field modeling and surface diffusion applications." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35068.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Materials Science and Engineering, 2006.
Includes bibliographical references (leaves 35-37).
Implicit, explicit and spectral algorithms were used to create Allen-Cahn and Cahn-Hilliard phase field models. Individual terms of the conservation equations were approached by different methods using operator splitting techniques found in previous literature. In addition, dewetting of gold films due to surface diffusion was modeled to demonstrate the extensibility and efficiency of the spectral methods derived. The simulations developed are relevant to many real systems and are relatively light in computational load because they take large time steps to drive the model into equilibrium. Results were analyzed in terms of their relevance to real-world applications, and further work in this field is outlined.
by Matthew Dane Handler.
S.B.
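One common operator-splitting strategy for such phase-field models treats the stiff linear part implicitly in Fourier space and the nonlinearity explicitly; the 1-D semi-implicit spectral Allen-Cahn sketch below illustrates this, with made-up parameters, and is not the thesis code.

    import numpy as np

    # Allen-Cahn: u_t = eps^2 * u_xx - (u^3 - u), periodic domain.
    # Diffusion is handled implicitly in Fourier space; the reaction term is explicit.

    nx, eps, dt = 256, 0.05, 0.1
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    k = np.fft.fftfreq(nx, d=2.0 * np.pi / nx) * 2.0 * np.pi      # angular wavenumbers
    u = 0.1 * np.cos(3 * x) + 0.05 * np.random.default_rng(0).standard_normal(nx)

    for _ in range(500):
        nonlinear = u**3 - u                                      # explicit reaction stage
        u_hat = (np.fft.fft(u) - dt * np.fft.fft(nonlinear)) / (1.0 + dt * eps**2 * k**2)
        u = np.real(np.fft.ifft(u_hat))                           # implicit diffusion stage in Fourier space

    print(round(float(u.min()), 3), round(float(u.max()), 3))     # phases settle near -1 and +1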
APA, Harvard, Vancouver, ISO, and other styles
19

Wakeni, Mebratu Fenta. "Stable algorithms for generalized thermoelasticity based on operator-splitting and time-discontinuous Galerkin finite element methods." Doctoral thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/23017.

Full text
Abstract:
This thesis deals with the theoretical and numerical analysis of coupled problems in thermoelasticity. Of particular interest are models that support propagation of thermal energy as waves, rather than the usual mechanism by diffusion. The thesis consists of two parts. The first deals with the non-classical, linear thermoelastic model first proposed and developed by Green and Naghdi in the years between 1991 and 1995, as a possible alternative that potentially removes the shortcomings of the standard Fourier based model. The non-classical theory incorporates three models: the classical model based on Fourier's law of heat conduction, resulting in a hyperbolic-parabolic coupled system; a non-classical theory of a fully-hyperbolic extension; and a combination of the two. An efficient staggered time-stepping algorithm is proposed based on operator-splitting and the time-discontinuous Galerkin finite element method for the non-classical, linear thermoelastic model. The coupled problem is split into two contractive sub-problems, namely, the mechanical phase and thermal phase, on the basis of an entropy controlling mechanism. In the mechanical phase temperature is allowed to vary so as to ensure the entropy remains constant, while the thermal phase is a purely non-classical heat conduction problem in a fixed configuration. Each sub-problem is discretized using the time-discontinuous Galerkin finite element method, resulting in stable time-stepping sub-algorithms. A global stable algorithm is obtained by combining the algorithms for the sub-problems by way of a product method. A number of numerical examples are presented to demonstrate the performance and capability of the method. The second part of this work concerns the formulation of a thermodynamically consistent generalized model of nonlinear thermoelasticity, whose linearization about a natural reference configuration includes the theory of Green and Naghdi. The generalized model is based on the fundamental laws of continuum mechanics and thermodynamics, and is realized through two basic assumptions: The first is the inclusion into the state space of a vector field, which is known as the thermal displacement, and is a time primitive of the absolute temperature. The second is that the heat flux vector is additively split into two parts, which are referred to as the energetic and dissipative components of the heat flux vector. The application of the Coleman-Noll procedure leads to constitutive relations for the stress, entropy, and energetic component of the heat flux as derivatives of the free energy function. Furthermore, a Clausius-Duhem-type inequality is assumed on a constitutive relation for the dissipative component of the heat flux vector to ensure thermodynamic consistency. A Lyapunov function is obtained for the generalized problem with finite strains; this serves as the basis for the stability analysis of the numerical methods designed for generalized thermoelasticity at finite strains. Due to the lack of convexity of the elastic potential in the finite strain case, a direct extension of the time-discontinuous formulation from the linear to the finite strain case does not guarantee stability. For this reason, various numerical formulations in both monolithic and staggered approaches with fully or partially time-discontinuous assumptions are presented in the framework of space-time methods. The stability of each of the numerical algorithms is thoroughly analysed.
The capability of the newly formulated generalized model of thermoelasticity in predicting various expected features of non-Fourier response is illustrated by a number of numerical examples. These also serve to demonstrate the performance of the space-time Galerkin method in capturing fine solution features.
APA, Harvard, Vancouver, ISO, and other styles
20

Cao, Yufei [Verfasser]. "Robust Numerical Algorithms Based on Corrected Operator Splitting for Two-Phase Flow in Porous Media / Yufei Cao." Aachen : Shaker, 2010. http://d-nb.info/110118406X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Frochte, Jörg. "Ein Splitting-Algorithmus höherer Ordnung für die Navier-Stokes-Gleichung auf der Basis der Finite-Element-Methode." kostenfrei, 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=977955710.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Lestang, Thibault. "Numerical simulation and rare events algorithms for the study of extreme fluctuations of the drag force acting on an obstacle immersed in a turbulent flow." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN049/document.

Full text
Abstract:
Cette thèse porte sur l'étude numérique des fluctuations extrêmes de la force de traînée exercée par un écoulement turbulent sur un corps immergé.Ce type d'évènement, très rare, est difficile à caractériser par le biais d'un échantillonnage direct, puisqu'il est alors nécessaire de simuler l'écoulement sur des durées extrêmement longues. Cette thèse propose une approche différente, basée sur l'application d'algorithmes d'échantillonnage d'événements rares. L'objectif de ces algorithmes, issus de la physique statistique, est de modifier la statistique d'échantillonnage des trajectoires d'un système dynamique, de manière à favoriser l'occurrence d'événements rares. Si ces techniques ont été appliquées avec succès dans le cas de dynamiques relativement simples, l'intérêt de ces algorithmes n'est à ce jour pas clair pour des dynamiques déterministes extrêmement complexes, comme c'est le cas pour les écoulement turbulents.Cette thèse présente tout d'abord une étude de la dynamique et de la statistique associée aux fluctuations extrêmes de la force de traînée sur un obstacle carré fixe immergé dans un écoulement turbulent à deux dimensions. Ce cadre simplifié permet de simuler la dynamique sur des durées très longues, permettant d'échantillonner un grand nombre de fluctuations dont l'amplitude est assez élevée pour être qualifiée d'extrême.Dans un second temps, l'application de deux algorithmes d’échantillonnage est présentée et discutée.Dans un premier cas, il est illustré qu'une réduction significative du temps de calcul d'extrêmes peut être obtenue. En outre, des difficultés liées à la dynamique de l'écoulement sont mises en lumière, ouvrant la voie au développement de nouveaux algorithmes spécifiques aux écoulements turbulents
This thesis discusses the numerical simulation of extreme fluctuations of the drag force acting on an object immersed in a turbulent medium. Because such fluctuations are rare events, they are particularly difficult to investigate by means of direct sampling. Indeed, such an approach requires simulating the dynamics over extremely long durations. In this work an alternative route is introduced, based on rare events algorithms. The underlying idea of such algorithms is to modify the sampling statistics so as to favour rare trajectories of the dynamical system of interest. These techniques recently led to impressive results for relatively simple dynamics. However, it is not clear yet if such algorithms are useful for complex deterministic dynamics, such as turbulent flows. This thesis focuses on the study of both the dynamics and statistics of extreme fluctuations of the drag experienced by a square cylinder mounted in a two-dimensional channel flow. This simple framework allows for very long simulations of the dynamics, thus leading to the sampling of a large number of events with an amplitude large enough that they can be considered extreme. Subsequently, the application of two different rare events algorithms is presented and discussed. In the first case, a drastic reduction of the computational cost required to sample configurations resulting in extreme fluctuations is achieved. Furthermore, several difficulties related to the flow dynamics are highlighted, paving the way to novel approaches specifically designed for turbulent flows
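The family of rare-event algorithms referred to here includes adaptive multilevel splitting (AMS); the toy sketch below estimates the small probability that an Ornstein-Uhlenbeck path exceeds a high level before a fixed horizon by repeatedly killing the worst trajectory and rebranching it from a survivor. All parameters are illustrative, and the turbulent-drag application of the thesis is far more involved.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T, level = 0.01, 5.0, 3.0
    n_steps = int(T / dt)

    def trajectory(x0=0.0):
        # Euler-Maruyama path of an Ornstein-Uhlenbeck process dx = -x dt + sqrt(2) dW
        x = np.empty(n_steps)
        x[0] = x0
        for i in range(1, n_steps):
            x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        return x

    n_clones, p = 100, 1.0
    clones = [trajectory() for _ in range(n_clones)]
    scores = np.array([c.max() for c in clones])            # score = maximum reached by the path

    while scores.min() < level:
        worst = int(np.argmin(scores))
        p *= (n_clones - 1) / n_clones                      # probability update per killed clone
        donor_idx = int(rng.integers(n_clones - 1))
        donor_idx += donor_idx >= worst                     # branch from a surviving clone
        donor = clones[donor_idx]
        branch_at = int(np.argmax(donor >= scores[worst]))  # first time the donor beats the killed score
        new = donor.copy()
        new[branch_at:] = trajectory(donor[branch_at])[: n_steps - branch_at]
        clones[worst], scores[worst] = new, new.max()

    print("estimated probability that the path exceeds", level, "before t =", T, ":", round(p, 5))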
APA, Harvard, Vancouver, ISO, and other styles
23

Bӑrbos, Andrei-Cristian. "Efficient high-dimension gaussian sampling based on matrix splitting : application to bayesian Inversion." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0002/document.

Full text
Abstract:
La thèse traite du problème de l'échantillonnage gaussien en grande dimension. Un tel problème se pose par exemple dans les problèmes inverses bayésiens en imagerie où le nombre de variables atteint facilement un ordre de grandeur de 10⁶ à 10⁹. La complexité du problème d'échantillonnage est intrinsèquement liée à la structure de la matrice de covariance. Pour résoudre ce problème différentes solutions ont déjà été proposées, parmi lesquelles nous soulignons l'algorithme de Hogwild qui exécute des mises à jour de Gibbs locales en parallèle avec une synchronisation globale périodique. Notre algorithme utilise la connexion entre une classe d'échantillonneurs itératifs et les solveurs itératifs pour les systèmes linéaires. Il ne cible pas la distribution gaussienne requise, mais cible une distribution approximative. Cependant, nous sommes en mesure de contrôler la disparité entre la distribution approximative et la distribution requise au moyen d'un seul paramètre de réglage. Nous comparons d'abord notre algorithme avec les algorithmes de Gibbs et Hogwild sur des problèmes de taille modérée pour différentes distributions cibles. Notre algorithme parvient à surpasser les algorithmes de Gibbs et Hogwild dans la plupart des cas. Notons que les performances de notre algorithme dépendent d'un paramètre de réglage. Nous comparons ensuite notre algorithme avec l'algorithme de Hogwild sur une application réelle en grande dimension, à savoir la déconvolution-interpolation d'image. L'algorithme proposé permet d'obtenir de bons résultats, alors que l'algorithme de Hogwild ne converge pas. Notons que pour des petites valeurs du paramètre de réglage, notre algorithme ne converge pas non plus. Néanmoins, une valeur convenablement choisie pour ce paramètre permet à notre échantillonneur de converger et d'obtenir de bons résultats
The thesis deals with the problem of high-dimensional Gaussian sampling. Such a problem arises for example in Bayesian inverse problems in imaging, where the number of variables easily reaches an order of 10⁶ to 10⁹. The complexity of the sampling problem is inherently linked to the structure of the covariance matrix. Different solutions to tackle this problem have already been proposed, among which we emphasize the Hogwild algorithm, which runs local Gibbs sampling updates in parallel with periodic global synchronisation. Our algorithm makes use of the connection between a class of iterative samplers and iterative solvers for systems of linear equations. It does not target the required Gaussian distribution; instead it targets an approximate distribution. However, we are able to control how far off the approximate distribution is with respect to the required one by means of a single tuning parameter. We first compare the proposed sampling algorithm with the Gibbs and Hogwild algorithms on moderately sized problems for different target distributions. Our algorithm manages to outperform the Gibbs and Hogwild algorithms in most of the cases. Let us note that the performance of our algorithm depends on the tuning parameter. We then compare the proposed algorithm with the Hogwild algorithm on a large-scale real application, namely image deconvolution-interpolation. The proposed algorithm enables us to obtain good results, whereas the Hogwild algorithm fails to converge. Let us note that for small values of the tuning parameter our algorithm fails to converge as well. Notwithstanding, a suitably chosen value for this parameter enables our proposed sampler to converge and to deliver good results
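The connection between iterative solvers and Gaussian samplers mentioned above can be sketched as follows: for a precision matrix split as A = M - N, the iteration x_{k+1} = M^{-1}(N x_k + c_k) with c_k ~ N(0, M^T + N) converges in distribution to N(0, A^{-1}) when the splitting is convergent; with the Gauss-Seidel splitting this is the componentwise Gibbs sampler. The sketch below illustrates that classical construction, not the approximate sampler proposed in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    B = rng.standard_normal((n, n))
    A = B @ B.T + n * np.eye(n)                     # SPD precision matrix

    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    M, N = D + L, -L.T                              # Gauss-Seidel splitting: M^T + N = D

    x = np.zeros(n)
    samples = []
    for k in range(20000):
        c = rng.standard_normal(n) * np.sqrt(np.diag(D))   # noise with covariance M^T + N = D
        x = np.linalg.solve(M, N @ x + c)
        if k > 1000:                                        # discard burn-in
            samples.append(x.copy())

    emp_cov = np.cov(np.array(samples).T)
    # deviation from the target covariance A^{-1}; small, up to Monte Carlo error
    print(round(float(np.abs(emp_cov - np.linalg.inv(A)).max()), 3))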
APA, Harvard, Vancouver, ISO, and other styles
24

Garrigos, Guillaume. "Descent dynamical systems and algorithms for tame optimization, and multi-objective problems." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS191/document.

Full text
Abstract:
Dans une première partie, nous nous intéressons aux systèmes dynamiques gradients gouvernés par des fonctions non lisses, mais aussi non convexes, satisfaisant l'inégalité de Kurdyka-Lojasiewicz. Après avoir obtenu quelques résultats préliminaires pour la dynamique de la plus grande pente continue, nous étudions un algorithme de descente général. Nous prouvons, sous une hypothèse de compacité, que tout suite générée par ce schéma général converge vers un point critique de la fonction. Nous obtenons aussi de nouveaux résultats sur la vitesse de convergence, tant pour les valeurs que pour les itérés. Ce schéma général couvre en particulier des versions parallélisées de la méthode forward-backward, autorisant une métrique variable et des erreurs relatives. Cela nous permet par exemple de proposer une version non convexe non lisse de l'algorithme Levenberg-Marquardt. Enfin, nous proposons quelques applications de ces algorithmes aux problèmes de faisabilité, et aux problèmes inverses. Dans une seconde partie, cette thèse développe une dynamique de descente associée à des problèmes d'optimisation vectoriels sous contrainte. Pour cela, nous adaptons la dynamique de la plus grande pente usuelle aux fonctions à valeurs dans un espace ordonné par un cône convexe fermé solide. Cette dynamique peut être vue comme l'analogue continu de nombreux algorithmes développés ces dernières années. Nous avons un intérêt particulier pour les problèmes de décision multi-objectifs, pour lesquels cette dynamique de descente fait décroitre toutes les fonctions objectif au cours du temps. Nous prouvons l'existence de trajectoires pour cette dynamique continue, ainsi que leur convergence vers des points faiblement efficients. Finalement, nous explorons une nouvelle dynamique inertielle pour les problèmes multi-objectif, avec l'ambition de développer des méthodes rapides convergeant vers des équilibres de Pareto
In a first part, we focus on gradient dynamical systems governed by non-smooth but also non-convex functions satisfying the so-called Kurdyka-Lojasiewicz inequality. After obtaining preliminary results for a continuous steepest descent dynamic, we study a general descent algorithm. We prove, under a compactness assumption, that any sequence generated by this general scheme converges to a critical point of the function. We also obtain new convergence rates both for the values and the iterates. The analysis covers alternating versions of the forward-backward method, with variable metric and relative errors. As an example, a non-smooth and non-convex version of the Levenberg-Marquardt algorithm is detailed. Applications to non-convex feasibility problems and to sparse inverse problems are discussed. In a second part, the thesis explores descent dynamics associated with constrained vector optimization problems. For this, we adapt the classic steepest descent dynamic to functions with values in a vector space ordered by a solid closed convex cone. It can be seen as the continuous analogue of various descent algorithms developed in the last years. We have a particular interest in multi-objective decision problems, for which the dynamic makes all the objective functions decrease along time. We prove the existence of trajectories for this continuous dynamic, and show their convergence to weakly efficient points. Then, we explore an inertial dynamic for multi-objective problems, with the aim of providing fast methods converging to Pareto points
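For the multi-objective part, a toy sketch of a steepest common-descent step may help: the direction is the negative of the minimum-norm element of the convex hull of the gradients, so all objectives are nonincreasing along the iteration. This mirrors the continuous dynamic studied in the thesis only loosely; the two quadratic objectives below are made up.

    import numpy as np

    def min_norm_combination(g1, g2):
        # minimum-norm element of the segment [g1, g2] (convex hull of two gradients)
        diff = g1 - g2
        w = np.clip(-(g2 @ diff) / (diff @ diff + 1e-12), 0.0, 1.0)
        return w * g1 + (1.0 - w) * g2

    f1 = lambda x: np.sum((x - np.array([1.0, 0.0])) ** 2)
    f2 = lambda x: np.sum((x - np.array([0.0, 1.0])) ** 2)
    g1 = lambda x: 2.0 * (x - np.array([1.0, 0.0]))
    g2 = lambda x: 2.0 * (x - np.array([0.0, 1.0]))

    x, step = np.array([2.0, 2.0]), 0.1
    for _ in range(100):
        d = -min_norm_combination(g1(x), g2(x))
        if np.linalg.norm(d) < 1e-8:          # Pareto-critical point reached
            break
        x = x + step * d                      # common descent step: both f1 and f2 decrease
    print(np.round(x, 3), round(f1(x), 3), round(f2(x), 3))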
APA, Harvard, Vancouver, ISO, and other styles
25

Guarnieri, Gabriele. "High dynamic range images: processing, display and perceptual quality assessment." Doctoral thesis, Università degli studi di Trieste, 2009. http://hdl.handle.net/10077/3121.

Full text
Abstract:
The intensity of natural light can span over 10 orders of magnitude from starlight to direct sunlight. Even in a single scene, the luminance of the bright areas can be thousands or millions of times greater than the luminance in the dark areas; the ratio between the maximum and the minimum luminance values is commonly known as dynamic range or contrast. The human visual system is able to operate in an extremely wide range of luminance conditions without saturation and at the same time it can perceive fine details which involve small luminance differences. Our eyes achieve this ability by modulating their response as a function of the local mean luminance with a process known as local adaptation. In particular, the visual sensation is not linked to the absolute luminance, but rather to its spatial and temporal variation. One consequence of the local adaptation capability of the eye is that the objects in a scene maintain their appearance even if the light source illuminating the scene changes significantly. On the other hand, the technologies used for the acquisition and reproduction of digital images are able to handle correctly a significantly smaller luminance range of 2 to 3 orders of magnitude at most. Therefore, a high dynamic range (HDR) image poses several challenges and requires the use of appropriate techniques. These elementary observations define the context in which the entire research work described in this Thesis has been performed. As indicated below, different fields have been considered; they range from the acquisition of HDR images to their display, from visual quality evaluation to medical applications, and include some developments on a recently proposed class of display equipment. An HDR image can be captured by taking multiple photographs with different exposure times or by using high dynamic range sensors; moreover, synthetic HDR images can be generated with computer graphics by means of physically-based algorithms which often involve advanced lighting simulations. An HDR image, although acquired correctly, can not be displayed on a conventional monitor. The white level of most devices is limited to a few hundred cd/m² by technological constraints, primarily linked to the power consumption and heat dissipation; the black level also has a non negligible luminance, in particular for devices based on the liquid crystal technology. However, thanks to the aforementioned properties of the human visual system, an exact reproduction of the luminance in the original scene is not strictly necessary in order to produce a similar sensation in the observer. For this purpose, dynamic range reduction algorithms have been developed which attenuate the large luminance variations in an image while preserving as far as possible the fine details. The most simple dynamic range reduction algorithms map each pixel individually with the same nonlinear function commonly known as tone mapping curve. One operator we propose, based on a modified logarithmic function, has a low computational cost and contains one single user-adjustable parameter. However, the methods belonging to this category can reduce the visibility of the details in some portions of the image. More advanced methods also take into account the pixel neighborhood. This approach can achieve a better preservation of the details, but the loss of one-to-one mapping from input luminances to display values can lead to the formation of gradient reversal effects, which typically appear as halos around the object boundaries. 
Different solutions to this problem have been attempted. One method we introduce is able to avoid the formation of halos and intrinsically prevents any clipping of the output display values. The method is formulated as a constrained optimization problem, which is solved efficiently by means of appropriate numerical methods. In specific applications, such as the medical one, the use of dynamic range reduction algorithms is discouraged because any artifacts introduced by the processing can lead to an incorrect diagnosis. In particular, a one-to-one mapping from the physical data (for instance, a tissue density in radiographic techniques) to the display value is often an essential requirement. For this purpose, high dynamic range displays, capable of reproducing images with a wide luminance range and possibly a higher bit depth, are under active development. Dual layer LCD displays, for instance, use two liquid crystal panels stacked one on top of the other over an enhanced backlight unit in order to achieve a dynamic range of 4 ÷ 5 orders of magnitude. The grayscale reproduction accuracy is also increased, although a “bit depth” can not be defined unambiguously because the luminance levels obtained by the combination of the two panels are partially overlapped and unevenly spaced. A dual layer LCD display, however, requires the use of complex splitting algorithms in order to generate the two images which drive the two liquid crystal panels. A splitting algorithm should compensate multiple sources of error, including the parallax introduced by the viewing angle, the gray-level clipping introduced by the limited dynamic range of the panels, the visibility of the reconstruction error, and glare effects introduced by an unwanted light scattering between the two panels. For these reasons, complex constrained optimization techniques are necessary. We propose an objective function which incorporates all the desired constraints and requirements and can be minimized efficiently by means of appropriate techniques based on multigrid methods. The quality assessment of high dynamic range images requires the development of appropriate techniques. By their own nature, dynamic range reduction algorithms change the luminance values of an image significantly and make most image fidelity metrics inapplicable. Some particular aspects of the methods can be quantified by means of appropriate operators; for instance, we introduce an expression which describes the detail attenuation introduced by a tone mapping curve. In general, a subjective quality assessment is preferably performed by means of appropriate psychophysical experiments. We conducted a set of experiments, targeted specifically at measuring the level of agreement between different users when adjusting the parameter of the modified logarithmic mapping method we propose. The experimental results show a strong correlation between the user-adjusted parameter and the image statistics, and suggest a simple technique for the automatic adjustment of this parameter. On the other hand, the quality assessment in the medical field is preferably performed by means of objective methods. In particular, task-based quality measures evaluate by means of appropriate observer studies the clinical validity of the image used to perform a specific diagnostic task. 
We conducted a set of observer studies following this approach, targeted specifically at measuring the clinical benefit introduced by a high dynamic range display based on the dual layer LCD technology over a conventional display with a low dynamic range and 8-bit quantization. Observer studies are often time-consuming and difficult to organize; in order to increase the number of tests, the human observers can be partially replaced by appropriate software applications, known as model observers or computational observers, which simulate the diagnostic task by means of statistical classification techniques. This thesis is structured as follows. Chapter 1 contains a brief background of concepts related to the physiology of human vision and to the electronic reproduction of images. The description we make is by no means complete and is only intended to introduce some concepts which will be extensively used in the following. Chapter 2 describes the technique of high dynamic range image acquisition by means of multiple exposures. In Chapter 3 we introduce the dynamic range reduction algorithms, providing an overview of the state of the art and proposing some improvements and novel techniques. In Chapter 4 we address the topic of quality assessment in dynamic range reduction algorithms; in particular, we introduce an operator which describes the detail attenuation introduced by tone mapping curves and describe a set of psychophysical experiments we conducted for the adjustment of the parameter in the modified logarithmic mapping method we propose. In Chapter 5 we move to the topic of medical images and describe the techniques used to map the density data of radiographic images to display luminances. We point out some limitations of the current technical recommendation and propose an improvement. In Chapter 6 we describe in detail the dual layer LCD prototype and propose different splitting algorithms for the generation of the two images which drive the two liquid crystal panels. In Chapter 7 we propose one possible technique for the estimation of the equivalent bit depth of a dual layer LCD display, based on a statistical analysis of the quantization noise. Finally, in Chapter 8 we address the topic of objective quality assessment in medical images and describe a set of observer studies we conducted in order to quantify the clinical benefit introduced by a high dynamic range display. No general conclusions are offered; the breadth of the subjects has suggested offering more focused comments at the end of the individual chapters.
XXI Ciclo
1982
APA, Harvard, Vancouver, ISO, and other styles
26

Yahiaoui, Ala-Eddine. "Selective vehicle routing problem : cluster and synchronization constraints." Thesis, Compiègne, 2018. http://www.theses.fr/2018COMP2449/document.

Full text
Abstract:
Le problème de tournées de véhicules (Vehicle Routing Problem - VRP) est un problème d'optimisation combinatoire utilisé généralement pour modéliser et résoudre des différents problèmes rencontrés dans les systèmes logistiques et de transport. Dans cette thèse, nous nous sommes intéressés à l'étude et la résolution d'une classe de problèmes du VRP appelée les problèmes de courses d'orientation (Team Orienteering Problem - TOP). Dans cette catégorie de problèmes, il est a priori impossible de visiter tous les clients en raison de ressources limitées. On associe plutôt un profit à chaque client qui représente sa valeur. Ce profit est collecté lorsque le client est visité par l'un des véhicules disponibles. L'objectif est donc de sélectionner un sous ensemble de clients à servir tout en maximisant le profit total collecté. Dans un premier temps, nous avons introduit une nouvelle généralisation pour le TOP que nous avons appelé le Clustered TOP ou CluTOP. Dans cette variante, les clients sont regroupés en sous-ensembles appelés clusters auxquels nous associons des profits. Pour résoudre cette variante, nous avons proposé un schéma exact basé sur l'approche des plans sécants avec des inégalités valides supplémentaires et des pré-traitements. Nous avons également conçu une méthode heuristique basée sur l'approche order first-cluster second. Cette heuristique hybride combine une heuristique de type Adaptive Large Neighborhood Search qui explore l'espace des solutions et une procédure de découpage qui explore l'espace de recherche des tours géants. De plus, la procédure de découpage est renforcée par une recherche locale afin de mieux explorer l'espace de recherche. Le deuxième problème traité dans ce travail s'appelle le Synchronized Team Orienteering Problem with Time Windows (STOPTW). Cette variante avait été initialement proposée afin de modéliser des scénarios liés à la protection des infrastructures stratégiques menacées par l'avancée des feux de forêts. En plus des contraintes de fenêtres de temps et des visites synchronisées, cette variante considère le cas d'une flotte de véhicules hétérogène. Pour résoudre ce problème, nous avons proposé une méthode heuristique basée sur l'approche GRASP×ILS qui est parvenue à dominer la seule approche existante dans la littérature. La dernière variante du TOP abordée dans cette thèse s'appelle le Set Orienteering Problem (SOP). Les clients dans cette variante sont regroupés en sous-ensembles appelés clusters. Un profit est associé à chaque groupe qui n'est obtenu que si au moins un client est desservi par le véhicule disponible. Nous avons proposé une méthode de coupes avec deux procédures de séparation pour séparer les contraintes d'élimination des sous-tours. Nous avons également proposé un algorithme Mémétique avec une procédure de découpage optimale calculée à l'aide de la programmation dynamique
The Vehicle Routing Problem (VRP) is a family of combinatorial optimization problems generally used to model and solve different issues related to transportation systems and logistics. In this thesis, we focus on a class of VRP variants called the Team Orienteering Problem (TOP). In this family of problems, it is a priori impossible to visit all the customers because of the travel time limitation on the vehicles. Instead, a profit representing its value is associated with each customer and is collected once the customer is visited by one of the available vehicles. The objective is then to maximize the total collected profit subject to the maximum travel time. First, we introduce a new generalization of the TOP that we call the Clustered TOP (CluTOP), in which the customers are grouped into subsets called clusters to which profits are associated. To solve this variant, we propose an exact scheme based on the cutting plane approach with additional valid inequalities and pre-processing techniques. We also design a heuristic method for the CluTOP based on the order-first cluster-second approach; this hybrid heuristic combines an Adaptive Large Neighborhood Search (ALNS) that explores the solution space with a splitting procedure that explores the giant-tour search space, and the splitting procedure is further enhanced by a local search to improve its coverage of the search space. The second problem treated in this work is the Synchronized Team Orienteering Problem with Time Windows (STOPTW). This variant was initially proposed to model scenarios related to asset protection during escaped wildfires; it considers a heterogeneous fleet of vehicles together with time windows and synchronized visits. To solve this problem, we propose a heuristic method based on the GRASP×ILS approach that outperforms the only existing approach in the literature. The last variant of the TOP tackled in this thesis is the Set Orienteering Problem (SOP), in which the customers are grouped into subsets called clusters and each cluster is associated with a profit that is gained only if at least one of its customers is served by the single available vehicle. We propose a branch-and-cut method with two separation procedures for the subtour elimination constraints, as well as a memetic algorithm with an optimal splitting procedure based on dynamic programming.
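For readers unfamiliar with the splitting of giant tours mentioned above, here is a minimal Prins-style sketch that splits a fixed customer sequence into feasible trips via a shortest-path computation on an auxiliary graph. The data layout (`dist` as a dictionary, a single `max_len` trip limit) is an assumption, and the profit-collecting adaptations used in the thesis are not reproduced.

```python
import math

def split_giant_tour(tour, dist, depot, max_len):
    """Classical giant-tour splitting sketch: best[i] is the minimum cost of
    serving the first i customers of the sequence with depot-based trips whose
    length does not exceed max_len (assumed constraint)."""
    n = len(tour)
    best = [math.inf] * (n + 1)
    pred = [-1] * (n + 1)
    best[0] = 0.0
    for i in range(n):
        trip = 0.0
        for j in range(i, n):
            if j == i:
                trip = dist[depot, tour[j]] + dist[tour[j], depot]
            else:
                # Extend the trip by customer tour[j] and return to the depot.
                trip += (dist[tour[j - 1], tour[j]] + dist[tour[j], depot]
                         - dist[tour[j - 1], depot])
            if trip > max_len:
                break
            if best[i] + trip < best[j + 1]:
                best[j + 1], pred[j + 1] = best[i] + trip, i
    # Recover the trips from the predecessor labels.
    trips, j = [], n
    while j > 0:
        i = pred[j]
        trips.append(tour[i:j])
        j = i
    return best[n], trips[::-1]

# Tiny example: depot at 0 and customers at coordinates 1, 2, 3 on a line.
pts = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
dist = {(a, b): abs(pts[a] - pts[b]) for a in pts for b in pts}
print(split_giant_tour([1, 2, 3], dist, depot=0, max_len=8.0))
```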
APA, Harvard, Vancouver, ISO, and other styles
27

Loulidi, Sanae. "Modélisation stochastique en finance, application à la construction d’un modèle à changement de régime avec des sauts." Thesis, Bordeaux 1, 2008. http://www.theses.fr/2008BOR13675/document.

Full text
Abstract:
The Black-Scholes model remains the reference model on derivatives markets. Its parsimony and tractability are certainly attractive, but one must not lose sight of the restrictive, even simplistic, assumptions on which it rests and which limit its ability to reproduce market dynamics. In order to reflect these dynamics a little better, we introduce a regime-switching option pricing model with jumps. Under this model, the market completeness assumption no longer holds: the sources of uncertainty outnumber the instruments available for hedging, so one no longer speaks of perfect replication/hedging but rather of optimal replication in a sense to be defined. In this thesis, we assume that the market can be described by several "regimes" (or "modes") reflecting the state of the economy, the general behavior of investors and their trends. For each of these regimes, the underlying asset is characterized by a given level of volatility and return, with, in addition, a priori discontinuities in the price of the underlying each time a transition from one regime to another occurs. The thesis comprises three parts: 1. Modeling of the problem and application of stochastic control theory: using the dynamic programming principle and taking the different market regimes into account, we obtain a system of M (the number of regimes) coupled Hamilton-Jacobi-Bellman (HJB) equations. 2. Numerical solution of the HJB equation for option valuation by generalized finite differences. 3. Estimation of the model parameters by a recursive filter, which produces a recursive estimate of an unknown state from noisy observations assumed to be continuous, in the case where the unknown state is modeled by a discrete-time Markov chain with a finite state space.
APA, Harvard, Vancouver, ISO, and other styles
28

Nassif, Roula. "Estimation distribuée adaptative sur les réseaux multitâches." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4118/document.

Full text
Abstract:
L’apprentissage adaptatif distribué sur les réseaux permet à un ensemble d’agents de résoudre des problèmes d’estimation de paramètres en ligne en se basant sur des calculs locaux et sur des échanges locaux avec les voisins immédiats. La littérature sur l’estimation distribuée considère essentiellement les problèmes à simple tâche, où les agents disposant de fonctions objectives séparables doivent converger vers un vecteur de paramètres commun. Cependant, dans de nombreuses applications nécessitant des modèles plus complexes et des algorithmes plus flexibles, les agents ont besoin d’estimer et de suivre plusieurs vecteurs de paramètres simultanément. Nous appelons ce type de réseau, où les agents doivent estimer plusieurs vecteurs de paramètres, réseau multitâche. Bien que les agents puissent avoir différentes tâches à résoudre, ils peuvent capitaliser sur le transfert inductif entre eux afin d’améliorer les performances de leurs estimés. Le but de cette thèse est de proposer et d’étudier de nouveaux algorithmes d’estimation distribuée sur les réseaux multitâches. Dans un premier temps, nous présentons l’algorithme diffusion LMS qui est une stratégie efficace pour résoudre les problèmes d’estimation à simple-tâche et nous étudions théoriquement ses performances lorsqu’il est mis en oeuvre dans un environnement multitâche et que les communications entre les noeuds sont bruitées. Ensuite, nous présentons une stratégie de clustering non-supervisé permettant de regrouper les noeuds réalisant une même tâche en clusters, et de restreindre les échanges d’information aux seuls noeuds d’un même cluster
Distributed adaptive learning allows a collection of interconnected agents to perform parameter estimation tasks from streaming data by relying solely on local computations and interactions with immediate neighbors. Most prior literature on distributed inference is concerned with single-task problems, where agents with separable objective functions need to agree on a common parameter vector. However, many network applications require more complex models and more flexible algorithms than single-task implementations, since their agents need to estimate and track multiple objectives simultaneously. Networks of this kind, where agents need to infer multiple parameter vectors, are referred to as multitask networks. Although the agents may have distinct though related tasks to perform, they may still be able to capitalize on inductive transfer between them to improve their estimation accuracy. This thesis is intended to bring forth advances on distributed inference over multitask networks. First, we present the well-known diffusion LMS strategies for single-task estimation problems and assess their performance when they are run in multitask environments in the presence of noisy communication links. An improved strategy allowing the agents to adapt their cooperation to neighbors sharing the same objective is presented in order to attain improved learning and estimation over networks. Next, we consider the multitask diffusion LMS strategy, which has been proposed to solve multitask estimation problems where the network is decomposed into clusters of agents seeking different…
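As context for the diffusion LMS strategy discussed above, the following is a minimal adapt-then-combine (ATC) sketch for the single-task case with noiseless links; the data layout and the uniform combination matrix are illustrative assumptions.

```python
import numpy as np

def atc_diffusion_lms(X, d, A, mu=0.01, iters=200):
    """Adapt-then-combine diffusion LMS sketch for single-task estimation.
    X[k, i]: regression vector of agent k at time i, d[k, i]: its measurement,
    A: N x N left-stochastic combination matrix (columns sum to one)."""
    N, T, M = X.shape
    w = np.zeros((N, M))
    for i in range(min(iters, T)):
        # Adaptation step: each agent runs a local LMS update.
        psi = np.array([w[k] + mu * X[k, i] * (d[k, i] - X[k, i] @ w[k])
                        for k in range(N)])
        # Combination step: each agent averages its neighbors' intermediate
        # estimates using the weights in its column of A.
        w = np.array([sum(A[l, k] * psi[l] for l in range(N)) for k in range(N)])
    return w

# Synthetic example: 5 agents cooperatively estimating the same 3-tap vector.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(3)
X = rng.standard_normal((5, 200, 3))
d = X @ w_true + 0.01 * rng.standard_normal((5, 200))
A = np.full((5, 5), 0.2)            # fully connected network, uniform weights
print(np.round(atc_diffusion_lms(X, d, A) - w_true, 2))
```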
APA, Harvard, Vancouver, ISO, and other styles
29

白聖秋. "Performance improvment of dynamic tree splitting algorithm." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/66783640231047075394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

CHIU, SHIH-HAO, and 邱士豪. "The Algorithm of Splitting 2D Flood Model." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/b9f53w.

Full text
Abstract:
碩士
國立臺北科技大學
土木工程系土木與防災碩士班
108
“Integration platform for application of high-performance 2D inundation simulating”, a project of the Water Resources Agency (WRA) launched in 2016, completed 2D inundation simulation systems for the urban areas of Tainan and Taichung in Taiwan, followed by Kaohsiung and Pingtung the next year. The system provides high-frequency flood forecasting information while a typhoon is approaching: after receiving the current rainfall, it delivers a forecast for the next three hours within half an hour, supporting inundation disaster prevention. The computation time of the SOBEK 2D flood model, which we use for inundation forecasting, depends on the simulated rainfall and on the area covered by the simulation case. To reach the target forecasting frequency of a three-hour forecast issued every hour during a typhoon, the model must be divided and adjusted by specialists, which entails a large cost in personnel and time. This study develops an algorithm that divides and adjusts the model automatically, making the work more efficient and reducing its cost. By adjusting the area of each 2D model case we control the calculation time and finally achieve the required forecasting frequency, and by dividing, adjusting, and integrating we ensure that the results of the adjusted model match those of the unadjusted one. The resulting computer procedure replaces the previous manual work and also establishes a standard for 2D flood model division. Taking the Tainan and Pingtung models as case studies, we limit the computation time and require that the errors in water depth and inundation area meet our requirements; measured by these two criteria, the error between the adjusted and unadjusted models is only about 3%. We also package the procedure as a software tool: the user only has to select the coefficients and the SOBEK model, and the model division is completed automatically. All results produced during processing are recorded as statistics that are easy for the user to read and analyze.
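As a toy illustration of dividing a 2D model domain so that each piece stays within a computation budget, the sketch below tiles a rectangular grid; the cell budget stands in for the calculation-time limit, and the actual SOBEK division rules and error criteria of the thesis are not reproduced.

```python
import math

def split_domain(n_rows, n_cols, max_cells):
    """Split a rectangular 2D model grid into roughly equal tiles sized to a
    cell budget (a simple stand-in for limiting simulation time; purely
    illustrative, not the thesis' division standard)."""
    total = n_rows * n_cols
    n_tiles = math.ceil(total / max_cells)
    # Choose a near-square tiling with at least n_tiles tiles.
    tiles_x = math.ceil(math.sqrt(n_tiles))
    tiles_y = math.ceil(n_tiles / tiles_x)
    row_edges = [round(i * n_rows / tiles_y) for i in range(tiles_y + 1)]
    col_edges = [round(j * n_cols / tiles_x) for j in range(tiles_x + 1)]
    return [((row_edges[i], row_edges[i + 1]), (col_edges[j], col_edges[j + 1]))
            for i in range(tiles_y) for j in range(tiles_x)]

# A 1000 x 800 grid limited to 150,000 cells per tile -> 6 tiles.
print(len(split_domain(1000, 800, 150_000)))
```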
APA, Harvard, Vancouver, ISO, and other styles
31

Ko, Fu-Yuan, and 柯富元. "A robust splitting algorithm for the distributed access system." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/73036238913458678400.

Full text
Abstract:
碩士
國立交通大學
電信工程系所
96
Distributed MAC protocols have long been used in wireless communications for data transfer. In devising our protocol, both throughput and robustness against error-prone transmission are considered. To improve throughput, the proposed protocol builds on two kinds of medium access control, slotted Aloha and the splitting algorithm, and compensates for the drawbacks of each: it achieves a throughput as high as the splitting tree algorithm while remaining robust to transmission errors. In addition, unlike the splitting tree algorithm, our protocol can be realized as a highly distributed system. Because the number of active (backlogged) users must be known a priori, we call it the “N-is-known splitting algorithm”. By analysis, we show that the maximum throughput of the proposed protocol is around 0.45. We also enumerate certain critical cases and discuss the behavior of the method under erroneous transmission. Simulations under different error rates verify the corresponding throughputs; the results show that, when the error rate is small enough, the throughput under a given error rate is approximately the error-free throughput minus the error rate.
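For reference, the basic splitting (tree) collision-resolution idea underlying such protocols can be simulated in a few lines. This sketch only counts slots for one collision resolution cycle and omits the slotted-Aloha component and the error model of the thesis.

```python
import random

def resolve_collision(n_backlogged, rng=random):
    """Count the slots needed to resolve one collision with the basic binary
    splitting (tree) algorithm: colliding users flip a fair coin and the two
    resulting groups are resolved one after the other (illustrative sketch)."""
    if n_backlogged <= 1:
        return 1                       # idle or success slot
    slots = 1                          # the collision slot itself
    left = sum(rng.random() < 0.5 for _ in range(n_backlogged))
    slots += resolve_collision(left, rng)
    slots += resolve_collision(n_backlogged - left, rng)
    return slots

# Average slots needed when 10 backlogged users collide, over 1000 trials.
random.seed(1)
trials = [resolve_collision(10) for _ in range(1000)]
print(10 * 1000 / sum(trials))         # rough throughput estimate (packets/slot)
```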
APA, Harvard, Vancouver, ISO, and other styles
32

Shiau, You-cheng, and 蕭友誠. "Energy-Efficient Tree Splitting Algorithm in Wireless Sensor Networks." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/h6p92s.

Full text
Abstract:
碩士
國立中山大學
資訊工程學系研究所
95
In this thesis, we propose a power saving strategy based on the tree splitting algorithm for wireless sensor networks with multiple packet reception, concentrating on the case in which the maximum queue size is 1. We derive both analytical and simulation results: Markov chain theory is used to analyze the evolution of the system state, renewal theory is used to compute the throughput, and from these we obtain the average system size, the packet blocking probability, and the average packet delay. Because the network is distributed, the nodes cannot observe the state of the network at all times; we therefore use the length of the last collision resolution cycle to predict the length of the next one, and set the sleeping time according to this prediction to save power. Finally, simulation results demonstrate the performance of the proposed power saving strategy.
APA, Harvard, Vancouver, ISO, and other styles
33

Yeh, Po-Shen, and 葉珀伸. "Mining Frequent Patterns under Limited Memory:A Novel Database Splitting Algorithm Approach." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/2t64up.

Full text
Abstract:
碩士
國立高雄應用科技大學
資訊工程系
102
Data mining has attracted considerable attention and research in recent years. Frequent pattern mining is one of its most technical and important subfields: it is popular among data mining researchers and is also critical for commercial businesses such as malls or supermarkets. Frequent patterns reveal hidden, valuable information that can support applications such as merchandise placement and recommendation mechanisms, so frequent pattern mining is useful in many ways. However, as time passes, databases keep growing while machine memory remains limited, so frequent patterns cannot be mined in the usual way because of this physical constraint. This thesis therefore aims to solve the problem in an efficient, in-core way.
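The abstract does not detail the proposed database splitting algorithm; as background, the following is a minimal sketch of the classical partition (SON-style) approach to mining frequent itemsets when the database does not fit in memory. The chunking scheme, the local threshold rule, and the brute-force candidate enumeration are all simplifying assumptions.

```python
from itertools import combinations
from collections import Counter

def partition_frequent_itemsets(transactions, min_support, n_chunks=4, max_size=2):
    """Partition/SON-style sketch: mine each chunk with a proportional local
    threshold, take the union of local results as candidates, then confirm
    them with one global counting pass (illustrative only)."""
    chunks = [transactions[i::n_chunks] for i in range(n_chunks)]
    candidates = set()
    for chunk in chunks:
        # Local threshold scaled to the chunk size (at least 1).
        local_min = max(1, int(min_support * len(chunk) / len(transactions)))
        counts = Counter(frozenset(c)
                         for t in chunk
                         for k in range(1, max_size + 1)
                         for c in combinations(sorted(t), k))
        candidates |= {s for s, n in counts.items() if n >= local_min}
    # Second pass over the whole database keeps only globally frequent sets.
    global_counts = Counter()
    for t in transactions:
        ts = set(t)
        for cand in candidates:
            if cand <= ts:
                global_counts[cand] += 1
    return {tuple(sorted(s)): n for s, n in global_counts.items() if n >= min_support}

db = [["a", "b", "c"], ["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"]]
print(partition_frequent_itemsets(db, min_support=3))
```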
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Ting-Fu, and 楊登福. "Efficient Finite Precision Arithmetic Codes based on Novel Index-splitting Algorithm." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/25407364175835707933.

Full text
Abstract:
碩士
義守大學
資訊工程學系
87
In this thesis we propose a novel, highly efficient finite-precision arithmetic coding method to improve the practicality of arithmetic coding, which is recognized as a better lossless compression technique than Huffman coding. Finite-precision arithmetic codes can overcome the need for a high-precision multiplier/divider in the encoder/decoder and the problem of error propagation in arithmetic codes. Based on the information of the coding symbols, we introduce an index-splitting method that constructs efficient finite-precision arithmetic codes by splitting the index of the just-unbearable symbols between the current and subsequent codewords. The proposed codes, which confine error propagation to roughly one block, require neither a post-appended end-of-block (EOB) symbol nor pre-affixed side information to indicate the number of encoded symbols. With high coding efficiency, limited error propagation, adaptive capability, and a regular procedure, the proposed coding methods are suitable for applications in modern communication systems.
APA, Harvard, Vancouver, ISO, and other styles
35

"The Lions-Mercier splitting algorithm and the alternating direction method are instances of the proximal point algorithm." Massachusetts Institute of Technology, Operations Research Center, Laboratory for Information and Decision Systems, Intelligent Control Systems, 1988. http://hdl.handle.net/1721.1/3060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Shen, Yi-Nung, and 沈奕農. "Variable-Size Block Motion Estimation using Merging/Splitting Algorithm in H264 Codes." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/45583629396551264149.

Full text
Abstract:
碩士
國立成功大學
電機工程學系碩博士班
91
A merging/splitting procedure for variable-size block motion estimation is proposed to reduce the computation required for the block-size decision. A smaller block size is initially used for motion estimation. An adaptive threshold, which depends on the information obtained from the motion estimation, the quantization parameter, and the rate-distortion cost function, determines whether the motion vectors of neighboring blocks should be merged. If the blocks in a macroblock cannot be merged, edge classification is started; if a block contains an edge, it is split into smaller sub-blocks according to the edge information. The proposed merging/splitting procedure is applied to H.264 video encoders. Simulations show that its performance is close to that of H.264 JM2.0 when the 16x16, 16x8, 8x16 and 8x8 block modes are enabled and exhaustive search is used to determine the block sizes.
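A minimal sketch of the kind of merge test described above is given below; the threshold form, which grows with the quantization step, and the constant `lam` are assumptions and do not reproduce the adaptive threshold of the thesis.

```python
def merge_decision(mv_subblocks, sad_subblocks, sad_merged, qp, lam=0.85):
    """Illustrative merge test for variable-size block motion estimation:
    four 8x8 sub-blocks are merged into one 16x16 block if their motion
    vectors agree, or if the merged SAD does not exceed the summed sub-block
    SADs by more than a QP-dependent threshold (assumed rule)."""
    same_mv = len({tuple(mv) for mv in mv_subblocks}) == 1
    threshold = lam * (2 ** ((qp - 12) / 3.0))   # grows with quantization step
    return same_mv or (sad_merged - sum(sad_subblocks)) <= threshold

# Example: identical motion vectors -> merge regardless of the SAD penalty.
print(merge_decision([(1, 0)] * 4, [120, 130, 110, 125], 700, qp=28))
```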
APA, Harvard, Vancouver, ISO, and other styles
37

"Applications of a splitting algorithm to decomposition in convex programming and variational inequalities." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1988. http://hdl.handle.net/1721.1/3103.

Full text
Abstract:
by Paul Tseng.
Cover title.
Includes bibliographical references.
Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems) DAAL03-86-K-0171 Partially supported by the National Science Foundation. NSF-ECS-8519058
APA, Harvard, Vancouver, ISO, and other styles
38

Wu, Hung-Yu, and 吳宏祐. "A Heuristic Master Planning Algorithm for Supply Chain with Fairness and Order Splitting." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/66339827095533114110.

Full text
Abstract:
碩士
國立臺灣大學
資訊管理學研究所
93
This study focuses on the master planning problem of advanced planning and scheduling (APS). Given a capacitated supply chain network providing multiple final products, the major issues considered are the fairness of treating different customers when demands are split and the effectiveness of choosing the right vendors at the right time. Two multiple-goal MIP models are proposed: Model_D_F_T, which minimizes the delay cost first, then maximizes the fairness among orders, and finally minimizes the total cost; and Model_F_D_T, which maximizes fairness among orders first, then minimizes the delay cost, and finally minimizes the total cost. Fairness is defined as the number of orders whose minimum requirement is fulfilled at or before the corresponding due date. In addition, two kinds of order splitting are considered, on the customer side and among supply chain members, together with two kinds of fixed splitting cost included in the total supply chain cost, all of which are optimized simultaneously. Solving the MIP formulation may require substantial computing resources when feasible solutions exist, and the causes of infeasibility cannot be identified when no feasible solution exists. To improve the efficiency and effectiveness of the solution process, a heuristic algorithm called the Heuristic Order Splitting and Fairness Algorithm (HOSFA) is proposed. HOSFA first prepares the required information by transforming all nodes in the network to single-function nodes, searching the sub-networks of the different final products, and setting up the cost of each node of the supply chain network. It then sorts the orders with an initial order sorting algorithm proposed in this study, assigns quotas to all orders subject to the capacity constraints, and plans the orders according to the allocated quotas. If some orders are not completely fulfilled after the first planning phase, a second phase is invoked, which sorts the unfulfilled orders and plans them one by one, using unused capacity or delaying orders when necessary. Once all orders are fulfilled, an adjustment algorithm searches for a better combination of vendors' capacity usage. HOSFA finds the same optimal solution as ILOG CPLEX in 8 scenarios with no delayed orders, and in 32 scenarios with delayed orders it outperforms Lin's algorithm in terms of fairness and splitting cost in most scenarios. HOSFA is also very efficient on large-scale instances, taking only 165 minutes to solve a master planning problem with 3 final products and 2000 orders.
APA, Harvard, Vancouver, ISO, and other styles
39

Florêncio, Luís Filipe Pacheco. "A SearchCol Algorithm for the unrelated parallel machine scheduling problem with job splitting." Master's thesis, 2013. http://hdl.handle.net/1822/26850.

Full text
Abstract:
Dissertação de mestrado em Engenharia Industrial
In this dissertation, the unrelated parallel machine scheduling problem with job splitting and sequence independent setup times is addressed, implementing a method to solve it in a recently proposed framework, SearchCol, short for ‘Metaheuristic search by Column Generation’. The study of scheduling problems is of high relevance due to its real-world application in multiple fields, as documented in its vast literature, and also due to its high complexity derived from the diverse environments, variables, restrictions and the combinations of these in different systems. The problem consists in finding a scheduling plan for a set of independent jobs on a set of unrelated parallel machines, considering jobs and machines release dates, sequence independent setup times and the job splitting property, with due date related objectives. The introduction of setup times and job splitting properties in unrelated environments has not been extensively studied, even though its use can play an important role in scheduling. A mixed integer programming model is developed featuring the aforementioned properties, which is then decomposed by machine using the Dantzig-Wolfe decomposition. To solve the decomposed model a hybrid approach entitled SearchCol is applied, which results from the interaction between column generation and metaheuristics. Problem specific heuristics to use in the column generation component of the SearchCol are also developed and diverse alternatives within the global algorithm are tested. A problem specific algorithm for one of the main SearchCol components is also suggested. To evaluate the effectiveness of the model and the proposed algorithms, computational tests are performed and their solutions analysed for a set of test instances.
O trabalho que se apresenta nesta dissertação, aborda o problema de escalonamento em máquinas paralelas não relacionadas com dimensionamento de lotes e tempos de preparação independentes da sequência, recorrendo a uma ferramenta recentemente proposta, designada por SearchCol, abreviatura de ‘Metaheuristic Search by Column Generation’. O estudo de problemas de escalonamento revela-se de grande importância devido à sua aplicação em diferentes áreas, documentado na sua vasta literatura, e também devido à sua elevada complexidade decorrente das diversas configurações e tipos de máquinas, variáveis e restrições, bem como as combinações destas nos diversos sistemas. O problema consiste na determinação de um plano de produção para um conjunto de tarefas independentes em máquinas paralelas não relacionadas, considerando tempos de disponibilidade de tarefas e máquinas, tempos de preparação independentes da sequência e o dimensionamento de lotes. O estudo deste problema com incorporação de tempos de preparação e da propriedade de dimensionamento de lotes em máquinas paralelas não relacionadas não é comum na literatura, apesar de se revelar de extrema importância em problemas de escalonamento. Um modelo de programação inteira mista é desenvolvido para o problema e é também efectuada uma decomposição por máquina através da decomposição de Dantzig-Wolfe. Para resolver o problema, estuda-se uma abordagem híbrida que consiste na interação entre a técnica de geração de colunas e metaheurísticas, de seu nome SearchCol. São desenvolvidas heurísticas específicas para o problema, as quais são usadas na componente de geração de colunas do SearchCol, sendo testadas também diversas alternativas e ferramentas no contexto do algoritmo global. Além disso, um algoritmo específico para o problema é também sugerido, para incluir num dos principais componentes do SearchCol. Para avaliar o desempenho e qualidade dos modelos e algoritmos propostos, são realizados testes computacionais e analisadas as suas soluções para um conjunto de instâncias de teste.
Fundação para a Ciência e a Tecnologia (FCT) - Project ref. PTDC/EIA-EIA/100645/2008.
This work was partially funded by the FEDER through the Programme COMPETE.
APA, Harvard, Vancouver, ISO, and other styles
40

"Further applications of a splitting algorithm to decomposition in variational inequalities and convex programming." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1989. http://hdl.handle.net/1721.1/3125.

Full text
Abstract:
Paul Tseng.
Cover title.
Includes bibliographical references.
Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems) DAAL03-86-K-0171 Partially supported by the National Science Foundation. NSF-ECS-8519058
APA, Harvard, Vancouver, ISO, and other styles
41

"On the convergence of a matrix splitting algorithm for the symmetric linear complementarity problem." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1989. http://hdl.handle.net/1721.1/3139.

Full text
Abstract:
by Zhi-Quan Luo, Paul Tseng.
Cover title.
Includes bibliographical references (p. 45-48).
Partially supported by the U.S. Army Research Office (Center for Intelligent Control Systems) DAAL03-86-K-0171 Partially supported by the Office of Naval Research. N00014-84-K-0519 (NR64-003) Partially supported by the National Science Foundation. NSF-ECS-8519058
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Kuan-Mei, and 陳冠郿. "Predictive Multicast Polling and Tree Splitting Algorithm in Wireless Access Networks with Multipacket Reception." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/dx2j2r.

Full text
Abstract:
博士
國立中山大學
資訊工程學系研究所
97
In this dissertation, we propose and analytically evaluate the predictive multicast polling scheme and the tree splitting algorithm for medium access control in interference-dominated wireless access networks with random traffic and a finite number of nodes. In such networks, a receiver can simultaneously receive multiple packets from different transmitters as long as the signal-to-interference-plus-noise ratio exceeds a predetermined threshold. We concentrate on the case in which the maximum queue size in a node is finite, and use discrete-time Markov chains, reward processes and regenerative processes to derive the throughput, the packet blocking probability, the average packet delay, and the average system size. We show that the performance of the predictive multicast polling scheme can be significantly improved with a few additional buffers in the queues, and that the exact performance of the splitting algorithm depends on the total number of nodes in the network. The numerical results are verified by rigorous mathematical proofs and computer simulations.
APA, Harvard, Vancouver, ISO, and other styles
43

"On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators." Massachusetts Institute of Technology, Laboratory for Information and Decision Systems], 1989. http://hdl.handle.net/1721.1/3160.

Full text
Abstract:
by Jonathan Eckstein, Dimitri P. Bertsekas.
Cover title. "This paper consists mainly of dissertation research results of the first author."--Cover.
Includes bibliographical references (p. 31-34).
Research supported in part by the Army Research Office. DAAL03-86-K-0171 Research supported in part by the National Science Foundation. ECS-8519058
APA, Harvard, Vancouver, ISO, and other styles
44

Felício, dos Reis João Miguel. "Stability of BDF-ADI Discretizations." Thesis, 2017. http://hdl.handle.net/10754/625356.

Full text
Abstract:
We present new results on absolute stability for BDF-ADI (backward differentiation formula alternating direction implicit) methods applied to linear advection and diffusion equations. Unconditional absolute stability of the BDF2-ADI method is proven for advection and diffusion separately, and of the BDF3-ADI method for the purely diffusive case. Conditional absolute stability of BDF4-ADI is also proven for the purely diffusive case, and stability regions for BDF3-ADI and BDF4-ADI are given in terms of the PDE coefficients and numerical parameters. Lastly, numerical experiments are presented to support the theoretical results and conjectures; these experiments also suggest directions for future work.
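For orientation only, and under notation assumed here rather than taken from the thesis, the structure of a BDF2-ADI step for the two-dimensional diffusion equation can be sketched as follows.

```latex
% BDF2 time discretization of u_t = \nu (u_{xx} + u_{yy}), implicit in u^{n+1}:
\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\Delta t}
  = \nu \left(\partial_x^2 + \partial_y^2\right) u^{n+1}.
% Rearranged, and with the implicit operator replaced by its factored ADI form:
\left(I - \tfrac{2\Delta t}{3}\,\nu\,\partial_x^2\right)
\left(I - \tfrac{2\Delta t}{3}\,\nu\,\partial_y^2\right) u^{n+1}
  = \tfrac{4}{3}\,u^{n} - \tfrac{1}{3}\,u^{n-1}.
% The factorization differs from the unfactored scheme only by the cross term
% (2\Delta t \nu / 3)^2 \,\partial_x^2 \partial_y^2\, u^{n+1} = O(\Delta t^2),
% so each time step reduces to one-dimensional implicit solves in x and then y.
```

The absolute stability results summarized above concern schemes of this factored type, with advection terms handled analogously.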
APA, Harvard, Vancouver, ISO, and other styles
45

"Edge splitting-off and network design problems." 2009. http://library.cuhk.edu.hk/record=b5894006.

Full text
Abstract:
Yung, Chun Kong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 121-129).
Abstracts in English and Chinese.
Chapter 1 --- Overview --- p.2
Chapter 2 --- Background --- p.7
Chapter 2.1 --- Graphs and Edge-Connectivity --- p.7
Chapter 2.1.1 --- Subgraphs --- p.9
Chapter 2.1.2 --- Cut and Edge-Connectivity --- p.10
Chapter 2.1.3 --- Menger's Theorem --- p.12
Chapter 2.2 --- Edge Splitting-off --- p.13
Chapter 2.2.1 --- The Basics --- p.15
Chapter 2.2.1.1 --- Supermodular and Submodular Set Functions --- p.16
Chapter 2.2.1.2 --- Set Functions regarding Edge-Connectivity --- p.17
Chapter 2.2.1.3 --- Dangerous and Tight Sets --- p.18
Chapter 2.2.2 --- Proof of Mader's Theorem --- p.20
Chapter 2.2.3 --- Global Arc-Connectivity Setting --- p.23
Chapter 2.2.3.1 --- Local Arc-Connectivity Setting --- p.25
Chapter 2.2.4 --- Incorporating Additional Properties --- p.26
Chapter 2.2.4.1 --- Non-Admissibility Graph Method --- p.27
Chapter 2.3 --- Edge-Connectivity Problems --- p.29
Chapter 2.3.1 --- Degree Bounded Network Design Problems --- p.30
Chapter 2.3.1.1 --- Metric Cost Assumption --- p.31
Chapter 2.3.2 --- Edge-Connectivity Augmentation Problems --- p.33
Chapter 2.3.2.1 --- Frank's Framework --- p.34
Chapter 2.3.2.2 --- Constrained Edge-Connectivity Augmentation Problems --- p.36
Chapter 2.3.3 --- Edge Splitting-off Problems --- p.39
Chapter 2.4 --- Edge Splitting-off Algorithms --- p.40
Chapter 2.4.1 --- Fastest Algorithms --- p.41
Chapter 2.4.2 --- An Intuitive Approach --- p.42
Chapter 2.4.3 --- Global Connectivity Settings --- p.42
Chapter 2.4.3.1 --- Legal Ordering --- p.43
Chapter 2.4.3.2 --- Edmonds' Arborescences --- p.44
Chapter 2.4.4 --- Local Edge-Connectivity Setting --- p.45
Chapter 3 --- Degree Bounded Network Design Problem with Metric Cost --- p.47
Chapter 3.1 --- Christofides'-like Algorithm --- p.49
Chapter 3.2 --- Simplicity-Preserving Edge Splitting-Off --- p.50
Chapter 3.2.1 --- Proof of Theorem 3.3 --- p.51
Chapter 3.3 --- Approximation Algorithms for Network Design Problems --- p.56
Chapter 3.3.1 --- Removing Redundant Edges --- p.57
Chapter 3.3.2 --- Perfect Matching --- p.58
Chapter 3.3.3 --- Edge Splitting-Off Restoring Simplicity --- p.59
Chapter 3.4 --- Results in Different Settings --- p.60
Chapter 3.4.1 --- Global Edge-Connectivity --- p.61
Chapter 3.4.2 --- Local Edge-Connectivity --- p.62
Chapter 4 --- Constrained Edge Splitting-off --- p.64
Chapter 4.1 --- Preliminaries --- p.66
Chapter 4.2 --- General Constrained Edge Splitting-off Lemma --- p.68
Chapter 4.3 --- Structural Properties of Non-Admissible Pairs --- p.69
Chapter 4.3.1 --- Some Useful Lemmas --- p.70
Chapter 4.3.2 --- An Upper Bound on |Dp| --- p.71
Chapter 4.3.3 --- An Inductive Argument --- p.73
Chapter 4.4 --- Non-Admissibility Graph and Constraint Graph --- p.75
Chapter 4.4.1 --- Vertex Set Partition Constraint --- p.76
Chapter 4.4.2 --- Graph Simplicity Constraint --- p.77
Chapter 4.4.3 --- Simultaneous Graph Constraint --- p.78
Chapter 4.4.4 --- Tight Sufficient Conditions --- p.79
Chapter 4.5 --- Global Arc-Connectivity Setting --- p.79
Chapter 4.5.1 --- Proof of Lemma 4.15 --- p.81
Chapter 5 --- Constrained Edge-Connectivity Augmentation Problem --- p.83
Chapter 5.1 --- Preliminaries --- p.84
Chapter 5.2 --- Additive Approximation Algorithms --- p.87
Chapter 5.2.1 --- Edge-Connectivity Augmentation Preserving Vertex Set Partition --- p.87
Chapter 5.2.2 --- Edge-Connectivity Augmentation Preserving Simplicity --- p.91
Chapter 5.2.3 --- Simultaneous-Graph Edge-Connectivity Augmentation --- p.93
Chapter 5.3 --- Global Arc-Connectivity Setting --- p.95
Chapter 5.3.1 --- Edge-Connectivity Augmentation Preserving Vertex Set Partition --- p.95
Chapter 5.3.2 --- Edge-Connectivity Augmentation Preserving Simplicity --- p.97
Chapter 5.3.3 --- Simultaneous Edge-Connectivity Augmentation --- p.98
Chapter 6 --- Efficient Edge Splitting-off Algorithm --- p.100
Chapter 6.1 --- Preliminaries --- p.102
Chapter 6.1.1 --- Efficient Tools for Edge-Connectivity Problems --- p.103
Chapter 6.1.2 --- An Alternative Proof of Mader's Theorem --- p.104
Chapter 6.2 --- Framework for Complete Edge Splitting-off --- p.105
Chapter 6.2.1 --- Proof of Lemma 6.5 --- p.106
Chapter 6.3 --- Efficient Splitting-off Attempt --- p.108
Chapter 6.3.1 --- Indicator Vertex --- p.109
Chapter 6.3.2 --- Splitting-off to Capacity --- p.112
Chapter 6.4 --- Randomized and Parallelized Edge Splitting-off Algorithm --- p.113
Chapter 6.5 --- Deterministic Edge Splitting-off Algorithm --- p.114
Chapter 6.6 --- Algorithms in Other Settings --- p.115
Chapter 6.6.1 --- Edge Splitting-off in Network Design Problems --- p.115
Chapter 6.6.2 --- Constrained Edge Splitting-off --- p.116
Chapter 7 --- Concluding Remarks --- p.119
Bibliography --- p.121
APA, Harvard, Vancouver, ISO, and other styles
46

Lo, Ta-Wei, and 羅大為. "The Development of Splitting-Based Algorithms for RFID Anti-Collision." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/63732937520975484303.

Full text
Abstract:
碩士
國立屏東教育大學
資訊科學系碩士班
95
Radio Frequency Identification (RFID), a technique that uses wireless radio-frequency links to transmit electronic data, is nowadays widely used and continuously developed in a variety of domains. Applications such as warehouse management and automatic identification of tagged goods have been adopted for years in many companies. When multiple tags are present in the interrogation field of a reader, an arbitration algorithm is used to arbitrate among the tags and avoid the collision problem. A splitting algorithm called the Binary Search Tree is well known for multi-tag arbitration. In this study, splitting-based schemes called the Merged Search Tree, Quadtree, and Split-Merged Tree algorithms are proposed to avoid collisions. The performance of the proposed algorithms is compared with that of the original Binary Search Tree in terms of the time and power consumed during the arbitration process. The results show that the proposed schemes reduce the search time and power consumption and achieve better arbitration performance. Keywords: RFID, Arbitration algorithm, Anti-collision, Binary Search Tree, Quadtree, Merged Search Tree, Split-Merged Tree
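As a baseline for the tree-based arbitration schemes compared in the thesis, a plain binary query tree (splitting) identification round can be sketched as follows; the 8-bit IDs and the query counting are illustrative assumptions, and the Merged, Quad, and Split-Merged tree variants are not reproduced.

```python
def query_tree_identify(tag_ids, id_bits=8):
    """Binary query tree (splitting) identification sketch: the reader queries
    a bit prefix, tags whose ID starts with it reply, and a collision splits
    the prefix into '0' and '1' branches. Returns the identified IDs and the
    number of reader queries (illustrative baseline only)."""
    identified, queries, stack = [], 0, [""]
    while stack:
        prefix = stack.pop()
        queries += 1
        replying = [t for t in tag_ids if t.startswith(prefix)]
        if len(replying) == 1:
            identified.append(replying[0])
        elif len(replying) > 1 and len(prefix) < id_bits:
            stack.extend([prefix + "1", prefix + "0"])
    return identified, queries

tags = ["00101100", "00101101", "11010001", "01110010"]
print(query_tree_identify(tags))
```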
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Cheng-Wei, and 李政緯. "Approximation Algorithms for Directed Steiner Tree Problem in Graphs with Sparse Splitting Nodes." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/ejdqw6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Shou-Ting, and 李碩庭. "Blocking and semi-blocking algorithms on adaptive query splitting for RFID tag identification." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/31176720476943903275.

Full text
Abstract:
碩士
國立臺灣科技大學
資訊管理系
96
In a Radio Frequency Identification (RFID) system, tags are identified by the reader by transmitting their IDs over a shared wireless channel. When multiple tags respond with their IDs simultaneously, their signals collide and the identification delay increases. Many previous anti-collision algorithms, including the adaptive query splitting algorithm (AQS) and the adaptive binary splitting algorithm (ABS), focused on this problem. This thesis proposes two blocking algorithms based on AQS, a blocking AQS algorithm (BA) and a semi-blocking AQS algorithm (SBA). Both inherit the essence of AQS, which reuses the information about recognized tags obtained from the previous identification process. The former further adopts a blocking technique that prevents recognized tags from colliding with unrecognized tags, while the latter uses a semi-blocking technique that allows a minority of unrecognized tags to collide with recognized tags so that the distribution of the remaining majority of unrecognized tags can be estimated more accurately.
APA, Harvard, Vancouver, ISO, and other styles
49

"Error bound and convergence analysis of matrix splitting algorithms for the affine variational inequality problem." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1990. http://hdl.handle.net/1721.1/3211.

Full text
Abstract:
by Paul Tseng and Zhi-Quan Luo.
Cover title.
Includes bibliographical references (p. 13-15).
Supported by the National Science Foundation. NSF-DDM-8903385 Supported by the U.S. Army Research Office (Center for Intelligent Control Systems) DAAL03-86-K-0171 Supported by a grant from the Science and Engineering Research Board of McMaster University.
APA, Harvard, Vancouver, ISO, and other styles
50

Ming-HuaTang and 湯明樺. "Fast DCT Angle-based Algorithms for H.265/HEVC Intra Prediction and Quad Tree Splitting." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/50320484257911228247.

Full text
Abstract:
碩士
國立成功大學
電腦與通信工程研究所
100
Intra prediction in High Efficiency Video Coding (HEVC) takes more coding time than in H.264/AVC, mainly because of the greatly increased number of intra prediction directions (up to 34). Moreover, in HEVC a frame is divided into largest coding units (LCUs), and each LCU can be divided into coding units (CUs) of different sizes to form a quad-tree structure; the processes that determine the CU partition and the intra prediction modes are very time-consuming. In this thesis, a fast algorithm is proposed that accelerates both intra prediction and coding unit splitting. Experimental results show that the proposed method speeds up the total encoding time with only a slight increase in bits.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography