
Dissertations / Theses on the topic 'Decomposition (Mathematics)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Decomposition (Mathematics).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Burns, Brenda D. "The Staircase Decomposition for Reductive Monoids." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-20020422-102254.

Full text
Abstract:

Burns, Brenda Darlene. The Staircase Decomposition for Reductive Monoids. (Under the direction of Mohan Putcha.) The purpose of the research has been to develop a decomposition for the J-classes of a reductive monoid. The reductive monoid M(K) is considered first. A J-class in M(K) consists of elements of the same rank. Lower and upper staircase matrices are defined and used to decompose a matrix x of rank r into the product of a lower staircase matrix, a matrix with a rank r permutation matrix in the upper left hand corner, and an upper staircase matrix, each of which is of rank r. The choice of permutation matrix is shown to be unique. The primary submatrix of a matrix is defined. The unique permutation matrix from the decomposition above is seen to be the unique permutation matrix from Bruhat's decomposition for the primary submatrix. All idempotent elements and regular J-classes of the lower and upper staircase matrices are determined. A decomposition for the upper and lower staircase matrices is given as well. The above results are then generalized to an arbitrary reductive monoid by first determining the analogue of the components for the decomposition above. Then the decomposition above is shown to be valid for each J-class of a reductive monoid. The analogues of the upper and lower staircase matrices are shown to be semigroups and all idempotent elements and regular J-classes are determined. A decomposition for each of them is discussed.

APA, Harvard, Vancouver, ISO, and other styles
2

Kwizera, Petero. "Matrix Singular Value Decomposition." UNF Digital Commons, 2010. http://digitalcommons.unf.edu/etd/381.

Full text
Abstract:
This thesis starts with the fundamentals of matrix theory and ends with applications of the matrix singular value decomposition (SVD). The background matrix theory coverage includes unitary and Hermitian matrices, and matrix norms and how they relate to the matrix SVD. The matrix condition number is discussed in relation to the solution of linear equations. Some inequalities based on the trace of a matrix, the polar matrix decomposition, unitaries and partial isometries are discussed. Among the SVD applications discussed are the method of least squares and image compression. Expansion of a matrix as a linear combination of rank-one partial isometries is applied to image compression by using reduced-rank matrix approximations to represent greyscale images. MATLAB results for approximations of JPEG and .bmp images are presented. The results indicate that images can be represented with reasonable resolution using low-rank matrix SVD approximations.
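As a toy illustration of the reduced-rank approximation idea described above (a sketch in Python rather than the MATLAB used in the thesis; the image array and the rank are made-up inputs):

    import numpy as np

    def svd_compress(img: np.ndarray, k: int) -> np.ndarray:
        """Approximate a greyscale image by the sum of its k leading
        rank-one terms (scaled outer products of singular vectors)."""
        U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
        # Keep only the k largest singular values and their vectors.
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # Example with a random "image"; real use would load a greyscale .bmp or JPEG.
    img = np.random.rand(256, 256)
    approx = svd_compress(img, k=20)
    rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    print(f"relative Frobenius error of the rank-20 approximation: {rel_err:.3f}")

Storing U[:, :k], s[:k] and Vt[:k, :] instead of the full image is what yields the compression: 2 * 256 * 20 + 20 numbers instead of 256 * 256.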
APA, Harvard, Vancouver, ISO, and other styles
3

Ngulo, Uledi. "Decomposition Methods for Combinatorial Optimization." Licentiate thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175896.

Full text
Abstract:
This thesis aims at research in the field of combinatorial optimization. Problems within this field often possess special structures allowing them to be decomposed into more easily solved subproblems, which can be exploited in solution methods. These structures appear frequently in applications. We contribute with both research on the development of decomposition principles and research on applications. The thesis consists of an introduction and three papers.  In Paper I, we develop a Lagrangian meta-heuristic principle, which is founded on a primal-dual global optimality condition for discrete and non-convex optimization problems. This condition characterizes (near-)optimal solutions in terms of near-optimality and near-complementarity measures for Lagrangian relaxed solutions. The meta-heuristic principle amounts to constructing a weighted combination of these measures, thus creating a parametric auxiliary objective function (which is a close relative to a Lagrangian function), and embedding a Lagrangian heuristic in a search procedure in the space of the weight parameters. We illustrate and assess the Lagrangian meta-heuristic principle by applying it to the generalized assignment problem and to the set covering problem. Our computational experience shows that the meta-heuristic extension of a standard Lagrangian heuristic principle can significantly improve upon the solution quality.  In Paper II, we study the duality gap for set covering problems. Such problems sometimes have large duality gaps, which make them computationally challenging. The duality gap is dissected with the purpose of understanding its relationship to problem characteristics, such as problem shape and density. The means for doing this is the above-mentioned optimality condition, which is used to decompose the duality gap into terms describing near-optimality in a Lagrangian relaxation and near-complementarity in the relaxed constraints. We analyse these terms for numerous problem instances, including some large real-life instances, and conclude that when the duality gap is large, the near-complementarity term is typically large and the near-optimality term small. The large violation of complementarity is due to extensive over-coverage. Our observations have implications for the design of solution methods, especially for the design of core problems.  In Paper III, we study a bi-objective covering problem stemming from a real-world application concerning the design of camera surveillance systems for large-scale outdoor areas. It is prohibitively costly to surveil the entire area, and therefore relevant to be able to present a decision-maker with trade-offs between total cost and the portion of the area that is surveilled. The problem is stated as a set covering problem with two objectives, describing cost and portion of covering constraints that are fulfilled, respectively. Finding the Pareto frontier for these objectives is very computationally demanding and we therefore develop a method for finding a good approximate frontier in a reasonable computing time. The method is based on the ε-constraint reformulation, an established heuristic for set covering problems, and subgradient optimization.
This thesis concerns solution methods for large and complex combinatorial optimization problems. Such problems often have special structures that allow them to be decomposed into a set of smaller subproblems, which can be exploited in the construction of efficient solution methods. The thesis covers both basic research on the development of decomposition principles for combinatorial optimization and research on applications in this area. The thesis consists of an introduction and three papers.  In the first paper we develop a "Lagrangian meta-heuristic principle". The principle is founded on primal-dual global optimality conditions for discrete and non-convex optimization problems. These optimality conditions characterize (near-)optimal solutions in terms of near-optimality and near-complementarity of Lagrangian relaxed solutions. The meta-heuristic principle builds on a weighted combination of these quantities, which creates a parametric auxiliary objective function closely resembling a Lagrangian function, after which a traditional Lagrangian heuristic is applied for different values of the weight parameters, which are searched with a meta-heuristic. We illustrate and evaluate this meta-heuristic principle by applying it to the generalized assignment problem and the set covering problem, both of which are well-known and hard combinatorial optimization problems. Our computational results show that this meta-heuristic extension of a standard Lagrangian heuristic can improve the solution quality considerably.  In the second paper we study properties of set covering problems. This type of optimization problem sometimes has large duality gaps, which makes it computationally demanding. The duality gap is therefore analysed with the aim of understanding its relation to problem characteristics such as problem size and density. The means for doing this are the above-mentioned primal-dual global optimality conditions for discrete and non-convex optimization problems. These split the duality gap into two terms, near-optimality in a Lagrangian relaxation and near-complementarity in the relaxed constraints, and we analyse these terms for a large number of problem instances, including some large-scale practical problems. We conclude that when the duality gap is large, the near-complementarity term is typically large and the near-optimality term small. We further observe that when the near-complementarity term is large, this is due to a large amount of redundant over-coverage. This understanding of the problem's inherent properties can be used in the design of solution methods for set covering problems, and especially in the construction of so-called core problems.  In the third paper we study a bi-objective problem that arises in the design of a camera surveillance system for large outdoor areas. In this application it is far too costly to surveil the entire area, and the problem is therefore modelled as a covering problem with two objectives, one describing the total cost and one describing how large a portion of the area is surveilled. One then wishes to generate several solutions with different trade-offs between total cost and the portion of the area that is surveilled. This is, however, very computationally demanding, and we therefore develop a method for finding good approximations of such solutions within a reasonable computing time.
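As a rough, self-contained sketch of the Lagrangian machinery that Papers I and II build on (a plain subgradient method for the Lagrangian dual of a set covering instance, with a simple greedy repair step; the instance data are invented and this is not the meta-heuristic of Paper I):

    import numpy as np

    def lagrangian_set_cover(A, c, iters=200):
        """Subgradient optimization of the Lagrangian dual of
        min c'x  s.t.  Ax >= 1, x in {0,1}^n  (rows relaxed with u >= 0),
        plus a greedy Lagrangian heuristic that repairs infeasibility."""
        m, n = A.shape
        u = np.zeros(m)
        best_lb, best_cost = -np.inf, np.inf
        for t in range(1, iters + 1):
            red = c - A.T @ u                 # reduced costs of the columns
            x = (red < 0).astype(float)       # the relaxed subproblem is separable
            lb = red[red < 0].sum() + u.sum() # dual (lower) bound L(u)
            best_lb = max(best_lb, lb)
            xh = x.copy()                     # greedy repair: cover remaining rows
            while (A @ xh < 1).any():
                uncovered = A @ xh < 1
                j = np.argmax(A[uncovered].sum(axis=0) / c)  # best coverage per cost
                xh[j] = 1.0
            best_cost = min(best_cost, c @ xh)
            g = 1.0 - A @ x                   # subgradient of the dual function
            u = np.maximum(0.0, u + g / t)    # diminishing step, projected onto u >= 0
        return best_lb, best_cost

    A = (np.random.rand(30, 60) < 0.15).astype(float)  # random coverage matrix
    A[:, 0] = 1.0                                      # guarantee feasibility
    c = np.random.randint(1, 10, size=60).astype(float)
    print(lagrangian_set_cover(A, c))

The difference between the two returned values is an upper estimate of the duality gap that Paper II dissects into near-optimality and near-complementarity terms.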
APA, Harvard, Vancouver, ISO, and other styles
4

Hersh, Patricia (Patricia Lynn) 1973. "Decomposition and enumeration in partially ordered sets." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/85303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Samuelsson, Saga. "The Singular Value Decomposition Theorem." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-150917.

Full text
Abstract:
This essay will present a self-contained exposition of the singular value decomposition theorem for linear transformations. An immediate consequence is the singular value decomposition for complex matrices.
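For reference, the matrix form of the theorem reads as follows (standard statement, not quoted from the essay): for every $A \in \mathbb{C}^{m \times n}$ there exist unitary matrices $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$ and a rectangular diagonal matrix $\Sigma \in \mathbb{R}^{m \times n}$ with entries $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_{\min(m,n)} \ge 0$ such that
\[ A = U\,\Sigma\,V^{*}. \]
The version for a linear transformation between finite-dimensional inner product spaces follows by choosing orthonormal bases in which the matrix of the transformation is $\Sigma$.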
APA, Harvard, Vancouver, ISO, and other styles
6

Simeone, Daniel. "Network connectivity: a tree decomposition approach." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18797.

Full text
Abstract:
We show that the gap between the least costly 3-edge-connected metric graph and the least costly 3-vertex-connected metric graph is at most 3. The approach relies upon tree decompositions, and a degree-limiting theorem of Bienstock et al. As well, we explore the tree decomposition approach for general k-edge- and vertex-connected graphs, and demonstrate a large amount of the required background theory.
We show that the gap between a minimum-cost 3-edge-connected metric graph and a minimum-cost 3-vertex-connected metric graph is at most 3. Our approach relies on the existence of tree decompositions and on a theorem of Bienstock et al. that bounds vertex degrees. In addition, we explore tree decomposition for the more general case of k-edge- and vertex-connected graphs and present much of the background needed for our work.
APA, Harvard, Vancouver, ISO, and other styles
7

Riaz, Samia. "Domain decomposition method for variational inequalities." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4815/.

Full text
Abstract:
Variational inequalities have found many applications in applied science. A partial list includes obstacle problems, fluid flow in porous media, management science, traffic networks, and financial equilibrium problems. However, solving variational inequalities remains a challenging task as they are often subject to some set of complex constraints, for example the obstacle problem. Domain decomposition methods provide great flexibility to handle these types of problems. In our thesis we consider a general variational inequality, its finite element formulation and its equivalence with linear and quadratic programming. We will then present a non-overlapping domain decomposition formulation for variational inequalities. In our formulation, the original problem is reformulated into two subproblems such that the first problem is a variational inequality in the subdomain Ω^i and the other is a variational equality in the complementary subdomain Ω^e. This new formulation reduces the computational cost as the variational inequality is solved on a smaller region. However, one of the main challenges here is to obtain the global solution of the problem, which is to be coupled through an interface problem. Finally, we validate our method on a two-dimensional obstacle problem using quadratic programming.
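The obstacle problem mentioned above can be illustrated with a very small projected Gauss-Seidel iteration in one dimension (a generic sketch with made-up forcing and obstacle data, not the non-overlapping formulation developed in the thesis):

    import numpy as np

    n = 100
    h = 1.0 / (n + 1)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]      # interior grid points
    f = -8.0 * np.ones(n)                       # forcing term for -u'' = f
    psi = 0.2 - 4.0 * (x - 0.5) ** 2            # obstacle, u must satisfy u >= psi
    u = np.maximum(psi, 0.0)                    # feasible start, u = 0 on the boundary

    for sweep in range(500):                    # projected Gauss-Seidel sweeps
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (left + right + h * h * f[i])  # unconstrained update for -u'' = f
            u[i] = max(psi[i], gs)              # project onto the constraint u >= psi

    print("grid points in contact with the obstacle:", int(np.sum(np.isclose(u, psi))))

The projection step is what distinguishes the variational inequality from an ordinary linear solve: in the contact region the constraint is active and the equation holds only as an inequality.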
APA, Harvard, Vancouver, ISO, and other styles
8

Korey, Michael Brian. "A decomposition of functions with vanishing mean oscillation." Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2008/2592/.

Full text
Abstract:
A function has vanishing mean oscillation (VMO) on R^n if its mean oscillation - the local average of its pointwise deviation from its mean value - both is uniformly bounded over all cubes within R^n and converges to zero with the volume of the cube. The more restrictive class of functions with vanishing lower oscillation (VLO) arises when the mean value is replaced by the minimum value in this definition. It is shown here that each VMO function is the difference of two functions in VLO.
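As a reminder of the definitions involved (standard definitions, not quoted from the thesis): the mean oscillation of a locally integrable function $f$ over a cube $Q \subset \mathbb{R}^n$ is
\[ \frac{1}{|Q|} \int_Q |f(x) - f_Q|\,dx, \qquad f_Q = \frac{1}{|Q|} \int_Q f(y)\,dy, \]
and $f \in \mathrm{VMO}$ when this quantity is uniformly bounded over all cubes and tends to zero with $|Q|$. Replacing the mean $f_Q$ by the minimum of $f$ over $Q$ gives the vanishing lower oscillation class VLO, and the theorem above writes every VMO function as a difference of two VLO functions.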
APA, Harvard, Vancouver, ISO, and other styles
9

Wilson, Michelle Marie Lucy. "A survey of primary decomposition using Gröbner bases." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jung, Kyomin. "Approximate inference : decomposition methods with applications to networks." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/50595.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2009.
Includes bibliographical references (p. 147-151).
The Markov random field (MRF) model provides an elegant probabilistic framework for formulating inter-dependency between a large number of random variables. In this thesis, we present a new approximation algorithm for computing the Maximum a Posteriori (MAP) assignment and the log-partition function for an arbitrary positive pair-wise MRF defined on a graph G. Our algorithm is based on decomposing G into appropriately chosen small components, computing estimates locally in each of these components, and then producing a good global solution. We show that if G either excludes some finite-sized graph as its minor (e.g., a planar graph) and has a constant degree bound, or is a polynomially growing graph, then our algorithm produces solutions for both questions within arbitrary accuracy. The running time of the algorithm is linear in the number of nodes in G, with a constant dependent on the accuracy. We apply our algorithm for MAP computation to the problem of learning the capacity region of wireless networks. We consider wireless networks of nodes placed in some geographic area in an arbitrary manner under interference constraints. We propose a polynomial-time approximate algorithm to determine whether a given vector of end-to-end rates between various source-destination pairs can be supported by the network through a combination of routing and scheduling decisions. Lastly, we investigate the problem of computing loss probabilities of routes in a stochastic loss network, which is equivalent to computing the partition function of the corresponding MRF for the exact stationary distribution.
We show that the very popular Erlang approximation provides relatively poor performance estimates, especially for loss networks in the critically loaded regime. We then propose a novel algorithm for estimating the stationary loss probabilities, which is shown to always converge, exponentially fast, to the asymptotically exact results.
by Kyomin Jung.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Oyinsan, Sola. "Primary decomposition of ideals in a ring." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3289.

Full text
Abstract:
The concept of unique factorization was first recognized in the 1840s, but even then it was still widely believed to hold automatically. The error of this assumption was exposed largely through attempts to prove the last theorem of Pierre de Fermat (1601-1665). Once mathematicians discovered that this property did not always hold, it was only natural for them to search for the strongest available alternative. Thus began the attempt to generalize unique factorization. Using the ascending chain condition on principal ideals, we will show the conditions under which a ring is a unique factorization domain.
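A standard small example of the phenomenon being generalized (a textbook example, not drawn from the thesis): in the polynomial ring $k[x, y]$,
\[ (x^2, xy) = (x) \cap (x, y)^2, \]
a primary decomposition of the ideal $(x^2, xy)$ with associated primes $(x)$ and $(x, y)$. The $(x, y)$-primary component is not unique (for instance $(x^2, y)$ serves equally well), while the set of associated primes is.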
APA, Harvard, Vancouver, ISO, and other styles
12

Gu, Fangqing. "Many objective optimization: objective reduction and weight design." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/315.

Full text
Abstract:
Many-objective optimization problems (MaOPs), in which the number of objectives is greater than three, are common in various applications and have drawn many scholars' attention. Evolutionary multiobjective optimization (EMO) algorithms have been successfully applied to solve bi- and tri-objective optimization problems. However, MaOPs are more challenging, and the performance of most existing classical EMO algorithms generally deteriorates as the number of objectives grows. Thus, this thesis presents a weight design method to modify classical decomposition-based EMO algorithms for solving MaOPs, and a novel objective extraction method to transform the MaOP into a problem with few objectives.

Additionally, performance metrics play an important role in understanding the strengths and weaknesses of an algorithm. To the best of our knowledge, there is no direct performance metric for objective reduction algorithms: their performance can only be evaluated indirectly through metrics, such as the IGD-metric and H-metric, of the solutions obtained by an EMO algorithm equipped with the objective reduction method. This thesis presents a direct performance metric featuring the simplicity and usability of the objective reduction algorithms. Meanwhile, we propose a novel framework for many-objective test problems, which features both simple and complicated Pareto set shapes, and is scalable in terms of the numbers of objectives and essential objectives. We can also control the importance of the essential objectives.

As some MaOPs may have redundant or correlated objectives, it is desirable to reduce the number of objectives in such circumstances. However, the Pareto solution of the reduced problem obtained by most existing objective reduction methods may not be a Pareto solution of the original MaOP. Thus, this thesis proposes an objective extraction method for MaOPs. It formulates the reduced objective as a linear combination of the original objectives to maximize the conflict between the reduced objectives. Subsequently, the Pareto solution of the reduced problem obtained by the proposed algorithm is a Pareto solution of the original MaOP, and the proposed algorithm preserves the non-dominance relation as much as possible. We compare the proposed objective extraction method with three objective reduction methods, i.e., REDGA, L-PCA and NL-MVU-PCA. The numerical studies show the effectiveness and robustness of the proposed approach.

Decomposition-based EMO algorithms, e.g. MOEA/D and M2M, have demonstrated their effectiveness in dealing with MaOPs. Nevertheless, these algorithms need to design the weight vectors, which has a significant effect on their performance. In particular, when the Pareto front of the problem is incomplete, these algorithms cannot obtain a set of uniform solutions using conventional weight design methods. Not only can a self-organizing map (SOM) preserve the topological properties of the input data by using the neighborhood function, but its display is also more uniform than the probability density of the input data. This property is advantageous for generating a set of uniform weight vectors based on the distribution of the individuals. Therefore, we propose a novel weight design method based on SOM, which can be integrated with most decomposition-based EMO algorithms. In this thesis, we choose the existing M2M algorithm as an example for such integration.
This integrated algorithm is then compared with the original M2M and two state-of-the-art algorithms, i.e. MOEA/D and NSGA-II on eleven redundancy problems and eight non-redundancy problems. The experimental results show the effectiveness of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
13

Kammogne, Kamgaing Rodrigue. "Domain decomposition methods for reaction-diffusion systems." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/4599/.

Full text
Abstract:
Domain Decomposition (DD) methods have been successfully used to solve elliptic problems, as they deal with them in a more elegant and efficient way than other existing numerical methods. This is achieved through the division of the domain into subdomains, followed by the solving of smaller problems within these subdomains which leads to the solution. Furthermore DD-techniques can incorporate in their implementation not only the physics of the different phenomena associated with the modeling, but also the enhancement of parallel computing. They can be divided into two major categories: with and without overlapping. The most important factor in both cases is the ability to solve the interface problem referred to as the Steklov-Poincaré problem. There are two existing approaches to solving the interface problem. The first approach consists of approximating the interface problem by solving a sequence of subproblems within the subdomains, while the second approach aims to tackle the interface problem directly. The solution method presented in this thesis falls into the latter category. This thesis presents a non-overlapping domain decomposition (DD) method for solving reaction-diffusion systems. This approach addresses the problem directly on the interface which allows for the presentation and analysis of a new type of interface preconditioner for the arising Schur complement problem. This thesis will demonstrate that the new interface preconditioner leads to a solution technique independent of the mesh parameter. More precisely, the technique, when used effectively, exploits the fact that the Steklov-Poincaré operators arising from a non-overlapping DD-algorithm are coercive and continuous, with respect to Sobolev norms of index 1/2, in order to derive a convergence analysis for a DD-preconditioned GMRES algorithm. This technique is the first of its kind that presents a class of substructuring methods for solving reaction diffusion systems and analyzes their behaviour using fractional Sobolev norms.
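For context, the interface (Schur complement) problem referred to above takes the following generic algebraic form (a standard formulation, not quoted from the thesis): ordering the unknowns into interior (I) and interface (Γ) blocks,
\[
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
\begin{pmatrix} u_I \\ u_\Gamma \end{pmatrix}
=
\begin{pmatrix} f_I \\ f_\Gamma \end{pmatrix}
\quad\Longrightarrow\quad
S\,u_\Gamma = f_\Gamma - A_{\Gamma I}A_{II}^{-1} f_I,
\qquad
S = A_{\Gamma\Gamma} - A_{\Gamma I}A_{II}^{-1}A_{I\Gamma},
\]
and substructuring methods of the kind discussed here precondition this discrete Steklov-Poincaré operator $S$ rather than forming it explicitly.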
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Weigang. "Atomic decomposition of H1 spaces and exponential square classes." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41682.

Full text
Abstract:
Let $u_0$ be the harmonic extension of the function $f$ (defined on $\mathbb{R}^n$) to $\mathbb{R}_+^{n+1}$. It is well known that $f \in H^p(\mathbb{R}^n)$, $0 < p$ […] constants $C' > 0$, $C'' < \infty$, which depend only on $\alpha$ and the dimension $n$. We also get a new proof for an L-harmonic version.
APA, Harvard, Vancouver, ISO, and other styles
15

Loisel, Sébastien. "Optimal and optimized domain decomposition methods on the sphere." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85572.

Full text
Abstract:
The numerical solution of partial differential equations and boundary value problems is one of the most important tools of modern science. For various reasons (parallelizing, improving condition numbers, finding good preconditioners, etc.) it is desirable to turn a boundary value problem over a large domain O into a set of boundary value problems over domains O1, ..., On such that ∪k Ok = O; this is the domain decomposition method. The solutions u1, ..., un of the local problems rarely glue together into a solution u of the global problem, hence we must use an iteration whereby we repeatedly solve the local problems. Between each iteration, some information is exchanged between the subdomains, so that the local solutions at the next iteration better approximate the global solution. The method of Schwarz exchanges Dirichlet data along subdomain boundaries, but other methods exist. We recall a construction of nonlocal operators that lead to iterations that converge in 2d + 1 steps, where d is the diameter of the connectivity graph of the domain decomposition, if this graph is a tree. We discuss a graph algorithm linked to these operators in the general case. For the Laplacian on the sphere, we also give local approximations to these optimal nonlocal operators. We also discuss the application of this approach to solving the shallow water equations on the sphere as a model for numerical weather prediction.
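A minimal sketch, with made-up data, of the classical alternating Schwarz iteration for -u'' = f on (0, 1) with two overlapping subdomains; the thesis concerns optimal and optimized (nonlocal) transmission operators on the sphere, which this toy example does not attempt to reproduce:

    import numpy as np

    def solve_dirichlet(f, a, b, ua, ub, n):
        """Finite-difference solve of -u'' = f on (a, b) with u(a)=ua, u(b)=ub."""
        h = (b - a) / (n + 1)
        x = np.linspace(a, b, n + 2)
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        rhs = h**2 * f(x[1:-1])
        rhs[0] += ua
        rhs[-1] += ub
        u = np.empty(n + 2)
        u[0], u[-1] = ua, ub
        u[1:-1] = np.linalg.solve(A, rhs)
        return x, u

    f = lambda x: np.ones_like(x)        # -u'' = 1 with u(0) = u(1) = 0
    alpha, beta = 0.4, 0.6               # interfaces of the overlapping subdomains
    g1, g2 = 0.0, 0.0                    # Dirichlet interface data exchanged each sweep
    for it in range(20):
        x1, u1 = solve_dirichlet(f, 0.0, beta, 0.0, g2, 40)   # subdomain (0, beta)
        g1 = np.interp(alpha, x1, u1)                         # trace at x = alpha
        x2, u2 = solve_dirichlet(f, alpha, 1.0, g1, 0.0, 40)  # subdomain (alpha, 1)
        g2 = np.interp(beta, x2, u2)                          # trace at x = beta

    exact = lambda x: 0.5 * x * (1.0 - x)
    print("error at x = 0.5:", abs(np.interp(0.5, x2, u2) - exact(0.5)))

Replacing the Dirichlet traces exchanged here by data involving the nonlocal operators recalled in the abstract is what yields convergence in a fixed, small number of steps.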
APA, Harvard, Vancouver, ISO, and other styles
16

Li, Zhi Xiong. "A revision of adaptive Fourier decomposition." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2590642.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kardamis, Joseph R. "Audio watermarking techniques using singular value decomposition /." Online version of thesis, 2007. http://hdl.handle.net/1850/4493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chapovalova, Valentina. "Decomposition of Certain C[Sn]-modules into Specht Modules." Thesis, Uppsala universitet, Matematiska institutionen, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ortiz, Marcos A. "Convex decomposition techniques applied to handlebodies." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1713.

Full text
Abstract:
Contact structures on 3-manifolds are 2-plane fields satisfying a set of conditions. The study of contact structures can be traced back over two hundred years, and has been of interest to mathematicians such as Hamilton, Jacobi, Cartan, and Darboux. In the late 1900s, the study of these structures gained momentum as the work of Eliashberg and Bennequin described subtleties in these structures that could be used to find new invariants. In particular, it was discovered that contact structures fall into two classes: tight and overtwisted. While overtwisted contact structures are relatively well understood, tight contact structures remain an area of active research. One area of active study, in particular, is the classification of tight contact structures on 3-manifolds. This began with Eliashberg, who showed that the standard contact structure in real three-dimensional space is unique, and it has been expanded on since. Some major advancements and new techniques were introduced by Kanda, Honda, Etnyre, Kazez, Matić, and others. Convex decomposition theory was one product of these explorations. This technique involves cutting a manifold along convex surfaces (i.e. surfaces arranged in a particular way in relation to the contact structure) and investigating a particular set on these cutting surfaces to say something about the original contact structure. In the cases where the cutting surfaces are fairly nice, in some sense, Honda established a correspondence between information on the cutting surfaces and the tight contact structures supported by the original manifold. In this thesis, convex surface theory is applied to the case of handlebodies with a restricted class of dividing sets. For some cases, classification is achieved, and for others, some interesting patterns arise and are investigated.
APA, Harvard, Vancouver, ISO, and other styles
20

Shankar, Jayashree. "Analysis of a nonhierarchical decomposition algorithm." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-09192009-040336/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cho, Young Jin. "Effects of decomposition level on the intrarater reliability of multiattribute alternative evaluation." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171537/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Niyobuhungiro, Japhet. "Optimal Decomposition in Real Interpolation and Duality in Convex Analysis." Licentiate thesis, Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94506.

Full text
Abstract:
This thesis is devoted to the study of mathematical properties of exact minimizers for the K-, L-, and E-functionals of the theory of real interpolation. Recently, exact minimizers for these functionals have appeared in important results in image processing. In the thesis, we present a geometry of optimal decomposition for the L-functional for the couple (ℓ2, X), where the space ℓ2 is defined by the standard Euclidean norm ‖·‖2 and where X is a Banach space on Rn. The well-known ROF denoising model is a special case of an L-functional for the couple (L2, BV), where L2 and BV stand for the space of square integrable functions and the space of functions with bounded variation on a rectangular domain, respectively. We provide simple proofs and a geometrical interpretation of optimal decomposition by following ideas of Yves Meyer, who has used a duality approach to characterize optimal decomposition for the ROF denoising model. The operation of infimal convolution is a very important and non-trivial tool in functional analysis and is also very well known within the context of convex analysis. The L-, K- and E-functionals can be regarded as an infimal convolution of two well-defined functions, but unfortunately tools from convex analysis cannot be applied in a straightforward way in this context of couples of spaces. We have considered infimal convolution on Banach couples and, by using a theorem due to Attouch and Brezis, we have established sufficient conditions for an infimal convolution on a given Banach couple to be subdifferentiable, which turns out to be the most important requirement that an infimal convolution would satisfy for a decomposition to be optimal. We have also provided a lemma, which we have named the Key Lemma, that characterizes optimal decomposition for an infimal convolution in general. The main results concerning mathematical properties of optimal decomposition for the L-, K- and E-functionals for the case of general regular Banach couples are presented. We use a duality approach which can be summarized in three steps: first we consider the functional concerned as an infimal convolution and reformulate it as a minimization of a sum of two specific functions on the intersection of the couple; then we prove that it is subdifferentiable; and finally we use the characterization of its optimal decomposition. We have also investigated how powerful our approach is by applying it to two well-known optimization problems, namely convex and linear programming. As a result we have obtained new proofs for duality theorems which are central for these problems.
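For orientation, the K- and L-functionals referred to above have the following standard form in real interpolation (the exact normalisation and exponents used in the thesis may differ): for a Banach couple $(X_0, X_1)$, an element $x \in X_0 + X_1$ and $t > 0$,
\[
K(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \bigl( \|x_0\|_{X_0} + t\,\|x_1\|_{X_1} \bigr),
\qquad
L_{p_0, p_1}(t, x; X_0, X_1) = \inf_{x = x_0 + x_1} \bigl( \|x_0\|_{X_0}^{p_0} + t\,\|x_1\|_{X_1}^{p_1} \bigr).
\]
Taking the couple $(L^2, BV)$ with $p_0 = 2$ and $p_1 = 1$ recovers the ROF denoising functional $\|f - u\|_{L^2}^2 + t\,\|u\|_{BV}$, whose minimizer gives an exact optimal decomposition in the sense studied here.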
APA, Harvard, Vancouver, ISO, and other styles
23

Azzato, Jeffrey Donald. "Linked tree-decompositions of infinite represented matroids : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Master of Science in Mathematics /." ResearchArchive@Victoria e-Thesis, 2008. http://hdl.handle.net/10063/322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Williams, Adrian Leonard. "Some more decomposition numbers for modular representations of symmetric groups." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Turner, James Anthony. "Application of domain decomposition methods to problems in topology optimisation." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5842/.

Full text
Abstract:
Optimal layouts of structures can be seen in everyday life, from nature to industry, and research into their determination dates back to the eighteenth century. The focus of this thesis is an investigation into the relatively modern field of topology optimisation, where the aim is to determine both the optimal shape and the topology of structures. However, the inherent large-scale nature means that even problems defined using a relatively coarse finite element discretisation can be computationally demanding. This thesis aims to describe alternative approaches allowing for the practical use of topology optimisation on a large scale. Commonly used solution methods will be compared and scrutinised, with observations used in the application of a novel substructuring domain decomposition method for the subsequent large-scale linear systems. Numerical and analytical investigations involving the governing equations of linear elasticity will lead to the development of three different algorithms for compliance minimisation problems in topology optimisation. Each algorithm will involve an appropriate preconditioning strategy incorporating a matrix representation of a discrete interpolation norm, with numerical results indicating mesh-independent performance.
APA, Harvard, Vancouver, ISO, and other styles
26

Ahiati, Veroncia Sitsofe. "Cardinal spline wavelet decomposition based on quasi-interpolation and local projection." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/2580.

Full text
Abstract:
Thesis (MSc (Mathematics))--University of Stellenbosch, 2009.
Wavelet decomposition techniques have grown over the last two decades into a powerful tool in signal analysis. Similarly, spline functions have enjoyed a sustained high popularity in the approximation of data. In this thesis, we study the cardinal B-spline wavelet construction procedure based on quasi-interpolation and local linear projection, before specialising to the cubic B-spline on a bounded interval. First, we present some fundamental results on cardinal B-splines, which are piecewise polynomials with uniformly spaced breakpoints at the dyadic points Z/2^r, for r ∈ Z. We start our wavelet decomposition method with a quasi-interpolation operator Q_{m,r} mapping, for every integer r, real-valued functions on R into S^r_m, where S^r_m is the space of cardinal splines of order m, such that the polynomial reproduction property Q_{m,r} p = p, p ∈ π_{m−1}, r ∈ Z, is satisfied. We then give the explicit construction of Q_{m,r}. We next introduce, in Chapter 3, a local linear projection operator sequence {P_{m,r} : r ∈ Z}, with P_{m,r} : S^{r+1}_m → S^r_m, r ∈ Z, in terms of a Laurent polynomial solution of minimal length of a certain Bezout identity based on the refinement mask symbol A_m, which we give explicitly. With such a linear projection operator sequence, we define, in Chapter 4, the error space sequence W^r_m = {f − P_{m,r} f : f ∈ S^{r+1}_m}. We then show, by solving a certain Bezout identity, that there exists a finitely supported function ψ_m ∈ S^1_m such that, for every r ∈ Z, the integer shift sequence {ψ_m(2·−j)} spans the linear space W^r_m. According to our definition, we then call ψ_m the mth order cardinal B-spline wavelet. The wavelet decomposition algorithm based on the quasi-interpolation operator Q_{m,r}, the local linear projection operator P_{m,r}, and the wavelet ψ_m, is then based on finite sequences, and is shown to possess, for a given signal f, the essential property of yielding relatively small wavelet coefficients in regions where the support interval of ψ_m(2^r·−j) overlaps with a C^m-smooth region of f. Finally, in Chapter 5, we explicitly construct minimally supported cubic B-spline wavelets on a bounded interval [0, n]. We also develop a corresponding explicit decomposition algorithm for a signal f on a bounded interval. Throughout Chapters 2 to 5, numerical examples are provided to graphically illustrate the theoretical results.
APA, Harvard, Vancouver, ISO, and other styles
27

Palansuriya, Charaka Jeewana. "Domain decomposition based algorithms for some inverse problems." Thesis, University of Greenwich, 2000. http://gala.gre.ac.uk/8226/.

Full text
Abstract:
The work presented in this thesis develops algorithms to solve inverse problems where source terms are unknown. The algorithms are developed on frameworks provided by domain decomposition methods, and the numerical schemes use finite volume and finite difference discretisations. Three algorithms are developed in the context of a metal cutting problem. The algorithms require measurement data within the physical body in order to retrieve the temperature field and the unknown source terms. It is shown that the algorithms can retrieve both the temperature field and the unknown source accurately. Applicability of the algorithms to other problems is shown by using one of the algorithms to solve a welding problem. The presence of untreated noisy measurement data can severely affect the accuracy of the retrieved source. It is illustrated that a simple noise treatment procedure such as a least squares method can remedy this situation. The algorithms are implemented on parallel computing platforms to reduce the execution time. By exploiting domain and data parallelism within the algorithms, significant performance improvements are achieved. It is also shown that by exploiting mathematical properties such as change of nonlinearity, further performance improvements can be made.
APA, Harvard, Vancouver, ISO, and other styles
28

Ozkan, Sibel. "Hamilton decompositions of graphs with primitive complements." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Spring%20Dissertations/OZKAN_SIBEL_27.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Chapovalova, Valentina. "Decomposition of Certain C[Sn]-modules into Specht Modules." Thesis, Uppsala University, Department of Mathematics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Stalvey, Harrison. "Weak Primary Decomposition of Modules Over a Commutative Ring." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/math_theses/84.

Full text
Abstract:
This paper presents the theory of weak primary decomposition of modules over a commutative ring. A generalization of the classic well-known theory of primary decomposition, weak primary decomposition is a consequence of the notions of weakly associated prime ideals and nearly nilpotent elements, which were introduced by N. Bourbaki. We begin by discussing basic facts about classic primary decomposition. Then we prove the results on weak primary decomposition, which are parallel to the classic case. Lastly, we define and generalize the Compatibility property of primary decomposition.
APA, Harvard, Vancouver, ISO, and other styles
31

Muiny, Somaya. "Primary Decomposition in Non Finitely Generated Modules." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/70.

Full text
Abstract:
In this paper, we study the primary decomposition of any proper submodule N of a module M over a Noetherian ring R. We start by briefly discussing basic facts about the well-known case where M is a finitely generated module over a Noetherian ring R, and then proceed to discuss the general case where M is any module over a Noetherian ring R. We put a lot of focus on the associated primes that occur with the primary decomposition, essentially studying their uniqueness and their relation to the associated primes of M/N.
APA, Harvard, Vancouver, ISO, and other styles
32

Tomczuk, Randal Wade. "Autocorrelation and decomposition methods in combinational logic design." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21952.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Culver, Chance. "Decompositions of the Complete Mixed Graph by Mixed Stars." Digital Commons @ East Tennessee State University, 2020. https://dc.etsu.edu/etd/3782.

Full text
Abstract:
In the study of mixed graphs, a common question is: what are the necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into isomorphic copies of a given mixed graph? Since the complete mixed graph has twice as many arcs as edges, an obvious necessary condition is that the isomorphic copies have twice as many arcs as edges. We will prove necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into mixed stars with two edges and four arcs. We also consider some special cases of decompositions of the complete mixed graph into partially oriented stars with twice as many arcs as edges. We employ difference methods in most of our constructions when showing sufficiency.
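To make the divisibility part of the necessary conditions explicit (a routine count, stated under the usual convention that the complete mixed graph $M_v$ on $v$ vertices has one edge and two opposite arcs between every pair of vertices):
\[ |E(M_v)| = \binom{v}{2}, \qquad |A(M_v)| = 2\binom{v}{2} = v(v-1), \]
so a decomposition into copies of a mixed star with 2 edges and 4 arcs requires $2 \mid \binom{v}{2}$, that is, $v \equiv 0$ or $1 \pmod{4}$.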
APA, Harvard, Vancouver, ISO, and other styles
34

Culver, Chance. "Decompositions of the Complete Mixed Graph by Mixed Stars." Digital Commons @ East Tennessee State University, 2008. https://dc.etsu.edu/etd/3782.

Full text
Abstract:
In the study of mixed graphs, a common question is: what are the necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into isomorphic copies of a given mixed graph? Since the complete mixed graph has twice as many arcs as edges, an obvious necessary condition is that the isomorphic copies have twice as many arcs as edges. We will prove necessary and sufficient conditions for the existence of a decomposition of the complete mixed graph into mixed stars with two edges and four arcs. We also consider some special cases of decompositions of the complete mixed graph into partially oriented stars with twice as many arcs as edges. We employ difference methods in most of our constructions when showing sufficiency.
APA, Harvard, Vancouver, ISO, and other styles
35

Jagadeesh, Vasudevamurthy. "On the testability-preserving decomposition and factorization of Boolean expressions." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74653.

Full text
Abstract:
This thesis presents a new concurrent method for the decomposition and factorization of Boolean expressions based on two simple objects: two-literal single-cube divisors, and double-cube divisors along with their complements. It is proved that the presence of common multiple-cube algebraic divisors, from a set of Boolean expressions, can be found by analyzing the set of double-cube divisors. It is also shown that in order to find the duality relations that may exist between various objects, only a subset of two-literal single-cube and double-cube divisors needs to be analyzed. Since the number of these objects grows polynomially with the size of the network, the number of objects that are to be analyzed for finding common algebraic divisors, and for finding the duality relations between them, is much less than the set of all algebraic divisors. Also, since the duality relations between these objects are exploited along with DeMorgan's laws, these objects constitute a richer set of divisors than the strictly algebraic divisors.
It is also proved that the transformations based on these simple objects preserve testability. This result implies that if the input Boolean network before decomposition and factorization is 100% testable for single stuck-at faults by a test set T, then the area optimized output network will also be 100% testable for single stuck-at faults, and can be tested by the same test set T. These results are proved using the concepts of corresponding faults in the circuits and relations between complete test sets. Since the method assumes that the initial network is only single stuck-at fault testable, and because single stuck-at fault testability is maintained through the transformations, the method can be applied to a large class of irredundant two-level and multi-level circuits to synthesize fully testable circuits.
Experimental results are presented and compared with various logic synthesis systems to demonstrate the efficiency and effectiveness of the method.
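A toy illustration of the objects named above (an invented example, not one from the thesis): for the expression
\[ f = abe + ace = ae\,(b + c), \]
the two cubes $abe$ and $ace$ share the common cube $ae$, a two-literal single-cube divisor, and stripping it from the pair leaves the double-cube divisor $b + c$; recognising such divisors (and their complements) across several expressions is what exposes the common factors used during decomposition and factorization.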
APA, Harvard, Vancouver, ISO, and other styles
36

Siahaan, Antony. "Defect correction based domain decomposition methods for some nonlinear problems." Thesis, University of Greenwich, 2011. http://gala.gre.ac.uk/7144/.

Full text
Abstract:
Defect correction schemes, as a class of nonoverlapping domain decomposition methods, offer several advantages in the way they split a complex problem into several subdomain problems with less complexity. The schemes need a nonlinear solver to take care of the residual at the interface. The adaptive-α solver can converge locally in the ∞-norm, where the sufficient condition requires a relatively small local neighbourhood and the problem must have a strongly diagonally dominant Jacobian matrix with a very small condition number. Yet its advantage can be of high significance in terms of computational cost, as it simply needs a scalar as the approximation of the Jacobian matrix. Other nonlinear solvers employed for the schemes are a Newton-GMRES method, a Newton method with a finite difference Jacobian approximation, and nonlinear conjugate gradient solvers with Fletcher-Reeves and Polak-Ribière searching direction formulas. The schemes are applied to three nonlinear problems. The first problem is heat conduction in a multichip module, where the domain is assembled from many components of different conductivities and physical sizes. Here the implementations of the schemes satisfy the component meshing and gluing concept. A finite difference approximation of the residual of the governing equation turns out to be a better defect equation than the equality of normal derivatives. Of all the nonlinear solvers implemented in the defect correction scheme, the nonlinear conjugate gradient method with Fletcher-Reeves searching direction has the best performance. The second problem is a 2D single-phase fluid flow with heat transfer, where the PHOENICS CFD code is used to run the subdomain computation. The Newton method with a finite difference Jacobian is a reasonable interface solver in coupling these subdomain computations. The final problem is multiphase heat and moisture transfer in a porous textile. The PHOENICS code is also used to solve the system of partial differential equations governing the multiphase process in each subdomain, while the coupling of the subdomain solutions is handled by the defect correction schemes through some FORTRAN code. A scheme using a modified-α method fails to obtain decent solutions in both the single- and two-layer cases. On the other hand, the scheme using the above Newton method produces satisfying results for both cases, where it can lead initially distant interface data to a good convergent solution. However, it is found that, in general, the number of nonlinear iterations of the defect correction schemes increases with mesh refinement.
APA, Harvard, Vancouver, ISO, and other styles
37

Toal, David J. J. "Proper orthogonal decomposition & kriging strategies for design." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72023/.

Full text
Abstract:
The proliferation of surrogate modelling techniques has facilitated the application of expensive, high-fidelity simulations within design optimisation. Requiring considerably fewer function evaluations than direct global optimisation techniques, such as genetic algorithms, surrogate models attempt to construct a surrogate of an objective function from an initial sampling of the design space. These surrogates can then be explored and updated in regions of interest. Kriging is a particularly popular method of constructing a surrogate model due to its ability to accurately represent complicated responses whilst providing an error estimate of the predictor. However, it can be prohibitively expensive to construct a kriging model in high dimensions with a large number of sample points, due to the cost associated with the maximum likelihood optimisation. The following thesis aims to address this by reducing the total likelihood optimisation cost through the application of an adjoint of the likelihood function within a hybridised optimisation algorithm and the development of a novel optimisation strategy employing a reparameterisation of the original design problem through proper orthogonal decomposition.
APA, Harvard, Vancouver, ISO, and other styles
38

Pfister, Noah. "Using Empirical Mode Decomposition to Study Periodicity and Trends in Extreme Precipitation." ScholarWorks @ UVM, 2015. http://scholarworks.uvm.edu/graddis/366.

Full text
Abstract:
Classically, we look at annual maximum precipitation series from the perspective of extreme value statistics, which provides a useful statistical distribution but does not allow much flexibility in the context of climate change. Such distributions are usually assumed to be static, or else require some assumed information about possible trends within the data. For this study, we treat the maximum rainfall series as sums of underlying signals, upon which we perform a decomposition technique, Empirical Mode Decomposition. This not only allows the study of non-linear trends in the data, but could also give us some idea of the periodic forces that have an effect on our series. To this end, data were taken from stations in the New England area, from different climatological regions, with the hope of seeing temporal and spatial effects of climate change. Although results vary among the chosen stations, they show some weak signals, and in many cases a trend-like residual function is determined.
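A bare-bones illustration of the decomposition idea (a single sifting step of empirical mode decomposition written from the standard description; the signal is synthetic and this does not reproduce the thesis' actual processing pipeline or stopping criteria):

    import numpy as np
    from scipy.signal import argrelextrema
    from scipy.interpolate import CubicSpline

    def sift_once(t, x):
        """One EMD sifting step: subtract the mean of the upper and lower
        envelopes (cubic splines through the local maxima / minima)."""
        maxima = argrelextrema(x, np.greater)[0]
        minima = argrelextrema(x, np.less)[0]
        upper = CubicSpline(t[maxima], x[maxima])(t)
        lower = CubicSpline(t[minima], x[minima])(t)
        return x - 0.5 * (upper + lower)

    t = np.linspace(0.0, 1.0, 2000)
    x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)  # slow + fast component
    h = x.copy()
    for _ in range(10):            # repeated sifting isolates the fastest oscillation (an IMF)
        h = sift_once(t, h)
    residual = x - h               # the residual is fed to the next round of sifting
    print("std of extracted component:", h.std())

Repeating the whole procedure on the residual produces the sequence of intrinsic mode functions, and the final residual is the non-linear trend that the study examines.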
APA, Harvard, Vancouver, ISO, and other styles
39

Ho, Io Tong. "Experiments in relation to adaptive decomposition of signals into mono-components." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1943004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lawlor, Matthew. "Tensor Decomposition by Modified BCM Neurons Finds Mixture Means Through Input Triplets." Thesis, Yale University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3580742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Garay, Jose. "Asynchronous Optimized Schwarz Methods for Partial Differential Equations in Rectangular Domains." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/510451.

Full text
Abstract:
Mathematics
Ph.D.
Asynchronous iterative algorithms are parallel iterative algorithms in which communications and iterations are not synchronized among processors. Thus, as soon as a processing unit finishes its own calculations, it starts the next cycle with the latest data received during a previous cycle, without waiting for any other processing unit to complete its own calculation. These algorithms increase the number of updates in some processors (as compared to the synchronous case) but suppress most idle times. This usually results in a reduction of the (execution) time to achieve convergence. Optimized Schwarz methods (OSM) are domain decomposition methods in which the transmission conditions between subdomains contain operators of the form $\partial/\partial \nu + \Lambda$, where $\partial/\partial \nu$ is the outward normal derivative and $\Lambda$ is an optimized local approximation of the global Steklov-Poincaré operator. There is more than one family of transmission conditions that can be used for a given partial differential equation (e.g., the $OO0$ and $OO2$ families), each of these families containing a particular approximation of the Steklov-Poincaré operator. These transmission conditions have some parameters that are tuned to obtain a fast convergence rate. Optimized Schwarz methods are fast in terms of iteration count and can be implemented asynchronously. In this thesis we analyze the convergence behavior of the synchronous and asynchronous implementation of OSM applied to solve partial differential equations with a shifted Laplacian operator in bounded rectangular domains. We analyze two cases. In the first case we have a shift that can be either positive, negative or zero, a one-way domain decomposition and transmission conditions of the $OO2$ family. In the second case we have Poisson's equation, a domain decomposition with cross-points and $OO0$ transmission conditions. In both cases we reformulate the equations defining the problem into a fixed point iteration that is suitable for our analysis, then derive convergence proofs and analyze how the convergence rate varies with the number of subdomains, the amount of overlap, and the values of the parameters introduced in the transmission conditions. Additionally, we find the optimal values of the parameters and present some numerical experiments for the second case illustrating our theoretical results. To our knowledge this is the first time that a convergence analysis of optimized Schwarz is presented for bounded subdomains with multiple subdomains and arbitrary overlap. The analysis presented in this thesis also applies to problems with more general domains which can be decomposed as a union of rectangles.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
42

Volzer, Joseph R. "An Invariant Embedding Approach to Domain Decomposition." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1396522159.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Yan Bo. "Adaptive decomposition of signals into mono-components." Thesis, University of Macau, 2010. http://umaclib3.umac.mo/record=b2489954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hogan, Ian. "The Brauer Complex and Decomposition Numbers of Symplectic Groups." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1489766963453771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Seater, Robert. "Minkowski sum decompositions of convex polygons." Diss., Connect to the thesis, 2002. http://hdl.handle.net/10066/1479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Wen, Mi. "An investigation on H∞ control in relation to adaptive decomposition of signal." Thesis, University of Macau, 2008. http://umaclib3.umac.mo/record=b1780627.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bilinski, Mark. "Approximating the circumference of 3-connected claw-free graphs." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26516.

Full text
Abstract:
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2009.
Committee Chair: Yu, Xingxing; Committee Member: Duke, Richard; Committee Member: Tetali, Prasad; Committee Member: Thomas, Robin; Committee Member: Vigoda, Eric. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
48

Eren, Levent. "Bearing damage detection via wavelet packet decomposition of stator current /." free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3074397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lewenczuk, Janice Gail. "Decomposition, Packings and Coverings of Complete Digraphs with a Transitive-Triple and a Pendant Arc." Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etd/2053.

Full text
Abstract:
In the study of design theory, there are eight orientations of the complete graph on three vertices with a pendant edge, K3∪{e}. Two of these are the 3-circuit with a pendant arc and the other six are transitive triples with a pendant arc. Necessary and sufficient conditions are given for decompositions, packings and coverings of the complete digraph with each of the six transitive triples with a pendant arc.
APA, Harvard, Vancouver, ISO, and other styles
50

Mathews, Chad Ullery William D. "Mixed groups with decomposition bases and global k-groups." Auburn, Ala., 2006. http://repo.lib.auburn.edu/2006%20Summer/Theses/MATHEWS_CHAD_59.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles