Dissertations / Theses on the topic 'Decomposition'

Consult the top 50 dissertations / theses for your research on the topic 'Decomposition.'


1

Sykes, Martin Lewis. "Metal carbonyl decomposition and carbon decomposition in the A.G.R." Thesis, University of Newcastle upon Tyne, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315635.

2

Ek, Christoffer. "Singular Value Decomposition." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-21481.

Abstract:
Digital information transmission is a growing field. Emails, videos and so on are transmitted around the world on a daily basis, and along with the growth of digital devices there is in some cases a great interest in keeping this information secure. In signal processing, a general concept is antenna transmission; the free space between an antenna transmitter and a receiver is an example of a system. In a rough environment, such as a room with reflections and independent electrical devices, there will be a lot of distortion in the system, and the transmitted signal might be distorted due to the system characteristics and noise. System identification is another well-known concept in signal processing. This thesis focuses on system identification of unknown systems in a rough environment. It introduces mathematical tools from linear algebra and applies them in signal processing. The thesis centres on a specific matrix factorization, the Singular Value Decomposition (SVD), which is used to solve complicated matrix inverses and to identify systems. The work was carried out in collaboration with Combitech AB, whose expertise in signal processing was of great help in putting the theory into practice. Using the programming environment LabView, the mathematical tools were synchronized with the instruments used to generate the systems and signals.
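As context for the method named in the title, here is a minimal numpy sketch of SVD-based identification of an unknown linear system from input/output data; the channel matrix, sample count and noise level are invented test values, not the Combitech setup described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "unknown system": outputs are a linear map of inputs plus noise.
H_true = rng.normal(size=(4, 3))                    # hypothetical 4x3 channel
X = rng.normal(size=(3, 200))                       # 200 excitation inputs
Y = H_true @ X + 0.01 * rng.normal(size=(4, 200))   # noisy measured outputs

# Identify H by minimizing ||Y - H X||_F via the SVD-based pseudoinverse.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
# Discard tiny singular values so the inverse stays well conditioned.
tol = max(X.shape) * np.finfo(float).eps * s.max()
s_inv = np.where(s > tol, 1.0 / s, 0.0)
X_pinv = Vt.T @ np.diag(s_inv) @ U.T                # Moore-Penrose pseudoinverse

H_est = Y @ X_pinv
print(np.allclose(H_est, H_true, atol=0.05))        # True: system recovered
```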
3

LeBlanc, Andrew Roland. "Engineering design decomposition." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/16044.

4

Schäfer, Mark. "Advanced STG decomposition." kostenfrei, 2008. http://d-nb.info/992317746/34.

5

Buth, Gerrit J. "Decomposition and primary production in salt marshes = Decompositie en primaire produktie in schorren /." [S.l. : s.n.], 1993. http://www.gbv.de/dms/bs/toc/131131834.pdf.

6

Westling, Lauren. "Underwater decomposition: an examination of factors surrounding freshwater decomposition in eastern Massachusetts." Thesis, Boston University, 2012. https://hdl.handle.net/2144/12670.

Abstract:
Thesis (M.S.)--Boston University
This study investigated the decomposition of three pig (Sus scrofa) carcasses in the same body of water under lentic and lotic conditions and at variable depths in a temperate mixed forest at the Outdoor Research Facility (ORF) in Holliston, Massachusetts, during the summer months of June and July. Data were collected on invertebrate activity, scavenger activity, water and ambient temperature, stages of body decomposition, and the rate of decomposition for each set of remains. Accumulated degree days (ADD) and total body scores (TBS) were used to derive two equations, differentiated by microhabitat, with the potential use of estimating the postmortem submergence interval (PMSI) in death investigations under similar conditions. The aquatic remains reached skeletonization in 45 days and the terrestrial control remains in 14. Terrestrial and aquatic invertebrate activity was extensive both above and below the waterline, with 42 families from 17 orders collected and identified. Through the use of motion-detector cameras, the researcher was able to view the activities performed around the remains by a blue heron, a coyote, a raccoon, multiple black vultures, multiple turkey vultures, multiple squirrels, and multiple American bullfrogs.
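The accumulated degree days (ADD) measure used in the study is just a running sum of mean daily temperatures above a base threshold. A small sketch of that bookkeeping (the temperatures and the 0 °C base are illustrative values, not the study's data):

```python
def accumulated_degree_days(daily_mean_temps, base=0.0):
    """Sum of mean daily temperatures above `base`, a standard proxy
    for the thermal energy driving decomposition."""
    return sum(max(t - base, 0.0) for t in daily_mean_temps)

# Hypothetical week of mean temperatures (deg C) at a deposition site.
temps = [18.2, 20.1, 22.5, 19.8, 17.0, 21.3, 23.4]
print(accumulated_degree_days(temps))  # 142.3
```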
7

Bülükbaşi, Güven. "Aspectual decomposition of transactions." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101839.

Abstract:
The AspectOPTIMA project aims to build an aspect-oriented framework that provides run-time support for transactions. The previously established decomposition of the ACID (atomicity, consistency, isolation, durability) properties into ten well-defined reusable aspects had one limitation: it didn't take into account the concerns of transaction life-time support, resulting in the creation of a cross-cutting concern among the aspects. This thesis removes the cross-cutting concern by integrating the transactional life cycle management issues such as determining the transaction boundaries, maintaining a well-defined state and managing the involvement of the participants. The integration process results in the creation of new aspects that serve as building blocks for various transactional models. The thesis also demonstrates how these base aspects can be configured and composed in different ways to design customized transaction models with different concurrency control and recovery strategies.
8

Le, Monnier Francis. "Seaweed decomposition in soil." Thesis, Imperial College London, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300081.

9

Hamilton, Daniel. "Decomposition and diet problems." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3798.

Abstract:
The purpose of this thesis is to efficiently solve real life problems. We study LPs. We study an NLP and an MINLP based on what is known as the generalised pooling problem (GPP), and we study an MIP that we call the cattle mating problem. These problems are often very large or otherwise difficult to solve by direct methods, and are best solved by decomposition methods. During the thesis we introduce algorithms that exploit the structure of the problems to decompose them. We are able to solve row-linked, column-linked and general LPs efficiently by modifying the tableau simplex method, and suggest how this work could be applied to the revised simplex method. We modify an existing sequential linear programming solver that is currently used by Format International to solve GPPs, and show the modified solver takes less time and is at least as likely to find the global minimum as the old solver. We solve multifactory versions of the GPP by augmented Lagrangian decomposition, and show this is more efficient than solving the problems directly. We introduce a decomposition algorithm to solve a MINLP version of the GPP by decomposing it into NLP and ILP subproblems. This is able to solve large problems that could not be solved directly. We introduce an efficient decomposition algorithm to solve the MIP cattle mating problem, which has been adopted for use by the Irish Cattle Breeding Federation. Most of the solve methods we introduce are designed only to find local minima. However, for the multifactory version of the GPP we introduce two methods that give a good chance of finding the global minimum, both of which succeed in finding the global minimum on test problems.
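For context, the classic diet problem alluded to in the title is a small linear program: choose food quantities that minimize cost subject to nutrient requirements. A toy instance with scipy (the foods, costs, and requirements are made up; the models treated in the thesis are far larger and are solved by decomposition rather than directly):

```python
import numpy as np
from scipy.optimize import linprog

# Two foods, two nutrients. Minimize cost c @ x subject to N @ x >= r, x >= 0.
c = np.array([2.0, 3.5])            # cost per unit of each food
N = np.array([[4.0, 2.0],           # nutrient content per unit of food
              [1.0, 3.0]])
r = np.array([8.0, 6.0])            # minimum nutrient requirements

# linprog handles <= constraints, so negate to express N @ x >= r.
res = linprog(c, A_ub=-N, b_ub=-r, bounds=[(0, None), (0, None)])
print(res.x, res.fun)               # optimal quantities [1.2, 1.6], cost 8.0
```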
10

Imbert, F. E. "Aspects of alkane decomposition." Thesis, Swansea University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637369.

11

Feldman, Jacob 1965. "Perceptual decomposition as inference." Thesis, Massachusetts Institute of Technology, 1990. http://hdl.handle.net/1721.1/13693.

12

Kwizera, Petero. "Matrix Singular Value Decomposition." UNF Digital Commons, 2010. http://digitalcommons.unf.edu/etd/381.

Abstract:
This thesis starts with the fundamentals of matrix theory and ends with applications of the matrix singular value decomposition (SVD). The background matrix theory coverage includes unitary and Hermitian matrices, and matrix norms and how they relate to the matrix SVD. The matrix condition number is discussed in relation to the solution of linear equations. Some inequalities based on the trace of a matrix, the polar matrix decomposition, unitaries and partial isometries are discussed. Among the SVD applications discussed are the method of least squares and image compression. Expansion of a matrix as a linear combination of rank-one partial isometries is applied to image compression by using reduced-rank matrix approximations to represent greyscale images. MATLAB results for approximations of JPEG and .bmp images are presented. The results indicate that images can be represented with reasonable resolution using low-rank matrix SVD approximations.
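The reduced-rank approximation described above keeps only the k largest singular triplets of the image matrix. A numpy sketch of the idea on a synthetic stand-in for a greyscale image (the thesis itself uses MATLAB and real JPEG/.bmp images):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128))            # stand-in for a greyscale image

U, s, Vt = np.linalg.svd(img, full_matrices=False)

k = 20                                   # keep the 20 largest singular values
img_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Eckart-Young: img_k is the best rank-k approximation in Frobenius norm.
err = np.linalg.norm(img - img_k) / np.linalg.norm(img)
print(f"relative error at rank {k}: {err:.3f}")
```

Storing U[:, :k], s[:k] and Vt[:k, :] costs k(m + n + 1) numbers instead of mn, which is where the compression comes from.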
13

Sripadham, Shankar B. "Semantic Decomposition By Covering." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/34335.

Abstract:
This thesis describes the implementation of a covering algorithm for the semantic decomposition of sentences of technical patents. This research complements the ASPIN project, which has the long-term goal of providing an automated system for digital system synthesis from patents. In order to develop a prototype of the system explained in a patent, a natural language processor (sentence interpreter) is required. Such systems typically attempt to interpret a sentence by syntactic analysis (parsing) followed by semantic analysis. Quite often, the technical narrative contains grammatical errors, incomplete sentences, anaphoric references and typographical errors that can cause the grammatical parse to fail. In such situations, an alternate method that uses a repository of pre-compiled, simple sentences (called frames) to analyze the sentences of the patent can be a useful backup. By semantically decomposing the sentences of patents into a set of frames whose meanings are fully understood, the meaning of the patent sentences can be interpreted. This thesis deals with the semantic decomposition of sentences using a branch-and-bound covering algorithm. The algorithm is implemented in C++, and a number of experiments were conducted to evaluate its performance. The algorithm is fast, flexible and can provide good coverage results (100% coverage for some sentences). The system covered 67.68% of the sentence tokens using 3459 frames in the repository, and 54.25% of the frames identified by the system in covers for sentences were found to be semantically correct. The experiments suggest that the performance of the system can be improved by increasing the number of frames in the repository.
Master of Science
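The covering step can be pictured as choosing a minimum-size family of frames whose tokens jointly cover the tokens of a sentence. A generic branch-and-bound sketch of such set covering (the tokens and frames are invented, and the thesis's actual C++ implementation and scoring are more elaborate):

```python
def branch_and_bound_cover(universe, frames):
    """Find a minimum-size family of frames covering `universe`.
    `frames` is a list of token sets.  Branch and bound: at each node
    either take or skip the next frame, pruning branches that cannot
    improve on the incumbent solution."""
    best_size = [len(frames) + 1]
    best_cover = [None]

    def recurse(i, chosen, covered):
        if covered >= universe:                       # all tokens covered
            if len(chosen) < best_size[0]:
                best_size[0], best_cover[0] = len(chosen), list(chosen)
            return
        # Bound: at least one more frame is needed, so prune if that
        # already matches or exceeds the incumbent; also stop at the end.
        if i == len(frames) or len(chosen) + 1 >= best_size[0]:
            return
        recurse(i + 1, chosen + [i], covered | frames[i])   # take frame i
        recurse(i + 1, chosen, covered)                     # skip frame i

    recurse(0, [], set())
    return best_cover[0]

tokens = {"the", "valve", "opens", "when", "pressure", "rises"}
frames = [{"the", "valve", "opens"}, {"when", "pressure", "rises"},
          {"valve", "pressure"}, {"the", "when"}]
print(branch_and_bound_cover(tokens, frames))   # [0, 1]
```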
14

Gunn, Jeffrey Thomas 1960. "Stochastic Decomposition (Programming, Networks)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/291770.

15

Makur, Anuran. "Information contraction and decomposition." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122692.

Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: Sc. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 327-350).
Information contraction is one of the most fundamental concepts in information theory, as evidenced by the numerous classical converse theorems that utilize it. In this dissertation, we study several problems aimed at better understanding this notion, broadly construed, within the intertwined realms of information theory, statistics, and discrete probability theory. In information theory, the contraction of f-divergences, such as Kullback-Leibler (KL) divergence, χ²-divergence, and total variation (TV) distance, through channels (or the contraction of mutual f-information along Markov chains) is quantitatively captured by the well-known data processing inequalities.
These inequalities can be tightened to produce "strong" data processing inequalities (SDPIs), which are obtained by introducing appropriate channel-dependent or source-channel-dependent "contraction coefficients." We first prove various properties of contraction coefficients of source-channel pairs, and derive linear bounds on specific classes of such contraction coefficients in terms of the contraction coefficient for χ²-divergence (or the Hirschfeld-Gebelein-Rényi maximal correlation). Then, we extend the notion of an SDPI for KL divergence by analyzing when a q-ary symmetric channel dominates a given channel in the "less noisy" sense. Specifically, we develop sufficient conditions for less noisy domination using ideas of degradation and majorization, and strengthen these conditions for additive noise channels over finite Abelian groups.
Furthermore, we also establish equivalent characterizations of the less noisy preorder over channels using non-linear operator convex f-divergences, and illustrate the relationship between less noisy domination and important functional inequalities such as logarithmic Sobolev inequalities. Next, adopting a more statistical and machine learning perspective, we elucidate the elegant geometry of SDPIs for χ²-divergence by developing modal decompositions of bivariate distributions based on singular value decompositions of conditional expectation operators. In particular, we demonstrate that maximal correlation functions meaningfully decompose the information contained in categorical bivariate data in a local information geometric sense and serve as suitable embeddings of this data into Euclidean spaces.
Moreover, we propose an extension of the well-known alternating conditional expectations algorithm to estimate maximal correlation functions from training data for the purposes of feature extraction and dimensionality reduction. We then analyze the sample complexity of this algorithm using basic matrix perturbation theory and standard concentration of measure inequalities. On a related but tangential front, we also define and study the information capacity of permutation channels. Finally, we consider the discrete probability problem of broadcasting on bounded indegree directed acyclic graphs (DAGs), which corresponds to examining the contraction of TV distance in Bayesian networks whose vertices combine their noisy input signals using Boolean processing functions.
This generalizes the classical problem of broadcasting on trees and Ising models, and is closely related to results on reliable computation using noisy circuits, probabilistic cellular automata, and information flow in biological networks. Specifically, we establish phase transition phenomena for random DAGs which imply (via the probabilistic method) the existence of DAGs with logarithmic layer size where broadcasting is possible. We also construct deterministic DAGs where broadcasting is possible using expander graphs in deterministic quasi-polynomial or randomized polylogarithmic time in the depth. Lastly, we show that broadcasting is impossible for certain two-dimensional regular grids using techniques from percolation theory and coding theory.
by Anuran Makur.
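The modal decomposition mentioned in the abstract has a concrete finite-alphabet form: the Hirschfeld-Gebelein-Rényi maximal correlation of a joint pmf equals the second-largest singular value of its "divergence transition matrix". A numpy sketch on a small made-up pmf:

```python
import numpy as np

# Hypothetical joint pmf P(x, y) on a 3x3 alphabet.
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.30]])
px = P.sum(axis=1)
py = P.sum(axis=0)

# Divergence transition matrix B = diag(px)^{-1/2} P diag(py)^{-1/2}.
B = P / np.sqrt(np.outer(px, py))
s = np.linalg.svd(B, compute_uv=False)

# The largest singular value is always 1; the next one is the maximal
# correlation, whose square is the chi-squared contraction coefficient
# of the source-channel pair.
print(s[0], s[1])   # 1.0, maximal correlation
```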
16

Chung, William S. W. "New decomposition methods for economic equilibrium models with applications to decomposition by region." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0004/NQ44754.pdf.

17

Jackson, Leroy A. "Facility location using cross decomposition /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA320150.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, December 1995.
Thesis advisor: Robert F. Dell. Includes bibliographical references (p. 51-52). Also available online.
18

Carlier, Louis. "Objective combinatorics through decomposition spaces." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/667374.

Abstract:
This thesis provides general constructions in the context of decomposition spaces, generalising classical results from combinatorics to the homotopical setting. This requires developing general tools in the theory of decomposition spaces, and new viewpoints, which are of general interest independently of the applications to combinatorics. In the first chapter, we summarise the homotopy theory and combinatorics of the 2-category of groupoids. We continue with a review of the needed notions from the theory of ∞-categories, and then summarise the theory of decomposition spaces. In the second chapter, we identify the structures that have incidence bi(co)modules: they are certain augmented double Segal spaces subject to some exactness conditions. We establish a Möbius inversion principle for (co)modules, and a Rota formula for certain more involved structures called Möbius bicomodule configurations. The most important instance of the latter notion arises as mapping cylinders of ∞-adjunctions, or more generally of adjunctions between Möbius decomposition spaces, in the spirit of Rota's original formula. In the third chapter, we present some tools for providing situations where the generalised Rota formula applies. As an example, we compute the Möbius function of the decomposition space of finite posets, and exploit this to derive also a formula for the incidence algebra of any directed restriction species, free operad, or more generally free monad on a finitary polynomial monad. In the fourth chapter, we show that Schmitt's hereditary species induce monoidal decomposition spaces, and exhibit Schmitt's bialgebra construction as an instance of the general bialgebra construction on a monoidal decomposition space. We show furthermore that this bialgebra structure coacts on the underlying restriction-species bialgebra structure so as to form a comodule bialgebra. Finally, we show that hereditary species induce a new family of examples of operadic categories in the sense of Batanin and Markl. In the fifth chapter, representing joint work with Joachim Kock, we introduce a notion of antipode for monoidal (complete) decomposition spaces, inducing a notion of weak antipode for their incidence bialgebras. In the connected case, this recovers the usual notion of antipode in Hopf algebras. In the non-connected case it expresses an inversion principle of more limited scope, but still sufficient to compute the Möbius function as μ = ζ ◦ S, just as in Hopf algebras. At the level of decomposition spaces, the weak antipode takes the form of a formal difference of linear endofunctors S_even - S_odd, and it is a refinement of the general Möbius inversion construction of Gálvez-Kock-Tonks, but exploiting the monoidal structure.
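For orientation, the one-object classical shadow of these results is Möbius inversion in the incidence algebra of a locally finite poset, which the thesis lifts to the level of decomposition spaces. A reminder of that classical statement (standard combinatorics, not the thesis's general form):

```latex
% Convolution in the incidence algebra of a locally finite poset P:
%   (f * g)(x, y) = \sum_{x \le z \le y} f(x, z)\, g(z, y).
% The zeta function \zeta \equiv 1 is invertible; its inverse \mu gives
g(x, y) = \sum_{x \le z \le y} f(x, z)
\quad \Longleftrightarrow \quad
f(x, y) = \sum_{x \le z \le y} g(x, z)\, \mu(z, y),
% the inversion principle that the weak antipode formula
% \mu = \zeta \circ S generalises.
```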
19

Piceno, Marie Ely. "Data analysis through graph decomposition." Doctoral thesis, Universitat Politècnica de Catalunya, 2020. http://hdl.handle.net/10803/669758.

Abstract:
This work is developed within the field of data mining and data visualization. Under the premise that many algorithms give as a result huge amounts of data that are impossible for users to handle, we work with the decomposition of Gaifman graphs and their variations as an option for data visualization. In particular, we apply the decomposition method based on the so-called 2-structures. This decomposition method had been developed theoretically but had no practical application in this field yet; providing one is part of our contribution. From a dataset we construct the Gaifman graph (and possible variations of it), which represents information about co-occurrence patterns. More precisely, the construction of the Gaifman graph from the dataset is based on the co-occurrence, or lack of it, of items in the dataset: pairs of items that appear together in some transaction are connected, and items that never appear together are disconnected. We may take the natural completion of the graph by adding the absent edges as a different kind of edge, obtaining a complete graph with two equivalence classes on its edges. More generally, consider the graph where the kind of an edge is determined by the multiplicity of the pair of items it connects, that is, by the number of transactions containing that pair. In this case we have as many equivalence relations as different multiplicities, and we may apply discretization methods to obtain different variations of the graphs. All these variations can be seen as 2-structures. The application of the 2-structure decomposition method produces a hierarchical visualization of the co-occurrences in the data. The decomposition method is based on clan decomposition: given a 2-structure defined on U, a set of vertices C, a subset of U, is a clan if each z not in C does not distinguish among the elements of C. We connect this decomposition with an associated closure space, developing this intuition by introducing a construction of implication sets, named clan implications. By the definition of a clan, if x and y are elements of a clan C and there is z that sees x and y differently, that is, the edges (x,z) and (y,z) are in different equivalence classes, then z must be in C; this is equivalent to saying that C logically entails the implication xy -> z. Throughout the thesis, in order to explain our work in a constructive way, we first work with the case of having only two equivalence classes and its corresponding nomenclature (modules), and then extend the theory to more equivalence classes. Our main contributions are: an algorithm (with its full implementation) for the clan decomposition method; the theorems that support our approach; and examples of its application that demonstrate its usability.
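The graph construction described above is easy to state operationally: items are vertices, and two items are joined whenever some transaction contains both, with edge multiplicities counting supporting transactions in the weighted variants. A short Python sketch on invented toy transactions:

```python
from itertools import combinations

def gaifman_graph(transactions):
    """Co-occurrence (Gaifman) graph of a transactional dataset:
    vertices are items, and two items are adjacent iff they appear
    together in at least one transaction.  Edge multiplicities count
    the supporting transactions, as in the weighted variants."""
    edges = {}
    for t in transactions:
        for u, v in combinations(sorted(set(t)), 2):
            edges[(u, v)] = edges.get((u, v), 0) + 1
    return edges

data = [["bread", "butter", "milk"], ["bread", "milk"], ["tea", "milk"]]
print(gaifman_graph(data))
# {('bread', 'butter'): 1, ('bread', 'milk'): 2, ('butter', 'milk'): 1,
#  ('milk', 'tea'): 1}
```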
20

Qin, Feng. "Thermocatalytic decomposition of vulcanized rubber." Diss., Texas A&M University, 2005. http://hdl.handle.net/1969.1/4781.

Abstract:
Used vulcanized rubber tires have caused serious problems worldwide. Current disposal and recycling methods all have undesirable side effects, and they generally do not produce maximum benefits. A thermocatalytic process using aluminum chloride as the main catalyst was demonstrated in our laboratory from 1992 to 1995 to convert used rubber tires to branched and ringed hydrocarbons. Products fell in the range of C4 to C8, with little to no gaseous products or lower-value fuel oil hydrocarbons. This project extended the previous experiments to accumulate laboratory data and to provide a fundamental understanding of the thermocatalytic decomposition reaction of the model compounds, including styrene-butadiene copolymers (SBR), butyl, and natural rubber. The liquid product yields of SBR and natural rubber consistently represented 20 to 30% of the original feedstock by weight. Generally, approximately 1 to 3% of the feedstock was converted to naphtha, while the remainder was liquefied petroleum gas. The liquid yields for butyl rubber were significantly higher than for SBR and natural rubber, generally ranging from 30 to 40% of the feedstock. Experiments were conducted to separate the catalyst from the residue by evaporation. Temperatures in the 400 °C to 500 °C range are required to drive off significant amounts of catalyst, and decomposition of the catalyst also occurred in the recovery process. Reports in the literature and our observations strongly suggest that the AlCl3 forms an organometallic complex with the decomposing hydrocarbons, so that it becomes integrated into the residue. Catalyst mixtures also were tested. Both AlCl3/NaCl and AlCl3/KCl mixtures had very small AlCl3 partial pressures at temperatures as high as 250 °C, unlike pure AlCl3 and AlCl3/MgCl2 mixtures. With the AlCl3/NaCl mixtures, decomposition of the rubber was observed at temperatures as low as 150 °C, although the reaction rates were considerably slower at lower temperatures. The amount of naphtha produced by the reaction also increased markedly, as did the yields of aromatics and cyclic paraffins. Recommendations are made for future research to definitively determine the economic and technical feasibility of the proposed thermocatalytic depolymerization process.
21

Johansson, Öjvind. "Graph Decomposition Using Node Labels." Doctoral thesis, KTH, Numerical Analysis and Computer Science, NADA, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3213.

22

Rasolzadah, Kawah. "Morse Theory and Handle Decomposition." Thesis, Uppsala universitet, Algebra och geometri, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-343297.

23

Blake, R. Melvin. "Photometric decomposition of NGC 6166." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22790.pdf.

24

Kulkarni, Dattatraya H. "CDA, computation decomposition and alignment." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0008/NQ27983.pdf.

25

Flynn, Elizabeth. "The catalysed decomposition of chlorosulfides." Thesis, Queen Mary, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336343.

26

Murahidy, Analeise Clare. "The microbial decomposition of seeds." Thesis, University of Canterbury. Microbiology, 2001. http://hdl.handle.net/10092/6860.

Abstract:
The seed component of plant litter and its associated nutrients has been largely ignored in litter fall, decomposition and ecosystem nutrient budget studies. The exclusion of this fraction potentially underestimates the transfer of energy and nutrients within the ecosystem. This study investigated the seed substrate characteristics and the microbial decomposition of 10 species. A combination of microcosm and in situ experiments was used to manipulate rate-regulating factors of decomposition and measure their influence on the variables of mass loss and net nitrogen mineralisation. Mass loss from whole seeds, decomposed under controlled conditions for 180 days, varied between 0.7% (Sophora microphylla) and 77.7% (Triticum aestivum), with net nitrogen mineralisation varying between 0.1% (Quercus robur) and 67.1% (Ulex europaeus). The greatest amount of inter-species variation in the decomposition rate could be explained by the proportional allocation of seed mass to the seed coat fraction (r = -0.7058, P = 0.0001). Alterations to the integrity of the seed coat by artificial treatments, such as scarification, heat, or grinding, and natural mechanisms, such as seed immaturity or insect damage, accelerated the rate of decay in the initial 90 days of incubation. The decomposition of seeds in combination with leaf and wood litters resulted in significant non-additive effects: the mass loss from litter mixtures exhibited both synergistic and antagonistic effects, and a reduced release of nitrogen occurred in all litter mixtures. Simulated freezing and desiccation events decreased the rate of seed decomposition. The estimated mean time for 95% mass loss from whole seeds was extended from 3.6 years under constant conditions to 4.5 years and 9.0 years when exposed to cyclical wet-and-dry and freeze-and-thaw conditions respectively. The net nitrogen mineralisation was generally reduced under cyclical conditions, with the exception of the Nothofagus species. A mean mass loss of 43 ± 8% and nitrogen loss of 42 ± 12% was measured from seeds incubated in situ. The effect of soil microbes on decomposition was investigated by incubating seeds under standardised temperature and moisture conditions in 3 different soils. The rate of decomposition generally declined with an increase in soil acidity. The results of this study illustrate that the seed component of plant litter comprises a high-quality substrate for microorganisms. The nutrients of seeds are generally readily mobilised and available for utilisation by other components of the ecosystem.
27

Joshi, Sameer. "Concept Learning by Example Decomposition." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2742.

Abstract:
For efficient understanding and prediction in natural systems, even in artificially closed ones, we usually need to consider a number of factors that may combine in simple or complex ways. Additionally, many modern scientific disciplines face increasingly large datasets from which to extract knowledge (for example, genomics). Thus to learn all but the most trivial regularities in the natural world, we rely on different ways of simplifying the learning problem. One simplifying technique that is highly pervasive in nature is to break down a large learning problem into smaller ones; to learn the smaller, more manageable problems; and then to recombine them to obtain the larger picture. It is widely accepted in machine learning that it is easier to learn several smaller decomposed concepts than a single large one. Though many machine learning methods exploit it, the process of decomposition of a learning problem has not been studied adequately from a theoretical perspective. Typically such decomposition of concepts is achieved in highly constrained environments, or aided by human experts. In this work, we investigate concept learning by example decomposition in a general probably approximately correct (PAC) setting for Boolean learning. We develop sample complexity bounds for the different steps involved in the process. We formally show that if the cost of example partitioning is kept low then it is highly advantageous to learn by example decomposition. To demonstrate the efficacy of this framework, we interpret the theory in the context of feature extraction. We discover that many vague concepts in feature extraction, starting with what exactly a feature is, can be formalized unambiguously by this new theory of feature extraction. We analyze some existing feature learning algorithms in light of this theory, and finally demonstrate its constructive nature by generating a new learning algorithm from theoretical results.
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
28

Jackson, Leroy A. "Facility Location Using Cross Decomposition." Thesis, Monterey, California. Naval Postgraduate School, 1995. http://hdl.handle.net/10945/30739.

Abstract:
The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
Determining the best base stationing for military units can be modeled as a capacitated facility location problem with sole sourcing and multiple resource categories. Computational experience suggests that cross decomposition, a unification of Benders decomposition and Lagrangean relaxation, is superior to other contemporary methods for solving capacitated facility location problems. Recent research extends cross decomposition to pure integer programming problems, with explicit application to capacitated facility location problems with sole sourcing; however, this research offers no computational experience. This thesis implements two cross decomposition algorithms for the capacitated facility location problem with sole sourcing and compares these decomposition algorithms with branch-and-bound methods. For some problems tested, cross decomposition obtains better solutions in less time; however, cross decomposition does not always perform better than branch and bound, due to the time required to obtain the cross decomposition bound that is theoretically superior to other decomposition bounds.
29

Smith, David McCulloch. "Regression using QR decomposition methods." Thesis, University of Kent, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.303532.

30

Oliveira, Denise de. "Decomposition of Hilbert-Space Contractions." Pontifícia Universidade Católica do Rio de Janeiro, 1995. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8151@1.

Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The decomposition of Hilbert-space contractions is motivated by the invariant subspace problem, a famous open problem in operator theory. If T ∈ B[H] is a contraction, the sequence {T*^n T^n ∈ B[H]; n ≥ 1} converges strongly; let the operator A be its (strong) limit. A characterizes the isometries, since T is an isometry if and only if A = I. The von Neumann-Wold decomposition for isometries says that every isometry is the direct orthogonal sum of a unilateral shift and a unitary operator. The present work extends the von Neumann-Wold decomposition to contractions for which A is an arbitrary orthogonal projection. According to such a decomposition, it is established that a contraction with no nontrivial invariant subspace satisfies T ∈ C00 ∪ C01 ∪ C10. A detailed investigation follows of the impact of this new decomposition on several classes of operators, namely compact, normal, quasinormal, subnormal, hyponormal and normaloid ones. It is verified that the operator A is an orthogonal projection up to the class of all quasinormal contractions, but not for every subnormal contraction; thus it is investigated how far the operator A of a subnormal, non-quasinormal contraction can be from an orthogonal projection. In addition, for hyponormal contractions, the subspace on which A is an orthogonal projection is exhibited.
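The operator A in the abstract can be watched numerically in finite dimensions: for a contraction T, the sequence T*^n T^n converges, and the abstract's theme is when the limit is an orthogonal projection. A deliberately trivial numpy check on a diagonal contraction:

```python
import numpy as np

# A diagonal contraction: one isometric direction, one strictly contractive.
T = np.diag([1.0, 0.5])

A = np.eye(2)
for _ in range(50):                 # iterate A_{n+1} = T* A_n T
    A = T.conj().T @ A @ T

# The limit is the orthogonal projection onto the isometric part.
print(np.round(A, 6))               # [[1. 0.] [0. 0.]]
```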
31

de Araujo, Ana Rita Fraga. "Pyrolytic decomposition of lignocellulosic materials." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47751.

32

Wilson, David. "Advances in cylindrical algebraic decomposition." Thesis, University of Bath, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636529.

Abstract:
Since their conception by Collins in 1975, Cylindrical Algebraic Decompositions (CADs) have been used to analyse the real algebraic geometry of systems of polynomials. Applications for CAD technology range from quantifier elimination to robot motion planning. Although of great use in practice, the CAD algorithm was shown to have doubly exponential complexity with respect to the number of variables for the problem, which limits its use for large examples. Due to the high complexity of CAD, much work has been done to improve its performance. In this thesis new advances will be discussed that improve the practical efficiency of CAD for a variety of problems, with a new complexity result for one set of algorithms. A new invariance condition, truth table invariance (TTICAD), and two algorithms to construct TTICADs are given and shown to be highly efficient. The idea of restricting the output of CADs, allowing for greater efficiency, is formalised as sub-decompositions and two particular ideas are investigated in depth. Efficient selection of various formulation choices for a CAD problem are discussed, with a collection of heuristics investigated and machine learning applied to assist in choosing an optimal heuristic. The mathematical expression of a problem is shown to be of great importance, with preconditioning and reformulation investigated. Finally, these advances are collected together in a general framework for applying CAD in an efficient manner to a given problem. It is shown that their combination is not cumulative and care must be taken. To this end, a prototype software CADassistant is described to help users take advantage of the advances without knowledge of the underlying theory. The effects of the various advances are demonstrated through a guiding example originally considered by Solotareff, which describes the approximation of a cubic polynomial by a linear function. Naïvely applying CAD to the problem takes 916.1 seconds of construction (from which a solution can easily be derived), which is reduced to 20.1 seconds by combining various advances from this thesis.
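The univariate base case of CAD is easy to exhibit: the real roots of the input polynomials cut the line into sign-invariant cells, with one sample point per cell. A sympy sketch of that one-variable phase (the two polynomials are arbitrary examples; full CAD additionally projects and lifts through the remaining variables):

```python
from sympy import Poly, sign, symbols

x = symbols('x')
exprs = [x**2 - 2, x - 1]

# Real roots of all polynomials, in increasing order: these bound the cells.
roots = sorted({r for e in exprs for r in Poly(e, x).real_roots()},
               key=lambda r: float(r))

# One sample point per cell: each root itself plus a point in every
# open interval between (and beyond) consecutive roots.
samples = [roots[0] - 1]
for a, b in zip(roots, roots[1:]):
    samples += [a, (a + b) / 2]
samples += [roots[-1], roots[-1] + 1]

# On each cell the input polynomials have constant signs (sign-invariance).
for pt in samples:
    print(pt, [sign(e.subs(x, pt)) for e in exprs])
```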
33

Paluska, Justin Mazzola 1981. "Structured decomposition of adaptive applications." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71275.

Abstract:
Thesis (Elec. E. in Computer Science)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 57-58).
We describe an approach to automate certain high-level implementation decisions in a pervasive application, allowing them to be postponed until run time. Our system enables a model in which an application programmer can specify the behavior of an adaptive application as a set of open-ended decision points. We formalize decision points as Goals, each of which may be satisfied by a set of scripts called Techniques. The set of Techniques vying to satisfy any Goal is additive and may be extended at runtime without needing to modify or remove any existing Techniques. Our system provides a framework in which Techniques may compete and interoperate at runtime in order to maintain an adaptive application. Technique development may be distributed and incremental, providing a path for the decentralized evolution of applications. Benchmarks show that our system imposes reasonable overhead during application startup and adaptation.
by Justin Mazzola Paluska.
Elec. E. in Computer Science
34

Carmichael, C. S. J. "Decomposition of the lactose operon." Thesis, University of Edinburgh, 1992. http://hdl.handle.net/1842/13315.

Abstract:
The immediate aims of the project, as set out in the introduction, were 1) to separate the lacZ and lacY genes of the lactose operon such that they could be controlled/induced independently, and 2) to maintain the expression construct in the E. coli chromosome. The lacY gene was subcloned into plasmid PBN372 downstream of the S. marcescens trp promoter. The flanking E. coli trp genes were exploited to integrate the construct into the E. coli chromosome at the trpB locus via homologous recombination; homologous recombinants should be trp-. Two approaches were employed to achieve integration: 1) transformation of a recD strain (which was also a lacY deletion) and 2) transduction with phage lambda. The first method was unsuccessful; only spontaneous trp- mutations were isolated. The second method yielded several integrants, one of which was used in subsequent growth experiments. Since the constructed strain was rendered trp-, the internal tryptophan concentration could be influenced by the concentration of tryptophan in the medium, and the level of induction could be set by the addition of differing amounts of the antirepressor, IAA. The growth rate of the constructed strain in minimal lactose media was comparable with that of wild-type E. coli. The permease activity of the constructed strain was seen to vary when assayed in the presence of varying amounts of IAA, and the expression of permease was demonstrated to be independent of galactosidase activity. The constructed strain therefore met all the initial requirements for the experimental system set out in this thesis. Difficulties were encountered during the analysis of permease induction in batch culture, due to the competition between antirepressor (IAA) and corepressor (tryptophan). These difficulties would have been minimal had the analysis been carried out in chemostat experiments, where it would have been possible to maintain a constant, low level of tryptophan throughout the experiment. In batch culture, the tryptophan level is constantly changing, being relatively high initially and becoming depleted as the experiment progresses.
35

Harczuk, Ignat. "Atomic decomposition of molecular properties." Doctoral thesis, KTH, Teoretisk kemi och biologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187168.

Full text
Abstract:
In this thesis, new methodology of computing properties aimed for multipleapplications is developed. We use quantum mechanics to compute propertiesof molecules, and having these properties as a basis, we set up equations basedon a classical reasoning. These approximations are shown to be quite good inmany cases, and makes it possible to calculate linear and non-linear propertiesof large systems.The calculated molecular properties are decomposed into atomic propertiesusing the LoProp algorithm, which is a method only dependent on the overlapmatrix. This enables the expression of the molecular properties in the two-site atomic basis, giving atomic, and bond-centric force-fields in terms of themolecular multi-pole moments and polarizabilities. Since the original LoProptransformation was formulated for static fields, theory is developed which makesit possible to extract the frequency-dependent atomic properties as well. Fromthe second-order perturbation of the electron density with respect to an externalfield, LoProp is formulated to encompass the first order hyperpolarizability.The original Applequist formulation is extended into a quadratic formula-tion, which produces the second-order shift in the induced dipole moments of thepoint-dipoles from the hyperpolarizability. This enables the calculation of a to-tal hyperpolarizability in systems consisting of interacting atoms and molecules.The first polarizability α and the first hyperpolarizability β obtained via theLoProp transformation are used to calculate this response with respect to anexternal field using the quadratic Applequist equations.In the last part, the implemented analytical response LoProp procedureand the quadratic Applequist formalism is applied to various model systems.The polarizable force-field that is obtained from the decomposition of the staticmolecular polarizability α is tested by studying the one-photon absorption spec-trum of the green fluorescent protein. From the frequency dispersion of thepolarizability α(ω), the effect of field perturbations is evaluated in classicaland QM/MM applications. Using the dynamical polarizabilities, the Rayleigh-scattering of aerosol clusters consisting of water and cis–pinonic acid moleculesis studied. The LoProp hyperpolarizability in combination with the quadraticApplequist equations is used to test the validity of the model on sample wa-ter clusters of varying sizes. Using the modified point-dipole model developedby Thole, the hyper-Rayleigh scattering intensity of a model collagen triple-helix is calculated. The atomic dispersion coefficients are calculated from thedecomposition of the real molecular polarizability at imaginary frequencies. Fi-nally, using LoProp and a capping procedure we demonstrate how the QM/MMmethodology can be used to compute x-ray photoelectron spectra of a polymer.
I denna avhandling utvecklas ny metodik för beräkningar av egenskaper medolika tillämpningar. Vi använder kvantmekanik för att beräkna egenskaper hosmolekyler, och använder sedan dessa egenskaper som bas i klassiska ekvationer.Dessa approximationer visas vara bra i flera sammanhang, vilket gör det direktmöjligt att beräkna linjära och icke-linjära egenskaper i större system.De beräknade molekylära egenskaperna delas upp i atomära bidrag genomLoProp transformationen, en metod endast beroende av den atomära överlapps-matrisen. Detta ger möjligheten att representera en molekyls egenskaper i entvåatomsbasis, vilket ger atomära, och bindningscentrerade kraftfält tagna frånde molekylära multipoler och polarisabiliteter.Eftersom att den originella LoProp transformationen var formulerad medstatiska fält, så utvecklas och implementeras i denna avhandling LoProp meto-den ytterligare för frekvensberoende egenskaper. Genom den andra ordnin-gens störning med avseende på externa fält, så formuleras LoProp så att di-rekt bestämning av första ordningens hyperpolariserbarhet för atomära po-sitioner blir möjlig. De ursprungliga Applequist ekvationerna skrivs om tillen kvadratisk representation för att göra det möjligt att beräkna den andraordningens induktion av dipolmomenten för punktdipoler med hjälp av denförsta hyperpolariserbarheten. Detta gör det möjligt att beräkna den totalahyperpolariserbarheten för större system. Här används den statiska polariser-barheten och hyperpolariserbarheten framtagna via LoProp transformationenför att beräkna ett systems egenskaper då det utsätts av ett externt elektrisktfält via Applequists ekvationer till andra ordningen.Tillämpningar presenteras av den implementerade LoProp metodiken medden utvecklade andra ordnings Applequist ekvationer för olika system. Detpolariserbara kraftfältet som fås av lokalisering av α testas genom studier avabsorptionsspektrat för det gröna fluorescerande proteinet. Via beräkningar avden lokala frekvensavhängande polariserbarheten α(ω), testas effekten av de ex-terna störningar på klassiska och blandade kvant-klassiska egenskaper. Genomden linjära frekvensberoende polariserbarheten så studeras även Rayleigh sprid-ning av atmosfärs partiklar. Via LoProp transformationen av hyperpolariser-barheten i kombination med de kvadratiska Applequist ekvationerna så un-dersöks modellens rimlighet för vattenkluster av varierande storlek. Genom attanvända Tholes exponentiella dämpningsschema så beräknas hyper-Rayleighspridningen för kollagen. Den atomära dispersionskoefficienten beräknas via delokala bidragen till den imaginära delen av den linjära polariserbarheten. Slutli-gen visar vi hur LoProp tekniken tillsammans med en s.k. inkapslingsmetod kananvändas i QM/MM beräkningar av Röntgenfotoelektron spektra av polymerer.


APA, Harvard, Vancouver, ISO, and other styles
36

Gurguri, Jefferson Lourenço. "Polyhedral Study of Tree Decomposition." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=17121.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The concept of treewidth was introduced by Robertson and Seymour. The treewidth of a graph G is the minimum k such that G admits a tree decomposition in which every vertex subset contains at most k+1 vertices. Recent results show that several NP-complete problems can be solved in polynomial, or even linear, time when restricted to graphs with small treewidth. In our bibliographic research we focus attention on the calculation of lower bounds for the treewidth, and we describe in this dissertation some of the principal results already available in the literature. Integer-linear formulations for determining the treewidth are scarce in the literature, and no studies are available on the polyhedra associated with them. The Elimination Order Formulation (EOF), proposed by Koster and Bodlaender, is based on the ordered elimination of vertices and on the relationship between the treewidth of a graph and its chordalizations. As a result of our study, we present a simplification of the EOF formulation and show that the polyhedron associated with this simplification is affinely isomorphic to that of the EOF formulation. We determine the dimension of the polyhedron associated with the simplification, briefly present a set of very simple facets, and introduce and analyse some more complex inequalities, which we prove to define facets.
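The elimination-order view that the EOF formulation builds on is easy to make concrete. The sketch below is ours, not the dissertation's integer program: eliminating vertices in a given order, while turning each eliminated vertex's remaining neighbourhood into a clique, produces a chordalization; the width of the order (the largest such neighbourhood) upper-bounds the treewidth, and the minimum over all orders attains it.

```python
from itertools import combinations, permutations

def order_width(adj, order):
    """Width of an elimination order: the largest set of not-yet-eliminated
    neighbours seen when a vertex is removed (fill-in edges added greedily)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    for v in order:
        nbrs = g[v]
        width = max(width, len(nbrs))
        for x, y in combinations(nbrs, 2):   # make the neighbourhood a clique
            g[x].add(y)
            g[y].add(x)
        for u in nbrs:
            g[u].discard(v)
        del g[v]
    return width

def treewidth_exact(adj):
    """Brute force over all elimination orders; tiny graphs only."""
    return min(order_width(adj, p) for p in permutations(adj))

# The 4-cycle has treewidth 2:
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(treewidth_exact(c4))   # -> 2
```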
APA, Harvard, Vancouver, ISO, and other styles
37

Girometti, Laura. "Automatic texture-cartoon image decomposition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24486/.

Full text
Abstract:
Decomposing an image into its most significant components, such as structure, texture and noise, plays a key role in subsequent processing of the image itself. The aim of this thesis is to propose a two-stage model for decomposing an image into three components, namely cartoon, texture and noise, using a variational approach that minimizes a functional composed of several energy terms, each suited to extracting a specific component and balanced against the others by different parameters. The goal is to better separate texture from noise, since the oscillatory nature of both components makes them hard to distinguish. In addition, a cross-correlation principle is proposed and analysed numerically for automatically setting the parameter that balances the terms in the energy functional, given its influence on the quality of the final decomposition.
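The three-component functional proposed here is specific to the thesis, but a one-term relative is easy to demonstrate. The toy sketch below (our simplification, not the proposed two-stage model) performs an ROF-type split into a cartoon u and a residual f - u by gradient descent on a smoothed total-variation energy, and computes a normalized cross-correlation between the components in the spirit of the automatic parameter-selection principle described above.

```python
import numpy as np

def tv_cartoon(f, lam, n_iter=300, dt=0.1, eps=1e-2):
    """Minimize TV_eps(u) + (lam/2)||u - f||^2 by explicit gradient descent.
    Returns the cartoon u; the residual f - u carries texture and noise."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, 1) - u                 # forward differences
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u += dt * (div - lam * (u - f))            # descent step
    return u

def cross_correlation(a, b):
    """Normalized cross-correlation between two image components."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
f = np.zeros((64, 64))
f[16:48, 16:48] = 1.0                              # cartoon: a flat square
f += 0.1 * np.sin(np.arange(64) * 1.5)             # texture: fine stripes
f += 0.05 * rng.standard_normal((64, 64))          # noise
u = tv_cartoon(f, lam=5.0)
print(cross_correlation(u, f - u))   # small when the split is well balanced
```

Sweeping lam and keeping the value that minimizes the correlation between components is one way to read the cross-correlation principle; the thesis's model separates texture from noise with additional terms.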
APA, Harvard, Vancouver, ISO, and other styles
38

Fidalgo-Marijuan, Arkaitz. "Normal-Coordinate Structural Decomposition Engine." Revista de Química, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/123959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Samuelsson, Saga. "The Singular Value Decomposition Theorem." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-150917.

Full text
Abstract:
This essay will present a self-contained exposition of the singular value decomposition theorem for linear transformations. An immediate consequence is the singular value decomposition for complex matrices.
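The complex-matrix consequence stated above can be checked numerically in a few lines (a numpy sketch; the matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# A = U diag(s) V^H with orthonormal columns and nonnegative singular values.
u, s, vh = np.linalg.svd(a, full_matrices=False)
print(np.allclose(a, u @ np.diag(s) @ vh))     # True: exact reconstruction
print(np.allclose(u.conj().T @ u, np.eye(3)))  # True: U^H U = I
print(s)                                       # nonnegative, descending order
```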
APA, Harvard, Vancouver, ISO, and other styles
40

Sandell, Magnus. "Spatial decomposition of ultrasonic echoes." Licentiate thesis, Luleå tekniska universitet, 1994. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-25963.

Full text
Abstract:
The pulse-echo method is one of the most important in ultrasonic imaging. In many areas, including medical applications and nondestructive evaluation, it constitutes one of the fundamental principles for acquiring information about the examined object. An ultrasonic pulse is transmitted into a medium and the reflected pulse is recorded, often by the same transducer. In 3-dimensional imaging, or surface profiling, the distance between the object and the transducer is estimated as proportional to the time-of-flight (TOF) of the pulse. If the transducer is then moved in a plane parallel to the object, a surface profile can be obtained. Usually some sort of correlation between echoes is performed to estimate their relative difference in TOF. However, this assumes that the shape of the echoes is the same. This is not the case, as the shape depends on the surface in the neighbourhood of the transducer's symmetry axis, and this shape varies as the transducer is moved across the surface. The change in signal shape reduces the accuracy of the TOF estimation. A simple example is a surface with a step: the resulting echo is the superposition of two echoes, one from the "top" and one from the "bottom", and the TOF estimate becomes almost arbitrary. Another difficulty with pulse-echo imaging is the lateral resolution. The ultrasonic beam is not infinitesimally thin but has a non-negligible spatial extent, even for focused transducers. This means that two point reflectors separated laterally by only a small distance cannot be resolved by ultrasound. The spatial decomposition of the ultrasonic echoes suggested in this licentiate thesis can be used to extract information from the pulse deformation and to improve the lateral resolution in the following way:

* In surface profiling, the surface is modelled as piecewise plane, i.e. the reflected pulse stems from a locally plane and perpendicular object. If we instead model the part of the surface that reflects the ultrasonic pulse as a sloping plane, there are two advantages: if we can estimate both the distance to, and the slope of, the surface, we can either increase the accuracy or decrease the number of scanning points while maintaining the same accuracy.

* To increase the lateral resolution we have to take into account how points off the symmetry axis contribute to the total echo. If we know this, some kind of inverse spatial filter or other method can be constructed to improve the resolution.

This thesis comprises the following five parts.

Part A1 (Magnus Sandell and Anders Grennberg), "Spatial decomposition of the ultrasonic echo using a tomographic approach. Part A: The regularization method". Since the pulse-echo system can be considered linear, i.e. the echo from an arbitrary object can be thought of as the sum of the echoes from the contributing points on the surface, it would be very useful to know the echo from a point reflector. With this spatial decomposition we can simulate the echo from any object. It is, however, not practically possible to measure the single point echo (SPE) directly: if the reflector is to be considered pointlike, its size has to be so small that the echo disappears in the background noise, and if it is enlarged, there is spatial smoothing. Instead, we propose an indirect method that uses echoes from sliding halfplanes. This yields measurements with far better SNR, and by modifying methods from tomography we can obtain the SPE. An error analysis is performed for the calculated SPE, and simulated echoes from sloping halfplanes, computed using the obtained SPE, are compared with measured ones.

Part A2 (Anders Grennberg and Magnus Sandell), "Experimental determination of the single point echo of an ultrasonic transducer using a tomographic approach". The main ideas of Part A1 are presented in this conference paper, presented at the Conference of the IEEE Engineering in Medicine and Biology Society in Paris, France, in October 1992.

Part B1 (Anders Grennberg and Magnus Sandell), "Spatial decomposition of the ultrasonic echo using a tomographic approach. Part B: The singular system method". In this part we continue the approach of spatially decomposing the ultrasonic echo. The SPE is again determined from echoes from sliding halfplanes. Here we interpret the SPE and the halfplane echoes as belonging to two different weighted Hilbert spaces, chosen with regard to the properties of the SPE and the measured echoes. The SPE is assumed to belong to one of these spaces and is mapped by an integral operator to the other space. This image is measured, but the measurements also contain additive noise. A continuous inverse of this operator does not exist, so the problem is ill-posed. A pseudo-inverse of the operator is constructed using a singular value decomposition (SVD). By decomposing the halfplane echoes with N basis functions from the SVD, the SPE can be found. The spatial decomposition made in this part can be useful for the long-term goals of estimating the slope of a tilted plane and increasing the lateral resolution.

Part B2 (Anders Grennberg and Magnus Sandell), "Experimental determination of the ultrasonic echo from a pointlike reflector using a tomographic approach". This is a contribution to the IEEE 1992 Ultrasonics Symposium in Tucson, USA. It is an extract of Part B1 and deals with the SVD-based inversion of the halfplane echoes.

Part C (Anders Grennberg and Magnus Sandell), "Estimation of subsample time delay differences in narrowbanded ultrasonic echoes using the Hilbert transform correlation". This part deals with a method for increased axial resolution. Using the fact that airborne ultrasonic pulses are narrowbanded, a new algorithm for estimating small time delays is described. This method can be used in conjunction with a normal TOF estimator: the latter makes a robust and rough (i.e. within a few samples) estimate, and the remaining small time delay is estimated using our proposed method. Another area of application is an improved averaging algorithm. Airborne ultrasound suffers from jitter caused by air movement and temperature gradients, which can be modelled as a small random time shift. Straightforward averaging then sums pulses that are not aligned in time, which deforms the pulse. By estimating the time shift caused by the jitter, all echoes can be time-aligned, and no pulse deformation occurs when summing them.
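The SVD-based pseudo-inverse of Part B1 can be illustrated generically. The toy below shows truncated-SVD regularization of an ill-posed discrete system, not the authors' weighted-Hilbert-space construction: singular components below a threshold are discarded so that measurement noise is not amplified by tiny singular values.

```python
import numpy as np

def tsvd_solve(a, b, rtol=1e-3):
    """Truncated-SVD pseudo-inverse solution of the ill-posed system a x ~ b.
    Components with singular values below rtol * s_max are discarded."""
    u, s, vh = np.linalg.svd(a, full_matrices=False)
    keep = s > rtol * s[0]
    return vh[keep].conj().T @ ((u[:, keep].conj().T @ b) / s[keep])

# A smoothing (convolution-like) forward operator is badly conditioned:
n = 50
x_grid = np.linspace(0.0, 1.0, n)
a = np.exp(-((x_grid[:, None] - x_grid[None, :]) ** 2) / 0.005)
x_true = np.sin(2 * np.pi * x_grid)
b = a @ x_true + 1e-3 * np.random.default_rng(2).standard_normal(n)

x_naive = np.linalg.solve(a, b)        # noise is amplified enormously
x_tsvd = tsvd_solve(a, b)              # regularized reconstruction
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_tsvd - x_true))
```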
APA, Harvard, Vancouver, ISO, and other styles
41

Garg, Nupur. "Code Decomposition: A New Hope." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1759.

Full text
Abstract:
Code decomposition (also known as functional decomposition) is the process of breaking a larger problem into smaller subproblems so that each function implements only a single task. Although code decomposition is integral to computer science, it is often overlooked in introductory computer science education due to the challenges of teaching it given limited resources. Earthworm is a tool that generates unique suggestions on how to improve the decomposition of provided Python source code. Given a program as input, Earthworm presents the user with a list of suggestions to improve the functional decomposition of the program. Each suggestion includes the lines of code that can be refactored into a new function, the arguments that must be passed to this function and the variables returned from the function. The tool is intended to be used in introductory computer science courses to help students learn more about decomposition. Earthworm generates suggestions by converting Python source code into a control flow graph. Static analysis is performed on the control flow graph to direct the generation of suggestions based on code slices.
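Earthworm's pipeline (control flow graph plus slicing) is more involved than a few lines, but the flavour of static analysis for decomposition suggestions can be hinted at with Python's standard ast module. The heuristic below, flagging long function bodies and reporting a candidate extraction range, is a deliberately simplified stand-in of ours, not Earthworm's algorithm.

```python
import ast

SOURCE = '''
def process(data):
    total = 0
    for x in data:
        total += x
    avg = total / len(data)
    out = []
    for x in data:
        out.append(x - avg)
    return out
'''

def suggest_splits(source, max_stmts=4):
    """Flag functions whose bodies exceed max_stmts top-level statements and
    report the line range of the leading block as an extraction candidate."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and len(node.body) > max_stmts:
            head = node.body[:max_stmts]
            print(f"{node.name}: consider extracting lines "
                  f"{head[0].lineno}-{head[-1].end_lineno} into a helper")

suggest_splits(SOURCE)   # -> process: consider extracting lines 3-7 ...
```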
APA, Harvard, Vancouver, ISO, and other styles
42

Strozecki, Yann. "Enumeration complexity and matroid decomposition." Paris 7, 2010. http://www.theses.fr/2010PA077178.

Full text
Abstract:
This thesis is made of two main parts: on the one hand, the study of enumeration algorithms and their complexity, and on the other hand, the model checking of monadic second-order (MSO) properties over decomposed hypergraphs and matroids. Enumeration is first studied from a structural point of view: the most natural complexity classes are defined and their relations studied. We also try to explain the role of ordering in enumeration and the effect of set operations on the solutions. We then give a series of results on the enumeration of the monomials of polynomials given either as black boxes or as circuits. The developed algorithms can be used to solve more classical combinatorial problems, such as the enumeration of the spanning hypertrees of a 3-uniform hypergraph. In the second part, we present an alternative tree decomposition of representable matroids of bounded branch-width. It enables the independence property to be expressed locally and thus gives a linear-time algorithm for checking MSO properties over these structures. We also obtain a linear-delay enumeration algorithm for the objects definable in MSO, such as the circuits of a matroid. This decomposition extends easily to other classes of matroids and, by pushing the abstraction further, to decomposable hypergraphs.
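The black-box monomial enumeration mentioned above has a compact sketch in the multilinear case. Writing p = p0 + x·p1, with p0 = p restricted to x = 0 and p1 = p(x=1) - p(x=0), one recurses on the two restrictions and prunes every branch that a randomized (Schwartz-Zippel) test declares zero; each surviving leaf is then a monomial, giving polynomial delay with high probability. A toy version over the integers, with all names ours:

```python
import random

def enumerate_monomials(f, n, trials=5, rng=random.Random(0)):
    """Enumerate the monomials (as sets of variable indices) of a multilinear
    polynomial available only as a black box f: tuple of n ints -> int."""
    def nonzero(g, m):
        # Schwartz-Zippel: a nonzero multilinear polynomial rarely vanishes
        # at a random point drawn from a large range.
        return any(g(tuple(rng.randrange(10**6) for _ in range(m)))
                   for _ in range(trials))

    def rec(g, m, chosen):
        if m == 0:
            if g(()):                       # nonzero constant coefficient
                yield frozenset(chosen)
            return
        g0 = lambda xs: g((0,) + xs)                 # drop the next variable
        g1 = lambda xs: g((1,) + xs) - g((0,) + xs)  # its coefficient poly
        if nonzero(g0, m - 1):
            yield from rec(g0, m - 1, chosen)
        if nonzero(g1, m - 1):
            yield from rec(g1, m - 1, chosen | {n - m})

    yield from rec(f, n, frozenset())

# p(x0, x1, x2) = x0*x1 + x2 has monomials {0, 1} and {2}:
p = lambda xs: xs[0] * xs[1] + xs[2]
print(sorted(sorted(s) for s in enumerate_monomials(p, 3)))  # [[0, 1], [2]]
```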
APA, Harvard, Vancouver, ISO, and other styles
43

Ngulo, Uledi. "Decomposition Methods for Combinatorial Optimization." Licentiate thesis, Linköpings universitet, Tillämpad matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-175896.

Full text
Abstract:
This thesis aims at research in the field of combinatorial optimization. Problems within this field often possess special structures that allow them to be decomposed into more easily solved subproblems, which can be exploited in solution methods. These structures appear frequently in applications. We contribute both research on the development of decomposition principles and research on applications. The thesis consists of an introduction and three papers.

In Paper I, we develop a Lagrangian meta-heuristic principle, founded on a primal-dual global optimality condition for discrete and non-convex optimization problems. This condition characterizes (near-)optimal solutions in terms of near-optimality and near-complementarity measures for Lagrangian relaxed solutions. The meta-heuristic principle amounts to constructing a weighted combination of these measures, thus creating a parametric auxiliary objective function (a close relative of a Lagrangian function), and embedding a Lagrangian heuristic in a search procedure in the space of the weight parameters. We illustrate and assess the Lagrangian meta-heuristic principle by applying it to the generalized assignment problem and to the set covering problem. Our computational experience shows that the meta-heuristic extension of a standard Lagrangian heuristic principle can significantly improve upon the solution quality.

In Paper II, we study the duality gap for set covering problems. Such problems sometimes have large duality gaps, which make them computationally challenging. The duality gap is dissected with the purpose of understanding its relationship to problem characteristics, such as problem shape and density. The means for doing this is the above-mentioned optimality condition, which is used to decompose the duality gap into terms describing near-optimality in a Lagrangian relaxation and near-complementarity in the relaxed constraints. We analyse these terms for numerous problem instances, including some large real-life instances, and conclude that when the duality gap is large, the near-complementarity term is typically large and the near-optimality term small. The large violation of complementarity is due to extensive over-coverage. Our observations have implications for the design of solution methods, especially for the design of core problems.

In Paper III, we study a bi-objective covering problem stemming from a real-world application concerning the design of camera surveillance systems for large-scale outdoor areas. It is prohibitively costly to surveil the entire area, and it is therefore relevant to present a decision-maker with trade-offs between total cost and the portion of the area that is surveilled. The problem is stated as a set covering problem with two objectives, describing cost and the portion of covering constraints that are fulfilled, respectively. Finding the Pareto frontier for these objectives is very computationally demanding, and we therefore develop a method for finding a good approximate frontier in reasonable computing time. The method is based on the ε-constraint reformulation, an established heuristic for set covering problems, and subgradient optimization.
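The Lagrangian machinery used in Papers I and II builds on the textbook relaxation of set covering, which can be sketched compactly. The generic scheme below (subgradient optimization plus a greedy repair heuristic, not the papers' meta-heuristic) relaxes the covering rows with multipliers u ≥ 0, evaluates the dual bound L(u), steps u along the subgradient 1 - Ax, and repairs the relaxed solution into a feasible cover.

```python
import numpy as np

def lagrangian_set_cover(a, c, iters=200):
    """Subgradient optimization of the Lagrangian dual of
    min c'x s.t. Ax >= 1, x binary, with a greedy Lagrangian heuristic.
    a: (m, n) 0/1 coverage matrix, c: (n,) positive column costs."""
    m, n = a.shape
    u = np.zeros(m)                       # one multiplier per covering row
    best_lb, best_ub = -np.inf, np.inf
    for k in range(1, iters + 1):
        red = c - a.T @ u                 # reduced costs
        x = (red < 0).astype(float)       # minimizer of the relaxation
        best_lb = max(best_lb, red @ x + u.sum())    # L(u), a lower bound
        xh = x.copy()                     # greedy repair into a cover
        uncovered = a @ xh < 1
        while uncovered.any():
            j = int(np.argmax(a[uncovered].sum(0) / c))  # rows per cost
            xh[j] = 1.0
            uncovered = a @ xh < 1
        best_ub = min(best_ub, c @ xh)
        g = 1.0 - a @ x                   # subgradient of L at u
        u = np.maximum(0.0, u + (1.0 / k) * g)       # diminishing steps
    return best_lb, best_ub

a = np.array([[1, 1, 0], [0, 1, 1], [1, 0, 1]], float)
c = np.array([2.0, 2.0, 3.0])
print(lagrangian_set_cover(a, c))   # bounds around the optimum value 4
```

The gap between the two returned bounds is exactly the kind of duality gap dissected in Paper II.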
APA, Harvard, Vancouver, ISO, and other styles
44

Kim, Donggeon. "Least squares mixture decomposition estimation." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-02132009-171622/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Xie, Min. "Signal decomposition for nonstationary processes." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-06062008-162359/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Saka, Paul. "Lexical decomposition in cognitive semantics." Diss., The University of Arizona, 1991. http://hdl.handle.net/10150/185592.

Full text
Abstract:
This dissertation formulates, defends, and exemplifies a semantic approach that I call Cognitive Decompositionism. Cognitive Decompositionism is one version of lexical decompositionism, which holds that the meanings of lexical items are decomposable into component parts. Decompositionism comes in different varieties that can be characterized in terms of four binary parameters. First, Natural Decompositionism contrasts with Artful Decompositionism: the former views components as word-like, the latter views components more abstractly. Second, Convenient Decompositionism claims that components are merely convenient fictions, while Real Decompositionism claims that components are psychologically real. Third, Truth-conditional Decompositionism contrasts with various non-truth-conditional theories, in particular with Quantum Semantics. And fourth, Holistic Decompositionism assumes that decompositions are circular, as opposed to Atomistic Decompositionism, which assumes that some primitive basis ultimately underlies semantic components. Cognitive Decompositionism is the conjunction of the following theses: decomposition is Artful (chapter 2), Psychologically Real (chapter 3), Quantum (chapter 4), and Atomistic (chapter 5). As I substantiate these claims, I will be responding to the anti-decompositionist theories of Fodor, Davidson, and Quine.
APA, Harvard, Vancouver, ISO, and other styles
47

Farhana, Sharmeen. "Thermal decomposition of struvite : a novel approach to recover ammonia from wastewater using struvite decomposition products." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54180.

Full text
Abstract:
Ammonia recovery technology utilizing newberyite (MgHPO₄·3H₂O) as a struvite (MgNH₄PO₄·6H₂O) decomposition product is gaining interest, as usage of newberyite can significantly reduce the cost of commercial reagents by providing readily available magnesium and phosphate for struvite reformation. In this study, the efficiency of ammonia removal from struvite, and a transformation process of struvite to newberyite, were investigated through oven-dry, bench-scale and pilot-scale experiments. In the oven-dry experiments, the structural and compositional changes of synthetic struvite upon decomposition were evaluated. Around 60-70% ammonia removal efficiency was achieved through struvite thermal decomposition above 60±0.5 °C, with up to 71.1% removal after prolonged heating. The 2D amorphous layered structure present in the decomposed solid phase entrapped the residual 30-40% of the ammonia between the layers of magnesium and phosphate, inhibiting further ammonia removal. Subsequently, bench-scale experiments were conducted based on the hypothesis that humid air can prevent the formation of layered structures, namely dittmarite (MgNH₄PO₄·H₂O) and the amorphous 2D layered phase. Struvite pellets of different sources and sizes were heated in a fluidized bed reactor in the presence of hot air and steam. The introduction of steam resulted in complete transformation of struvite pellets (<1 mm) into newberyite at 80 °C, 95% relative humidity and 2 hours of heating. Finally, pilot-scale experiments were carried out to further optimize the operating conditions for industrial application. The smaller and softer pellets (size <1 mm, hardness 300-500 g) were best suited for struvite-to-newberyite conversion. The process was optimized further by narrowing the relative humidity from 95% to 85% and reducing the heating duration from 2 to 1.5 hours. The operating cost of the pilot-scale process was estimated; it can be reduced by recycling the heat and moist air over the cycle. The number of cycles for which the decomposed product can be effectively reused depends on the required overall N-recovery efficiency, as well as on the performance of the struvite recrystallization stage. The greatest advantage of the proposed technology over other recovery methods is that the operating costs can be turned into revenue by utilizing the recovered product as fertilizer or as an energy source.
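The two decomposition channels discussed in the abstract correspond to standard balanced reactions from the struvite literature, restated here for orientation rather than quoted from the thesis: dry heating favours the ammonium-retaining dittmarite route, while humid heating drives the full deammoniation to newberyite.

```latex
% Dittmarite route (dry heating): ammonium stays bound in the solid
\mathrm{MgNH_4PO_4\cdot 6H_2O \;\xrightarrow{\;\Delta\;}\;
        MgNH_4PO_4\cdot H_2O + 5\,H_2O}

% Newberyite route (humid heating): ammonia is released for recovery
\mathrm{MgNH_4PO_4\cdot 6H_2O \;\xrightarrow{\;\Delta,\ H_2O(g)\;}\;
        MgHPO_4\cdot 3H_2O + NH_3\uparrow + 3\,H_2O}
```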
APA, Harvard, Vancouver, ISO, and other styles
48

Lien, Jyh-Ming. "Approximate convex decomposition and its applications." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wardle, Mason B. "A PAM Decomposition of Weak CPM." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd868.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Vyskocil, Pavel. "Decomposition and coarsening in Ni-Ti /." [S.l.] : [s.n.], 1994. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10396.

Full text
APA, Harvard, Vancouver, ISO, and other styles
