
Doctoral dissertations on the topic "Matroid Constraints"

Browse the 50 best scholarly doctoral dissertations on the topic "Matroid Constraints".


1

Reimers, Arne Cornelis [Verfasser]. "Metabolic Networks, Thermodynamic Constraints, and Matroid Theory / Arne C. Reimers". Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1058587331/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
2

Harini, Desiraju Harini. "Matrix models and Virasoro constraints". Thesis, Uppsala universitet, Teoretisk fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-276090.

3

Flieger, Wojciech. "Constraints on neutrino mixing from matrix theory". Doctoral thesis, Katowice : Uniwersytet Śląski, 2021. http://hdl.handle.net/20.500.12128/21721.

Abstract:
One of the key problems of contemporary elementary particle physics concerns the number of neutrino flavours occurring in nature. So far it has been established that there are three types of active neutrinos. An important open question is whether additional neutrino states exist. Such neutrinos are called sterile because their weak interaction with known matter has so far remained below the experimental detection threshold. Nevertheless, sterile neutrinos can mix with the active neutrinos, leaving traces of their existence at the level of the Standard Model in the form of non-unitarity of the neutrino mixing matrix. For this reason, the study of the non-unitarity of the mixing matrix is essential for a full understanding of neutrino physics. In this dissertation we present a new method of analysing the neutrino mixing matrix based on matrix theory. The foundation of our approach is formed by the notions of singular values and contractions. Using these notions, we define the region of physically admissible mixing matrices as the convex hull spanned by the three-dimensional unitary mixing matrices determined from experimental data. We study the geometric properties of this region, computing its volume expressed through the Haar measure of the singular value decomposition and examining its interior structure, which depends on the minimal number of additional sterile neutrinos. Applying the theory of unitary dilations, we show how singular values allow one to identify non-unitary mixing matrices and how to construct their extensions to a full unitary matrix of dimension greater than three, describing a complete theory that contains sterile neutrinos. On this basis we derive new constraints in models where the active neutrinos mix with one additional sterile neutrino.
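The dissertation's central matrix-theoretic tools, singular values as a contraction test and the unitary dilation of a contraction, can be sketched numerically. The block below implements the standard two-by-two block (Halmos) dilation of a real contraction, not the author's specific extension procedure for mixing matrices; function names are illustrative.

```python
import numpy as np

def is_contraction(A, tol=1e-12):
    """A matrix is a contraction iff all its singular values are at most 1."""
    return np.linalg.svd(A, compute_uv=False).max() <= 1.0 + tol

def _psd_sqrt(S):
    """Symmetric square root of a positive semidefinite matrix via eigh."""
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.sqrt(np.maximum(w, 0.0))) @ V.T

def unitary_dilation(A):
    """Embed a real n-by-n contraction A into a 2n-by-2n orthogonal matrix."""
    n = A.shape[0]
    I = np.eye(n)
    D1 = _psd_sqrt(I - A @ A.T)   # defect operators; they vanish iff A is unitary
    D2 = _psd_sqrt(I - A.T @ A)
    return np.block([[A, D1], [D2, -A.T]])
```

The dilation's unitarity follows from the intertwining identity $A(I-A^{\mathsf T}A)^{1/2} = (I-AA^{\mathsf T})^{1/2}A$, which is immediate from the SVD of $A$.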
4

Lecharlier, Loïc. "Blind inverse imaging with positivity constraints". Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209240.

Abstract:
In inverse problems in imaging, the operator or matrix describing the image formation system is generally assumed to be known. Equivalently, for a linear system, its impulse response is assumed known. However, this is not a realistic assumption for many practical applications in which this operator is in fact unknown (or known only approximately). One then faces a so-called "blind" inversion problem. For translation-invariant systems, one speaks of "blind deconvolution", since both the initial image or object and the impulse response must be estimated from the single observed image, which results from a convolution and is affected by measurement errors. This problem is notoriously difficult, and to overcome the ambiguities and numerical instabilities inherent in this type of inversion, one must resort to additional information or constraints, such as positivity, which has proved a powerful stabilizing lever in non-blind imaging problems. The thesis proposes new blind inversion algorithms in a discrete or discretized setting, assuming that the unknown image, the matrix to be inverted, and the data are positive. The problem is formulated as a (non-convex) optimization problem in which the data-fidelity term to be minimized, modelling either Poisson-type data (Kullback-Leibler divergence) or data corrupted by Gaussian noise (least squares), is augmented by penalty terms on the unknowns of the problem. The optimization strategy consists of alternating multiplicative updates of the image to be reconstructed and of the matrix to be inverted, derived from the minimization of surrogate cost functions valid in the positive case. The rather general framework allows several types of penalties to be used, including the (smoothed) total variation of the image.
An optional normalization of the impulse response or of the matrix is also provided at each iteration. Convergence results for these algorithms are established in the thesis, both for the decrease of the cost functions and for the convergence of the sequence of iterates to a stationary point. The proposed methodology is successfully validated by numerical simulations for various applications such as blind deconvolution of astronomical images, nonnegative matrix factorization for hyperspectral imaging, and density deconvolution in statistics.
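The multiplicative-update strategy for positive data described above belongs to the family of Lee-Seung NMF updates. A minimal sketch of the Kullback-Leibler variant follows; this is not the author's blind-deconvolution algorithm (which alternates image and operator updates with penalties), only the classical building block it rests on.

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing the KL divergence D(V || WH).
    Factors stay elementwise nonnegative because the updates are multiplicative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

Each update is derived from a surrogate (majorizing) cost function, which is exactly the mechanism the thesis generalizes to the blind setting.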
Doctorate in Sciences
5

Strabic, Natasa. "Theory and algorithms for matrix problems with positive semidefinite constraints". Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/theory-and-algorithms-for-matrix-problems-with-positive-semidefinite-constraints(5c8ac15f-9666-4682-9297-73d976bed63e).html.

Abstract:
This thesis presents new theoretical results and algorithms for two matrix problems with positive semidefinite constraints: it adds to the well-established nearest correlation matrix problem, and introduces a class of semidefinite Lagrangian subspaces. First, we propose shrinking, a method for restoring positive semidefiniteness of an indefinite matrix $M_0$ that computes the optimal parameter $\alpha_*$ in a convex combination of $M_0$ and a chosen positive semidefinite target matrix. We describe three algorithms for computing $\alpha_*$, and then focus on the case of keeping fixed a positive semidefinite leading principal submatrix of an indefinite approximation of a correlation matrix, showing how the structure can be exploited to reduce the cost of two algorithms. We describe how weights can be used to construct a natural choice of the target matrix and that they can be incorporated without any change to computational methods, in contrast to the nearest correlation matrix problem. Numerical experiments show that shrinking can be at least an order of magnitude faster than computing the nearest correlation matrix and so is preferable in time-critical applications. Second, we focus on estimating the distance in the Frobenius norm of a symmetric matrix $A$ to its nearest correlation matrix $\mathrm{ncm}(A)$ without first computing the latter. The goal is to enable a user to identify an invalid correlation matrix relatively cheaply and to decide whether to revisit its construction or to compute a replacement. We present a few currently available lower and upper bounds for $d_{\mathrm{corr}}(A) = \|A - \mathrm{ncm}(A)\|_F$ and derive several new upper bounds, discuss the computational cost of all the bounds, and test their accuracy on a collection of invalid correlation matrices. The experiments show that several of our bounds are well suited to gauging the correct order of magnitude of $d_{\mathrm{corr}}(A)$, which is perfectly satisfactory for practical applications.
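The shrinking idea, finding the smallest $\alpha$ for which $\alpha T + (1-\alpha)M_0$ is positive semidefinite, can be sketched with a bisection on the minimum eigenvalue (bisection is one of the algorithms discussed in the thesis; this sketch assumes a positive definite target $T$):

```python
import numpy as np

def shrink_alpha(M0, T, tol=1e-10):
    """Smallest alpha in [0, 1] such that S(alpha) = alpha*T + (1-alpha)*M0 is
    positive semidefinite, found by bisection on the minimum eigenvalue.
    The minimum eigenvalue is a concave function of alpha, so on [0, 1] there
    is a single crossing from negative to nonnegative."""
    def min_eig(a):
        return np.linalg.eigvalsh(a * T + (1 - a) * M0)[0]
    if min_eig(0.0) >= 0.0:
        return 0.0          # M0 is already PSD: no shrinking needed
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if min_eig(mid) >= 0.0:
            hi = mid
        else:
            lo = mid
    return hi
```

With $T = I$ and a unit-diagonal $M_0$, the shrunk matrix keeps a unit diagonal, so the construction stays within the set of correlation matrices.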
Third, we show how Anderson acceleration can be used to speed up the convergence of the alternating projections method for computing the nearest correlation matrix, and that the acceleration remains effective when it is applied to the variants of the nearest correlation matrix problem in which specified elements are fixed or a lower bound is imposed on the smallest eigenvalue. This is particularly significant for the nearest correlation matrix problem with fixed elements because no Newton method with guaranteed convergence is available for it. Moreover, alternating projections is a general method for finding a point in the intersection of several sets, and this appears to be the first demonstration that these methods can benefit from Anderson acceleration. Finally, we introduce semidefinite Lagrangian subspaces, describe their connection to the unique positive semidefinite solution of an algebraic Riccati equation, and show that these subspaces can be represented by a subset $\mathcal{I} \subseteq \{1,2,\dots,n\}$ and a Hermitian matrix $X \in \mathbb{C}^{n\times n}$ that is a generalization of a quasidefinite matrix. We further obtain a semidefiniteness-preserving version of an optimization algorithm introduced by Mehrmann and Poloni [SIAM J. Matrix Anal. Appl., 33 (2012), pp. 780-805] to compute a pair $(\mathcal{I}_{\mathrm{opt}}, X_{\mathrm{opt}})$ with $M = \max_{i,j} |(X_{\mathrm{opt}})_{ij}|$ as small as possible, which improves numerical stability in several contexts.
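For context, the unaccelerated alternating projections iteration for the correlation-matrix constraints (project onto the PSD cone, then restore the unit diagonal) can be sketched as below. Note this plain von Neumann iteration converges to a point of the intersection, not necessarily the nearest correlation matrix (Higham's method adds a Dykstra correction for that), and no Anderson acceleration is applied here.

```python
import numpy as np

def proj_psd(A):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_unit_diag(A):
    """Projection onto the affine set of matrices with unit diagonal."""
    B = A.copy()
    np.fill_diagonal(B, 1.0)
    return B

def alternating_projections(A, n_iter=500):
    """Von Neumann alternating projections between the two constraint sets."""
    X = A.copy()
    for _ in range(n_iter):
        X = proj_unit_diag(proj_psd(X))
    return X
```

Anderson acceleration would treat one sweep of this loop as a fixed-point map and extrapolate over the last few iterates.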
6

Chia, Liang. "Language shift in a Singaporean Chinese family and the matrix language frame model". Thesis, University of Oxford, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365765.

7

Xu, Da. "Classical groups, integrals and Virasoro constraints". Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/629.

Abstract:
First, we consider group integrals whose integrands are monomials in the matrix elements of irreducible representations of classical groups. These group integrals are invariant under the group action. Based on an analysis of Young tableaux, we investigate some related duality theorems and compute the asymptotics of the group integrals for fixed signatures as the rank of the classical groups goes to infinity. We also obtain the Virasoro constraints for some partition functions, which are power series in the group integrals. Second, we show that the proof of Witten's conjecture can be simplified by using the fermion-boson correspondence; i.e., the KdV hierarchy and Virasoro constraints of the partition function in Witten's conjecture can be derived naturally. Third, we consider the partition function involving the invariants that are intersection numbers of the moduli spaces of holomorphic maps in the nonlinear sigma model. We compute the commutator of the representation of the Virasoro algebra and give a fat graph (ribbon graph) interpretation for each term in the differential operators.
8

Bai, Shuanghua. "Numerical methods for constrained Euclidean distance matrix optimization". Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/401542/.

Abstract:
This thesis is an accumulation of work regarding a class of constrained Euclidean Distance Matrix (EDM) based optimization models and corresponding numerical approaches. EDM-based optimization is powerful for processing distance information, which appears in diverse applications arising from a wide range of fields, from which the motivation for this work comes. Those problems usually involve minimizing the error of distance measurements as well as satisfying some Euclidean distance constraints, which may present an enormous challenge to existing algorithms. In this thesis, we focus on problems with two different types of constraints. The first consists of spherical constraints, which come from spherical data representation, and the other has a large number of bound constraints, which come from wireless sensor network localization. For spherical data representation, we reformulate the problem as a Euclidean distance matrix optimization problem with a low-rank constraint. We then propose an iterative algorithm that uses a quadratically convergent Newton-CG method at each step. We study fundamental issues including constraint nondegeneracy and the nonsingularity of the generalized Jacobian that ensure the quadratic convergence of the Newton method. We use some classic examples from spherical multidimensional scaling to demonstrate the flexibility of the algorithm in incorporating various constraints. For wireless sensor network localization, we set up a convex optimization model using the EDM, which integrates connectivity information as lower and upper bounds on the elements of the EDM, resulting in an EDM-based localization scheme that possesses both efficiency and robustness in dealing with flip ambiguity in the presence of high levels of noise in distance measurements and irregular topology of the network of moderate size.
To localize a large-scale network efficiently, we propose a patching-stitching localization scheme which divides the network into several sub-networks, localizes each sub-network separately, and stitches all the sub-networks together to get the recovered network. A mechanism for separating the network is discussed. The EDM-based optimization model can be extended to add more constraints, resulting in a flexible localization scheme for various kinds of applications. Numerical results show that the proposed algorithm is promising.
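The basic link between an EDM and point coordinates that localization schemes of this kind rely on is classical multidimensional scaling: double-centre the squared distances to obtain a Gram matrix and factor it. A minimal sketch (this is not the thesis's Newton-CG or patching-stitching algorithm):

```python
import numpy as np

def classical_mds(D2, dim):
    """Recover point coordinates (up to rigid motion) from a squared Euclidean
    distance matrix by double centering and an eigendecomposition."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                # Gram matrix of the centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]      # keep the dim largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For a noiseless EDM of points in `dim` dimensions the recovered configuration reproduces the original distances exactly; with noisy or bounded entries one gets the optimization models studied in the thesis.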
9

Jin, Shengzhe. "Quality Assessment Planning Using Design Structure Matrix and Resource Constraint Analysis". University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1292518039.

10

Menzel, Andreas. "Constraints on the Fourth-Generation Quark Mixing Matrix from Precision Flavour Observables". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://dx.doi.org/10.18452/17711.

Abstract:
The Standard Model extended by an additional sequential generation of Dirac fermions (SM4) was excluded with a significance of 5.3 sigma in 2012. This was achieved in a combined fit of the SM4 to electroweak precision observables and signal strengths of the Higgs boson. This thesis complements that exclusion with a fit of the SM4 to a typical set of flavour physics observables together with the results of the previously performed electroweak precision fit. Quantities extracted in an SM3 framework are reinterpreted in SM4 terms and the adapted theoretical expressions are given. The resulting constraints on the SM4's CKM matrix, its potentially CP-violating phases, and the mass of the new up-type quark t' are given. To compare the relative performance of the SM4 and the SM3, this work uses the chi^2 values achieved in the fit. The values of 15.53 for the SM4 and 9.56 for the SM3 are almost perfectly consistent with both models describing the experimental data equally well, the SM3 having six more degrees of freedom. The dimuon charge asymmetry ASL was not used as a fit input because the interpretation of its measurement was subject to debate at the time the fits were produced, but its prediction from the fit was used as an additional test of the SM4. The SM3's prediction differs from the experimental values by about 2 sigma, and the SM4's prediction by about 3 sigma.
In summary, these results do not suggest that any significant reduction of the 5.3 sigma exclusion could be achieved by combining the electroweak precision observables and Higgs inputs with flavour physics data. However, the exact effect of the flavour physics input on the significance of the SM4's exclusion cannot be given at this point because the CKMfitter software is currently not able to perform a statistically stringent likelihood comparison of non-nested models.
11

Gajaweera, Ruwan Naminda. "Coupling matrix synthesis for constrained topology microwave bandpass filters". Thesis, University of Essex, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399028.

12

Morris, Craig C. "Flight Dynamic Constraints in Conceptual Aircraft Multidisciplinary Analysis and Design Optimization". Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/25787.

Abstract:
This work details the development of a stability and control module for implementation into a Multidisciplinary Design Optimization (MDO) framework for the conceptual design of conventional and advanced aircraft. A novel approach, called the Variance Constrained Flying Qualities (VCFQ) approach, is developed to include closed-loop dynamic performance metrics in the design optimization process. The VCFQ approach overcomes the limitations of previous methods in the literature, which only functioned for fully decoupled systems with single inputs to the system. Translation of the modal parameter based flying qualities requirements into state variance upper bounds allows for multiple-input control laws which can guarantee upper bounds on closed-loop performance metrics of the aircraft states and actuators to be rapidly synthesized. A linear matrix inequality (LMI) problem formulation provides a general and scalable numerical technique for computing the feedback control laws using convex optimization tools. The VCFQ approach is exercised in a design optimization study of a relaxed static stability transonic transport aircraft, wherein the empennage assembly is optimized subject to both static constraints and closed-loop dynamic constraints. Under the relaxed static stability assumption, application of the VCFQ approach resulted in a 36% reduction in horizontal tail area and a 32% reduction in vertical tail area as compared to the baseline configuration, which netted a weight savings of approximately 5,200 lbs., a 12% reduction in cruise trimmed drag, and a static margin which was marginally stable or unstable throughout the flight envelope. State variance based dynamic performance constraints offer the ability to analyze large, highly coupled systems, and the linear matrix inequality problem formulation can be extended to include higher-order closed-loop design objectives within the MDO. 
Recommendations for further development and extensions of this approach are presented at the end.
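The central quantity in the variance-constrained approach, the steady-state state covariance of a stable closed-loop linear system driven by white noise, satisfies a Lyapunov equation. A sketch of checking per-state variance upper bounds is below (the LMI synthesis step that designs the feedback gains is omitted; function names are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def state_variances(A, B):
    """Steady-state covariance P of dx = A x dt + B dw, for Hurwitz A:
    P solves the Lyapunov equation A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return np.diag(P)                       # variance of each state

def meets_variance_bounds(A, B, bounds):
    """Compare the closed-loop state variances against per-state upper bounds,
    as in a variance-constrained flying-qualities check."""
    return bool(np.all(state_variances(A, B) <= np.asarray(bounds)))
```

In the VCFQ setting, `A` would be the closed-loop matrix after feedback is applied, and the bounds come from translating modal flying-qualities requirements into state variance limits.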
This material is based on research sponsored by Air Force Research Laboratory under agreement number FA8650-09-2-3938. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Air Force Research Laboratory or the U.S. Government.
Ph. D.
13

Ankelhed, Daniel. "On low order controller synthesis using rational constraints". Licentiate thesis, Linköping : Department of Electrical Engineering, Linköping University, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17002.

14

Tong, Lei. "Constrained Nonnegative Matrix Factorization for Hyperspectral Unmixing and Its Applications". Thesis, Griffith University, 2016. http://hdl.handle.net/10072/367613.

Abstract:
Hyperspectral remote sensing imagery, containing both spatial and spectral information captured by imaging sensors, has been widely used for ground information extraction. Due to the long distance from the imaging sensors to the monitored targets and the intrinsic properties of the sensors, hyperspectral images normally do not have high spatial resolution, which causes mixed responses of various types of ground objects in the images. Therefore, hyperspectral unmixing has become an important technique to decompose mixed pixels into a collection of spectral signatures, or endmembers, and their corresponding proportions, i.e., abundances. Hyperspectral unmixing methods can be mainly divided into three categories: geometric based, statistics based, and sparse regression based. Among these methods, nonnegative matrix factorization (NMF), as one of the statistical methods, has attracted much attention. It treats unmixing as a blind source separation problem and decomposes image data into endmember and abundance matrices simultaneously. However, the NMF algorithm may fall into local minima because the objective function of NMF is non-convex. Adding adequate constraints to NMF has become one solution to this problem. In this thesis, we introduce three different constraints for the NMF-based hyperspectral unmixing method. The first constraint is a partial prior knowledge of endmembers constraint. It assumes that some endmembers can be treated as known before unmixing. The proposed model minimizes the differences between the spectral signatures of endmembers being estimated in the image data and the standard signatures of known endmembers extracted from a library or detected on the ground.
The benefit of this method is that it not only uses prior knowledge in the unmixing task, but also considers the distribution of the real data in the hyperspectral dataset, so that the discrepancy between the prior knowledge and the data can be reconciled. Furthermore, the proposed method is general in nature and can easily be extended to other NMF-based hyperspectral unmixing algorithms.
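The partial-prior-knowledge constraint can be illustrated with Frobenius-norm multiplicative NMF updates in which the known endmember columns are simply held fixed after each update. This hard constraint is an illustrative simplification: the thesis penalizes deviation from the known signatures rather than pinning them exactly.

```python
import numpy as np

def nmf_fixed_endmembers(V, E_known, r, n_iter=300, eps=1e-9, seed=0):
    """Lee-Seung Frobenius-norm multiplicative updates for V ~ W H, with the
    first columns of the endmember matrix W pinned to known spectra E_known."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    k = E_known.shape[1]
    W = rng.random((m, r)) + eps
    W[:, :k] = E_known
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W[:, :k] = E_known              # re-impose the known endmembers
    return W, H
```

Here `V` is bands-by-pixels, the columns of `W` are endmember spectra, and the rows of `H` are per-pixel abundances.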
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), Griffith School of Engineering, Science, Environment, Engineering and Technology
15

Li, Nan. "Maximum Likelihood Identification of an Information Matrix Under Constraints in a Corresponding Graphical Model". Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/128.

Abstract:
We address the problem of identifying the neighborhood structure of an undirected graph whose nodes are labeled with the elements of a multivariate normal (MVN) random vector. A semidefinite program is given for estimating the information matrix under arbitrary constraints on its elements. More importantly, a closed-form expression is given for the maximum likelihood (ML) estimator of the information matrix under the constraint that the information matrix has pre-specified elements in a given pattern (e.g., in a principal submatrix). The results apply to the identification of dependency labels in a graphical model with neighborhood constraints. This neighborhood structure excludes nodes which are conditionally independent of a given node, and the graph is determined by the nonzero elements in the information matrix of the random vector. A cross-validation principle is given for determining whether the constrained information matrix returned by this procedure is an acceptable model for the information matrix and, as a consequence, for the neighborhood structure of the Markov Random Field (MRF) that is identified with the MVN random vector.
16

Billson, Jeremy Paul. "The design, synthesis, and evaluation of some conformationally constrained matrix metalloprotease inhibitors". Thesis, University of Exeter, 1997. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535965.

17

Ankelhed, Daniel. "On design of low order H-infinity controllers". Doctoral thesis, Linköpings universitet, Reglerteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67869.

Abstract:
When designing controllers with robust performance and stabilization requirements, H-infinity synthesis is a common tool to use. These controllers are often obtained by solving mathematical optimization problems. The controllers that result from these algorithms are typically of very high order, which complicates implementation. Low order controllers are usually desired, since they are considered more reliable than high order controllers. However, if a constraint on the maximum order of the controller is set that is lower than the order of the so-called augmented system, the optimization problem becomes nonconvex and it is relatively difficult to solve. This is true even when the order of the augmented system is low. In this thesis, optimization methods for solving these problems are considered. In contrast to other methods in the literature, the approach used in this thesis is based on formulating the constraint on the maximum order of the controller as a rational function in an equality constraint. Three methods are then suggested for solving this smooth nonconvex optimization problem. The first two methods use the fact that the rational function is nonnegative. The problem is then reformulated as an optimization problem where the rational function is to be minimized over a convex set defined by linear matrix inequalities (LMIs). This problem is then solved using two different interior point methods. In the third method the problem is solved by using a partially augmented Lagrangian formulation where the equality constraint is relaxed and incorporated into the objective function, but where the LMIs are kept as constraints. Again, the feasible set is convex and the objective function is nonconvex. The proposed methods are evaluated and compared with two well-known methods from the literature. The results indicate that the first two suggested methods perform well especially when the number of states in the augmented system is less than 10 and 20, respectively. 
The third method has comparable performance with two methods from literature when the number of states in the augmented system is less than 25.
18

Madume, Jaison Pezisai. "Covariance matrix estimation methods for constrained portfolio optimization in a South African setting". Master's thesis, University of Cape Town, 2010. http://hdl.handle.net/11427/5745.

Abstract:
One of the major topics of concern in Modern Portfolio Theory is portfolio optimization, which is centred on the mean-variance framework. In order for this framework to be implemented, estimated parameters (the covariance matrix for the constrained portfolio) are required. The problem with these estimated parameters is that they have to be extracted from historical data based on certain assumptions. Because of the different estimation methods that can be used, the parameters thus obtained will suffer from either estimation error or specification error. In order to obtain realistic results in the optimization, one then needs to establish covariance matrix estimators that are as good as possible. This paper explores various covariance matrix estimation methods in a South African setting, focusing on the constrained portfolio. The empirical results show that the Ledoit shrinkage to a constant correlation method, the Principal Component Analysis method, and the Portfolio of estimators method all perform as well as the sample covariance matrix in the ex-ante period and improve on it slightly in the ex-post period. However, the improvement is of a small magnitude; as a result, the sample covariance matrix can be used in constrained portfolio optimization in a South African setting.
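The "shrinkage to a constant correlation" estimator mentioned above combines the sample covariance with a structured target. A sketch with a fixed shrinkage intensity follows; Ledoit and Wolf derive an optimal data-driven intensity, which is omitted here for brevity.

```python
import numpy as np

def constant_correlation_target(S):
    """Target matrix: keep the sample variances but set every correlation
    to the mean off-diagonal sample correlation."""
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)
    n = S.shape[0]
    rbar = (R.sum() - n) / (n * (n - 1))   # mean off-diagonal correlation
    T = rbar * np.outer(d, d)
    np.fill_diagonal(T, d ** 2)
    return T

def shrink_cov(X, delta):
    """Convex combination of the sample covariance and the target; delta is
    the shrinkage intensity, fixed here rather than estimated from the data."""
    S = np.cov(X, rowvar=False)
    return delta * constant_correlation_target(S) + (1 - delta) * S
```

Because the target preserves the sample variances, shrinkage only pulls the off-diagonal entries toward a common correlation level, which is what reduces estimation error in the optimized portfolio weights.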
19

Kabasele, Philothe Mwamba. "TESTING THE MATRIX LANGUAGE FRAME MODEL WITH EVIDENCE FROM FRENCH-LINGALA CODE-SWITCHING". OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/616.

Abstract:
My thesis investigates the universality of the Matrix Language Frame model developed by Myers-Scotton (2002). The work tests the model by using bilingual data which display code-switching between French and the low variety of Lingala. The main concern of the work is to test the constraints that are posited in terms of principles of the model and which claim that the Matrix Language dictates the morphosyntactic frame of a bilingual Complementizer Phrase (CP). In the light of the findings of this study, it was shown that the ML model failed to account for a number of situations; and such was the case of the Morpheme Order Principle and double morphology, specifically with the outsider late system morphemes.
20

Lavine, Jerrold I. (Jerrold Isaac) 1968. "Parametric design constraints management using the design structure matrix : creation of an electronic catalog for a safety belt system". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/88826.

21

Spalt, Taylor Brooke. "Constrained Spectral Conditioning for the Spatial Mapping of Sound". Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/70868.

Abstract:
In aeroacoustic experiments of aircraft models and/or components, arrays of microphones are utilized to spatially isolate distinct sources and mitigate interfering noise which contaminates single-microphone measurements. Array measurements are still biased by interfering noise which is coherent over the spatial array aperture. When interfering noise is accounted for, existing algorithms which aim to both spatially isolate distinct sources and determine their individual levels as measured by the array are complex and require assumptions about the nature of the sound field. This work develops a processing scheme which uses spatially-defined phase constraints to remove correlated, interfering noise at the single-channel level. This is achieved through a merger of Conditioned Spectral Analysis (CSA) and the Generalized Sidelobe Canceller (GSC). A cross-spectral, frequency-domain filter is created using the GSC methodology to edit the CSA formulation. The only constraint needed is the user-defined, relative phase difference between the channel being filtered and the reference channel used for filtering. This process, titled Constrained Spectral Conditioning (CSC), produces single-channel Fourier Transform estimates of signals which satisfy the user-defined phase differences. In a spatial sound field mapping context, CSC produces sub-datasets derived from the original which estimate the signal characteristics from distinct locations in space. Because single-channel Fourier Transforms are produced, CSC's outputs could theoretically be used as inputs to many existing algorithms. As an example, data-independent, frequency-domain beamforming (FDBF) using CSC's outputs is shown to exhibit finer spatial resolution and lower sidelobe levels than FDBF using the original, unmodified dataset. 
However, these improvements decrease with Signal-to-Noise Ratio (SNR), and CSC's quantitative accuracy is dependent upon accurate modeling of the sound propagation and inter-source coherence if multiple and/or distributed sources are measured. In order to demonstrate systematic spatial sound mapping using CSC, it is embedded into the CLEAN algorithm which is then titled CLEAN-CSC. Simulated data analysis indicates that CLEAN-CSC is biased towards the mapping and energy allocation of relatively stronger sources in the field, which limits its ability to identify and estimate the level of relatively weaker sources. It is also shown that CLEAN-CSC underestimates the true integrated levels of sources in the field and exhibits higher-than-true peak source levels, and these effects increase and decrease respectively with increasing frequency. Five independent scaling methods are proposed for correcting the CLEAN-CSC total integrated output levels, each with their own assumptions about the sound field being measured. As the entire output map is scaled, these do not account for relative source level errors that may exist. Results from two airfoil tests conducted in NASA Langley's Quiet Flow Facility show that CLEAN-CSC exhibits less map noise than CLEAN yet more segmented spatial sound distributions and lower integrated source levels. However, using the same source propagation model that CLEAN assumes, the scaled CLEAN-CSC integrated source levels are brought into closer agreement with those obtained with CLEAN.
Ph. D.
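The abstract's baseline method, conventional frequency-domain beamforming from a cross-spectral matrix, can be sketched numerically. Everything below (array geometry, source position, frequency, snapshot counts) is an invented toy setup for illustration, not data or code from the dissertation:

```python
import numpy as np

# Toy frequency-domain beamforming (FDBF): one monopole source, a 16-microphone
# line array, a cross-spectral matrix (CSM), and a conventional beam map.
rng = np.random.default_rng(0)
c, f = 343.0, 2000.0                       # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c                      # acoustic wavenumber
mics = np.column_stack([np.linspace(-0.5, 0.5, 16), np.zeros(16)])
src = np.array([0.1, 1.0])                 # true source position (x, y)

def steering(grid_pt):
    """Free-field monopole steering vector from the mics to one grid point."""
    r = np.linalg.norm(mics - grid_pt, axis=1)
    return np.exp(-1j * k * r) / r

# Synthesize 200 snapshots of the source signal plus a little sensor noise.
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
snaps = np.outer(steering(src), s)
snaps += 0.01 * (rng.standard_normal(snaps.shape) + 1j * rng.standard_normal(snaps.shape))
csm = snaps @ snaps.conj().T / snaps.shape[1]

# Scan candidate positions; the beam power peaks at the true source location.
xs = np.linspace(-0.5, 0.5, 41)
power = []
for x in xs:
    a = steering(np.array([x, 1.0]))
    w = a / np.linalg.norm(a)              # unit-norm conventional weights
    power.append(float(np.real(w.conj() @ csm @ w)))
peak_x = xs[int(np.argmax(power))]
```

With unit-norm weights, Cauchy-Schwarz guarantees the beam power is maximized where the steering vector matches the source, so `peak_x` recovers the source's x-coordinate to within the grid spacing.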
Styles: APA, Harvard, Vancouver, ISO, etc.
22

Letournel, Marc. "Approches duales dans la résolution de problèmes stochastiques". Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00938751.

Abstract:
The general aim of this thesis is to extend the analytic and algebraic tools usually employed in solving deterministic combinatorial problems to a stochastic combinatorial setting. Two distinct settings are studied: discrete stochastic combinatorial problems and continuous stochastic problems. The discrete setting is approached through the maximum-weight spanning forest problem in a two-stage, multi-scenario formulation. The well-known deterministic version of this problem establishes links between the rank function of a matroid and the dual formulation, via the greedy algorithm. The discrete stochastic formulation of the maximum-weight spanning forest problem is transformed into an equivalent deterministic problem, but owing to the multiplicity of scenarios the associated dual is in a sense incomplete. The work carried out here consists in understanding under which circumstances the dual formulation nevertheless attains a minimum equal to that of the integral primal problem. Ordinarily, a classical combinatorial approach to weighted graph problems searches for particular configurations within the graphs, such as circuits, and explores possible recombinations. As a simple illustration, if the edge weights of a graph are changed infinitesimally, the maximum-weight spanning forest may reorganize completely. This is seen as an obstacle to a purely combinatorial approach. Yet certain analytic quantities vary continuously with these infinitesimal changes, such as the sum of the weights of the selected edges. We introduce functions that capture these continuous variations, and we examine in which cases the dual formulations attain the same value as the integral primal formulations.
We propose an approximation method for the contrary case and settle the NP-completeness of this type of problem. Continuous stochastic problems are approached via the knapsack problem with a stochastic constraint. The formulation is of chance-constrained type, and dualization through a Lagrangian variable suits a situation in which the probability of satisfying the constraint must remain close to 1. The model studied is a knapsack whose items have values and weights determined by normal distributions. In our approach we apply gradient methods directly to the expectation formulation of the objective function and of the constraint. We therefore set aside a possible classical geometric reformulation of the problem, in order to detail the convergence conditions of the stochastic gradient method. This part is illustrated by numerical tests comparing against the SOCP method on combinatorial instances with a Branch and Bound method, and on relaxed instances.
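The deterministic core problem described above, the maximum-weight spanning forest, is the textbook case where the greedy algorithm over the graphic matroid is optimal. A minimal sketch (Kruskal-style greedy with a union-find; the example graph and weights are invented):

```python
# Matroid greedy for the maximum-weight spanning forest: sort edges by weight,
# take an edge whenever it joins two different components (independence test).
def max_weight_forest(n, edges):
    """edges: list of (weight, u, v); returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    total, chosen = 0.0, []
    for w, u, v in sorted(edges, reverse=True):  # greedy: heaviest first
        if w <= 0:                               # a forest may skip bad edges
            break
        ru, rv = find(u), find(v)
        if ru != rv:                             # no cycle created
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen

edges = [(4.0, 0, 1), (3.0, 1, 2), (2.0, 0, 2), (5.0, 3, 4), (-1.0, 2, 3)]
best, picked = max_weight_forest(5, edges)
```

The cycle test is exactly the independence oracle of the graphic matroid, which is what ties this algorithm to the rank-function/duality view the thesis starts from.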
23

Okoloko, Innocent. "Multi-path planning and multi-body constrained attitude control". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71905.

Abstract:
Thesis (PhD)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: This research focuses on the development of new efficient algorithms for multi-path planning and multi-rigid-body constrained attitude control. The work is motivated by current and future applications of these algorithms in: intelligent control of multiple autonomous aircraft and spacecraft systems; control of multiple mobile and industrial robot systems; control of intelligent highway vehicles and traffic; and air and sea traffic control. We shall collectively refer to the class of mobile autonomous systems as "agents". One of the challenges in developing and applying such algorithms is that of complexity resulting from the nontrivial agent dynamics as agents interact with other agents and with their environment. In this work, some of the current approaches are studied with the intent of exposing the complexity issues associated with them, and new algorithms with reduced computational complexity are developed, which can cope with interaction constraints and yet maintain stability and efficiency. To this end, this thesis contributes the following new developments to the field of multi-path planning and multi-body constrained attitude control:
• The introduction of a new LMI-based approach to collision avoidance in 2D and 3D spaces.
• The introduction of a consensus theory of quaternions, applying quaternions directly with the consensus protocol for the first time.
• A consensus- and optimization-based path planning algorithm for multiple autonomous vehicle systems navigating in 2D and 3D spaces.
• A proof of the consensus protocol as a dynamic system with a stochastic plant matrix.
• A consensus- and optimization-based algorithm for constrained attitude synchronization of multiple rigid bodies.
• A consensus- and optimization-based algorithm for collective motion on a sphere.
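The consensus protocol central to the contributions above has a very small numerical illustration. This sketch runs the standard first-order protocol x(k+1) = x(k) − ε·L·x(k) on an invented four-agent ring with scalar states (the thesis's novelty is doing this with quaternions); all states converge to the average:

```python
import numpy as np

# Consensus on an undirected ring of 4 agents via the graph Laplacian L.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)      # adjacency of a 4-cycle
L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
x = np.array([1.0, 3.0, 5.0, 7.0])             # initial agent states
eps = 0.25                                     # step size below 1/max_degree

for _ in range(200):
    x = x - eps * (L @ x)                      # x(k+1) = (I - eps*L) x(k)
consensus = x
```

Because the all-ones vector is in the null space of L, the state sum is conserved, so the common limit is the average (here 4.0) of the initial values.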
24

Menzel, Andreas [Verfasser], Heiko [Gutachter] Lacker, Peter [Gutachter] Uwer i Thorsten [Gutachter] Feldmann. "Constraints on the Fourth-Generation Quark Mixing Matrix from Precision Flavour Observables / Andreas Menzel ; Gutachter: Heiko Lacker, Peter Uwer, Thorsten Feldmann". Berlin : Mathematisch-Naturwissenschaftliche Fakultät, 2017. http://d-nb.info/1127108956/34.

25

Nagurney, Anna, i Alexander Eydeland. "A Splitting Equilibration Algorithm for the Computation of Large-Scale Constrained Matrix Problems; Theoretical Analysis and Applications". Massachusetts Institute of Technology, Operations Research Center, 1990. http://hdl.handle.net/1721.1/5316.

Abstract:
In this paper we introduce a general parallelizable computational method for solving a wide spectrum of constrained matrix problems. The constrained matrix problem is a core problem in numerous applications in economics. These include the estimation of input/output tables, trade tables, and social/national accounts, and the projection of migration flows over space and time. The constrained matrix problem, so named by Bacharach, is to compute the best possible estimate X of an unknown matrix, given some information to constrain the solution set, and requiring either that the matrix X be a minimum distance from a given matrix, or that X be a functional form of another matrix. In real-world applications, the matrix X is often very large (several hundred to several thousand rows and columns), with the resulting constrained matrix problem larger still (with the number of variables on the order of the square of the number of rows/columns; typically, in the hundreds of thousands to millions). In the classical setting, the row and column totals are known and fixed, and the individual entries nonnegative. However, in certain applications, the row and column totals need not be known a priori, but must be estimated, as well. Furthermore, additional objective and subjective inputs are often incorporated within the model to better represent the application at hand. It is the solution of this broad class of large-scale constrained matrix problems in a timely fashion that we address in this paper. The constrained matrix problem has become a standard modelling tool among researchers and practitioners in economics. Therefore, the need for a unifying, robust, and efficient computational procedure for solving constrained matrix problems is of importance. Here we introduce an algorithm, the splitting equilibration algorithm, for computing the entire class of constrained matrix problems.
This algorithm is not only theoretically justified, but in fact exploits both the underlying structure of these large-scale problems and the advantages offered by state-of-the-art computer architectures, while simultaneously enhancing the modelling flexibility. In particular, we utilize some recent results from variational inequality theory to construct a splitting equilibration algorithm which splits the spectrum of constrained matrix problems into series of row/column equilibrium subproblems. Each such constructed subproblem, due to its special structure, can, in turn, be solved simultaneously via exact equilibration in closed form. Thus each subproblem can be allocated to a distinct processor. We also present numerical results when the splitting equilibration algorithm is implemented in a serial, and then in a parallel environment. The algorithm is tested against another much-cited algorithm and applied to input/output tables, social accounting matrices, and migration tables. The computational results illustrate the efficacy of this approach.
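For intuition, the classical biproportional (RAS / iterative proportional fitting) scheme, which alternates closed-form row and column equilibrations until prescribed totals are met, is the classic ancestor of such row/column splitting. This is a sketch of that classic method with an invented seed matrix and totals, not the paper's splitting equilibration algorithm itself:

```python
import numpy as np

# RAS / iterative proportional fitting for the classical constrained matrix
# problem: rescale rows, then columns, in closed form until both sets of
# prescribed totals hold. Totals must be consistent (both sum to 10 here).
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])                    # prior (seed) estimate
row_tot = np.array([4.0, 6.0])                # required row sums
col_tot = np.array([5.0, 5.0])                # required column sums

for _ in range(100):
    X *= (row_tot / X.sum(axis=1))[:, None]   # row equilibration, closed form
    X *= col_tot / X.sum(axis=0)              # column equilibration, closed form
```

Each half-step is a closed-form "equilibration" of one family of constraints, which is the structural feature the paper exploits for parallelism.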
26

Claase, Etienne H. "Robust multi-H2 output-feedback approach to aerial refuelling automation of large aircraft via linear matrix inequalities". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80195.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: In recent years the aviation industry has shown an interest in the airborne refuelling of large transport aircraft to enable increased payload mass at take-off and to extend aircraft range. Due to the large volume of fuel to be transferred, a boom and receptacle refuelling system with a larger fuel transfer rate is employed. The refuelling operation is particularly difficult and strenuous for the pilot of the receiver aircraft, because the position of the receptacle relative to the tanker aircraft must be maintained within a narrow window for a relatively long period of time. The airborne refuelling of a large aircraft is typically much more difficult than that of a fighter aircraft, since the large aircraft is more sluggish, takes much longer to refuel, and has a relatively large distance between its refuelling receptacle and its centre of mass. These difficulties provide the motivation for developing flight control laws for Autonomous In-Flight Refuelling (AIFR) to alleviate the workload on the pilot. The objective of the research is to design a flight control system that can regulate the receptacle of a receiver aircraft to remain within the boom envelope of a tanker aircraft in light and medium turbulence. The flight control system must be robust to uncertainties in the aircraft dynamic model, and must obey actuator deflection and slew rate limits. Literature on AIFR shows a wide range of approaches, including Linear Quadratic Regulator (LQR), μ-synthesis and neural-network based adaptive control, none of which explicitly includes constraints on actuator amplitudes, actuator rates and regulation errors in the design/synthesis. A new approach to designing AIFR flight control laws is proposed, based on Linear Matrix Inequality (LMI) optimisation. 
The relatively new LMI technique enables optimised regulation of stochastic systems subject to time-varying uncertainties and coloured noise disturbance, while simultaneously constraining transient behaviour and multiple outputs and actuators to operate within their amplitude, saturation and slew rate limits. These constraints are achieved by directly formulating them as inequalities.
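The LMI synthesis described above rests on matrix-inequality feasibility certificates. SciPy has no LMI solver, so as a minimal stand-in this sketch computes P from the Lyapunov equation AᵀP + PA = −Q for an invented stable plant and checks P ≻ 0, which is exactly the kind of certificate an LMI solver would return for a stability constraint:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stability certificate for an (invented) stable plant matrix A: find P with
# A^T P + P A = -Q, then verify P is positive definite.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])              # eigenvalues -1 and -3: stable
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)   # solves (A^T) P + P A = -Q
eigs = np.linalg.eigvalsh((P + P.T) / 2) # symmetrize before eigencheck
is_certificate = bool(np.all(eigs > 0))
```

A full LMI design would add further inequalities (actuator amplitude and slew-rate bounds, output constraints) alongside this one and hand them to a semidefinite-programming solver.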
27

Roy, Prateep Kumar. "Analysis & design of control for distributed embedded systems under communication constraints". Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00534012.

Abstract:
Distributed Embedded Control Systems (DECS) use communication networks in their feedback loops. Since DECS have limited battery power, communication bandwidth, and computing power, the rates at which data or information can be transmitted are bounded, and this can affect their stability. This leads us to broaden the scope of the study and to examine the relationship between control theory on one side and information theory on the other. The data-rate constraint induces quantization of the signals, while the real-time computation and communication aspects induce asynchronous events that are no longer regular or periodic. These two phenomena give DECS a dual nature, continuous and discrete, and make them specific objects of study. In this thesis we analyse the stability and performance of DECS from the viewpoint of information theory and control theory. For linear systems, we show the importance of the trade-off between the quantity of information communicated and the control objectives, such as stability, controllability/observability, and performance. A joint control and communication design approach (in terms of Shannon information rate) for DECS is studied. The main results of this work are the following: we prove that the entropy reduction (which corresponds to the uncertainty reduction) depends on the controllability Gramian. This reduction is also related to Shannon mutual information. We show that the controllability Gramian constitutes an information-theoretic entropy metric with respect to the noise induced by quantization. Reducing the influence of this noise is equivalent to reducing the norm of the controllability Gramian.
We establish a new relation between the Fisher Information Matrix (FIM) and the Controllability Gramian (CG), based on estimation theory and information theory. We propose an algorithm that optimally distributes the communication capacity of the network among a number "n" of competing actuators and/or subsystems, based on reducing the norm of the Controllability Gramian.
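The controllability Gramian that the abstract ties to entropy reduction can be computed directly: for a stable pair (A, B), the infinite-horizon Gramian W solves AW + WAᵀ + BBᵀ = 0, and W ≻ 0 certifies controllability. A minimal sketch with invented matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Infinite-horizon controllability Gramian of an (invented) stable LTI pair.
A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])
B = np.array([[1.0],
              [0.0]])
W = solve_continuous_lyapunov(A, -B @ B.T)   # solves A W + W A^T = -B B^T
eigs = np.linalg.eigvalsh((W + W.T) / 2)
controllable = bool(np.all(eigs > 1e-12))    # W > 0 iff (A, B) controllable
```

Quantities such as trace(W) or ‖W‖ then serve as the scalar "size" of the Gramian that the proposed capacity-allocation algorithm reduces.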
28

Koller, Angela Erika. "The frequency assignment problem". Thesis, Brunel University, 2004. http://bura.brunel.ac.uk/handle/2438/4967.

Abstract:
This thesis examines a wide collection of frequency assignment problems. One of the largest topics in this thesis is that of L(2,1)-labellings of outerplanar graphs. The main result in this topic is the fact that there exists a polynomial time algorithm to determine the minimum L(2,1)-span for an outerplanar graph. This result generalises the analogous result for trees, solves a stated open problem and complements the fact that the problem is NP-complete for planar graphs. We furthermore give best possible bounds on the minimum L(2,1)-span and the cyclic-L(2,1)-span in outerplanar graphs, when the maximum degree is at least eight. We also give polynomial time algorithms for solving the standard constraint matrix problem for several classes of graphs, such as chains of triangles, the wheel and a larger class of graphs containing the wheel. We furthermore introduce the concept of one-close-neighbour problems, which have some practical applications. We prove optimal results for bipartite graphs, odd cycles and complete multipartite graphs. Finally we evaluate different algorithms for the frequency assignment problem, using domination analysis. We compute bounds for the domination number of some heuristics for both the fixed spectrum version of the frequency assignment problem and the minimum span frequency assignment problem. Our results show that the standard greedy algorithm does not perform well, compared to some slightly more advanced algorithms, which is what we would expect. In this thesis we furthermore give some background and motivation for the topics being investigated, as well as mentioning several open problems.
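For reference, the L(2,1)-labelling condition (adjacent vertices get labels at least 2 apart; vertices at distance two get distinct labels) and the naive first-fit greedy that a domination analysis would benchmark can be sketched as follows. This is not the thesis's polynomial-time outerplanar algorithm, and the 5-cycle example is invented:

```python
# First-fit greedy L(2,1)-labelling plus a validity checker.
def l21_greedy(adj):
    labels = {}
    for v in sorted(adj):
        lab = 0
        while (any(abs(lab - labels[u]) < 2 for u in adj[v] if u in labels)
               or any(lab == labels[w] for u in adj[v] for w in adj[u]
                      if w != v and w in labels)):
            lab += 1                      # first label passing both conditions
        labels[v] = lab
    return labels

def is_valid_l21(adj, labels):
    for v in adj:
        for u in adj[v]:
            if abs(labels[v] - labels[u]) < 2:   # adjacent: differ by >= 2
                return False
            for w in adj[u]:
                if w != v and labels[w] == labels[v]:  # distance two: distinct
                    return False
    return True

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
lab = l21_greedy(cycle5)
```

On the 5-cycle this greedy happens to reach span 4, which matches the known minimum L(2,1)-span of cycles; in general a first-fit order can be far from optimal, which is what domination analysis quantifies.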
29

Balasubramaniam, Thirunavukarasu. "Matrix/tensor factorization with selective coordinate descent: Algorithms and output usage". Thesis, Queensland University of Technology, 2020. https://eprints.qut.edu.au/203189/1/Thirunavukarasu_Balasubramaniam_Thesis.pdf.

Abstract:
With advanced data collection methods and the tremendous growth in the number of online users, multi-context data has become ubiquitous and its analysis for knowledge discovery has become inevitable. Matrix/tensor factorizations are commonly used knowledge discovery methods. This thesis developed Selective Coordinate Descent (SCD) algorithms that select only a few important elements during the factorization process, minimizing computational complexity and improving the efficiency of factorization. Moreover, this thesis explores various ways in which the output of SCD factorization can be applied to knowledge discovery tasks such as pattern mining, clustering, outlier detection, and recommender systems.
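The coordinate-descent core that SCD makes selective has a closed-form single-element update. This sketch sweeps every coordinate of a least-squares factorization X ≈ WH while keeping the residual up to date; the selective step in the thesis would update only a chosen few coordinates instead. All data here are random placeholders:

```python
import numpy as np

# Element-wise coordinate descent for min ||X - W H||_F^2.
rng = np.random.default_rng(1)
X = rng.random((6, 5))
r = 3
W, H = rng.random((6, r)), rng.random((r, 5))
R = X - W @ H                                  # residual, maintained exactly
err0 = np.linalg.norm(R)

for _ in range(50):                            # full CD sweeps
    for i in range(W.shape[0]):
        for k in range(r):
            step = (R[i, :] @ H[k, :]) / (H[k, :] @ H[k, :])  # exact 1-D min
            W[i, k] += step
            R[i, :] -= step * H[k, :]
    for k in range(r):
        for j in range(H.shape[1]):
            step = (R[:, j] @ W[:, k]) / (W[:, k] @ W[:, k])
            H[k, j] += step
            R[:, j] -= step * W[:, k]
err = np.linalg.norm(X - W @ H)
```

Each coordinate update minimizes the objective exactly in that one variable, so the error is monotonically non-increasing; selecting only high-impact coordinates trades a little accuracy per sweep for much less work.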
30

Balachandran, Libish Kalathil. "Computational workflow management for conceptual design of complex systems : an air-vehicle design perspective". Thesis, Cranfield University, 2007. http://dspace.lib.cranfield.ac.uk/handle/1826/5070.

Abstract:
The decisions taken during the aircraft conceptual design stage are of paramount importance since these commit up to eighty percent of the product life cycle costs. Thus in order to obtain a sound baseline which can then be passed on to the subsequent design phases, various studies ought to be carried out during this stage. These include trade-off analysis and multidisciplinary optimisation performed on computational processes assembled from hundreds of relatively simple mathematical models describing the underlying physics and other relevant characteristics of the aircraft. However, the growing complexity of aircraft design in recent years has prompted engineers to substitute the conventional algebraic equations with compiled software programs (referred to as models in this thesis) which still retain the mathematical models, but allow for a controlled expansion and manipulation of the computational system. This tendency has posed the research question of how to dynamically assemble and solve a system of non-linear models. In this context, the objective of the present research has been to develop methods which significantly increase the flexibility and efficiency with which the designer is able to operate on large scale computational multidisciplinary systems at the conceptual design stage. In order to achieve this objective a novel computational process modelling method has been developed for generating computational plans for a system of non-linear models. The computational process modelling was subdivided into variable flow modelling, decomposition and sequencing. A novel method named Incidence Matrix Method (IMM) was developed for variable flow modelling, which is the process of identifying the data flow between the models based on a given set of input variables. This method has the advantage of rapidly producing feasible variable flow models, for a system of models with multiple outputs. 
In addition, criteria were derived for choosing the optimal variable flow model which would lead to faster convergence of the system. Cont/d.
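The sequencing step behind such incidence-matrix-based variable flow modelling amounts to a topological ordering of the models by the variables they consume and produce. A generic sketch using Kahn's algorithm on an invented four-model system (this illustrates the idea, not the thesis's IMM):

```python
from collections import deque

# Each model declares which variables it needs ("in") and computes ("out").
models = {
    "geometry": {"in": [],                 "out": ["span", "chord"]},
    "aero":     {"in": ["span", "chord"],  "out": ["lift"]},
    "weights":  {"in": ["span"],           "out": ["mass"]},
    "perf":     {"in": ["lift", "mass"],   "out": ["range"]},
}

producer = {v: m for m, io in models.items() for v in io["out"]}
deps = {m: {producer[v] for v in io["in"]} for m, io in models.items()}

# Kahn's algorithm: repeatedly emit a model whose inputs are all available.
indeg = {m: len(d) for m, d in deps.items()}
ready = deque(sorted(m for m, d in indeg.items() if d == 0))
order = []
while ready:
    m = ready.popleft()
    order.append(m)
    for n, d in deps.items():
        if m in d:
            indeg[n] -= 1
            if indeg[n] == 0:
                ready.append(n)
```

If `order` ends up shorter than the number of models, the leftover models form a cycle, i.e. a coupled subsystem that must be solved iteratively rather than sequenced.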
31

Napolitano, Ralph E. Jr. "Finite difference-cellular automaton modeling of the evolution of interface morphology during alloy solidification under geometrical constraint: application to metal matrix composite solidification". Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/32810.

32

Pérez, Pérez Luis. "Time-Dependent Amplitude Analysis of B0→KSπ+π− decays with the BABAR Experiment and constraints on the CKM matrix using the B→K*π and B→ρK modes". Paris 7, 2008. http://www.theses.fr/2008PA077229.

Abstract:
A time-dependent amplitude analysis of B0 -> KS pi+ pi- decays is performed in order to measure the CP-violation parameters of the f0(980)KS and rho0(770)KS modes, as well as the direct CP asymmetry in K*(892)pi. The results are obtained from a sample of (383 +7 -3) x 10^6 B-Bbar pairs recorded by the BABAR detector at the PEP-II asymmetric-energy collider at SLAC. Two solutions are found, with equivalent goodness-of-fit figures of merit. Including systematic and Dalitz-model uncertainties, the combined confidence interval on beta_eff in the f0(980)KS mode is [18, …] degrees, and the phase difference between B0 -> K*(892)+ pi- and B0bar -> K*(892)- pi+ excludes the interval [-132, +25] degrees (at 95% C.L.). Branching fractions and direct CP asymmetries are measured for all significant intermediate resonant modes. The measurements in the rho0(770)KS and K*(892)+ pi- modes are used as inputs to a phenomenological analysis of B -> K*pi and B -> rhoK decays, based solely on SU(2) isospin symmetry. Adding external information on the CKM matrix makes it possible to constrain the hadronic parameter space. For B -> K*pi, the intervals obtained for the electroweak penguins are only marginally consistent with theoretical expectations. The constraints on the CKM matrix are dominated by theoretical uncertainties. A prospective study, using the expected improvements in the measurements of these modes at LHCb and in future programmes such as Super-B or a Belle upgrade, illustrates the physics potential of this approach.
33

寛康, 阿部, i Hiroyasu Abe. "Extensions of nonnegative matrix factorization for exploratory data analysis". Thesis, https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13001149/?lang=0, 2017. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13001149/?lang=0.

Abstract:
Nonnegative matrix factorization (NMF) is a matrix decomposition technique for analyzing nonnegative data matrices, i.e., matrices whose elements are all nonnegative. In this thesis, we discuss extensions of NMF for exploratory data analysis that take into account common features of real nonnegative data matrices as well as ease of interpretation. In particular, we discuss probability distributions and divergences for zero-inflated data matrices and data matrices with outliers, two-factor versus three-factor decompositions, and orthogonality constraints on the factor matrices.
Doctor of Culture and Information Science
Doshisha University
34

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the recent decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms of NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. 
We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression, also known as the Lasso, which has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by the observation that features or data items belonging to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented, along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
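The alternating scheme summarized in this abstract can be made concrete with a small baseline. The sketch below uses the classic Lee-Seung multiplicative updates for the Frobenius loss, not the accelerated block principal pivoting method the thesis proposes; all names are illustrative.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~= W @ H under the Frobenius loss.
    Nonnegativity of W and H is preserved at every step by construction."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# demo: an exactly rank-2 nonnegative matrix is recovered closely
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 15))
W, H = nmf_multiplicative(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the thesis's framing, each of the two inner updates would instead be an exact NLS solve via block principal pivoting; the multiplicative rule is merely the simplest stand-in.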
Style APA, Harvard, Vancouver, ISO itp.
35

Thapa, Nirmal. "CONTEXT AWARE PRIVACY PRESERVING CLUSTERING AND CLASSIFICATION". UKnowledge, 2013. http://uknowledge.uky.edu/cs_etds/15.

Pełny tekst źródła
Streszczenie:
Data are valuable assets to any organization or individual. They are a source of useful information that underpins decision making, and sectors such as commerce, health, and research have all benefited from it. On the other hand, the availability of data makes it easy for anyone to exploit it, and in many cases the data are private and confidential, so their confidentiality must be preserved. We study two categories of privacy: Data Value Hiding and Data Pattern Hiding. Privacy is a major concern, but so is data utility: data should avoid privacy breaches yet remain usable. Although these two objectives are contradictory and hard to achieve simultaneously, knowing the purpose for which the data will be used helps. In this research, we focus on particular clustering and classification problems and strive to balance the utility and privacy of the data. In the first part of this dissertation, we propose Nonnegative Matrix Factorization (NMF) based techniques that build constraints explicitly into the update rules. These constraints determine how the factorization takes place, leading to the desired results. The methods are designed to alter the matrices so that user-specified cluster properties are introduced, and they can be used to preserve data values as well as data patterns. Since NMF and K-means are proven to be equivalent, NMF is an ideal choice for pattern hiding in clustering problems. In addition to the NMF-based methods, we propose methods that take into account the data structures and the attribute properties for classification problems. We separate the work into two parts, linear classifiers and nonlinear classifiers, and propose a different solution for each.
We study the effect of distortion on the utility of data and propose three distortion measurement metrics that demonstrate better characteristics than the traditional metrics. The effectiveness of the measures is examined on different benchmark datasets. The results show that the methods have desirable properties such as invariance to translation, rotation, and scaling.
Style APA, Harvard, Vancouver, ISO itp.
36

Heß, Sibylle Charlotte [Verfasser], Katharina [Akademischer Betreuer] Morik i Arno P. J. M. [Gutachter] Siebes. "A mathematical theory of making hard decisions: model selection and robustness of matrix factorization with binary constraints / Sibylle Charlotte Heß ; Gutachter: Arno P. J. M. Siebes ; Betreuer: Katharina Morik". Dortmund : Universitätsbibliothek Dortmund, 2018. http://d-nb.info/1196874735/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

BOSE, SURACHITA. "SMART GROWTH IN THE STATE OF OHIO: CONFLICTS AND CONSTRAINTS - AN ANALYSIS AND EVALUATION OF THE EVOLUTION OF SMART GROWTH IN THE CLEVELAND AND CINCINNATI METROPOLITAN REGIONS". University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1099601083.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Pérez, Pérez Luis Alejandro. "Time-Dependent Amplitude Analysis of B^0->Kspi+pi- decays with the BaBar Experiment and constraints on the CKM matrix using the B->K*pi and B->rho K modes". Phd thesis, Université Paris-Diderot - Paris VII, 2008. http://tel.archives-ouvertes.fr/tel-00379188.

Pełny tekst źródła
Streszczenie:
A time-dependent amplitude analysis of $B^0 \to K^0_S\pi^+\pi^-$ decays is performed in order to measure the CP-violation parameters of the $f_0(980)K_S^0$ and $\rho^0(770)K_S^0$ modes, as well as the direct CP asymmetry in $K^*(892)^\pm\pi^\mp$. The results are obtained from a sample of $(383\pm 3)\times10^{6}$ $B\bar{B}$ pairs recorded by the \babar\ detector at the PEP-II asymmetric-energy collider at SLAC. Two solutions are found, with equivalent goodness-of-fit figures of merit. Including systematic and Dalitz-model uncertainties, the combined confidence interval on $\beta_{\rm eff}$ in the $f_0(980)K_S^0$ mode is $18^\circ< \beta_{\rm eff}<76^\circ$ (at $95\%$ C.L.); CP conservation in this mode is excluded at $3.5$ standard deviations. For the $\rho^0(770)K_S^0$ mode, the combined confidence interval is $-9^\circ< \beta_{\rm eff}<57^\circ$ (at $95\%$ C.L.). For the $K^*(892)^\pm\pi^\mp$ mode, the direct CP asymmetry parameter is $A_{\rm CP}=-0.20 \pm 0.10\pm 0.01\pm 0.02$. The measurement of the relative phase between the $B^0\to K^*(892)^+\pi^-$ and $\bar{B}^0\to K^*(892)^-\pi^+$ decay amplitudes excludes the interval $[-132^\circ : +25^\circ]$ (at $95\%$ C.L.). Branching fractions and direct CP asymmetries are measured for all significant intermediate resonant modes. The measurements obtained in the $\rho^0(770) K^0_S$ and $K^{*\pm}(892)\pi^{\mp}$ modes are used as inputs to a phenomenological analysis of $B \to K^*\pi$ and $B \to \rho K$ decays, based solely on $SU(2)$ isospin symmetry. Adding external information on the CKM matrix makes it possible to constrain the hadronic parameter space. For $B \to K^*\pi$, the intervals obtained for the electroweak penguins are only marginally consistent with theoretical expectations.
The constraints on the CKM matrix are dominated by theoretical uncertainties. A prospective study, using the improvements expected in the measurements of these modes at LHCb and in future programs such as Super-B or the Belle upgrade, illustrates the physics potential of this approach.
Style APA, Harvard, Vancouver, ISO itp.
40

Maya, Gonzalez Martin. "Frequency domain analysis of feedback interconnections of stable systems". Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/frequency-domain-analysis-of-feedback-interconnections-of-stable-systems(c6415a11-3417-48ba-9961-ecef80b08e0e).html.

Pełny tekst źródła
Streszczenie:
The study of non-linear input-output maps can be summarized by three concepts: Gain, Positivity and Dissipativity. However, in order to make efficient use of these theorems it is necessary to use loop transformations and weightings, the so-called "multipliers". The first problem this thesis studies is the feedback interconnection of a Linear Time Invariant system with a memoryless bounded and monotone non-linearity, the so-called Absolute Stability problem, for which the test for stability is equivalent to showing the existence of a Zames-Falb multiplier. The main advantage of this approach is that Zames-Falb multipliers can be specialized to recover important tools such as the Circle criterion and the Popov criterion. Although Zames-Falb multipliers are an efficient way of describing non-linearities in the frequency domain, the Fourier transform of the multiplier does not preserve the L1 norm. This problem has been addressed by two paradigms: mathematically complex multipliers with exact L1 norm, and multipliers with mathematically tractable frequency-domain properties but approximate L1 norm. However, this thesis exposes a third factor that leads to conservative results: the causality of Zames-Falb multipliers. The thesis examines the consequences of narrowing the search for Zames-Falb multipliers to causal multipliers and, motivated by this argument, introduces an anticausal complementary method for the causal multiplier synthesis in [1]. The second subject of this thesis is the feedback interconnection of two bounded systems. The interconnection of two arbitrary systems has been a well-understood problem from the point of view of Dissipativity and Passivity. Nonetheless, frequency-domain analysis is largely restricted for passive systems by the need for canonically factorizable multipliers, while Dissipativity mostly exploits constant multipliers.
This thesis uses IQCs to show the stability of the feedback interconnection of two non-linear systems by introducing an equivalent representation of the IQC Theorem, and then formally studies the conditions that the IQC multipliers must satisfy. The result of this analysis is then compared with Passivity and Dissipativity through a series of corollaries.
Style APA, Harvard, Vancouver, ISO itp.
41

Zemkoho, Alain B. "Bilevel programming". Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2012. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-89017.

Pełny tekst źródła
Streszczenie:
We have considered the bilevel programming problem in the case where the lower-level problem admits more than one optimal solution. It is well known in the literature that in such a situation the problem is ill-posed from the viewpoint of scalar objective optimization; thus the optimistic and pessimistic approaches have been suggested in the literature to deal with it. In the thesis, we have developed a unified approach for deriving necessary optimality conditions for both optimistic and pessimistic bilevel programs, based on advanced tools from variational analysis. We have obtained various constraint qualifications and stationarity conditions depending on certain constructive representations of the solution set-valued mapping of the follower's problem. In the auxiliary developments, we have provided rules for the generalized differentiation and robust Lipschitzian properties of the lower-level solution set-valued map, which are of fundamental interest for other areas of nonlinear and nonsmooth optimization. Some of the results of the aforementioned theory have then been applied to derive stationarity conditions for some well-known transportation problems having a bilevel structure.
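The optimistic and pessimistic formulations referred to above can be written compactly. With S(x) denoting the (possibly multi-valued) solution set of the follower's problem, the two programs are:

```latex
% lower-level (follower's) solution set-valued mapping
S(x) := \operatorname*{argmin}_{y}\,\{\, f(x,y) \;:\; g(x,y) \le 0 \,\}

% optimistic bilevel program: the leader may pick the best y in S(x)
\min_{x \in X}\ \min_{y \in S(x)} F(x,y)

% pessimistic bilevel program: the leader guards against the worst y in S(x)
\min_{x \in X}\ \max_{y \in S(x)} F(x,y)
```

When S(x) is a singleton the two formulations coincide; the setting of this thesis is precisely the case where it is not.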
Style APA, Harvard, Vancouver, ISO itp.
42

Olsson, Katarina. "Population differentiation in Lythrum salicaria along a latitudinal gradient". Doctoral thesis, Umeå : Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-364.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
43

Zeißner, Sonja Verena [Verfasser], Kevin [Akademischer Betreuer] Kröninger i Johannes [Gutachter] Albrecht. "Development and calibration of an s-tagging algorithm and its application to constrain the CKM matrix elements |Vts| and |Vtd| in top-quark decays using ATLAS Run-2 Data / Sonja Verena Zeißner ; Gutachter: Johannes Albrecht ; Betreuer: Kevin Kröninger". Dortmund : Universitätsbibliothek Dortmund, 2021. http://d-nb.info/1238349277/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
44

Meira, Vívian 1981. "A obviação/referência disjunta em complementação sentencial : uma proposta sintático-semântica". [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/268917.

Pełny tekst źródła
Streszczenie:
Orientador: Sonia Maria Lazzarini Cyrino
Tese (doutorado) - Universidade Estadual de Campinas, Instituto de Estudos da Linguagem
Resumo: Esta tese investiga padrões de referencialidade em complementação sentencial no português, italiano e grego moderno, especialmente, o fenômeno conhecido como obviação ou referência disjunta. Esta é uma restrição atestada nas línguas e se caracteriza pelo fato de o sujeito da oração subordinada ser obrigatoriamente disjunto em referência ao sujeito da oração matriz. Tradicionalmente, assume-se que a obviação é uma propriedade de complementação subjuntiva ou um fenômeno resultante, juntamente com o controle, da competição entre formas finitas/não-finitas. No entanto, os dados não condizem com essas hipóteses, já que a obviação é exibida tanto em complementação indicativa quanto nos contextos de infinitivo flexionado. Além disso, nem todo contexto volitivo exibe obviação. Assumindo a teoria de seleção semântica e a versão minimalista de subcategorização (cf. Adger, 2004), propomos que a obviação, exibida em complementação sentencial, é uma restrição semântica exigida por três tipos de predicados, os causativos, os volitivos e os perceptivos físicos, que serão tomados como predicados modais no sentido de serem capazes de impor restrições semânticas aos seus complementos. Estes predicados foram denominados de predicados de obviação, por compartilharem entre si algumas propriedades, como denotar leitura eventiva/não-epistêmica, exigir sujeito pronominal na encaixada independente referencialmente do sujeito matriz e subcategorizar complemento TP. Argumentamos ainda que esses predicados, devido ao seu caráter modal, selecionam semanticamente um traço [obviativo], que é transmitido ao sujeito da encaixada. Predicados de obviação se distinguem de outro grupo de predicado modal, os predicados de controle, por estes não permitirem que o argumento da encaixada seja disjunto do sujeito matriz. 
Esses dois grupos se distinguem de outro grupo de verbos que permitem referência livre, constituído especialmente por predicados epistêmicos, declarativos, dentre outros, que denotam leitura epistêmica/proposicional e subcategorizam complemento CP. Sintaticamente este grupo de predicados se distingue dos predicados de obviação por subcategorizarem estruturas distintas, pois, enquanto estes têm complemento TP, aqueles selecionam complemento CP. Para explicar por que obviação e controle são exibidos pelo predicado volitivo, propomos que há dois tipos de acepções no volitivo nas línguas: o volitivo padrão, que seleciona controle e o volitivo causativo, que exige obviação. Defendemos que o complemento infinitivo flexionado selecionado por causativo e perceptivo é uma estrutura TP, o que o diferencia da estrutura de infinitivo flexionado selecionada por factivos/epistêmicos/declarativos, que é tomado como um CP. Estes permitem referência livre e aqueles exigem obviação. Nossa proposta é mostrar que a obviação, exibida em complementação sentencial, não é um fenômeno restrito às línguas românicas ou às línguas que exibem a distinção finito/não-finito, mas são uma restrição semântica imposta por predicados de obviação os seus complementos e, devido a isso, essa restrição semântica será exibida por línguas que dispõem desses contextos em complementação sentencial
Abstract: This thesis investigates patterns of referentiality in sentential complementation in Portuguese, Italian and Modern Greek, especially the phenomenon known as obviation or disjoint reference. This is a constraint attested across languages, characterized by the fact that the subject of the subordinate clause must be disjoint in reference from the subject of the matrix sentence. Traditionally, obviation has been assumed to be a property of subjunctive complementation, or a phenomenon arising, along with control, from the competition between finite/non-finite forms. However, the data are not consistent with these hypotheses, since obviation appears in indicative complementation and in inflected-infinitive contexts. Moreover, obviation is not displayed in every volitional context. Based on the theory of semantic selection and a minimalist version of subcategorization (cf. Adger, 2004), this thesis proposes that obviation in sentential complementation is a semantic constraint required by three types of predicates, namely causative, volitional and physical perception predicates, which will be taken as predicates able to impose semantic constraints on their complements. These predicates are called obviation predicates and share some common properties: they denote an eventive/non-epistemic reading, require a referentially independent subject pronoun in the embedded clause, and select a TP complement. We argue that these predicates, because of their modal character, semantically select an [obviative] feature, which is transmitted to the subject of the embedded clause. Obviation predicates are distinguished from another group of modal predicates, control predicates, which do not allow an argument in the embedded clause to be referentially independent from the matrix subject.
These two groups are distinguished from yet another group of verbs that allow free reference, constituted especially by epistemic and declarative predicates, among others, which denote an epistemic/propositional reading and select CP complements. Syntactically, this group can be distinguished from the obviation predicates by the structure it selects: while obviation predicates take a TP complement, these verbs select a CP complement. To explain why both obviation and control are displayed by volitional predicates, we propose that there are two types of volitional meanings in languages: the default volitional, which selects control, and the causative volitional, which requires obviation. Furthermore, we argue that the inflected-infinitive complement selected by causative and perception verbs is a TP structure and requires obviation, which differs from the inflected infinitive selected by factive/epistemic/declarative verbs, which take CP complements and allow free reference. The purpose of this thesis is to show that obviation in sentential complementation is not a phenomenon restricted to the Romance languages, or to languages that exhibit a distinction between finite and non-finite forms, but a semantic constraint imposed by obviation predicates on their complements; consequently, this constraint will appear in languages which have these contexts in sentential complementation.
Doutorado
Linguistica
Doutora em Linguística
Style APA, Harvard, Vancouver, ISO itp.
45

Limem, Abdelhakim. "Méthodes informées de factorisation matricielle non négative : Application à l'identification de sources de particules industrielles". Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0432/document.

Pełny tekst źródła
Streszczenie:
Les méthodes de NMF permettent la factorisation aveugle d'une matrice non-négative X en le produit X = G . F de deux matrices non-négatives G et F. Bien que ces approches sont étudiées avec un grand intêret par la communauté scientifique, elles souffrent bien souvent d'un manque de robustesse vis à vis des données et des conditions initiales et peuvent présenter des solutions multiples. Dans cette optique et afin de réduire l'espace des solutions admissibles, les travaux de cette thèse ont pour objectif d'informer la NMF, positionnant ainsi nos travaux entre la régression et les factorisations aveugles classiques. Par ailleurs, des fonctions de coûts paramétriques appelées divergences αβ sont utilisées, permettant de tolérer la présence d'aberrations dans les données. Nous introduisons trois types de contraintes recherchées sur la matrice F à savoir (i) la connaissance exacte ou bornée de certains de ses éléments et (ii) la somme à 1 de chacune de ses lignes. Des règles de mise à jour permettant de faire cohabiter l'ensemble de ces contraintes par des méthodes multiplicatives mixées à des projections sont proposées. D'autre part, nous proposons de contraindre la structure de la matrice G par l'usage d'un modèle physique susceptible de distinguer les sources présentes au niveau du récepteur. Une application d'identification de sources de particules en suspension dans l'air, autour d'une région industrielle du littoral nord de la France, a permis de tester l'intérêt de l'approche. À travers une série de tests sur des données synthétiques et réelles, nous montrons l'apport des différentes informations pour rendre les résultats de la factorisation plus cohérents du point de vue de l'interprétation physique et moins dépendants de l'initialisation
NMF methods aim to factorize a non-negative observation matrix X as the product X = G.F of two non-negative matrices G and F. Although these approaches have been studied with great interest in the scientific community, they often suffer from a lack of robustness to the data and to initial conditions, and can provide multiple solutions. To this end, and in order to reduce the space of admissible solutions, the work proposed in this thesis aims to inform NMF, thus placing our work between regression and classic blind factorization. In addition, parametric cost functions called αβ-divergences are used, so that the resulting NMF methods are robust to outliers in the data. Three types of constraints are introduced on the matrix F, i.e., (i) the "exact" or (ii) "bounded" knowledge of some of its components, and (iii) the sum to 1 of each of its rows. Update rules are proposed so that all these constraints are taken into account, mixing multiplicative methods with projections. Moreover, we propose to constrain the structure of the matrix G by the use of a physical model, in order to single out the sources that are influential at the receiver. The considered application, the identification of sources of particulate matter suspended in the air around an industrial area on the northern coast of France, showed the interest of the proposed methods. Through a series of experiments on both synthetic and real data, we show the contribution of the different types of information in making the factorization results more consistent in terms of physical interpretation and less dependent on the initialization.
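The "multiplicative updates mixed with projections" idea can be illustrated with a toy sketch that enforces only the sum-to-one constraint on each row of F; the known-value and bounded constraints and the αβ-divergence cost are omitted, and nothing here is the thesis code.

```python
import numpy as np

def informed_nmf(X, r, n_iter=500, eps=1e-9, seed=0):
    """Frobenius-loss NMF, X ~= G @ F, where after each multiplicative
    update the rows of F are rescaled to sum to 1; the G update then
    absorbs the scale. A minimal sketch of update rules mixed with a
    constraint-enforcing step."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    G = rng.random((m, r)) + eps
    F = rng.random((r, n)) + eps
    F /= F.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        F /= F.sum(axis=1, keepdims=True)      # rows of F sum to 1
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# toy data whose true F already has rows summing to 1
rng = np.random.default_rng(1)
F0 = rng.random((3, 15)); F0 /= F0.sum(axis=1, keepdims=True)
X = rng.random((20, 3)) @ F0
G, F = informed_nmf(X, 3)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

The exact-value constraint of type (i) would be one more projection step inside the loop (resetting the flagged entries of F to their known values); it is left out here to keep the sketch minimal.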
Style APA, Harvard, Vancouver, ISO itp.
46

Silva, Michel Aguena da. "Cosmologia usando aglomerados de galáxias no Dark Energy Survey". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/43/43134/tde-22102017-163407/.

Pełny tekst źródła
Streszczenie:
Aglomerados de galáxias são as maiores estruturas no Universo. Sua distribuição mapeia os halos de matéria escura formados nos potenciais profundos do campo de matéria escura. Consequentemente, a abundância de aglomerados é altamente sensível à expansão do Universo, assim como ao crescimento das perturbações de matéria escura, constituindo uma poderosa ferramenta para fins cosmológicos. Na era atual de grandes levantamentos observacionais que produzem uma quantidade gigantesca de dados, as propriedades estatísticas dos objetos observados (galáxias, aglomerados, supernovas, quasares, etc) podem ser usadas para extrair informações cosmológicas. Para isso, é necessário o estudo da formação de halos de matéria escura, da detecção dos halos e aglomerados, das ferramentas estatísticas usadas para os vínculos de parâmetros e, finalmente, dos efeitos das detecções ópticas. No contexto da formulação da predição teórica da contagem de halos, foi analisada a influência de cada parâmetro cosmológico na abundância dos halos, a importância do uso da covariância dos halos e a eficácia da utilização dos halos para vincular cosmologia. Também foram analisados em detalhes os intervalos de redshift e o uso de conhecimento prévio dos parâmetros (priors). A predição teórica foi testada em uma simulação de matéria escura, onde a cosmologia era conhecida e os halos de matéria escura já haviam sido detectados. Nessa análise, foi atestado que é possível obter bons vínculos cosmológicos para alguns parâmetros (Omega_m, w, sigma_8, n_s), enquanto outros parâmetros (h, Omega_b) necessitavam de conhecimento prévio de outros testes cosmológicos. Na seção dos métodos estatísticos, foram discutidos os conceitos de likelihood, priors e posterior distribution. O formalismo da Matriz de Fisher, bem como sua aplicação em aglomerados de galáxias, foi apresentado e usado para a realização de predições dos vínculos em levantamentos atuais e futuros.
Para a análise de dados, foram apresentados métodos de Cadeias de Markov de Monte Carlo (MCMC), que, diferentemente da Matriz de Fisher, não assumem Gaussianidade entre os parâmetros vinculados, porém possuem um custo computacional muito mais alto. Os efeitos observacionais também foram estudados em detalhes. Usando uma abordagem com a Matriz de Fisher, os efeitos de completeza e pureza foram extensivamente explorados. Como resultado, foi determinado em quais casos é vantajoso incluir uma modelagem adicional para que o limite mínimo de massa possa ser diminuído. Um dos principais resultados foi o fato de que a inclusão dos efeitos de completeza e pureza na modelagem não degrada os vínculos de energia escura, se alguns outros efeitos já estão sendo incluídos. Também foi verificado que o uso de priors nos parâmetros não cosmológicos só afeta os vínculos de energia escura se forem melhores que 1%. O cluster finder (código para detecção de aglomerados) WaZp foi usado na simulação, produzindo um catálogo de aglomerados. Comparando-se esse catálogo com os halos de matéria escura da simulação, foi possível investigar e medir os efeitos observacionais. A partir dessas medidas, pôde-se incluir correções para a predição da abundância de aglomerados, que resultou em boa concordância com os aglomerados detectados. Os resultados e as ferramentas desenvolvidos ao longo desta tese podem fornecer uma estrutura para a análise de aglomerados com fins cosmológicos. Durante esse trabalho, diversos códigos foram desenvolvidos; dentre eles estão um código eficiente para computar a predição teórica da abundância e covariância de halos de matéria escura, um código para estimar a abundância e covariância dos aglomerados de galáxias incluindo os efeitos observacionais, e um código para comparar diferentes catálogos de halos e aglomerados.
Esse último foi integrado ao portal científico do Laboratório Interinstitucional de e-Astronomia (LIneA) e está sendo usado para avaliar a qualidade de catálogos de aglomerados produzidos pela colaboração do Dark Energy Survey (DES), assim como também será usado em levantamentos futuros.
Abstract: Galaxy clusters are the largest bound structures of the Universe. Their distribution maps the dark matter halos formed in the deep potential wells of the dark matter field. As a result, the abundance of galaxy clusters is highly sensitive to the expansion of the universe as well as the growth of dark matter perturbations, representing a powerful tool for cosmological purposes. In the current era of large scale surveys with enormous volumes of data, the statistical quantities from the objects surveyed (galaxies, clusters, supernovae, quasars, etc) can be used to extract cosmological information. The main goal of this thesis is to explore the potential use of galaxy clusters for constraining cosmology. To that end, we study the halo formation theory, the detection of halos and clusters, the statistical tools required to extract cosmological information from detected clusters and, finally, the effects of optical detection. In the composition of the theoretical prediction for the halo number counts, we analyze how each cosmological parameter of interest affects the halo abundance, the importance of the use of the halo covariance, and the effectiveness of halos on cosmological constraints. The redshift range and the use of prior knowledge of parameters are also investigated in detail. The theoretical prediction is tested on a dark matter simulation, where the cosmology is known and a dark matter halo catalog is available. In the analysis of the simulation we find that it is possible to obtain good constraints for some parameters such as (Omega_m, w, sigma_8, n_s), while other parameters (h, Omega_b) require external priors from different cosmological probes. In the statistical methods, we discuss the concepts of likelihood, priors and the posterior distribution. The Fisher Matrix formalism and its application to galaxy clusters are presented and used for making forecasts of ongoing and future surveys.
For the analysis of real data we introduce Monte Carlo Markov Chain (MCMC) methods, which do not assume Gaussianity of the parameter distribution, but have a much higher computational cost relative to the Fisher Matrix. The observational effects are studied in detail. Using the Fisher Matrix approach, we carefully explore the effects of completeness and purity. We find in which cases it is worthwhile to include extra parameters in order to lower the mass threshold. An interesting finding is the fact that including completeness and purity parameters along with cosmological parameters does not degrade dark energy constraints if other observational effects are already being considered. The use of priors on nuisance parameters does not seem to affect the dark energy constraints, unless these priors are better than 1%. The WaZp cluster finder was run on a cosmological simulation, producing a cluster catalog. Comparing the detected galaxy clusters to the dark matter halos, the observational effects were investigated and measured. Using these measurements, we were able to include corrections for the prediction of cluster counts, resulting in good agreement with the detected cluster abundance. The results and tools developed in this thesis can provide a framework for the analysis of galaxy clusters for cosmological purposes. Several codes were created and tested along this work, among them an efficient code to compute theoretical predictions of halo abundance and covariance, a code to estimate the abundance and covariance of galaxy clusters including multiple observational effects, and a pipeline to match and compare halo/cluster catalogs. This pipeline has been integrated into the Science Portal of the Laboratório Interinstitucional de e-Astronomia (LIneA) and is being used to automatically assess the quality of cluster catalogs produced by the Dark Energy Survey (DES) collaboration, and will be used in other future surveys.
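The core of the Fisher-matrix forecasts mentioned in this abstract is a small computation. Below is a hedged sketch for Poisson-distributed counts in bins, with a purely illustrative two-parameter toy abundance model; it is not the thesis pipeline or the DES likelihood.

```python
import numpy as np

def fisher_poisson(model, theta, dtheta):
    """Fisher matrix for Poisson-distributed counts N_k(theta) in bins:
    F_ij = sum_k (dN_k/dtheta_i)(dN_k/dtheta_j) / N_k,
    with central finite differences for the derivatives."""
    theta = np.asarray(theta, dtype=float)
    N = model(theta)
    p = len(theta)
    dN = np.empty((p, len(N)))
    for i in range(p):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += dtheta[i]
        tm[i] -= dtheta[i]
        dN[i] = (model(tp) - model(tm)) / (2 * dtheta[i])
    return np.array([[np.sum(dN[i] * dN[j] / N) for j in range(p)]
                     for i in range(p)])

# toy abundance model: counts in redshift bins with an amplitude and a slope
# standing in for cosmological parameters (illustrative only)
z = np.linspace(0.1, 1.0, 10)
model = lambda th: th[0] * 1e4 * np.exp(-th[1] * z)
F = fisher_poisson(model, [1.0, 2.0], [1e-4, 1e-4])
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma marginalized forecasts
```

Inverting the Fisher matrix and taking the square root of its diagonal gives the marginalized 1-sigma forecasts; the MCMC methods of the abstract would replace this Gaussian approximation when analyzing real data.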
Style APA, Harvard, Vancouver, ISO itp.
47

Nguyen, Viet-Dung. "Contribution aux décompositions rapides des matrices et tenseurs". Thesis, Orléans, 2016. http://www.theses.fr/2016ORLE2085/document.

Full text source
Abstract:
Large volumes of data are generated at any given time, especially from transactional databases, multimedia content, social media, and sensor-network applications. When the size of datasets grows beyond the ability of typical database software tools to capture, store, manage, and analyze them, we face the phenomenon of big data, for which new and smarter data-analytic tools are required. Big data provides opportunities for new forms of data analytics, resulting in substantial productivity gains. In this thesis, we explore fast matrix and tensor decompositions as computational tools to process and analyze multidimensional massive data. We first study fast subspace estimation, a specific technique used in matrix decomposition. Traditional subspace estimation yields high performance but struggles with large-scale data. We therefore propose distributed/parallel subspace estimation following a divide-and-conquer approach, in both batch and adaptive settings. Based on this technique, we further consider its important variants, such as principal component analysis, minor and principal subspace tracking, and principal eigenvector tracking. We demonstrate the potential of the proposed algorithms by solving the challenging radio frequency interference (RFI) mitigation problem in radio astronomy. In the second part, we concentrate on fast tensor decomposition, a natural extension of the matrix case. We generalize the matrix results to make PARAFAC tensor decomposition parallelizable in the batch setting. We then adapt an all-at-once optimization approach to sparse non-negative PARAFAC and Tucker decompositions with unknown tensor rank. Finally, we propose two PARAFAC decomposition algorithms for a class of third-order tensors in which one dimension grows linearly with time. The proposed algorithms have linear complexity, good convergence rates, and good estimation accuracy; in a standard setting, their performance is comparable or even superior to that of state-of-the-art algorithms. We also introduce an adaptive non-negative PARAFAC problem and refine the adaptive PARAFAC solution to tackle it. The main contributions of this thesis, as new tools enabling fast handling of large-scale multidimensional data, thus bring a step forward for real-time applications.
APA, Harvard, Vancouver, ISO and other styles
48

Terreaux, Eugénie. "Théorie des Matrices Aléatoires pour l'Imagerie Hyperspectrale". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC091/document.

Full text source
Abstract:
Hyperspectral imaging generates large data volumes owing to its high spectral and spatial resolution, as is increasingly the case in many other kinds of applications. For hyperspectral imaging, the data complexity comes from the spectral and spatial heterogeneity, the non-Gaussianity of the noise, and the underlying physical processes. Nevertheless, this complexity enhances the wealth of collected information, which must be processed with suitably adapted tools. This thesis therefore proposes methods for hyperspectral imaging applications based on random matrix theory, which is suited to high-dimensional data, and on robustness, which better accounts for the non-Gaussianity of the data. Two aspects of hyperspectral image processing are addressed: estimation of the number of endmembers (model order selection) and the spectral unmixing problem. For model order selection, three new algorithms adapted to the chosen model are developed; the last one, being more robust, outperforms the other two. A financial application is also presented. For spectral unmixing, three methods are proposed that take into account the various peculiarities of hyperspectral images. As this thesis demonstrates, random matrix theory is of great interest for hyperspectral image processing, and the methods developed here can also be applied to other fields requiring the processing of high-dimensional data.
APA, Harvard, Vancouver, ISO and other styles
49

Turki, Marwa. "Synthèse de contrôleurs prédictifs auto-adaptatifs pour l'optimisation des performances des systèmes". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMR064.

Full text source
Abstract:
Although predictive control relies on parameters with concrete physical meaning, the values of these parameters strongly affect the performance obtained from the system to be controlled. Their tuning is not trivial, which is why the literature reports a substantial number of tuning methods; these, however, do not always guarantee optimal values. The goal of this thesis is to propose an original, analytical approach for tuning these parameters. Initially applicable to linear MIMO systems, the proposed approach has been extended to nonlinear systems, with or without constraints, for which a Takagi-Sugeno (T-S) model exists. The class of nonlinear systems considered here is written in quasi-linear parameter-varying (quasi-LPV) form. Assuming the system is controllable and observable, the proposed method guarantees optimal stability of the closed-loop system. To do so, it relies on a technique for improving the conditioning of the Hessian matrix on the one hand, and on the concept of effective rank on the other. It also has the advantage of requiring a lower computational load than the approaches identified in the literature. The interest of the proposed approach is shown through simulation on systems of increasing complexity. This work has led to a self-adaptive predictive control strategy called "ATSMPC" (Adaptive Takagi-Sugeno Model-based Predictive Control).
APA, Harvard, Vancouver, ISO and other styles
50

Tomek, Peter. "Approximation of Terrain Data Utilizing Splines". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236488.

Full text source
Abstract:
For the optimization of flight trajectories at very low altitude, terrain properties must be represented very accurately. Fast and efficient evaluation of terrain data is therefore essential, since the time required for the optimization must be as short as possible. Moreover, gradient-based methods are used for flight trajectory optimization, so the function approximating the terrain data must be continuous up to a certain order of derivatives. A very promising method for terrain-data approximation is the application of multivariate simplex polynomials. The goal of this thesis is to implement a function that evaluates the given terrain data at specified points, together with the gradient, using multivariate splines. The program should evaluate multiple points at once and should work in $n$-dimensional space.
APA, Harvard, Vancouver, ISO and other styles