
Dissertations / Theses on the topic 'Combinatorics of cores'


Consult the top 47 dissertations / theses for your research on the topic 'Combinatorics of cores.'


1

Stockwell, Roger James. "Frameproof codes : combinatorial properties and constructions." Thesis, Royal Holloway, University of London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405211.

2

Houghten, Sheridan. "On combinatorial searches for designs and codes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0016/NQ43587.pdf.

3

Phillips, Linzy. "Erasure-correcting codes derived from Sudoku & related combinatorial structures." Thesis, University of South Wales, 2013. https://pure.southwales.ac.uk/en/studentthesis/erasurecorrecting-codes-derived-from-sudoku--related-combinatorial-structures(b359130e-bfc2-4df0-a6f5-55879212010d).html.

Abstract:
This thesis presents the results of an investigation into the use of puzzle-based combinatorial structures for erasure correction purposes. The research encompasses two main combinatorial structures: the well-known number placement puzzle Sudoku and a novel three component construction designed specifically with puzzle-based erasure correction in mind. The thesis describes the construction of outline erasure correction schemes incorporating each of the two structures. The research identifies that both of the structures contain a number of smaller sub-structures, the removal of which results in a grid with more than one potential solution - a detrimental property for erasure correction purposes. Extensive investigation into the properties of these sub-structures is carried out for each of the two outline erasure correction schemes, and results are determined that indicate that, although the schemes are theoretically feasible, the prevalence of sub-structures results in practically infeasible schemes. The thesis presents detailed classifications for the different cases of sub-structures observed in each of the outline erasure correction schemes. The anticipated similarities in the sub-structures of Sudoku and sub-structures of Latin Squares, an established area of combinatorial research, are observed and investigated, the proportion of Sudoku puzzles free of small sub-structures is calculated and a simulation comparing the recovery rates of small sub-structure free Sudoku and standard Sudoku is carried out. The analysis of sub-structures for the second erasure correction scheme involves detailed classification of a variety of small sub-structures; the thesis also derives probabilistic lower bounds for the expected numbers of case-specific sub-structures within the puzzle structure, indicating that specific types of sub-structure hinder recovery to such an extent that the scheme is infeasible for practical erasure correction. 
The consequences of complex cell inter-relationships, and wider issues with puzzle-based erasure correction beyond the structures investigated in the thesis, are also discussed. The thesis concludes that, although the literature suggests that Sudoku and other puzzle-based combinatorial structures may be useful for erasure correction, the work presented here indicates that this is not the case.
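As a hedged illustration of the underlying idea (the grid, erasure positions and decoder below are mine, not the thesis's actual scheme), a solved 4x4 Sudoku can be viewed as a codeword whose erased cells are recovered by constraint propagation; erasing one of the "unavoidable" sub-structures the abstract describes leaves two valid completions, and the decoder stalls:

```python
# A toy illustration, not the scheme from the thesis: treat a solved
# 4x4 Sudoku ("Shidoku") grid as a codeword and try to recover erased
# cells (None) by filling any cell whose row, column and box leave a
# single candidate value.

GRID = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]

def recover(grid):
    """Fill cells with a unique candidate until no more progress is made."""
    g = [row[:] for row in grid]
    progress = True
    while progress:
        progress = False
        for r in range(4):
            for c in range(4):
                if g[r][c] is not None:
                    continue
                seen = set(g[r]) | {g[i][c] for i in range(4)}
                br, bc = 2 * (r // 2), 2 * (c // 2)
                seen |= {g[br + i][bc + j] for i in range(2) for j in range(2)}
                candidates = {1, 2, 3, 4} - seen
                if len(candidates) == 1:
                    g[r][c] = candidates.pop()
                    progress = True
    return g

# Three scattered erasures: fully recovered.
damaged = [row[:] for row in GRID]
for r, c in [(0, 0), (1, 2), (3, 3)]:
    damaged[r][c] = None
print(recover(damaged) == GRID)  # True

# Erasing an "unavoidable set" -- four cells whose values can be
# swapped to give a second valid grid -- leaves the decoder stuck,
# exactly the kind of sub-structure the thesis identifies.
stuck = [row[:] for row in GRID]
for r, c in [(0, 2), (0, 3), (2, 2), (2, 3)]:
    stuck[r][c] = None
print(any(None in row for row in recover(stuck)))  # True
```

The four cells erased in `stuck` carry only the values 3 and 4, and exchanging them yields another valid grid, so no amount of propagation can decide between the two completions.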
4

Esterle, Alexandre. "Groupes d'Artin et algèbres de Hecke sur un corps fini." Thesis, Amiens, 2018. http://www.theses.fr/2018AMIE0061/document.

Abstract:
In this doctoral thesis, we determine the image of the Artin groups associated to all finite irreducible Coxeter groups inside their associated finite Iwahori-Hecke algebras. This was done in type A in articles by Brunat, Marin and Magaard. The Zariski closure of the image was determined in the generic case by Marin. Strong approximation suggests that the results should be similar in the finite case; however, the hypotheses required to apply it are much too strong and would only yield a portion of the results. We show in this thesis that the results are indeed similar, but that new phenomena arise from the more intricate field extensions involved. The techniques used in the finite case are very different from those in the generic case; the main arguments come from finite group theory. In high dimension, we use a theorem of Guralnick and Saxl, which relies on the classification of finite simple groups and gives conditions under which a subgroup of a linear group is a classical group in a natural representation. In low dimension, we mainly use the classification of the maximal subgroups of the classical groups by Bray, Holt and Roney-Dougal for the most complicated cases.
5

Paegelow, Raphaël. "Action des sous-groupes finis de SL2(C) sur la variété de carquois de Nakajima du carquois de Jordan et fibrés de Procesi." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2024. http://www.theses.fr/2024UMONS005.

Abstract:
In this doctoral thesis, we first studied the decomposition into irreducible components of the fixed-point locus, under the action of a finite subgroup Γ of SL2(C), of the Nakajima quiver variety of the Jordan quiver. The quiver variety associated with the Jordan quiver is isomorphic either to the punctual Hilbert scheme in C2 or to the Calogero-Moser space. We described these irreducible components using quiver varieties of the McKay quiver associated with the finite subgroup Γ. We were then interested in the combinatorics of the indexing set of these irreducible components, using an action of the affine Weyl group introduced by Nakajima. Moreover, we constructed a combinatorial model when Γ is of type D, which is the only original and remarkable case. Indeed, when Γ is of type A, such work had already been done by Iain Gordon, and when Γ is of type E, we showed that the fixed points that are also fixed under the maximal diagonal torus of SL2(C) are the monomial ideals of the punctual Hilbert scheme in C2 indexed by staircase partitions. More precisely, when Γ is of type D, we obtained a model, in terms of symmetric partitions, of the set indexing the irreducible components that contain a fixed point of the maximal diagonal torus of SL2(C). Finally, for an integer n greater than 1, using the classification of the projective symplectic resolutions of the singularity (C2)n/Γn, where Γn is the wreath product of the symmetric group Sn on n letters with Γ, we obtained a description of all such resolutions in terms of irreducible components of the Γ-fixed-point locus of the Hilbert scheme of points in C2. Secondly, we were interested in the restriction of two vector bundles over a fixed irreducible component of the Γ-fixed-point locus of the punctual Hilbert scheme in C2.
The first is the tautological bundle, whose restriction we expressed in terms of Nakajima's tautological bundles on the quiver variety of the McKay quiver associated with the fixed irreducible component. The second is the Procesi bundle, introduced by Marc Haiman in his work proving the n! conjecture. We studied the fibers of this bundle as (Sn × Γ)-modules. In the first part of the chapter dedicated to the Procesi bundle, we proved a reduction theorem expressing the (Sn × Γ)-module given by the fiber of the restriction of the Procesi bundle over an irreducible component C of the Γ-fixed-point locus of the Hilbert scheme of n points in C2 as induced from the fiber of the restriction of the Procesi bundle over an irreducible component of the Γ-fixed-point locus of the Hilbert scheme of k points in C2, where the integer k ≤ n is explicit and depends on the irreducible component C and on Γ. This theorem is then proved with other tools in two particular cases when Γ is of type A. Moreover, when Γ is of type D, we obtained some explicit reduction formulas for the fibers of the restriction of the Procesi bundle to the Γ-fixed-point locus. To finish, for an integer l greater than 1, in the case where Γ is the cyclic subgroup µl of order l contained in the maximal diagonal torus of SL2(C), the reduction theorem restricts the study of the fibers of the Procesi bundle over the µl-fixed-point locus of the punctual Hilbert scheme in C2 to the study of the fibers over the points of the Hilbert scheme associated with the monomial ideals parametrized by l-cores. The (Sn × µl)-modules one obtains appear to be related to the Fock space of the affine Kac-Moody algebra ŝll(C); a conjecture in this direction is stated in the last chapter.
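Since the reduction above lands on monomial ideals parametrized by l-cores, a small combinatorial aside may help. This is a hedged sketch (names and examples are mine, not code from the thesis) of the standard abacus/beta-number computation of the l-core of a partition:

```python
def l_core(partition, l):
    """l-core of a partition via beta-numbers (the abacus model).

    The beta-numbers are the first-column hook lengths; sliding every
    bead down to the lowest free slots on its runner (its residue
    class mod l) removes all rim hooks of length l.
    """
    n = len(partition)
    beta = [partition[i] + n - 1 - i for i in range(n)]
    # Count beads on each runner, then pack them at the bottom.
    counts = [sum(1 for b in beta if b % l == r) for r in range(l)]
    new_beta = sorted(
        (r + k * l for r in range(l) for k in range(counts[r])),
        reverse=True,
    )
    core = [b - (n - 1 - i) for i, b in enumerate(new_beta)]
    return [p for p in core if p > 0]

print(l_core([3, 1], 2))     # []  : the 2-core of (3,1) is empty
print(l_core([4, 2, 1], 3))  # [1]
print(l_core([2, 1], 2))     # [2, 1] : (2,1) is already a 2-core
```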
6

Paris, Gabrielle. "Resolution of some optimisation problems on graphs and combinatorial games." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1180/document.

Abstract:
I studied three optimization problems on graphs and combinatorial games. First, I studied identifying codes in graphs whose vertices face faults: identifying codes make it possible to locate a fault so that it can be repaired. We focused on identifying codes in circulant graphs, by embedding these graphs into infinite grids. Then, I studied the marking game and the edge-coloring game: two-player games where one player wants to build something (a proper coloring or a proper marking) and the other wants to prevent them from doing so. For the marking game we studied how the winning strategy changes when the graph is modified. For the coloring game we gave a winning strategy for the first player, provided the graph admits a certain edge decomposition; in particular, this improves known results on planar graphs. Finally, I studied pure breaking games: two players take turns breaking a heap of tokens into a given number of non-empty heaps. We focused on winning strategies for the game starting with a single heap of n tokens. At first sight, these games all seem to be regular: we showed that this is indeed the case for some of them, and we gave a test to determine regularity one game at a time. Only one of these games does not seem to be regular; its behavior remains a mystery. To sum up, I studied three two-sided problems that use different methods and serve different purposes in combinatorics.
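The regularity mentioned above can be observed experimentally via Sprague-Grundy theory. The following sketch is mine: the specific splitting rule ("break a heap into exactly two non-empty heaps") is the simplest pure breaking game, not necessarily one studied in the thesis, and its Grundy sequence is visibly periodic:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy value of a heap of n tokens in the pure breaking game
    where a move splits one heap into exactly two non-empty heaps."""
    moves = set()
    for a in range(1, n // 2 + 1):
        b = n - a
        moves.add(grundy(a) ^ grundy(b))  # XOR of independent heaps
    # mex: minimum non-negative integer not among reachable values
    g = 0
    while g in moves:
        g += 1
    return g

values = [grundy(n) for n in range(1, 13)]
print(values)  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1] -- period 2
```

A heap of 1 has no move (Grundy value 0); from there a short induction shows every even heap has value 1 and every odd heap value 0, i.e. this particular game is regular.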
7

Chen, Lei. "Construction of structured low-density parity-check codes : combinatorial and algebraic approaches /." For electronic version search Digital dissertations database. Restricted to UC campuses. Access is free to UC campus dissertations, 2005. http://uclibs.org/PID/11984.

8

Vandomme, Elise. "Contributions to combinatorics on words in an abelian context and covering problems in graphs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GRENM010/document.

Abstract:
This dissertation is divided into two (distinct but connected) parts that reflect the joint PhD. We study and we solve several questions regarding on the one hand combinatorics on words in an abelian context and on the other hand covering problems in graphs. Each particular problem is the topic of a chapter. In combinatorics on words, the first problem considered focuses on the 2-regularity of sequences in the sense of Allouche and Shallit. We prove that a sequence satisfying a certain symmetry property is 2-regular. Then we apply this theorem to show that the 2-abelian complexity functions of the Thue--Morse word and the period-doubling word are 2-regular. The computation and arguments leading to these results fit into a quite general scheme that we hope can be used again to prove additional regularity results. The second question concerns the notion of return words up to abelian equivalence, introduced by Puzynina and Zamboni. We obtain a characterization of Sturmian words with non-zero intercept in terms of the finiteness of the set of abelian return words to all prefixes. We describe this set of abelian returns for the Fibonacci word but also for the Thue-Morse word (which is not Sturmian). We investigate the relationship existing between the abelian complexity and the finiteness of this set. In graph theory, the first problem considered deals with identifying codes in graphs. These codes were introduced by Karpovsky, Chakrabarty and Levitin to model fault-diagnosis in multiprocessor systems. The ratio between the optimal size of an identifying code and the optimal size of a fractional relaxation of an identifying code is between 1 and 2 ln(|V|)+1 where V is the vertex set of the graph. We focus on vertex-transitive graphs, since we can compute the exact fractional solution for them. We exhibit infinite families, called generalized quadrangles, of vertex-transitive graphs with integer and fractional identifying codes of order |V|^k with k in {1/4,1/3,2/5}. 
The second problem concerns (r,a,b)-covering codes of the infinite grid, already studied by Axenovich and Puzynina. We introduce the notion of constant 2-labellings of weighted graphs and study them in four particular weighted cycles. We present a method to link these labellings with covering codes. Finally, we determine the precise values of the constants a and b of any (r,a,b)-covering code of the infinite grid with |a-b|>4. This is an extension of a theorem of Axenovich.
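The identifying codes discussed above can be checked mechanically from the definition. The sketch below is mine (the thesis works with far larger vertex-transitive graphs): it brute-forces a minimum identifying code of the 6-cycle, where the counting bound 2^|C| - 1 >= |V| already forces at least three codewords:

```python
from itertools import combinations

def is_identifying(adj, code):
    """A set C identifies iff every closed neighborhood N[v] meets C
    in a distinct, non-empty set (the vertex's "signature")."""
    sigs = [frozenset((adj[v] | {v}) & code) for v in adj]
    return all(sigs) and len(set(sigs)) == len(sigs)

def min_identifying_code(adj):
    """Smallest identifying code, by brute force over vertex subsets."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_identifying(adj, set(cand)):
                return set(cand)
    return None  # graph has closed twins: no identifying code exists

# 6-cycle: N[v] = {v-1, v, v+1} mod 6
n = 6
cycle = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
code = min_identifying_code(cycle)
print(len(code))  # 3, e.g. {0, 2, 4}
```

For C = {0, 2, 4} the six signatures are {0}, {0,2}, {2}, {2,4}, {4}, {4,0}: all distinct and non-empty, so the counting lower bound of 3 is attained.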
9

Larico, Mullisaca Celso Ever. "Un Algoritmo GRASP-Reactivo para resolver el problema de cortes 1D." Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2010. https://hdl.handle.net/20.500.12672/2649.

Abstract:
Given a set of piece requirements and an unlimited supply of standard-size bars of some material, each longer than any required piece, the 1D cutting stock problem asks how to cut the standard bars so that every requirement is satisfied using the fewest bars. The problem is NP-hard [Garey+79] and is widely encountered in industries such as wood, glass, paper and steel. This thesis proposes two Reactive GRASP algorithms for the 1D cutting stock problem, based on the GRASP BFD and GRASP FFD algorithms proposed in [Mauricio+02], and develops an optimization system based on the proposed algorithms. Numerical experiments on 100 test instances give an average efficiency of 97.04% and a weighted efficiency of 97.19% for Reactive GRASP BFD with an improvement phase; moreover, GRASP BFD with improvement converges faster, finding a solution in an average of 1,237 iterations. The numerical results show an improvement of Reactive GRASP over the basic GRASP implemented by Ganoza and Solano [Ganoza+02], which obtained an average efficiency of 96.73%. These improvements can be explained by the fact that the relaxation parameter is adjusted automatically, guiding the search toward better solutions.
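For context, here is a minimal sketch of the deterministic first-fit-decreasing heuristic that GRASP FFD randomizes (this is the textbook heuristic under illustrative data, not the thesis's Reactive GRASP implementation):

```python
def first_fit_decreasing(pieces, bar_length):
    """Deterministic FFD heuristic for the 1D cutting stock problem:
    sort pieces longest-first and place each in the first opened bar
    with enough remaining length. GRASP variants randomize the choice
    among near-best placements and keep the best plan found."""
    remaining = []  # leftover length of each opened bar
    cuts = []       # pieces assigned to each bar
    for piece in sorted(pieces, reverse=True):
        for i, rest in enumerate(remaining):
            if piece <= rest:
                remaining[i] -= piece
                cuts[i].append(piece)
                break
        else:  # no opened bar fits: open a new standard bar
            remaining.append(bar_length - piece)
            cuts.append([piece])
    return cuts

plan = first_fit_decreasing([5, 3, 4, 2, 2], bar_length=8)
print(len(plan))  # 2 -- matches the lower bound ceil(16 / 8) = 2
```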
10

Passuello, Alberto. "Semidefinite programming in combinatorial optimization with applications to coding theory and geometry." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00948055.

Abstract:
We apply the semidefinite programming method to obtain a new upper bound on the cardinality of codes made of subspaces of a linear vector space over a finite field. Such codes are of interest in network coding. Next, with the same method, we prove an upper bound on the cardinality of sets avoiding one distance in the Johnson space, which is essentially Schrijver's semidefinite program. This bound is used to improve existing results on the measurable chromatic number of Euclidean space. We build a new hierarchy of semidefinite programs whose optimal values give upper bounds on the independence number of a graph. This hierarchy is based on matrices arising from simplicial complexes. We show some properties that our hierarchy shares with other classical ones. As an example, we show its application to the problem of determining the independence number of Paley graphs.
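The independence number that the hierarchy bounds can, for tiny instances, simply be computed exactly. A hedged sketch (no SDP solver involved; the function names and the brute-force approach are mine) for the Paley graph on 13 vertices:

```python
from itertools import combinations

def paley_graph(q):
    """Paley graph: i ~ j iff i - j is a nonzero square mod q (q = 1 mod 4,
    so -1 is a square and the relation is symmetric)."""
    squares = {(x * x) % q for x in range(1, q)}
    return {v: {w for w in range(q) if w != v and (v - w) % q in squares}
            for v in range(q)}

def independence_number(adj):
    """Exact independence number by brute force (fine for ~13 vertices)."""
    vs = list(adj)
    for k in range(len(vs), 0, -1):
        for cand in combinations(vs, k):
            s = set(cand)
            if all(adj[v].isdisjoint(s - {v}) for v in s):
                return k
    return 0

print(independence_number(paley_graph(13)))  # 3, e.g. {0, 2, 7}
```

An SDP hierarchy like the one in the thesis would bound this value from above; here the exact answer (3) is small enough to check directly.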
11

Dolce, Francesco. "Codes bifixes, combinatoire des mots et systèmes dynamiques symboliques." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1036/document.

Abstract:
Sets of words of linear complexity play an important role in combinatorics on words and symbolic dynamics. This family of sets includes sets of factors of Sturmian and Arnoux-Rauzy words, interval exchange sets and primitive morphic sets, that is, sets of factors of fixed points of primitive morphisms. The leading issue of this thesis is the study of minimal dynamical systems, also defined equivalently as uniformly recurrent sets of words. As a main result, we consider a natural hierarchy of minimal systems containing neutral sets, tree sets and specular sets. Moreover, we connect the minimal systems to the free group using the notions of return words and bases of subgroups of finite index. Symbolic dynamical systems arising from interval exchanges and linear involutions provide us with geometrical examples of this kind of sets. One of the main tools used here is the study of the possible extensions of a word in a set, which allows us to determine properties such as the factor complexity. In this manuscript we define the extension graph, an undirected graph associated to each word $w$ in a set $S$ which describes the possible extensions of $w$ in $S$ on the left and on the right. In this thesis we present several classes of sets of words defined by the possible shapes that the graphs of elements in the set can have. One of the weakest conditions that we study is the neutrality condition: a word $w$ is neutral if the number of pairs $(a, b)$ of letters such that $awb \in S$ is equal to the number of letters $a$ such that $aw \in S$, plus the number of letters $b$ such that $wb \in S$, minus 1. A set such that every nonempty word satisfies the neutrality condition is called a neutral set. A stronger condition is the tree condition: a word $w$ satisfies this condition if its extension graph is both acyclic and connected. A set is called a tree set if every nonempty word satisfies this condition. The family of recurrent tree sets appears as the natural closure of two known families, namely the Arnoux-Rauzy sets and the interval exchange sets. We also introduce specular sets, a remarkable subfamily of the tree sets. These are subsets of groups which form a natural generalization of free groups. These sets of words are an abstract generalization of the natural codings of interval exchanges and of linear involutions. For each class of sets considered in this thesis, we prove several results concerning closure properties (under maximal bifix decoding or under taking derived words), the cardinality of the bifix codes and of the sets of return words in these sets, the connection between return words and bases of the free group, as well as between bifix codes and subgroups of the free group. Each of these results is proved under the weakest possible assumptions.
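The neutrality condition above is easy to check mechanically. As a small illustration (my own sketch, not code from the thesis), the following checks it on the short factors of a prefix of the Fibonacci word, a Sturmian word whose factor set is neutral; only short factors are tested because a finite prefix distorts the extension counts for long ones:

```python
def factors(word):
    """All factors (contiguous subwords) of a word, including the empty word."""
    S = {""}
    for i in range(len(word)):
        for j in range(i + 1, len(word) + 1):
            S.add(word[i:j])
    return S

def is_neutral(w, S, alphabet):
    """Neutrality: #{(a, b): awb in S} == #{a: aw in S} + #{b: wb in S} - 1."""
    e = sum((a + w + b) in S for a in alphabet for b in alphabet)
    l = sum((a + w) in S for a in alphabet)
    r = sum((w + b) in S for b in alphabet)
    return e == l + r - 1

# Factors of a prefix of the Fibonacci word, fixed point of a -> ab, b -> a.
S = factors("abaababaabaababaababa")
print(all(is_neutral(w, S, "ab") for w in S if len(w) <= 3))  # True
```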
APA, Harvard, Vancouver, ISO, and other styles
12

Nguyen, Thanh Hai. "Enveloppe convexe des codes de Huffman finis." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX22130/document.

Full text
Abstract:
In this thesis, we study the convex hull of full binary trees with n leaves. These are the Huffman trees, whose leaves are labeled by n characters. To each Huffman tree T with n leaves, we associate a point xT, called the Huffman point, in the space Qn, where the i-th coordinate of xT is the length of the path from the root to the leaf labeled by the i-th character. The convex hull of the Huffman points is called the Huffmanhedron. The extreme points of the Huffmanhedron are first obtained by using the optimization algorithm, which is the Huffman algorithm. Then, we describe constructions of neighbours of a given Huffman point x. In particular, one of these constructions is mainly based on the construction of adjacent vertices of the Permutahedron. Thereafter, we present a partial description of the Huffmanhedron, containing in particular a family of facet-defining inequalities whose coefficients, once sorted, form a Fibonacci sequence. Although partial, this description allows us, on the one hand, to explain most of the facet-defining inequalities of the Huffmanhedron up to dimension 8 and, on the other hand, to characterize the deepest Huffman trees, i.e., a characterization of all the facets containing at least one deepest Huffman tree as an extreme point. The main contribution of this work essentially rests on the links we establish between the construction of trees and the generation of facets.
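The Huffman point defined above can be computed by running the classical heap-based Huffman algorithm and recording leaf depths. A minimal sketch with made-up frequencies (my own illustration, not code from the thesis):

```python
import heapq

def huffman_point(freqs):
    """Depths of the leaves of a Huffman tree built for the given frequencies.

    Returns the Huffman point xT: entry i is the length of the root-to-leaf
    path of the i-th character.
    """
    n = len(freqs)
    if n == 1:
        return [0]
    # Heap items: (subtree weight, tiebreaker, leaf indices in the subtree).
    heap = [(w, i, [i]) for i, w in enumerate(freqs)]
    heapq.heapify(heap)
    depth = [0] * n
    counter = n
    while len(heap) > 1:
        w1, _, l1 = heapq.heappop(heap)
        w2, _, l2 = heapq.heappop(heap)
        for i in l1 + l2:  # every leaf under the merged node sinks one level
            depth[i] += 1
        heapq.heappush(heap, (w1 + w2, counter, l1 + l2))
        counter += 1
    return depth

print(huffman_point([5, 1, 1, 1]))  # [1, 3, 3, 2]
```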
APA, Harvard, Vancouver, ISO, and other styles
13

Lokman, Banu. "Converging Preferred Regions In Multi-objective Combinatorial Optimization Problems." Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613379/index.pdf.

Full text
Abstract:
Finding the true nondominated points is typically hard for Multi-objective Combinatorial Optimization (MOCO) problems. Furthermore, it is not practical to generate all of them, since the number of nondominated points may grow exponentially as the problem size increases. In this thesis, we develop an exact algorithm to find all nondominated points in a specified region. We combine this exact algorithm with a heuristic algorithm that approximates the possible locations of the nondominated points. Interacting with a decision maker (DM), the heuristic algorithm first approximately identifies the region that is of interest to the DM. Then, the exact algorithm is employed to generate all true nondominated points in this region. We conduct experiments on Multi-objective Assignment Problems (MOAP), Multi-objective Knapsack Problems (MOKP) and Multi-objective Shortest Path (MOSP) problems, and the algorithms work well. Finding the worst possible value for each criterion among the set of efficient solutions has important uses in multi-criteria problems, since many approaches require the proper scaling of each criterion. Such points are called nadir points. It is not straightforward to find the nadir points, especially for large problems with more than two criteria. We develop an exact algorithm to find the nadir values for multi-objective integer programming problems. We also find bounds with performance guarantees. We demonstrate that our algorithms work well in our experiments on MOAP, MOKP and MOSP problems. Assuming that the DM's preferences are consistent with a quasiconcave value function, we develop an interactive exact algorithm to solve MIP problems. Based on the convex cones derived from pairwise comparisons made by the DM, we generate constraints to prevent points in the implied inferior regions. We guarantee finding the most preferred point, and our computational experiments on MOAP, MOKP and MOSP problems show that a reasonable number of pairwise comparisons are required.
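As a minimal illustration of the nondominated points discussed above (a brute-force sketch with hypothetical data, assuming minimization in every objective; this is not the thesis's algorithm, which avoids enumerating all points):

```python
def dominates(a, b):
    """a dominates b (minimization): a is <= b everywhere and < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Return the nondominated (Pareto-optimal) points of a finite set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

points = [(4, 2), (3, 3), (5, 1), (4, 4), (2, 5)]
print(nondominated(points))  # (4, 4) is dominated by (3, 3) and drops out
```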
APA, Harvard, Vancouver, ISO, and other styles
14

Dalyac, Constantin. "Quantum many-body dynamics for combinatorial optimisation and machine learning." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS275.

Full text
Abstract:
The goal of this thesis is to explore and qualify the use of N-body quantum dynamics to solve hard industrial problems and machine learning tasks. As a collaboration between industrial and academic partners, this thesis explores the capabilities of a neutral-atom device in tackling real-world problems. First, we look at combinatorial optimisation problems and showcase how neutral atoms can naturally encode a famous combinatorial optimisation problem called Maximum Independent Set on unit-disk graphs. These problems appear in industrial challenges such as smart charging of electric vehicles. The goal is to understand why and how we can expect a quantum approach to solve this problem more efficiently than classical methods, and our proposed algorithms are tested on real hardware using a dataset from EDF, the French electricity company. We furthermore explore the use of neutral atoms in 3D to tackle problems that are out of reach of classical approximation methods. Finally, we try to improve our intuition on the types of instances for which a quantum approach can (or cannot) yield better results than classical methods. In the second part of this thesis, we explore the use of quantum dynamics in the field of machine learning. In addition to being a great chain of buzzwords, Quantum Machine Learning (QML) has been increasingly investigated in the past years. In this part, we propose and implement a quantum protocol for machine learning on datasets of graphs, and show promising results regarding the complexity of the associated feature space. Finally, we explore the expressivity of quantum machine learning models and showcase examples where classical methods can efficiently approximate quantum machine learning models.
APA, Harvard, Vancouver, ISO, and other styles
15

Levy, Marlow H. "Allocating non-monetary incentives for Navy Nurse Corps Officers menu method vs. bid method Combinatorial Retention Auction Mechanism (CRAM) /." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Mar/10Mar%5FLevy.pdf.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, March 2010.
Thesis Advisor(s): Gates, William R. ; Coughlan, Peter. "March 2010." Author(s) subject terms: Combinatorial Retention Auction Mechanism, auction mechanism, auction, Nurse Corps, Nurse Corps retention, retention, retention mechanism, Menu Method, Bid Method. Includes bibliographical references (p. 95-99). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
16

Harney, Isaiah H. "Colorings of Hamming-Distance Graphs." UKnowledge, 2017. http://uknowledge.uky.edu/math_etds/49.

Full text
Abstract:
Hamming-distance graphs arise naturally in the study of error-correcting codes and have been utilized by several authors to provide new proofs for (and in some cases improve) known bounds on the size of block codes. We study various standard graph properties of the Hamming-distance graphs with special emphasis placed on the chromatic number. A notion of robustness is defined for colorings of these graphs based on the tolerance of swapping colors along an edge without destroying the properness of the coloring, and a complete characterization of the maximally robust colorings is given for certain parameters. Additionally, explorations are made into subgraph structures whose identification may be useful in determining the chromatic number.
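One way to make the connection between colorings and codes concrete is the following sketch (an assumption-laden illustration, not the thesis's construction: conventions for Hamming-distance graphs vary, and here two words are adjacent when their distance is strictly below a threshold d, so that each color class of a proper coloring is a code of minimum distance at least d):

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def hamming_graph(n, d):
    """Binary words of length n, joined when their Hamming distance is < d."""
    vertices = list(product((0, 1), repeat=n))
    adj = {v: [u for u in vertices if u != v and hamming(u, v) < d]
           for v in vertices}
    return vertices, adj

def greedy_coloring(vertices, adj):
    """First-fit coloring in the given vertex order."""
    color = {}
    for v in vertices:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(vertices)) if c not in used)
    return color

vertices, adj = hamming_graph(4, 2)  # d = 2: adjacency is Hamming distance 1
coloring = greedy_coloring(vertices, adj)
print(len(set(coloring.values())))  # 2: this graph is the hypercube, which is bipartite
```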
APA, Harvard, Vancouver, ISO, and other styles
17

Rix, James Gregory. "Hypercube coloring and the structure of binary codes." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2809.

Full text
Abstract:
A coloring of a graph is an assignment of colors to its vertices so that no two adjacent vertices are given the same color. The chromatic number of a graph is the least number of colors needed to color all of its vertices. Graph coloring problems can be applied to many real-world applications, such as scheduling and register allocation. Computationally, the decision problem of whether a general graph is m-colorable is NP-complete for m ≥ 3. The graph studied in this thesis is a well-known combinatorial object, the k-dimensional hypercube, Qk. The hypercube itself is 2-colorable for all k; however, coloring the square of the cube is a much more interesting problem. This is the graph in which the vertices are binary vectors of length k, and two vertices are adjacent if and only if the Hamming distance between the two vectors is at most 2. Any color class in a coloring of Qk^2 is a binary (k, M, 3) code. This thesis will begin with an introduction to binary codes and their structure. One of the most fundamental combinatorial problems is finding optimal binary codes, that is, binary codes with the maximum cardinality satisfying a specified length and minimum distance. Many upper and lower bounds have been produced, and we will analyze and apply several of these. This leads to many interesting results about the chromatic number of the square of the cube. The smallest k for which the chromatic number of Qk^2 is unknown is k = 8; however, it can be determined that this value is either 13 or 14. Computational approaches to determine the chromatic number of Q8^2 were performed. We were unable to determine whether 13 or 14 is the true value; however, much valuable insight was gained into the structure of this graph and the computational difficulty that lies within. Since a 13-coloring of Q8^2 must have between 9 and 12 color classes that are (8, 20, 3) binary codes, this led to a thorough investigation of the structure of such binary codes.
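The correspondence stated above between color classes and distance-3 codes can be checked on a concrete code. The sketch below (mine, not from the thesis) verifies that the 16 codewords of the classical [7,4] Hamming code form a (7, 16, 3) binary code, hence an independent set in the square of the 7-cube:

```python
from itertools import product, combinations

def hamming_dist(u, v):
    return sum(a != b for a, b in zip(u, v))

def min_distance(code):
    return min(hamming_dist(u, v) for u, v in combinations(code, 2))

# Generator matrix of the [7,4] Hamming code, a classical (7, 16, 3) code.
G = [(1, 0, 0, 0, 1, 1, 0),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 0, 1, 1),
     (0, 0, 0, 1, 1, 1, 1)]

code = [tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
        for m in product((0, 1), repeat=4)]

# Minimum distance 3 means no two codewords are within distance 2, i.e. the
# 16 codewords are pairwise non-adjacent in the square of the 7-cube.
print(len(code), min_distance(code))  # 16 3
```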
APA, Harvard, Vancouver, ISO, and other styles
18

Cascardo, Neil D., and Sandeep Kumar. "Integrating monetary and non-monetary retention incentives for the U.S. Navy Dental Corps officers utilizing the Combinatorial Retention Auction Mechanism (CRAM)." Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Mar/10Mar%5FCascardo.pdf.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, March 2010.
Thesis Advisor(s): Gates, William R. ; Coughlan, Peter J. "March 2010." Description based on title screen as viewed on April 28, 2010. Author(s) subject terms: CRAM, Dental Corps, extrinsic, incentive, intrinsic, monetary, motivation, Navy, nonmonetary, retention. Includes bibliographical references (p. 141-143). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
19

Kumar, Sandeep. "Integrating monetary and non-monetary retention incentives for the U.S. Navy Dental Corps officers utilizing the Combinatorial Retention Auction Mechanism (CRAM)." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/5381.

Full text
Abstract:
Approved for public release; distribution is unlimited
This research focused on the Navy Dental Corps community because of the retention challenges encountered, especially at the senior Lieutenant and Lieutenant Commander ranks. The Dental Corps has retention goals by accession cohort and specialty mix to support the correct number of specialty-trained officers to meet billet requirements in support of Navy and Marine Corps dental readiness. The requirement is to retain a healthy number of Dental Officers by specialty and pay grade to meet clinical needs and maintain senior leadership capability in the future. This research used the Universal Incentive Package (UIP) auction and the Combinatorial Retention Auction Mechanism (CRAM) to identify cost-savings opportunities for the Navy while retaining the optimal number of Dental Corps officers. Additionally, this research summarized the importance of creating a balance between monetary and non-monetary incentives. The Oracle Crystal Ball Monte Carlo simulation indicated that CRAM outperformed monetary-only and universal auction mechanisms with average savings between 24 and 30 percent. This research concluded that a 61 percent retention level could be achieved by offering CRAM, with average savings of 24 percent over monetary-only and UIP. The research concludes that CRAM provides an opportunity to individualize benefits that are not only valued by Dental Corps officers but are also cost-effective for the Navy. For the Navy to achieve its retention goals and become a top-50 employer, it is imperative to create a balance between monetary and non-monetary incentives. This not only enhances morale but also helps overcome work-related challenges.
APA, Harvard, Vancouver, ISO, and other styles
20

Ramasubramanian, Brinda. "Combinatorial Approaches to Study Protein Stability: Design and Application of Cell-Based Screens to Engineer Tumor Suppressor Proteins." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1325256130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Parreau, Aline. "Problèmes d'identification dans les graphes." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00745054.

Full text
Abstract:
In this thesis, we study vertex identification problems in graphs. Identifying the vertices of a graph consists in assigning to each vertex an object that makes it unique with respect to the others. We are particularly interested in identifying codes: dominating subsets of vertices of a graph such that the closed neighborhood of each vertex of the graph has a unique intersection with the set. The vertices of the identifying code can be seen as sensors, and each vertex of the graph as a possible location of a fault. We first characterize the graphs for which all vertices but one are needed in every identifying code. Since the problem of finding an optimal identifying code, that is, one of minimum size, is NP-hard, we study it on four restricted graph classes. Depending on the case, we can solve the problem completely (for Sierpiński graphs), improve the general bounds (for interval graphs, line graphs and the king grid), or show that the problem remains hard even under the restriction (for line graphs). We then consider variations of identifying codes that allow more flexibility for the sensors. For instance, we study sensors in the plane that can detect faults within a known radius, up to a tolerated error. We give constructions of such codes and bound their size for fixed or asymptotic values of the radius and of the error. Finally, we introduce the notion of identifying coloring of a graph, which allows one to identify the vertices of a graph by the colors present in their neighborhood. We compare this coloring with proper graph coloring and give bounds on the number of colors needed to identify a graph, for several graph classes.
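The definition of an identifying code translates directly into a verification routine: the set must dominate the graph, and the intersections of closed neighborhoods with the set must be pairwise distinct. A toy sketch (my own, not from the thesis), on a path with four vertices:

```python
def closed_neighborhood(G, v):
    return {v} | set(G[v])

def is_identifying_code(G, C):
    """C identifies G: every closed neighborhood meets C (domination),
    and these intersections are pairwise distinct (identification)."""
    signatures = {}
    for v in G:
        sig = frozenset(closed_neighborhood(G, v) & C)
        if not sig or sig in signatures.values():
            return False
        signatures[v] = sig
    return True

# Path 0 - 1 - 2 - 3 as an adjacency list (toy example).
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_identifying_code(P4, {0, 1, 2}))  # True
print(is_identifying_code(P4, {1, 2}))     # False: vertices 1 and 2 clash
```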
APA, Harvard, Vancouver, ISO, and other styles
22

Zeh, Alexander. "Algebraic Soft- and Hard-Decision Decoding of Generalized Reed--Solomon and Cyclic Codes." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00866134.

Full text
Abstract:
Two challenges of algebraic coding theory are addressed in this thesis. The first is the efficient (hard- and soft-decision) decoding of generalized Reed--Solomon codes over finite fields in the Hamming metric. The motivation for solving this more than 50-year-old problem was renewed by the discovery, by Guruswami and Sudan at the end of the 20th century, of a polynomial-time interpolation-based algorithm that decodes up to the Johnson radius. The first algebraic decoding methods for generalized Reed--Solomon codes relied on a key equation, that is, a polynomial description of the decoding problem. The reformulation of the interpolation-based approach in terms of key equations is a central theme of this thesis. This contribution covers several aspects of key equations for hard-decision decoding as well as for the soft-decision variant of the Guruswami--Sudan algorithm for generalized Reed--Solomon codes. An efficient decoding algorithm is proposed for each of these variants. The second topic of this thesis is the formulation of, and decoding up to, certain lower bounds on the minimum distance of linear cyclic block codes. The main feature is the embedding of a given cyclic code into a (generalized) cyclic product code. We therefore give a detailed description of cyclic product codes and generalized cyclic product codes. We prove several lower bounds on the minimum distance of linear cyclic codes that improve or generalize known bounds, and we give quadratic-time error/erasure decoding algorithms up to these bounds.
APA, Harvard, Vancouver, ISO, and other styles
23

Boulanger, Christophe. "Accès multiple à répartition par les codes : optimisation de séquences et architectures de récepteurs associés." Grenoble INPG, 1998. http://www.theses.fr/1998INPG0086.

Full text
Abstract:
This work deals with radio communications based on the code-division multiple access (CDMA) method, which derives from the direct-sequence spread-spectrum technique. The future stakes of the latter and the constraints imposed by radio propagation channels are first presented. Then, among the different techniques envisaged to increase the capacity of such systems, two are studied in particular: the optimization of the spreading sequences and the design of the associated receiver architectures. The use of methods derived from combinatorial optimization makes it possible to improve the correlation properties of the classically described sequences by taking additional criteria into account. These sequences prove their benefit in simulation tools and on real prototypes. However, they cannot, by themselves, overcome the problems inherent in the CDMA receiver architectures classically used to date. That is why, after highlighting the weaknesses of such structures, we propose the use of multi-user architectures based on parallel interference cancellation in order to improve reception quality. This pragmatic approach is then validated on a demonstrator and improved through the implementation of more sophisticated linear receivers. Simulations on workstations and experimental measurements then validated these studies. The main originality of this part lies in taking into account a finite dynamic range and synchronization issues; indeed, we kept in mind the hardware implementation of such architectures with dedicated circuits. This thesis finds its full relevance in the current UMTS proposal, which advocates the use of wideband CDMA for the third generation of mobile telephony, the successor of today's GSM. This proposal notably mentions the need to seek a capacity increase through the use of advanced CDMA receivers.
APA, Harvard, Vancouver, ISO, and other styles
24

Foucaud, Florent. "Aspects combinatoires et algorithmiques des codes identifiants dans les graphes." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00766138.

Full text
Abstract:
We study combinatorial and algorithmic aspects of identifying codes in graphs. An identifying code is a set of vertices of a graph such that, on the one hand, each vertex outside the code has a neighbor in the code and, on the other hand, all vertices have a distinct neighborhood within the code. We first characterize the directed and undirected graphs that reach the known upper bounds on the minimum size of an identifying code. We also give new upper and lower bounds on this parameter for graphs of given maximum degree, graphs of girth at least 5, interval graphs and line graphs. We then study the algorithmic complexity of the decision and optimization problems associated with the notion of identifying code. We show that these problems remain computationally hard even when restricted to bipartite, co-bipartite, split, interval or line graphs. Finally, we give a PTAS for unit interval graphs.
APA, Harvard, Vancouver, ISO, and other styles
25

Li, Weiyi. "Protein Engineering Hydrophobic Core Residues of Computationally Designed Protein G and Single-Chain Rop: Investigating the Relationship between Protein Primary structure and Protein Stability through High-Throughput Approaches." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398956266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Vernay, Rémi. "Etudes d'objets combinatoires : applications à la bio-informatique." Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00668134.

Full text
Abstract:
This thesis deals with classes of combinatorial objects that model data in bioinformatics. In particular, we study two methods of gene mutation within the genome: duplication and inversion. On the one hand, we study the problem of whole mirror duplication with random loss in terms of pattern-avoiding permutations. We prove that the class of permutations obtained with this method after p duplications from the identity is the class of permutations avoiding the alternating permutations of length 2p + 1. We also enumerate the number of duplications that is necessary and sufficient to obtain an arbitrary permutation of length n from the identity. We further propose two efficient algorithms to reconstruct two different paths between the identity and a given permutation, and we give related results on other close classes. The restriction of the order relation < induced by the reflected Gray code to the sets of compositions and of bounded compositions yields new Gray codes for these sets. The relation < restricted to the set of bounded compositions of an interval still provides a Gray code. The set of bounded n-compositions of an interval simultaneously generalizes the product set and the set of compositions of an integer, and thus the relation < defines all these Gray codes in a unified way. We re-express the Gray codes of Walsh and Knuth for (bounded) compositions of an integer by means of a single order relation. Then the Walsh Gray code for classes of compositions and permutations becomes a sublist of Knuth's, which in turn is a sublist of the reflected Gray code.
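The reflected Gray code underlying the order relation discussed above can be generated recursively; a standard textbook sketch (my own illustration, not the thesis's construction):

```python
def reflected_gray(n):
    """Binary reflected Gray code on n bits: the list of all 2^n words,
    ordered so that successive words differ in exactly one position."""
    if n == 0:
        return [[]]
    prev = reflected_gray(n - 1)
    # Prefix the list with 0, then its reversal with 1: the reflection makes
    # the junction (and every other step) a single-bit change.
    return [[0] + w for w in prev] + [[1] + w for w in reversed(prev)]

words = reflected_gray(3)
assert all(sum(a != b for a, b in zip(u, v)) == 1
           for u, v in zip(words, words[1:]))
print(words[:4])  # [[0, 0, 0], [0, 0, 1], [0, 1, 1], [0, 1, 0]]
```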
APA, Harvard, Vancouver, ISO, and other styles
27

Peterson, Nicholas Richard. "On Random k-Out Graphs with Preferential Attachment." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1370527839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bellissimo, Michael Robert. "A LOWER BOUND ON THE DISTANCE BETWEEN TWO PARTITIONS IN A ROUQUIER BLOCK." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1523039734121649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Paula, Ana Rachel Brito de 1990. "Polinômios de permutação e palavras balanceadas." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/307070.

Full text
Abstract:
Advisor: Fernando Eduardo Torres Orihuela
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Matemática Estatística e Computação Científica
Abstract: The main goal in writing this dissertation is the study of the influence of the theory of permutation polynomials in the context of coding theory via the concept of balanced word. Our basic reference is the paper "Permutation polynomials and applications to coding theory" by Y. Laigle-Chapuy. Our plan is to introduce the basic concepts of coding theory and of permutation polynomials; we then mainly consider the long-standing open conjecture of Helleseth.
Master's degree
Applied Mathematics
Master in Applied Mathematics
APA, Harvard, Vancouver, ISO, and other styles
30

Rombach, Michaela Puck. "Colouring, centrality and core-periphery structure in graphs." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:7326ecc6-a447-474f-a03b-6ec244831ad4.

Full text
Abstract:
Krivelevich and Patkós conjectured in 2009 that χ(G(n, p)) ∼ χ_=(G(n, p)) ∼ χ*_=(G(n, p)) for C/n < p < 1 − ε, where ε > 0. We prove this conjecture for n^(−1+ε1) < p < 1 − ε2, where ε1, ε2 > 0. We investigate several measures that have been proposed to indicate centrality of nodes in networks, and find examples of networks where they fail to distinguish any of the nodes from one another. We develop a new method to investigate core-periphery structure, which entails identifying densely-connected core nodes and sparsely-connected periphery nodes. Finally, we present an experiment and an analysis of empirical networks, namely functional human brain networks. We found that reconfiguration patterns of dynamic communities can be used to classify nodes into a stiff core, a flexible periphery, and a bulk. The separation between the stiff core and the flexible periphery changes as a person learns a simple motor skill and, importantly, it is a good predictor of how successful the person is at learning the skill. This temporally defined core-periphery organisation corresponds well with the core-periphery structure detected by the method we proposed earlier in the static networks created by averaging over the subjects' dynamic functional brain networks.
APA, Harvard, Vancouver, ISO, and other styles
31

Roux, Antoine. "Etude d’un code correcteur linéaire pour le canal à effacements de paquets et optimisation par comptage de forêts et calcul modulaire." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS337.

Full text
Abstract:
Reliable data transmission over a transmission channel is a recurring problem in computer science. Whatever transmission channel is used, the transmitted information inevitably deteriorates, or is even lost outright. Several solutions have been brought to bear on this problem, notably the use of error-correcting codes. In this thesis we study an error-correcting code developed in 2014 and 2015 for the Thales company during the second year of my apprenticeship Master's. It is a code currently used by Thales to make reliable a UDP transmission passing through a network device, the Elips-SD. The Elips-SD is a network diode placed on an optical fiber which physically guarantees that the transmission is unidirectional. The main use case of this diode is to allow monitoring the production of a sensitive site, or supervising its operation, while guaranteeing that site protection against outside intrusions. Conversely, another use case is the transmission of data from one or more unsecured sites to a secured site, from which one wants to be sure that no information can later leak. The error-correcting code we study is a linear code for the packet erasure channel, which received NATO certification from the Direction Générale des Armées. We named it "Fauxtraut", an anagram of "Fast algorithm using Xor to repair altered unidirectional transmissions". In order to study this code, present how it works and its performance, and describe the various modifications made during this thesis, we first establish a state of the art of error-correcting codes, concentrating mainly on non-MDS linear codes such as LDPC codes. 
We then present how Fauxtraut works and analyse its behavior (complexity, memory consumption, performance) theoretically and through simulations. Finally, we present different versions of this code developed during this thesis, which lead to other use cases, such as transmitting information over a unidirectional channel with errors or over a bidirectional channel, in the manner of the H-ARQ protocol. In this part we study in particular the behavior of our code via graph theory: computing the probability of decoding correctly or not amounts to knowing the probability that cycles appear in subgraphs of particular graphs, the rook's graphs and the complete bipartite graphs. The problem is simple to state but turns out to be difficult, and we hope it will interest researchers in the field. We present a method for computing this probability exactly for small graphs (which leads to a number of closed-form formulas), and a function tending asymptotically to this probability for larger graphs. We also study how to parameterize our code automatically by modular arithmetic and combinatorics, using Landau's function, which returns a set of integers whose sum is fixed and whose least common multiple is maximal. In a final part, we present work carried out during this thesis which led to a publication in the journal Theoretical Computer Science. It concerns a non-polynomial problem in graph theory: maximum matching in temporal graphs. The article proposes two polynomial-time algorithms: a 2-approximation algorithm and a kernelization algorithm for this problem. 
The 2-approximation algorithm can notably be used incrementally: the edges of the link stream arrive one after another, and the 2-approximation is built as they arrive.
Reliably transmitting information over a transmission channel is a recurrent problem in computer science. Whatever channel is used to transmit information, we inevitably observe erasure of this information, or outright loss. Different solutions can be used to address this problem; forward error correction codes are one of them. In this thesis, we study a corrector code developed in 2014 and 2015 for the Thales company during my second year of apprenticeship Master's. It is currently used to ensure the reliability of a transmission based on the UDP protocol passing through a network diode, the Elips-SD. The Elips-SD is an optical diode that can be plugged onto an optical fiber to physically ensure that the transmission is unidirectional. The main use case of such a diode is to enable supervising a critical site while ensuring that no information can be transmitted to that site. Conversely, another use case is transmission from one or multiple unsecured emitters to one secured receiver, who wants to ensure that no information can be stolen. The corrector code we present is a linear code for the packet erasure channel that obtained NATO certification from the DGA ("Direction Générale des Armées"). We named it Fauxtraut, for "Fast algorithm using Xor to repair altered unidirectional transmissions". In order to study this code, present how it works, its performance, and the modifications added during this thesis, we first establish a state of the art of forward error correction, focusing on non-MDS linear codes such as LDPC codes. We then present Fauxtraut's behavior and analyse it theoretically and with simulations. 
Finally, we present different versions of this code developed during this thesis, leading to other use cases, such as reliably transmitting information that can be altered instead of erased, or transmitting over a bidirectional channel in the manner of the H-ARQ protocol, along with different results on the number of cycles in particular graphs. In the last part, we present results obtained during this thesis that led to an article in Theoretical Computer Science. It concerns a non-polynomial problem in graph theory: maximum matching in temporal graphs. In this article, we propose two polynomial-time algorithms: a 2-approximation algorithm and a kernelization algorithm for this problem.
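The automatic parameterization described above relies on Landau's function, which for a fixed sum n maximizes the least common multiple of a set of positive integers summing to n. As an illustrative sketch (a brute-force search over partitions for small n, not the thesis's implementation):

```python
from math import lcm

def landau(n):
    """Landau's function g(n): the largest lcm of any multiset of
    positive integers summing to n (brute force, small n only)."""
    best = 1
    def parts(remaining, smallest, current_lcm):
        nonlocal best
        best = max(best, current_lcm)
        for k in range(smallest, remaining + 1):
            parts(remaining - k, k, lcm(current_lcm, k))
    parts(n, 2, 1)   # parts of size 1 never increase the lcm
    return best

# g(1..7) = 1, 2, 3, 4, 6, 6, 12; e.g. g(7) = 12 via the partition 3 + 4
print([landau(n) for n in range(1, 8)])
```

The recursion enumerates partitions with non-decreasing parts of size at least 2, so each partition is visited once; this is exponential in n and only meant to illustrate the definition.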
APA, Harvard, Vancouver, ISO, and other styles
32

Leocadio, Marcelo Augusto. "Código MDS com a métrica POSET." Universidade Federal de Viçosa, 2013. http://locus.ufv.br/handle/123456789/4927.

Full text
Abstract:
Fundação de Amparo a Pesquisa do Estado de Minas Gerais
A poset metric is a generalization of the Hamming metric. In this work we make a detailed study of poset spaces, the hierarchy of I_P-weights, and the I_P-weight distribution, emphasizing non-degenerate poset codes. We verify the duality relation between the weight hierarchies of a poset code and its dual. Two new parameters are then defined for the class of non-degenerate poset codes whose duals are also non-degenerate. As a result, we state the Minimality Theorem, the Variance Theorem, and the Minimality Identity in poset spaces.
The poset metric is a generalization of the Hamming metric. We make a detailed study of poset spaces, the hierarchy of I-weights, and the I-weight distribution, emphasizing non-degenerate poset codes. We verify the poset duality relation between the weight hierarchies of a code and its dual. We define two new parameters for the class of dually non-degenerate codes in the poset setting. As a consequence, we state and prove the Minimality Theorem, the Variance Theorem, and the Minimality Identity in the poset space.
APA, Harvard, Vancouver, ISO, and other styles
33

Pakovitch, Fedor. "Combinatoire des arbres planaires et arithmétiques des courbes hyperelliptiques." Université Joseph Fourier (Grenoble ; 1971-2015), 1997. http://www.theses.fr/1997GRE10073.

Full text
Abstract:
The main goal of this thesis is to propose a new method for studying, within the framework of A. Grothendieck's theory of dessins d'enfants, certain questions concerning the action of the absolute Galois group on the set of planar trees. We define a map which associates to each planar tree with n edges a hyperelliptic curve with an n-division point. This construction establishes a link between the theory of torsion of hyperelliptic curves and that of dessins d'enfants. In particular, using the corresponding results on the torsion of elliptic curves, we obtain lower bounds on the degrees of the fields of moduli of trees in certain classes. On the other hand, the construction above yields an interesting family of examples of rational torsion divisors on hyperelliptic curves defined over number fields. The first three chapters of the thesis are devoted to the presentation of these questions. The fourth chapter concerns geometric function theory and is motivated by a uniqueness problem posed in 1976 by C. C. Yang: is it true that a complex polynomial of degree n is determined, up to symmetry, by the inverse image of two points? We prove that the answer to this question is affirmative and give some generalizations.
APA, Harvard, Vancouver, ISO, and other styles
34

Berg, Christopher James. "Combinatorics of (l,0)-JM partitions, l-cores, the ladder crystal and the finite Hecke algebra." Diss., 2009. http://proquest.umi.com/pqdweb?did=1866259771&sid=3&Fmt=2&clientId=48051&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Deugau, Christopher Jordan. "Algorithms and combinatorics of maximal compact codes." Thesis, 2006. http://hdl.handle.net/1828/2101.

Full text
Abstract:
The implementation of two different algorithms for generating compact codes of some size N is presented, together with an analysis of both algorithms, in an attempt to prove whether or not the algorithms run in constant amortized time. Meta-Fibonacci sequences are also investigated in this thesis. Using a particular numbering on k-ary trees, we find that a family of meta-Fibonacci sequences counts the number of nodes at the bottom level of these k-ary trees. These meta-Fibonacci sequences are also related to compact codes. Finally, generating functions are derived for the meta-Fibonacci sequences discussed.
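A compact code, as studied above, is a complete prefix-free code; completeness of a proposed list of codeword lengths can be tested via the Kraft equality. A minimal illustrative sketch (not code from the thesis):

```python
from fractions import Fraction

def is_compact(lengths, arity=2):
    """A prefix code with these codeword lengths is compact (complete)
    exactly when the Kraft sum equals 1 (exact rational arithmetic)."""
    return sum(Fraction(1, arity ** l) for l in lengths) == 1

print(is_compact([1, 2, 2]))   # complete binary code, e.g. {0, 10, 11}
print(is_compact([1, 2, 3]))   # Kraft sum 7/8 < 1: not compact
```

Using `Fraction` keeps the Kraft sum exact, so the equality test is reliable even for long codeword lists.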
APA, Harvard, Vancouver, ISO, and other styles
36

"Listing Combinatorial Objects." Doctoral diss., 2012. http://hdl.handle.net/2286/R.I.15797.

Full text
Abstract:
Gray codes are perhaps the best known structures for listing sequences of combinatorial objects, such as binary strings. Simply defined as a minimal change listing, Gray codes vary greatly both in structure and in the types of objects that they list. More specific types of Gray codes are universal cycles and overlap sequences. Universal cycles are Gray codes on a set of strings of length n in which the first n-1 letters of one object are the same as the last n-1 letters of its predecessor in the listing. Overlap sequences allow this overlap to vary between 1 and n-1. Some of our main contributions to the areas of Gray codes and universal cycles include a new Gray code algorithm for fixed weight m-ary words, and results on the existence of universal cycles for weak orders on [n]. Overlap cycles are a relatively new structure with very few published results. We prove the existence of s-overlap cycles for k-permutations of [n], which has been an open research problem for several years, and construct 1-overlap cycles for Steiner triple and quadruple systems of every order. Also included are various other results of a similar nature covering other structures such as binary strings, m-ary strings, subsets, permutations, weak orders, partitions, and designs. These listing structures lend themselves readily to some classes of combinatorial objects, such as binary n-tuples and m-ary n-tuples. Others require more work to find an appropriate structure, such as k-subsets of an n-set, weak orders, and designs. Still more require a modification in the representation of the objects to fit these structures, such as partitions. Determining when and how we can fit these sets of objects into our three listing structures is the focus of this dissertation.
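As a concrete instance of a minimal-change listing, the classical binary reflected Gray code (a standard construction, not one of the dissertation's new structures) can be generated directly:

```python
def gray_code(n):
    """List all n-bit strings so that consecutive strings differ in one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

codes = gray_code(3)
print([format(c, "03b") for c in codes])
# every adjacent pair of codewords differs in exactly one bit position
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```

The listing is also cyclic: the last codeword differs from the first in a single bit, which is the property universal cycles and overlap cycles generalize.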
Dissertation/Thesis
Ph.D. Mathematics 2012
APA, Harvard, Vancouver, ISO, and other styles
37

林坤熒. "A research of codes structure from combinatorial designs." Thesis, 1987. http://ndltd.ncl.edu.tw/handle/28505123694497147101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Dang, Rajdeep Singh. "Experimental Studies On A New Class Of Combinatorial LDPC Codes." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/523.

Full text
Abstract:
We implement a package for the construction of a new class of Low Density Parity Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the Additive White Gaussian Noise (AWGN) channel and the Binary Erasure Channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the Tanner graph varies by at most 1 from the average left degree, and similarly for the right degree. The simulations for rate-half codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tried to track both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm, where a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on. The average number of iterations to achieve this improved performance is practically the same as that for the traditional sum-product algorithm.
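For intuition on erasure decoding of sparse-graph codes like these, a generic peeling decoder for the binary erasure channel (a textbook technique, distinct from the thesis's modified sum-product algorithm) repeatedly resolves any parity check with exactly one erased bit:

```python
def peel(checks, bits):
    """Iteratively recover erased bits (None) using parity checks.
    checks: list of index lists; each check's bits must XOR to 0."""
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if bits[i] is None]
            if len(erased) == 1:
                # the single unknown bit is the XOR of the known ones
                known = [bits[i] for i in check if bits[i] is not None]
                bits[erased[0]] = sum(known) % 2
                progress = True
    return bits

# hypothetical small parity-check set; bits 1 and 4 of a valid codeword erased
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
print(peel(checks, [1, None, 0, 1, None, 0, 0]))
```

Decoding succeeds whenever the erased positions do not contain a stopping set, which is why high girth helps performance on the BEC.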
APA, Harvard, Vancouver, ISO, and other styles
39

Dang, Rajdeep Singh. "Experimental Studies On A New Class Of Combinatorial LDPC Codes." Thesis, 2007. http://hdl.handle.net/2005/523.

Full text
Abstract:
We implement a package for the construction of a new class of Low Density Parity Check (LDPC) codes based on a new random high-girth graph construction technique, and study the performance of the codes so constructed on both the Additive White Gaussian Noise (AWGN) channel and the Binary Erasure Channel (BEC). Our codes are "near regular", meaning that the left degree of any node in the Tanner graph varies by at most 1 from the average left degree, and similarly for the right degree. The simulations for rate-half codes indicate that the codes perform better than both the regular Progressive Edge Growth (PEG) codes, which are constructed using a similar random technique, and the MacKay random codes. For high rates the ARG (Almost Regular high Girth) codes perform better than the PEG codes at low to medium SNRs, but the PEG codes seem to do better at high SNRs. We have tried to track both near-codewords and small-weight codewords for these codes to examine the performance at high rates. For the binary erasure channel the performance of the ARG codes is better than that of the PEG codes. We have also proposed a modification of the sum-product decoding algorithm, where a quantity called the "node credibility" is used to appropriately process messages to check nodes. This technique substantially reduces the error rates at signal-to-noise ratios of 2.5 dB and beyond for the codes experimented on. The average number of iterations to achieve this improved performance is practically the same as that for the traditional sum-product algorithm.
APA, Harvard, Vancouver, ISO, and other styles
40

Ozols, Maris. "Quantum Random Access Codes with Shared Randomness." Thesis, 2009. http://hdl.handle.net/10012/4458.

Full text
Abstract:
We consider a communication method where the sender encodes n classical bits into 1 qubit and sends it to the receiver, who performs a certain measurement depending on which of the initial bits must be recovered. This procedure is called an (n,1,p) quantum random access code (QRAC), where p > 1/2 is its success probability. It is known that (2,1,0.85) and (3,1,0.79) QRACs (with no classical counterparts) exist and that a (4,1,p) QRAC with p > 1/2 is not possible. We extend this model with shared randomness (SR) that is accessible to both parties. Then an (n,1,p) QRAC with SR and p > 1/2 exists for any n > 0. We give an upper bound on its success probability (the known (2,1,0.85) and (3,1,0.79) QRACs match this upper bound). We discuss some particular constructions for several small values of n. We also study the classical counterpart of this model, where n bits are encoded into 1 bit instead of 1 qubit and SR is used. We give an optimal construction for such codes and find their success probability exactly; it is less than in the quantum case. Interactive 3D quantum random access codes are available on-line at http://home.lanet.lv/~sd20008/racs
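The success probabilities quoted for the known quantum codes follow a simple closed form: for n = 2 and n = 3, the optimal QRACs achieve p = 1/2 + 1/(2*sqrt(n)), coming from measurements along Bloch-sphere axes. A quick numerical check of the quoted figures (a sketch; the formula is stated here only for these two known codes):

```python
from math import sqrt

def qrac_success(n):
    """Success probability of the known optimal (n,1,p) QRACs, n = 2 or 3."""
    return 0.5 + 1 / (2 * sqrt(n))

print(round(qrac_success(2), 4))  # ~0.8536, the (2,1,0.85) code
print(round(qrac_success(3), 4))  # ~0.7887, the (3,1,0.79) code
```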
APA, Harvard, Vancouver, ISO, and other styles
41

Howard, Leah. "Nets of order 4m+2: linear dependence and dimensions of codes." Thesis, 2009. http://hdl.handle.net/1828/1566.

Full text
Abstract:
A k-net of order n is an incidence structure consisting of n² points and nk lines. Two lines are said to be parallel if they do not intersect. A k-net of order n satisfies the following four axioms: (i) every line contains n points; (ii) parallelism is an equivalence relation on the set of lines; (iii) there are k parallel classes, each consisting of n lines; and (iv) any two non-parallel lines meet exactly once. A Latin square of order n is an n by n array of symbols in which each row and column contains each symbol exactly once. Two Latin squares L and M are said to be orthogonal if the n² ordered pairs (L(i,j), M(i,j)) are all distinct. A set of t mutually orthogonal Latin squares is a collection of Latin squares, necessarily of the same order, that are pairwise orthogonal. A k-net of order n is combinatorially equivalent to k − 2 mutually orthogonal Latin squares of order n. It is this equivalence that motivates much of the work in this thesis. One of the most important open questions in the study of Latin squares is: given an order n, what is the maximum number of mutually orthogonal Latin squares of that order? This is a particularly interesting question when n is congruent to two modulo four. A code is constructed from a net by defining the characteristic vectors of lines to be generators of the code over the finite field F2. Codes allow the structure of nets to be profitably explored using techniques from linear algebra. In this dissertation a framework is developed to study linear dependence in the code of the net N6 of order ten. A complete classification and combinatorial description of such dependencies is given. This classification could facilitate a computer search for a net, or could be used in conjunction with more refined techniques to rule out the existence of these nets combinatorially. In more generality, relations in 4-nets of order congruent to two modulo four are also characterized. 
One type of dependency determined algebraically is shown not to be combinatorially feasible in a net N6 of order ten. Some dependencies are shown to be related geometrically, allowing for a concise classification. Using a modification of the dimension argument first introduced by Dougherty [19] new upper bounds are established on the dimension of codes of nets of order congruent to two modulo four. New lower bounds on some of these dimensions are found using a combinatorial argument. Certain constraints on the dimension of a code of a net are shown to imply the existence of specific combinatorial structures in the net. The problem of packing points into lines in a prescribed way is related to packing problems in graphs and more general packing problems in combinatorics. This dissertation exploits the geometry of nets and symmetry of complete multipartite graphs and combinatorial designs to further unify these concepts in the context of the problems studied here.
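The orthogonality condition defined in the abstract is easy to verify computationally; a sketch using a standard pair of order-3 Latin squares (an illustrative example, not a construction from the dissertation):

```python
def is_orthogonal(L, M):
    """Two Latin squares are orthogonal iff the n^2 ordered pairs
    (L[i][j], M[i][j]) are all distinct."""
    n = len(L)
    pairs = {(L[i][j], M[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

n = 3
L = [[(i + j) % n for j in range(n)] for i in range(n)]
M = [[(i + 2 * j) % n for j in range(n)] for i in range(n)]
print(is_orthogonal(L, M))  # True: a pair of MOLS of order 3
print(is_orthogonal(L, L))  # False: only n distinct pairs (a, a) occur
```

For odd prime orders, the squares L(i,j) = i + aj (mod n) for distinct nonzero a are pairwise orthogonal, giving n − 1 MOLS; no such algebraic family exists for the orders congruent to two modulo four studied in the thesis.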
APA, Harvard, Vancouver, ISO, and other styles
42

Williams, Aaron Michael. "Shift gray codes." Thesis, 2009. http://hdl.handle.net/1828/1966.

Full text
Abstract:
Combinatorial objects can be represented by strings, such as 21534 for the permutation (1 2) (3 5 4), or 110100 for the binary tree corresponding to the balanced parentheses (()()). Given a string s = s1 s2 ... sn, the right-shift operation shift(s, i, j) replaces the substring si si+1 .. sj by si+1 .. sj si. In other words, si is right-shifted into position j by applying the permutation (j j−1 .. i) to the indices of s. Right-shifts include prefix-shifts (i = 1) and adjacent-transpositions (j = i+1). A fixed-content language is a set of strings that contain the same multiset of symbols. Given a fixed-content language, a shift Gray code is a list of its strings where consecutive strings differ by a shift. This thesis asks if shift Gray codes exist for a variety of combinatorial objects. This abstract question leads to a number of practical answers. The first prefix-shift Gray code for multiset permutations is discovered, and it provides the first algorithm for generating multiset permutations in O(1)-time while using O(1) additional variables. Applications of these results include more efficient exhaustive solutions to stacker-crane problems, which are natural NP-complete traveling salesman variants. This thesis also produces the fastest algorithm for generating balanced parentheses in an array, and the first minimal-change order for fixed-content necklaces and Lyndon words. These results are consequences of the following theorem: Every bubble language has a right-shift Gray code. Bubble languages are fixed-content languages that are closed under certain adjacent-transpositions. These languages generalize classic combinatorial objects: k-ary trees, ordered trees with fixed branching sequences, unit interval graphs, restricted Schröder and Motzkin paths, linear-extensions of B-posets, and their unions, intersections, and quotients. Each Gray code is circular and is obtained from a new variation of lexicographic order known as cool-lex order. 
Gray codes using only shift(s, 1, n) and shift(s, 1, n−1) are also found for multiset permutations. A universal cycle that omits the last (redundant) symbol from each permutation is obtained by recording the first symbol of each permutation in this Gray code. As a special case, these shorthand universal cycles provide a new fixed-density analogue to de Bruijn cycles, and the first universal cycle for the "middle levels" (binary strings of length 2k + 1 with sum k or k + 1).
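The right-shift operation defined at the start of the abstract can be stated directly in code; a literal sketch (1-indexed, as in the abstract):

```python
def shift(s, i, j):
    """Right-shift s[i] into position j (1-indexed, as in the abstract):
    the substring s_i s_{i+1} .. s_j becomes s_{i+1} .. s_j s_i."""
    i, j = i - 1, j - 1                            # convert to 0-indexed
    return s[:i] + s[i + 1:j + 1] + s[i] + s[j + 1:]

print(shift("21534", 1, 3))  # prefix-shift: "15234"
print(shift("21534", 2, 3))  # adjacent-transposition: "25134"
```

A shift Gray code is then a listing of a fixed-content language in which each string is obtained from its predecessor by a single such call.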
APA, Harvard, Vancouver, ISO, and other styles
43

Hall, Joanne. "Graphical associations and codes with small covering radius." Master's thesis, 2007. http://hdl.handle.net/1885/151419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

"Existence and Construction of Difference Families and Their Applications to Combinatorial Codes in Multiple-Access Communications." Thesis, 2009. http://hdl.handle.net/2237/12277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

籾原, 幸二, and Koji Momihara. "Existence and Construction of Difference Families and Their Applications to Combinatorial Codes in Multiple-Access Communications." Thesis, 2009. http://hdl.handle.net/2237/12277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ouyang, Yingkai. "Transmitting Quantum Information Reliably across Various Quantum Channels." Thesis, 2013. http://hdl.handle.net/10012/7507.

Full text
Abstract:
Transmitting quantum information across quantum channels is an important task. However quantum information is delicate, and is easily corrupted. We address the task of protecting quantum information from an information theoretic perspective -- we encode some message qudits into a quantum code, send the encoded quantum information across the noisy quantum channel, then recover the message qudits by decoding. In this dissertation, we discuss the coding problem from several perspectives. The noisy quantum channel is one of the central aspects of the quantum coding problem, and hence quantifying the noisy quantum channel from the physical model is an important problem. We work with an explicit physical model -- a pair of initially decoupled quantum harmonic oscillators interacting with a spring-like coupling, where the bath oscillator is initially in a thermal-like state. In particular, we treat the completely positive and trace preserving map on the system as a quantum channel, and study the truncation of the channel by truncating its Kraus set. We thereby derive the matrix elements of the Choi-Jamiolkowski operator of the corresponding truncated channel, which are truncated transition amplitudes. Finally, we give a computable approximation for these truncated transition amplitudes with explicit error bounds, and perform a case study of the oscillators in the off-resonant and weakly-coupled regime numerically. In the context of truncated noisy channels, we revisit the notion of approximate error correction of finite dimension codes. We derive a computationally simple lower bound on the worst case entanglement fidelity of a quantum code, when the truncated recovery map of Leung et al. is rescaled. As an application, we apply our bound to construct a family of multi-error correcting amplitude damping codes that are permutation-invariant. 
This demonstrates an explicit example where the specific structure of the noisy channel allows code design out of the stabilizer formalism via purely algebraic means. We study lower bounds on the quantum capacity of adversarial channels, where we restrict the selection of quantum codes to the set of concatenated quantum codes. The adversarial channel is a quantum channel where an adversary corrupts a fixed fraction of qudits sent across a quantum channel in the most malicious way possible. The best known rates of communicating over adversarial channels are given by the quantum Gilbert-Varshamov (GV) bound, that is known to be attainable with random quantum codes. We generalize the classical result of Thommesen to the quantum case, thereby demonstrating the existence of concatenated quantum codes that can asymptotically attain the quantum GV bound. The outer codes are quantum generalized Reed-Solomon codes, and the inner codes are random independently chosen stabilizer codes, where the rates of the inner and outer codes lie in a specified feasible region. We next study upper bounds on the quantum capacity of some low dimension quantum channels. The quantum capacity of a quantum channel is the maximum rate at which quantum information can be transmitted reliably across it, given arbitrarily many uses of it. While it is known that random quantum codes can be used to attain the quantum capacity, the quantum capacity of many classes of channels is undetermined, even for channels of low input and output dimension. For example, depolarizing channels are important quantum channels, but do not have tight numerical bounds. We obtain upper bounds on the quantum capacity of some unital and non-unital channels -- two-qubit Pauli channels, two-qubit depolarizing channels, two-qubit locally symmetric channels, shifted qubit depolarizing channels, and shifted two-qubit Pauli channels -- using the coherent information of some degradable channels. 
We use the notion of twirling quantum channels, and Smith and Smolin's method of constructing degradable extensions of quantum channels extensively. The degradable channels we introduce, study and use are two-qubit amplitude damping channels. Exploiting the notion of covariant quantum channels, we give sufficient conditions for the quantum capacity of a degradable channel to be the optimal value of a concave program with linear constraints, and show that our two-qubit degradable amplitude damping channels have this property.
APA, Harvard, Vancouver, ISO, and other styles
47

Rebenich, Niko. "Counting prime polynomials and measuring complexity and similarity of information." Thesis, 2016. http://hdl.handle.net/1828/7251.

Full text
Abstract:
This dissertation explores an analogue of the prime number theorem for polynomials over finite fields as well as its connection to the necklace factorization algorithm T-transform and the string complexity measure T-complexity. Specifically, a precise asymptotic expansion for the prime polynomial counting function is derived. The approximation given is more accurate than previous results in the literature while requiring very little computational effort. In this context asymptotic series expansions for Lerch transcendent, Eulerian polynomials, truncated polylogarithm, and polylogarithms of negative integer order are also provided. The expansion formulas developed are general and have applications in numerous areas other than the enumeration of prime polynomials. A bijection between the equivalence classes of aperiodic necklaces and monic prime polynomials is utilized to derive an asymptotic bound on the maximal T-complexity value of a string. Furthermore, the statistical behaviour of uniform random sequences that are factored via the T-transform are investigated, and an accurate probabilistic model for short necklace factors is presented. Finally, a T-complexity based conditional string complexity measure is proposed and used to define the normalized T-complexity distance that measures similarity between strings. The T-complexity distance is proven to not be a metric. However, the measure can be computed in linear time and space making it a suitable choice for large data sets.
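The prime polynomial counting function refined in this dissertation has a classical exact form due to Gauss: the number of monic irreducible (prime) polynomials of degree n over F_q is (1/n) * Σ_{d|n} μ(d) * q^(n/d), which also counts the equivalence classes of aperiodic necklaces mentioned in the abstract. A sketch of the exact count (standard formula, not the dissertation's asymptotic expansion):

```python
def mobius(n):
    """Möbius function via trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor => mu = 0
            result = -result
        d += 1
    return -result if n > 1 else result

def prime_poly_count(q, n):
    """Monic irreducible polynomials of degree n over F_q (Gauss's formula)."""
    return sum(mobius(d) * q ** (n // d) for d in range(1, n + 1) if n % d == 0) // n

print([prime_poly_count(2, n) for n in range(1, 6)])  # [2, 1, 2, 3, 6]
```

Dividing by q^n shows the density of prime polynomials is roughly 1/n, the finite-field analogue of the prime number theorem that the dissertation's expansion sharpens.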
Graduate
APA, Harvard, Vancouver, ISO, and other styles