Dissertations / Theses on the topic 'Code distance'


Consult the top 50 dissertations / theses for your research on the topic 'Code distance.'


1

Ketkar, Avanti Ulhas. "Code constructions and code families for nonbinary quantum stabilizer code." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2743.

Full text
Abstract:
Stabilizer codes form a special class of quantum error-correcting codes. This thesis studies nonbinary quantum stabilizer codes. A great deal of work has been done on binary quantum stabilizer codes, whereas nonbinary stabilizer codes have received much less attention. Several results on binary stabilizer codes, such as code families and general code constructions, are generalized here to the nonbinary case. The lower bound on the minimum distance of a code is simply the minimum distance of the best currently known code. The focus of this research is to improve these lower bounds on the minimum distance. To achieve this goal, various existing quantum codes with good minimum distance are studied. Some new families of nonbinary stabilizer codes, such as quantum BCH codes, are constructed. Different ways of constructing new codes from existing ones are also found. Together, these constructions improve the lower bounds.
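For reference (not stated in the abstract itself), the benchmark against which such lower bounds are measured is the quantum Singleton bound: any [[n, k, d]]_q stabilizer code must satisfy

\[ k \le n - 2d + 2, \]

so, for example, a code with n = 10 and k = 4 can have minimum distance at most d = 4.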
APA, Harvard, Vancouver, ISO, and other styles
2

Miller, John. "High code rate, low-density parity-check codes with guaranteed minimum distance and stopping weight /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2003. http://wwwlib.umi.com/cr/ucsd/fullcit?p3090443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Filho, Nelson Whitaker. "Aircraft Distance Measurement System." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/611674.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California
The Aircraft Distance Measurement System (ADMS) could be used in flight test applications to determine aircraft position and speed during takeoff, landing and acceleration-stop performance tests within runway limits, using a microwave link.
APA, Harvard, Vancouver, ISO, and other styles
4

Nordström, Markus. "Automatic Source Code Classification : Classifying Source Code for a Case-Based Reasoning System." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-25519.

Full text
Abstract:
This work has investigated the possibility of classifying Java source code into cases for a case-based reasoning system. Case-based reasoning is a problem-solving method in artificial intelligence that uses knowledge of previously solved problems to solve new ones. A case in case-based reasoning consists of two parts: the problem part and the solution part. The problem part describes a problem that needs to be solved, and the solution part describes how this problem was solved. In this work, the problem is described as a Java source file using words that describe the content of the source file, and the solution is a classification of the source file along with the source code. To classify Java source code, a classification system was developed. It consists of four analyzers: a type filter, a documentation analyzer, a syntactic analyzer and a semantic analyzer. The type filter determines whether a Java source file contains a class or an interface. The documentation analyzer determines the level of documentation in a source file to assess the usefulness of the file. The syntactic analyzer extracts statistics from the source code to be used for similarity, and the semantic analyzer extracts semantics from the source code. The finished classification system is formed as a kd-tree, where the leaf nodes contain the classified source files, i.e. the cases. Furthermore, a vocabulary was developed to contain the domain knowledge about the Java language. The resulting kd-tree was found to be imbalanced when tested, as the majority of source files analyzed were placed in the left-most leaf nodes. The conclusion from this was that using documentation as part of the classification made the tree imbalanced, so another approach has to be found. This is because source code is not documented to such an extent that it would be useful for this purpose.
APA, Harvard, Vancouver, ISO, and other styles
5

Rivas, Angel Esteban Labrador. "Coordination of distance and overcurrent relays using a mathematical optimization technique." Universidade Estadual de Londrina. Centro de Tecnologia e Urbanismo. Programa de Pós-Graduação em Engenharia Elétrica, 2018. http://www.bibliotecadigital.uel.br/document/?code=vtls000218372.

Full text
Abstract:
Protection of power transmission has an important role in power systems. To improve protection it is common to combine different types of relays; the combination of overcurrent and distance relays is a well-known protection scheme. The slow operating speed of overcurrent relays forces the application of distance relays as the main protection device, while overcurrent relays are used as backup protection to the main distance protection system. To achieve this aim, coordination between primary and backup protection systems should be performed by developing an objective function with both parameters. Speed, selectivity and stability are constraints which must be satisfied when performing coordination. The coordination of directional overcurrent relays (DOCRs) is a nonlinear programming (NLP) problem, usually solved with a linear programming (LP) technique that considers only the time dial setting (TDS) as a decision variable, either ignoring the nonlinear plug setting (PS) component or solving it with a heuristic technique. A metaheuristic method presented to solve the optimization problem is the ant colony optimization (ACO) algorithm. The ACO used here is an extension of the ACO algorithm for continuous-domain optimization problems to mixed-variable optimization problems, condensed into two types of variables, continuous and categorical. In this work both TDS and PS are decision variables: TDS is treated as continuous and PS as categorical. Normally the initial solution is randomly generated; in addition, the results are compared by using the same random PS values to solve a relaxation of the DOCR problem with LP and obtain new TDS values. Including distance relays in the formulation adds an additional continuous-type variable, but with linear (nearly constant) characteristics, so the DOCR formulation of this NLP problem is unchanged. With this methodology, five transmission systems (3, 6, 8, 9 and 15 buses) were evaluated to compare classical DOCR coordination, the introduction of distance relays, and the model's response to high-quality initial solutions within a hybrid method using LP.
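The abstract does not state which relay characteristic is assumed; in DOCR coordination studies a common choice is the IEC standard-inverse curve, in which the two decision variables enter as

\[ t_{op} = TDS \cdot \frac{0.14}{\left( I_f / I_{pickup} \right)^{0.02} - 1}, \qquad I_{pickup} = PS \cdot I_{nominal}, \]

where I_f is the fault current seen by the relay; the coordination constraints then require each backup relay to operate at least a fixed coordination time interval (often 0.2-0.3 s) after its primary relay.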
APA, Harvard, Vancouver, ISO, and other styles
6

Toste, Marisa Lapa. "Distance properties of convolutional codes over Z pr." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17953.

Full text
Abstract:
Doctoral programme in Mathematics and Applications.
Keywords: convolutional codes, finite rings, free distance, column distance, MDS, MDP, dual code.
In this thesis we consider convolutional codes over the polynomial ring Z_{p^r}[D], where p is a prime and r is a positive integer. In particular, we focus on the set of finite-support codewords and study their distance properties. We investigate the two most important distance properties of convolutional codes, namely the free distance and the column distance. First we address and fully solve the problem of determining the maximum possible free distance a convolutional code over Z_{p^r}[D] can achieve, for a given set of parameters. Indeed, we derive a new upper bound on this distance, generalizing the Singleton-type bounds derived in the context of convolutional codes over finite fields. Moreover, we show that such a bound is optimal in the sense that it cannot be improved. To do so we provide concrete constructions of convolutional codes (not necessarily free) that achieve this bound for any given set of parameters. In accordance with the literature we call such codes Maximum Distance Separable (MDS). We also define the notion of column distance of a convolutional code, obtain upper bounds on the column distances, and call Maximum Distance Profile (MDP) the codes that attain the maximum possible column distances. Furthermore, we show the existence of MDP codes. We note, however, that the MDP codes presented here are not completely general, as their parameters need to satisfy certain conditions. Finally, we study the dual code of a convolutional code defined in Z_{p^r}((D)). Dual codes of convolutional codes over finite fields have been thoroughly investigated, as is reflected in the large body of literature on this topic. They are relevant as they provide valuable information on the weight distribution of the code and therefore fit in the scope of this thesis. Another important reason for the study of dual codes is that they can be very useful for the development of decoding algorithms of convolutional codes over the erasure channel. In this thesis some fundamental properties of dual codes are analyzed. In particular, we show that convolutional codes defined in Z_{p^r}((D)) admit a parity-check matrix. Moreover, we provide a constructive method to explicitly compute an encoder of the dual code.
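For orientation, the Singleton-type bound that the thesis generalizes to the ring case is the one for an (n, k, δ) convolutional code over a finite field, whose free distance satisfies

\[ d_{free} \le (n-k)\left( \left\lfloor \delta/k \right\rfloor + 1 \right) + \delta + 1; \]

field codes attaining this bound with equality are the classical MDS convolutional codes (the corresponding bound over Z_{p^r} is derived in the thesis itself and is not reproduced here).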
APA, Harvard, Vancouver, ISO, and other styles
7

Papadimitriou, Panayiotis D. "Code design based on metric-spectrum and applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1365.

Full text
Abstract:
We introduced nested search methods to design (n, k) block codes for arbitrary channels by optimizing an appropriate metric spectrum in each iteration. For a given k, the methods start with a good high-rate code, say k/(k + 1), and successively design lower-rate codes down to rate k/2^k, corresponding to a Hadamard code. Using a full search for small binary codes we found that optimal or near-optimal codes of increasing length can be obtained in a nested manner by utilizing Hadamard matrix columns. The codes can be linear if the Hadamard matrix is linear and non-linear otherwise. The design methodology was extended to generic complex codes by utilizing columns of newly derived or existing unitary codes. The inherent nested nature of the codes makes them ideal for progressive transmission. Extensive comparisons to metric bounds and to previously designed codes show the optimality or near-optimality of the new codes, designed for the fading and the additive white Gaussian noise (AWGN) channels. It was also shown that linear codes can be optimal or at least meet the metric bounds; one example is the systematic pilot-based code of rate k/(k + 1), which was proved to meet the lower bound on the maximum cross-correlation. Further, the method was generalized such that good codes for arbitrary channels can be designed given the corresponding metric or the pairwise error probability. In synchronous multiple-access schemes it is common to use unitary block codes to transmit the multiple users’ information, especially in the downlink. In this work we suggest the use of newly designed non-unitary block codes, resulting in increased throughput efficiency, while the performance is shown not to be substantially sacrificed. The non-unitary codes are again developed through suitable nested searches. In addition, new multiple-access codes are introduced that optimize certain criteria, such as the sum-rate capacity. Finally, the introduction of the asymptotically optimum convolutional codes for a given constraint length dramatically reduces the search size for good convolutional codes of a certain asymptotic performance, and the consequences for coded code-division multiple access (CDMA) system design are highlighted.
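As a hedged illustration of the Hadamard-matrix building block mentioned above (the thesis's nested search procedure itself is not reproduced), the following sketch generates a Sylvester-type Hadamard matrix whose columns, mapped from {+1, -1} to {0, 1}, form a binary Hadamard code of length 2^m:

```python
import numpy as np

def sylvester_hadamard(m: int) -> np.ndarray:
    """Return the 2^m x 2^m Sylvester Hadamard matrix with +/-1 entries."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])  # H_{2n} = [[H, H], [H, -H]]
    return H

# Distinct columns of H are orthogonal, so after mapping +1 -> 0, -1 -> 1
# any two distinct codewords are at Hamming distance 2^(m-1).
H = sylvester_hadamard(3)
codewords = ((1 - H) // 2).T  # each row is one binary codeword of length 8
print(codewords)
```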
APA, Harvard, Vancouver, ISO, and other styles
8

Kacan, Denis, and Darius Sidlauskas. "Information Visualization and Machine Learning Applied on Static Code Analysis." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3033.

Full text
Abstract:
Software engineers will possibly never see the perfect source code in their lifetime, but they are seeing much better analysis tools for finding defects in software. The approaches used in static code analysis have evolved from simple code crawling to the use of statistical and probabilistic frameworks. This work presents a new technique that incorporates machine learning and information visualization into static code analysis. The technique learns patterns in a program’s source code using a normalized compression distance and applies them to classify code fragments as faulty or correct. Since the classification is frequently not perfect, the training process plays an essential role. A visualization element is used in the hope that it lets the user better understand the inner state of the classifier, making the learning process transparent. An experimental evaluation is carried out in order to prove the efficacy of an implementation of the technique, the Code Distance Visualizer. The outcome of the evaluation indicates that the proposed technique is reasonably effective in learning to differentiate between faulty and correct code fragments, and the visualization element enables the user to discern when the tool is correct in its output and when it is not, and to take corrective action (further training or retraining) interactively, until the desired level of performance is reached.
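The normalized compression distance used above has a standard definition; the sketch below computes it with zlib (an assumption for illustration only; the abstract does not say which compressor the Code Distance Visualizer actually uses):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for similar inputs, near 1 for unrelated ones."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

buggy   = b"for (int i = 0; i <= n; i++) sum += a[i];"   # off-by-one variant
correct = b"for (int i = 0; i < n; i++) sum += a[i];"
print(ncd(buggy, correct))  # small value: the two fragments compress well together
```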
APA, Harvard, Vancouver, ISO, and other styles
9

Ménéxiadis, Géraldine. "Détection à grande distance et localisation du supersonique "Concorde" à partir de signaux infrasonores." Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2008. http://tel.archives-ouvertes.fr/tel-00487912.

Full text
Abstract:
The subject of this study is the solution of a novel inverse problem: locating a supersonic aircraft from acoustic signals recorded by a single measurement station. The distance from the aircraft to the station is a priori unknown and may range from a few tens to several hundreds of kilometres or more. The signals exploited in this work generally lie in the infrasound range, below 20 Hz or even 10 Hz. Since ONERA had carried out measurement campaigns in Brittany during the first transatlantic commercial flights of the Concorde, the first step was to revisit the data from those campaigns and, on that occasion, to develop an acoustic propagation code based on ray theory. The existing ONERA code SIMOUN was extended to three dimensions so that real meteorological conditions could be taken into account, and it received a number of enhancements, including the computation of frequency-dependent acoustic attenuation and the inclusion of the Earth's curvature, the neglect of which would have led to significant errors at long range. Because sound-level calculations are of limited significance at the distances considered, new methods based on spectral analysis were developed. Combined with a direction-finding technique based in particular on the computation of temporal cross-correlation functions, they allow the supersonic aircraft to be located in bearing and distance. A first method, valid up to roughly 200 kilometres, is based on the divergence, as a function of distance to the aircraft, of the N-shaped pressure wave corresponding to the sonic boom. This produces a modification of the characteristic arched spectrum of that wave, which can be correlated with the propagation distance provided the initially emitted N-wave, related to the speed and geometry of the aircraft, is known. A second, much more general method consists in evaluating the increase of the slope of the N-wave spectrum, given that atmospheric absorption, proportional to the distance travelled, increases with frequency, and that the dissipation of nonlinear effects also tends to steepen the spectral slope. This method appears suitable for distances between roughly 200 and 1000 km and has the advantage of being independent of the characteristics of the sound source. To overcome the limitations of this method, mainly related to the signal-to-noise ratio, the analysis of signals recorded in Sweden 3000 km from the aircraft suggests using, at very long ranges, a method based on the total duration of the signal. This duration indeed increases with distance, in connection with the classical "rumble" phenomenon that turns the impulsive signal emitted by a lightning strike into rolling thunder.
APA, Harvard, Vancouver, ISO, and other styles
10

Abbara, Mamdouh. "Turbo-codes quantiques." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00842327.

Full text
Abstract:
The idea of turbo codes, a very powerful construction for encoding classical information, could until now not be transposed to the problem of encoding quantum information. Indeed, obstacles remained that were as much theoretical as related to their implementation. For the known quantum version of these codes there was neither a result establishing an unbounded ("infinite") minimum distance, the property that allows an arbitrary number of errors to be corrected, nor an efficient iterative decoding, because the known quantum turbo encoders, being catastrophic, propagate certain errors during such decoding and prevent it from working properly. This thesis addresses both challenges, by establishing theoretical conditions for a quantum turbo code to have an unbounded minimum distance and, on the other hand, by exhibiting a construction that allows iterative decoding to work well. Simulations then show that the class of quantum turbo codes designed here is effective for transmitting quantum information over a depolarizing channel whose depolarizing intensity can reach p = 0.145. These quantum codes, of constant rate, can be used directly to encode binary quantum information as well as be integrated as modules to improve the operation of other codes such as quantum LDPC codes.
APA, Harvard, Vancouver, ISO, and other styles
11

Grant, Eugene. "INTERCEPTOR TARGET MISSILE TELEMETRY." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607598.

Full text
Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
A target missile is a unique piece of test hardware. This test tool must be highly reliable, low cost and simple and must perform any task that the developing interceptor missile planners require. The target missile must have ample power and guidance resources to put the target in a specified place in the sky at a desired time. The telemetry and measurement system for the target missile must have the same requirements as its interceptor missile but must be flexible enough to accept new requirements as they are applied to the target and its interceptor. The United States Army has tasked Coleman Aerospace to design and build this type of target missile. This paper describes and analyzes the telemetry and instrumentation system that a Hera target missile carries. This system has been flying for the past two years, has completed seven out of seven successful test flights and has accomplished all test objectives to date. The telemetry and instrumentation system is an integral part of the missile self-test system. All preflight checks and flight simulations are made with the on-board three-link telemetry system through a radio frequency (RF) link directly through the missile antenna system to a ground station antenna. If an RF transmission path is not available due to test range restrictions, a fiber-optic cable links the pulse code modulator (PCM) encoder to the receiving ground stations which include the bitsync, decommutator and recorders. With this capability, alternative testing is not limited by RF test range availability. The ground stations include two mobile stations and a factory station for all testing including preflight testing of the missile system prior to flight test launches. These three ground stations are built in a single configuration with additional equipment in the mobile units for use at remote locations. The design, fabrication, testing and utilization of these ground stations are reviewed. The telemetry system is a modification of the classical PCM system and will operate with its interceptor missile at least into the first decade from the year 2000.
APA, Harvard, Vancouver, ISO, and other styles
12

Fang, Juing. "Décodage pondère des codes en blocs et quelques sujets sur la complexité du décodage." Paris, ENST, 1987. http://www.theses.fr/1987ENST0005.

Full text
Abstract:
A study of the theoretical complexity of decoding block codes through a family of algorithms based on the principle of combinatorial optimization. We then address a parallel algebraic decoding algorithm whose complexity depends on the channel noise level. Finally, a Viterbi algorithm is introduced for chain-processing applications.
APA, Harvard, Vancouver, ISO, and other styles
13

Tujkovic, D. (Djordje). "Space-time turbo coded modulation for wireless communication systems." Doctoral thesis, University of Oulu, 2003. http://urn.fi/urn:isbn:9514269977.

Full text
Abstract:
High computational complexity constrains truly exhaustive computer searches for good space-time (ST) coded modulations mostly to low constraint length space-time trellis codes (STTrCs). Such codes are primarily devised to achieve maximum transmit diversity gain. Due to their low memory order, optimization based on the design criterion of secondary importance typically results in rather modest coding gains. As another disadvantage of limited freedom, the different low memory order STTrCs are almost exclusively constructed for either slow or fast fading channels. Therefore, in practical applications characterized by extremely variable Doppler frequencies, the codes typically fail to demonstrate desired robustness. On the other hand, the main drawback of eventually increased constraint lengths is the prohibitively large decoding complexity, which may increase exponentially if optimal maximum-likelihood decoding (MLD) is applied at the receiver. Therefore, robust ST coded modulation schemes with large equivalent memory orders structured so as to allow sub-optimal, low complexity, iterative decoding are needed. To address the aforementioned issues, this thesis proposes parallel concatenated space-time turbo coded modulation (STTuCM). It is among the earliest multiple-input multiple-output (MIMO) coded modulation designs built on the intersection of ST coding and turbo coding. The systematic procedure for building an equivalent recursive STTrC (Rec-STTrC) based on the trellis diagram of an arbitrary non-recursive STTrC is first introduced. The parallel concatenation of punctured constituent Rec-STTrCs designed upon the non-recursive Tarokh et al. STTrCs (Tarokh-STTrCs) is evaluated under different narrow-band frequency flat block fading channels. Combined with novel transceiver designs, the applications for future wide-band code division multiple access (WCDMA) and orthogonal frequency division multiplexing (OFDM) based broadband radio communication systems are considered. The distance spectrum (DS) interpretation of the STTuCM and union bound (UB) performance analysis over slow and fast fading channels reveal the importance of multiplicities in the ST coding design. The modified design criteria for space-time codes (STCs) are introduced that capture the joint effects of error coefficients and multiplicities in the two-dimensional DS of a code. Applied to STTuCM, such DS optimization resulted in a new set of constituent codes (CCs) for improved and robust performance over both slow and fast fading channels. A recursive systematic form with a primitive equivalent feedback polynomial is assumed for CCs to assure good convergence in iterative decoding. To justify such assumptions, the iterative decoding convergence analysis based on the Gaussian approximation of the extrinsic information is performed. The DS interpretation, introduced with respect to an arbitrarily defined effective Hamming distance (EHD) and effective product distance (EPD), is applicable to the general class of geometrically non-uniform (GNU) CCs. With no constraints on the implemented information interleaving, the STTuCM constructed from newly designed CCs achieves full spatial diversity over quasi-static fading channels, the condition commonly identified as the most restrictive for robust performance over a variety of Doppler spreads. Finally, the impact of bit-wise and symbol-wise information interleaving on the performance of STTuCM is studied.
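For context on the design criteria being modified above, the classical rank and determinant criteria of Tarokh et al. for space-time codes over quasi-static Rayleigh fading bound the pairwise error probability between two distinct codeword matrices C and E, with A = (C - E)(C - E)^H, roughly as

\[ P(C \to E) \lesssim \left( \prod_{i=1}^{r} \lambda_i \right)^{-n_R} \left( \frac{E_s}{4 N_0} \right)^{-r n_R}, \]

where r is the rank of A, λ_1, ..., λ_r are its nonzero eigenvalues and n_R is the number of receive antennas; the minimum rank over all codeword pairs sets the diversity gain, and the minimum product of eigenvalues (the product distance) sets the coding gain. This is the picture that the multiplicity-aware criteria described in the abstract refine.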
APA, Harvard, Vancouver, ISO, and other styles
14

Gharaei, Mohammad. "Nouveaux concepts pour les réseaux d'accès optiques." Paris, Télécom ParisTech, 2010. http://www.theses.fr/2010ENST0022.

Full text
Abstract:
This thesis deals with new concepts for future optical access networks. Firstly, optical private networking over PON is studied, as it can potentially improve the QoS performance and security of conventional internet-protocol-based virtual private networks. Decentralizing private-networking traffic from PON traffic is a profitable approach to increase network throughput. We study the implementation of multiple optical private networks over a PON layout using the OCDMA technique, via a ring as well as a star topology. The power budget and the network scalability of these architectures are analyzed and tested experimentally. These two architectures are demonstrated to have a negligible impact on the functionality of the PON, which proves the efficiency and the feasibility of simultaneous optical private networks over a PON layout. Then, the capacity performance of WDM/OCDMA networks is analyzed, since hybrid networks are considered to improve the multiplexing capacity of optical access networks. Physical-layer limitations such as multiple access interference (MAI), beat noise and linear interchannel crosstalk are the major reasons for error-rate performance degradation. Crosstalk limitations have been evaluated to optimize user capacity performance. The teletraffic capacity of WDM/OCDMA systems has then been analyzed under non-zero outage probability constraints to demonstrate a flexible capacity performance. Finally, it has been demonstrated that using WDM (de)multiplexers with lower crosstalk levels together with OCDMA encoders/decoders with good correlation properties brings the teletraffic capacity closer to the nominal capacity.
APA, Harvard, Vancouver, ISO, and other styles
15

Zeh, Alexander. "Algebraic Soft- and Hard-Decision Decoding of Generalized Reed--Solomon and Cyclic Codes." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00866134.

Full text
Abstract:
Two challenges of algebraic coding theory are addressed in this thesis. The first is the efficient (hard- and soft-decision) decoding of generalized Reed--Solomon codes over finite fields in the Hamming metric. The motivation for solving this more than 50-year-old problem was renewed by the discovery by Guruswami and Sudan, at the end of the 20th century, of a polynomial-time interpolation-based decoding algorithm up to the Johnson radius. The first algebraic decoding methods for generalized Reed--Solomon codes relied on a key equation, that is, a polynomial description of the decoding problem. Reformulating the interpolation-based approach in terms of key equations is a central theme of this thesis. This contribution covers several aspects of key equations for hard-decision decoding as well as for the soft-decision variant of the Guruswami--Sudan algorithm for generalized Reed--Solomon codes. For all of these variants an efficient decoding algorithm is proposed. The second subject of this thesis is the formulation of, and decoding up to, certain lower bounds on the minimum distance of cyclic linear block codes. The main feature is the embedding of a given cyclic code into a (generalized) cyclic product code. We therefore give a detailed description of cyclic product codes and generalized cyclic product codes. We prove several lower bounds on the minimum distance of linear cyclic codes that improve or generalize known bounds. In addition, we give quadratic-time error/erasure decoding algorithms up to these bounds.
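The key equation referred to above has, in its classical hard-decision form for a generalized Reed--Solomon code correcting t errors, the shape

\[ \Lambda(x)\, S(x) \equiv \Omega(x) \pmod{x^{2t}}, \qquad \deg \Omega < \deg \Lambda \le t, \]

where S(x) is the syndrome polynomial, Λ(x) the error-locator polynomial and Ω(x) the error-evaluator polynomial; the thesis derives analogous key equations for the interpolation-based (Guruswami--Sudan) setting, which are not reproduced here.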
APA, Harvard, Vancouver, ISO, and other styles
16

Kandel, Khagendra. "A Preliminary Numerical Investigation of Heat Exchanger Piles." University of Toledo / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1501858241480803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Кравчук, Володимир Вікторович. "Комплекс програм для визначення нероздільних завадостійких кодів." Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/35023.

Full text
Abstract:
The bachelor's project includes an explanatory note (97 pages, 41 figures, 7 annexes). This work investigates error-correcting coding and the problem of finding a maximum clique in a graph. Different types of coding were considered, the problem of the analytic rate of a code was described, and the Bron-Kerbosch clique-search algorithm was analyzed. Based on the properties of equivalent codes and of the Hamming graph, improvements to the algorithm for finding a maximum nonseparable error-correcting code are proposed. It was decided to develop a suite of programs to simplify the determination and study of nonseparable error-correcting codes. Concrete requirements and functionality for the suite were formulated: the ability to search for maximum nonseparable error-correcting codes according to user-specified parameters; stopping the computation at any moment while saving the intermediate data the algorithm was working with; loading the saved data and resuming work after a stop; performing various operations on codes, such as computing the minimum code distance, computing the distance from a codeword to a code, and sorting a code; and providing a simple and understandable graphical user interface for convenient work with the program. The suite is implemented in the Java programming language, which is supported by all popular operating systems, using the standard JavaFX library for the graphical user interface.
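As a hedged sketch of the underlying idea (not the thesis's optimized implementation), a code of length n and minimum distance d corresponds to a clique in the graph whose vertices are all binary words of length n, with edges joining words at Hamming distance at least d; a basic Bron-Kerbosch search then finds a maximum such code for small n:

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def max_code(n: int, d: int):
    """Largest set of binary n-tuples with pairwise Hamming distance >= d,
    found as a maximum clique via basic Bron-Kerbosch (no pivoting); feasible only for small n."""
    words = list(product((0, 1), repeat=n))
    adj = {w: {v for v in words if v != w and hamming(v, w) >= d} for w in words}
    best = []

    def bron_kerbosch(R, P, X):
        nonlocal best
        if not P and not X:          # R is a maximal clique
            if len(R) > len(best):
                best = list(R)
            return
        for v in list(P):
            bron_kerbosch(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    bron_kerbosch(set(), set(words), set())
    return best

print(len(max_code(5, 3)))  # prints 4, i.e. A(5, 3) = 4
```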
APA, Harvard, Vancouver, ISO, and other styles
18

Khalid, Omar. "Quantum accuracy threshold for distance-5 codes." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101149.

Full text
Abstract:
A quantum computer can only solve classically intractable problems like factoring large integers if it can perform quantum computations scalably. The minimum accuracy required among the components of a quantum computer to perform scalable computation is called the quantum accuracy threshold, ε0. We explore the accuracy threshold for Calderbank-Shor-Steane codes of distance 5. Our accuracy thresholds are based on the threshold theorem proven by Aliferis et al. [3]. Their threshold theorem is applicable to concatenated codes and can be used to derive a rigorous lower bound on the threshold. In this thesis we consider degenerate codes with parameters [[17,1,5]] and [[19,1,5]], and a non-degenerate [[21,1,5]] code. For each code we design a set of logical operations sufficient to perform universal computation. We estimate the thresholds using simulation tools for an independent stochastic error model. Our simulations incorporate randomized sampling techniques to estimate the number of ways our largest logical operation can fail. A lower bound of ε0 ≥ 3.624 × 10^-5 is derived for the [[17,1,5]] code.
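For reference, a distance-5 code such as the [[17,1,5]], [[19,1,5]] and [[21,1,5]] codes above corrects up to

\[ t = \left\lfloor (d-1)/2 \right\rfloor = 2 \]

arbitrary errors on the encoded block.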
APA, Harvard, Vancouver, ISO, and other styles
19

Ould, Cheikh Mouhamedou Youssouf. "On distance measurement methods for turbo codes." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=100669.

Full text
Abstract:
New digital communication applications, such as multimedia, require very powerful error-correcting codes that deliver low error rates while operating at low to moderate signal-to-noise ratios (SNRs). Turbo codes have reasonable complexity and can achieve very low error rates if a proper interleaver design is in place. The use of well-designed interleavers results in very low error rates, especially for medium to long interleavers, where turbo codes offer the greatest potential for achieving high minimum distance (dmin) values.
The reliable determination of a code's error performance at very low error rates using simulations may take months or may not be practical at all. However, the knowledge of dmin and its multiplicities can be used to estimate the error rates at high SNR. This thesis is concerned with efficient and accurate distance measurement methods for turbo codes. Since high values of dmin can be caused by high input weight values, say up to 20 or higher, the accurate determination of dmin with a brute-force algorithm requires that all possible input sequences of input weight up to 20 be tested. Testing all possible input sequences becomes impractical as the size of the interleaver and the value of the input weight increase. Thus, the accurate determination of the distance spectrum, or at least of dmin and its multiplicities, is a significant problem, especially for interleavers that yield high dmin. Based on Garello's true distance measurement method, this thesis presents an efficient and accurate distance measurement method for single- and double-binary turbo codes that use proper trellis termination such as dual termination or tail-biting. This method is applied to determine the distance spectra of the double-binary turbo codes of the digital video broadcasting with return channel via satellite (DVB-RCS) standard. It is also used to design new interleavers for DVB-RCS that yield a significant improvement in error performance compared to those in the standard.
This method fits particularly well with tail-biting turbo codes that use structured interleavers, because the distance properties repeat and the method can use this knowledge to reduce the search space. The reduction in search space results in a significant reduction in complexity (i.e., execution time), which allows the determination of high dmin values in reasonable time. This efficiency is demonstrated for both single- and double-binary turbo codes, using structured interleavers that have high dmin values for various code rates. The method reduces the execution times by a factor of 40 to 400.
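The high-SNR estimate from dmin and its multiplicities that the abstract alludes to is usually the truncated union bound; a sketch of it, assuming BPSK signalling over an AWGN channel and code rate R, is

\[ FER \approx N_{d_{min}}\, Q\!\left(\sqrt{2\, d_{min}\, R\, E_b/N_0}\right), \qquad BER \approx \frac{w_{d_{min}}}{K}\, Q\!\left(\sqrt{2\, d_{min}\, R\, E_b/N_0}\right), \]

where N_{dmin} is the number of codewords of weight dmin, w_{dmin} is the total information weight associated with them, and K is the information block (interleaver) size.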
APA, Harvard, Vancouver, ISO, and other styles
20

Sims, Kristian Brian. "Orientable Single-Distance Codes for Absolute Incremental Encoders." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/9067.

Full text
Abstract:
Digital encoders are electro-mechanical sensors that measure linear or angular position using special binary patterns. The properties of these patterns influence the traits of the resulting encoders, such as their maximum speed, resolution, tolerance to error, or cost to manufacture. We describe a novel set of patterns that can be used in encoders that are simple and compact, but require some initial movement to register their position. Previous designs for such encoders, called absolute incremental encoders, tend to incorporate separate patterns for the functions of tracking incremental movement and determining the absolute position. The encoders in this work, however, use a single pattern that performs both functions, which maximizes information density and yields better resolution. Compared to existing absolute encoders, these absolute incremental encoders are much simpler with fewer pattern tracks and read heads, potentially allowing for lower-cost assembly of high resolution encoders. Furthermore, as the manufacturing requirements are less stringent, we expect such encoders may be suitable for use in D.I.Y. 'maker' projects, such as those undertaken recently by our lab.
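As a hedged illustration of the general single-track idea, and not of the thesis's own orientable single-distance codes, the sketch below uses a binary de Bruijn sequence, in which every window of k consecutive bits is unique, so that reading k bits after a small initial movement identifies the absolute position:

```python
def de_bruijn(k: int) -> str:
    """Binary de Bruijn sequence of order k (every k-bit cyclic window occurs exactly once),
    via the standard recursive construction."""
    sequence, a = [], [0] * (2 * k)

    def db(t, p):
        if t > k:
            if k % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, sequence))

k = 4
track = de_bruijn(k)                      # cyclic pattern printed on the encoder track
position_of = {}
for i in range(len(track)):               # every k-bit window maps to a unique position
    window = (track + track)[i:i + k]
    position_of[window] = i

print(len(track), position_of["0111"])    # 16 positions; the window '0111' identifies one of them
```

A practical encoder pattern must also cope with reading the window in either orientation, which is the kind of constraint the orientable codes in the thesis are designed to handle.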
APA, Harvard, Vancouver, ISO, and other styles
21

Zeng, Fanxuan. "Nonlinear codes: representation, constructions, minimum distance computation and decoding." Doctoral thesis, Universitat Autònoma de Barcelona, 2014. http://hdl.handle.net/10803/284241.

Full text
Abstract:
Coding theory deals with the design of error-correcting codes for the reliable transmission of information across noisy channels. An error-correcting code (or code) is a process which consists in expressing a sequence of elements over an alphabet in such a way that any introduced error can be detected and corrected (with limitations), and it is based on adding redundant elements. This process includes encoding, transmitting and decoding the sequence of elements. Most of the codes used are block codes, and most of them have a linear structure, which facilitates the process of encoding and decoding. In this dissertation, nonlinear error-correcting codes are studied. Although nonlinear codes do not have the same good properties for encoding and decoding as linear ones, they are of interest because some of the best codes are nonlinear. The first question that arises when we use nonlinear codes is their representation. Linear codes can be represented by using a generator or parity-check matrix. The best way to represent a nonlinear code is by using the kernel/coset representation, which allows us to represent it through some representative codewords instead of all codewords. In this dissertation, this representation is studied and efficient algorithms to compute the kernel and coset representatives from the list of codewords are given. In addition, properties such as equality, inclusion, intersection and union between nonlinear codes are given in terms of this representation. Also, some well-known code constructions (extended, punctured, ...) are described by manipulating directly the kernel and coset representatives of the constituent nonlinear codes. In order to identify a code (linear or nonlinear), the length n, the number of codewords M and the minimum distance d are the most important parameters. The length n and size M are comparatively easy to compute. On the other hand, determining the minimum distance of a code is not so easy. As a matter of fact, it has been proven to be an NP-hard problem [37]. However, some algorithms have been developed to compute the minimum distance for linear codes using different approaches: Gröbner bases [7], tree structure [25], probabilistic algorithms [13, 23] and vector enumeration [39].
For nonlinear codes, except for some special families, no general algorithms have been developed to compute their minimum distance. Using the kernel/coset representation and the Brouwer-Zimmermann algorithm for computing the minimum distance of linear codes, new algorithms to compute the minimum distance for nonlinear codes are described. The hardest problem in the process of transmitting information is decoding. For linear codes, a general decoding algorithm is syndrome decoding. However, there is no general decoding method for nonlinear codes. Based on the kernel/coset representation and the minimum distance computation, new general algorithms to decode linear and nonlinear codes are proposed. For some linear codes (codes with a large codimension), the proposed algorithms perform better than the syndrome decoding algorithm. For nonlinear codes, this is the first general method for decoding, which is comparable to syndrome decoding for linear ones. Finally, most of these algorithms have been evaluated using the MAGMA software, and a new MAGMA package to deal with binary nonlinear codes has been developed, based on the results given in this dissertation.
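As a hedged illustration of the problem being solved (a naive baseline, not the kernel/coset-based algorithms of the dissertation), the minimum distance of an arbitrary, possibly nonlinear, binary code can be computed by comparing all pairs of codewords:

```python
from itertools import combinations

def minimum_distance(code):
    """Minimum Hamming distance of a (possibly nonlinear) code given as a list of equal-length 0/1 tuples.
    Runs in O(M^2 * n) time, which is exactly why smarter algorithms are needed for large codes."""
    return min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2))

# A nonlinear code: it contains (1,1,0,1,0) and (1,0,1,1,1) but not their sum (0,1,1,0,1).
nonlinear_code = [(0, 0, 0, 0, 0), (1, 1, 0, 1, 0), (1, 0, 1, 1, 1), (0, 1, 1, 1, 0)]
print(minimum_distance(nonlinear_code))  # prints 2
```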
APA, Harvard, Vancouver, ISO, and other styles
22

Bilal, Muhammad. "Codes over rings: maximum distance separability and self-duality." Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/107703.

Full text
Abstract:
Bounds on the size of a code are an important part of coding theory. One of the fundamental problems in coding theory is to find a code with the largest possible distance d. Researchers have found different upper and lower bounds on the size of linear and nonlinear codes, e.g. the Plotkin, Johnson, Singleton, Elias, linear programming, Griesmer, Gilbert and Varshamov bounds. In this dissertation we have studied the Singleton bound, which is an upper bound on the minimum distance of a code, and have defined maximum distance separable (MDS) Z2Z4-additive codes. Two different forms of these bounds are presented in this work: we characterize all maximum distance separable Z2Z4-additive codes with respect to the Singleton bound (MDSS), and strong conditions are given for maximum distance separable Z2Z4-additive codes with respect to the rank bound (MDSR). The generation of new codes has always been an interesting topic, where one can study the properties of the newly generated codes and establish new results. Self-dual codes are an important class of codes, and there are numerous constructions of self-dual codes from combinatorial objects. In this work we give two methods for generating self-dual codes from 3-class association schemes, namely the pure construction and the bordered construction. Binary self-dual codes are generated with these two methods from non-symmetric 3-class association schemes, and self-dual codes over Zk are generated from rectangular association schemes. Borges, Dougherty and Fernández-Córdoba in 2011 presented a method to generate new Z2Z4-additive self-dual codes from existing Z2Z4-additive self-dual codes by extending their length. In this work we verify whether properties like separability, antipodality and code type are retained when using this method.
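For reference, the classical Singleton bound on which the MDSS characterization is built states that any code of length n, minimum distance d and M codewords over an alphabet of size q satisfies

\[ M \le q^{\,n-d+1}, \]

which for a linear [n, k, d] code reads d ≤ n - k + 1; codes meeting it with equality are called maximum distance separable.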
APA, Harvard, Vancouver, ISO, and other styles
23

King, Ian David. "Light-cone and short distance aspects of nucleon wavefunctions." Thesis, University of Southampton, 1986. https://eprints.soton.ac.uk/393993/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Woungang, Isaac. "Distances minimales de certains codes quasi cycliques." Toulon, 1994. http://www.theses.fr/1994TOUL0004.

Full text
Abstract:
We give a bound on the minimum distance of a one-generator quasi-cyclic code. We also give bounds on the weights of certain classes of quasi-cyclic codes over a finite field that are derived from irreducible cyclic codes over an extension field. In addition, we characterize a class of quasi-cyclic codes over a finite field that are obtained by symbol expansion ("démultipliés") from quasi-cyclic codes over an extension field, and in particular all quasi-cyclic codes over a finite field obtained in this way from cyclic codes over an extension field.
APA, Harvard, Vancouver, ISO, and other styles
25

Chan, Evelyn Yu-San. "Heuristic optimisation for the minimum distance problem." Thesis, Nottingham Trent University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Harney, Isaiah H. "Colorings of Hamming-Distance Graphs." UKnowledge, 2017. http://uknowledge.uky.edu/math_etds/49.

Full text
Abstract:
Hamming-distance graphs arise naturally in the study of error-correcting codes and have been utilized by several authors to provide new proofs for (and in some cases improve) known bounds on the size of block codes. We study various standard graph properties of the Hamming-distance graphs with special emphasis placed on the chromatic number. A notion of robustness is defined for colorings of these graphs based on the tolerance of swapping colors along an edge without destroying the properness of the coloring, and a complete characterization of the maximally robust colorings is given for certain parameters. Additionally, explorations are made into subgraph structures whose identification may be useful in determining the chromatic number.
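As an illustration of the objects in this abstract, the small Python sketch below builds a Hamming-distance graph for tiny parameters and runs a greedy colouring. It uses one common convention (edges between words at Hamming distance exactly d), which is our assumption rather than the thesis's definition, and all function names are ours.

from itertools import product

def hamming(u, v):
    # Hamming distance between two equal-length tuples
    return sum(a != b for a, b in zip(u, v))

def hamming_distance_graph(n, q, d):
    # Vertices are words over {0,...,q-1}^n; edges join words at distance exactly d
    vertices = list(product(range(q), repeat=n))
    adj = {v: [] for v in vertices}
    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            if hamming(u, v) == d:
                adj[u].append(v)
                adj[v].append(u)
    return vertices, adj

def greedy_colouring(vertices, adj):
    # Assign each vertex the smallest colour not used by its already-coloured neighbours
    colour = {}
    for v in vertices:
        used = {colour[w] for w in adj[v] if w in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

verts, adj = hamming_distance_graph(n=3, q=2, d=2)
cols = greedy_colouring(verts, adj)
print(max(cols.values()) + 1, "colours used by the greedy heuristic")

The greedy result only gives an upper bound on the chromatic number; the thesis studies exact values and robustness of colourings, which this sketch does not attempt.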
APA, Harvard, Vancouver, ISO, and other styles
27

Kumar, Santosh. "Upper bounds on minimum distance of nonbinary quantum stabilizer codes." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2744.

Full text
Abstract:
The most popular class of quantum error correcting codes is stabilizer codes. Binary quantum stabilizer codes have been well studied, and Calderbank, Rains, Shor and Sloane (July 1998) have constructed a table of upper bounds on the minimum distance of these codes using linear programming methods. However, not much is known in the case of nonbinary stabilizer codes. In this thesis, we establish a bridge between self-orthogonal classical codes over the finite field with q^2 elements and quantum codes, extending and unifying previous work by Matsumoto and Uyematsu (2000), Ashikhmin and Knill (November 2001), Kim and Walker (2004). We construct a table of upper bounds on the minimum distance of the stabilizer codes using linear programming methods that are tighter than currently known bounds. Finally, we derive code construction techniques that will help us find new codes from existing ones. All these results help us to gain a better understanding of the theory of nonbinary stabilizer codes.
APA, Harvard, Vancouver, ISO, and other styles
28

Cadic, Emmanuel. "Construction de Turbo Codes courts possédant de bonnes propriétés de distance minimale." Limoges, 2003. http://aurore.unilim.fr/theses/nxfile/default/2c131fa5-a15a-4726-8d49-663621bd2daf/blobholder:0/2003LIMO0018.pdf.

Full text
Abstract:
This thesis aims at building turbo codes with good minimum distances and thus delaying the "error floor", which corresponds to a threshold of about 10^-6 for the residual binary error rate below which the slope of the BER curve decreases significantly. This problem was alleviated by the introduction of Berrou's duo-binary turbo codes [11], which in particular yield better minimum distances. To obtain good minimum distances with short turbo codes (length below 512), the first construction used and studied in this thesis is the one proposed by Carlach and Vervoux [26]. It gives excellent minimum distances, but unfortunately its decoding performance is poor, for reasons inherent to its structure. After identifying the reasons that prevent efficient decoding of this family of codes, we modify these codes by using different graphical structures, still built by assembling component codes of low complexity. The idea is to make this change without losing the minimum-distance qualities of these codes; consequently, it is necessary to understand why the minimum distances of the initial family are good and to define a selection criterion for the component codes. This criterion does not depend on the minimum distance of the component codes but on their Input-Output Weight Enumerator (IOWE), and therefore allows us to select component codes of very low complexity, which are assembled so as to generate tail-biting trellises with only 4 states. These trellises are then used to build parallel and serially concatenated turbo codes with good minimum distances. In particular, some extremal self-dual codes are constructed in this way.
APA, Harvard, Vancouver, ISO, and other styles
29

Aghaei, Morteza. "Near maximum distance separable codes over the field of eleven elements." Thesis, University of Sussex, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418400.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Moustrou, Philippe. "Geometric distance graphs, lattices and polytopes." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0802/document.

Full text
Abstract:
A distance graph G(X, D) is a graph whose vertex set is the set of points X of a metric space (X, d), and whose edges connect the pairs {x, y} such that d(x, y) ∈ D. In this thesis, we consider two problems that may be interpreted in terms of distance graphs in R^n. First, we study the famous sphere packing problem, in relation with the distance graph G(R^n, (0, 2r)) for a given sphere radius r. Recently, Venkatesh improved the best known lower bound for lattice sphere packings by a factor log log n for infinitely many dimensions n. We prove an effective version of this result, in the sense that we exhibit, for the same set of dimensions, finite families of lattices containing a lattice reaching this bound. Our construction uses codes over cyclotomic fields, lifted to lattices via an analogue of Construction A. We also prove a similar result for families of symplectic lattices. Second, we consider the unit distance graph G associated with a norm ‖·‖. The number m1(R^n, ‖·‖) is defined as the supremum of the densities achieved by independent sets in G. If the unit ball corresponding to ‖·‖ tiles R^n by translation, then it is easy to see that m1(R^n, ‖·‖) ≥ 1/2^n. C. Bachoc and S. Robins conjectured that equality always holds. We show that this conjecture is true for n = 2 and for several Voronoï cells of lattices in higher dimensions, by solving packing problems in discrete graphs.
APA, Harvard, Vancouver, ISO, and other styles
31

Gustavsson, Hans-Olof. ""Utan bok är det ingen riktig undervisning" : En studie av skolkulturella referensramar i sfi." Doctoral thesis, Stockholms universitet, Institutionen för undervisningsprocesser, kommunikation och lärande (UKL), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7113.

Full text
Abstract:
Experiences of teachers in SFI, Swedish for (adult) Immigrants, indicate that during their schooling earlier in life, SFI students have developed skills, abilities, values, ideas and expectations about teaching and learning that differ somewhat from the prevailing communicatively oriented theory of second language teaching which is emphasized in SFI. In the thesis these aspects are referred to as different school cultural frames of reference. The aim of the thesis is to generate knowledge about SFI students' school cultural frames of reference of relevance for SFI teaching. The considerable number of immigrants from Iraqi Kurdistan during the 1990s has led to an empirical focus related to this geographical area. From a critical perspective, in some respects a research interest of this kind can be seen as contributing to a division between 'us' and 'them', in a wider sense a part of exclusion and a maintenance of the segregated Swedish society. A special section gives an account of this research-ethics question, together with arguments from intercultural pedagogy that support a focus on school cultural frames of reference. The theoretical platform for the thesis is sociocultural theory. The concepts of social representations, pedagogical code, classification, framing, power distance, diaspora and distinctions of knowledge are also used. The thesis is based on two data materials. The first consists of data from interviews and talks with students and teachers in SFI, all from Iraqi Kurdistan. The second consists of data gathered through observations, classroom observations, interviews and talks during two visits to the KDP-administered region of Iraqi Kurdistan, each visit being for a period of about one month. This data material also includes text materials, mainly textbooks in EFL for grades five and six, and course books about EFL teaching used in teacher education. The thesis illuminates several aspects that provide an understanding as to why SFI students from Iraqi Kurdistan can have certain abilities, values, ideas and expectations about teaching, learning materials, learning, teacher and student roles that differ from the communicatively oriented second language teaching emphasized in SFI. However, results from the study also underline the importance of a 'weak' use of this understanding in an SFI teaching context.
APA, Harvard, Vancouver, ISO, and other styles
32

Bazzi, Louay Mohamad Jamil 1974. "Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/17042.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003.
Includes bibliographical references (leaves 207-214).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
We study the minimum distance of binary error correcting codes from the following perspectives: * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder. * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords. * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations. Highlights of our results include: * A general theorem that asserts that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing. * New upper bounds on the minimum distance of various types of Turbo-like codes. * The first ensemble of asymptotically good Turbo like codes. We prove that depth-three serially concatenated Turbo codes can be asymptotically good. * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group. We argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate half code with a high probability. * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order where the number of zeros of a codeword corresponds to the number of rational points.
* A sharp O(k^(-1/2)) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight. * An assertion saying that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas. * An elaborate study of the problem of derandomizability of AC₀ by any sufficiently polylog-wise independent probability measure. * An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristics.
by Louay M.J. Bazzi.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
33

Baldiwala, Aliasgar M. "Distance Distribution and Error Performance of Reduced Dimensional Circular Trellis Coded Modulation." Ohio University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1079387217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Siap, Irfan. "Generalized [Gamma]-fold weight enumerators for linear codes and new linear codes with improved minimum distances /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488193272067477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Luna, Ricardo, and Hrishikesh Tapse. "An Analysis on the Coverage Distance of LDPC-Coded Free-Space Optical Links." International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606240.

Full text
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
We design irregular Low-Density Parity-Check (LDPC) codes for free-space optical (FSO) channels for different transmitter-receiver link distances and analyze the error performance for different atmospheric conditions. The design considers atmospheric absorption, laser beam divergence, and random intensity fluctuations due to atmospheric turbulence. It is found that, for the same transmit power, a system using the designed codes works over much longer link distances than a system that employs regular LDPC codes. Our analysis is particularly useful for portable optical transceivers and mobile links.
APA, Harvard, Vancouver, ISO, and other styles
36

Ahmed, Naveed, and Waqas Ahmed. "Classification of perfect codes and minimal distances in the Lee metric." Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-6574.

Full text
Abstract:

Perfect codes and the minimal distance of a code have great importance in the study of the theory of codes. The perfect codes are classified generally and in particular for the Lee metric. However, there are very few perfect codes in the Lee metric. The Lee metric has nice properties because of its definition over the ring of integers residue modulo q. It is conjectured that there are no perfect codes in this metric for q > 3, where q is a prime number. The minimal distance comes into play when it comes to detection and correction of error patterns in a code. A few bounds on the number of codewords and minimal distance of a code are discussed. Some examples for the codes are constructed and their minimal distance is calculated. The bounds are illustrated with the help of the results obtained.
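For readers unfamiliar with the Lee metric mentioned above, here is a minimal Python sketch of the standard definition over the integers modulo q; the function and variable names are illustrative only.

def lee_distance(x, y, q):
    # Lee distance between two words over the integers modulo q:
    # each coordinate contributes min(|a - b| mod q, q - |a - b| mod q)
    assert len(x) == len(y)
    total = 0
    for a, b in zip(x, y):
        diff = abs(a - b) % q
        total += min(diff, q - diff)
    return total

# Example: over Z_5, the coordinates 1 and 4 are at Lee distance 2.
print(lee_distance([1, 0, 3], [4, 0, 3], q=5))  # -> 2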

APA, Harvard, Vancouver, ISO, and other styles
37

Edsborg, Karin. "Color Coded Depth Information in Medical Volume Rendering." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1823.

Full text
Abstract:

Contrast-enhanced magnetic resonance angiography (MRA) is used to obtain images showing the vascular system. To detect stenosis, which is a narrowing of, for example, blood vessels, maximum intensity projection (MIP) is typically used. This technique often fails to demonstrate the stenosis if the projection angle is not suitably chosen. To improve identification of this region a color-coding algorithm could be helpful. The color should be carefully chosen depending on the vessel diameter.

In this thesis a segmentation to produce a binary 3d-volume is made, followed by a distance transform to approximate the Euclidean distance from the centerline of the vessel to the background. The distance is used to calculate the smallest diameter of the vessel and that value is mapped to a color. This way the color information regarding the diameter would be the same from all the projection angles.

Color-coded MIPs, where the color represents the maximum distance, are also implemented. The MIP will result in images with contradictory information depending on the angle choice. Looking from one angle you would see the actual stenosis, and looking from another you would see a color representing the abnormal diameter.
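A rough sketch of the pipeline described in this abstract, assuming a simple intensity threshold for segmentation and SciPy's Euclidean distance transform; the threshold, the eight-bin colour mapping and all names are our own illustrative choices, not the thesis implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt

def colour_code_vessels(volume, threshold):
    # Simple intensity segmentation into a binary vessel mask
    mask = volume > threshold
    # Distance from each foreground voxel to the background approximates the local vessel radius
    radius = distance_transform_edt(mask)
    # Normalise radii into 8 colour bins; thin (possibly stenotic) regions get low indices
    max_r = radius.max() if radius.max() > 0 else 1.0
    colour_index = np.floor(7 * radius / max_r).astype(int)
    return np.where(mask, colour_index, -1)  # -1 marks background

# Usage on a synthetic volume (real MRA data would replace this)
volume = np.random.rand(32, 32, 32)
print(colour_code_vessels(volume, threshold=0.95).max())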

APA, Harvard, Vancouver, ISO, and other styles
38

Mahmudi, Ali. "The investigation into generic VHDL implementation of generalised minimum distance decoding for Reed Solomon codes." Thesis, University of Huddersfield, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.417302.

Full text
Abstract:
This thesis is concerned with the hardware implementation in VHDL (VHSIC Hardware Description Language) of a Generalised Minimum Distance (GMD) decoder for Reed Solomon (RS) codes. The generic GMD decoder has been implemented for Reed Solomon codes over GF(2^8). It works for a number of RS codes: RS(255, 239), RS(255, 241), RS(255, 243), RS(255, 245), RS(255, 247), RS(255, 249), and RS(255, 251). As a comparison, a Hard Decision Decoder (HDD) using the Welch-Berlekamp algorithm for the same RS codes is also implemented. The designs were first implemented in MATLAB. Then, the designs were written in VHDL, and the target device was the Altera Field Programmable Gate Array (FPGA) Stratix EP1S25-B672C6. The GMD decoder achieved an internal clock speed of 66.29 MHz with RS(255, 251) down to 57.24 MHz with RS(255, 239). In the case of the HDD, internal clock speeds were 112.01 MHz with RS(255, 251) down to 86.23 MHz with RS(255, 239). It is concluded that the GMD decoder needs a lot of extra hardware compared to the HDD: as little as 35% extra hardware in the case of the RS(255, 251) decoder, but 100% extra hardware for the RS(255, 241) decoder. If there is an option to choose the type of RS code to use, it is preferable to use the HDD decoder rather than the GMD decoder. In the real world, the type of RS code to use is usually fixed by the standard regulation. Hence, one alternative way to enhance the decoding performance is to use the GMD decoder.
APA, Harvard, Vancouver, ISO, and other styles
39

Mykhaylyk, O. O. "Structural Characterization of Colloidal Core-shell Polymer-based Nanoparticles Using Small-angle X-ray Scattering." Thesis, Sumy State University, 2012. http://essuir.sumdu.edu.ua/handle/123456789/34779.

Full text
Abstract:
Colloidal particle complexes are often characterized by small angle X-ray scattering (SAXS) techniques. The present work demonstrates SAXS analysis of inhomogeneous core-shell nanoparticles with complex shell morphologies. Different experimental techniques such as variation of particle composition and contrast variation method, and analytical techniques such as Monte Carlo simulation and indirect Fourier transformation are applied to obtain structural parameters of polymer-based core-shell nanoparticles. It is shown that the SAXS results are consistent with other measurements performed by electron microscopy, atomic force microscopy, dynamic light scattering, thermogravimetry, helium pycnometry and BET.
APA, Harvard, Vancouver, ISO, and other styles
40

Alkhonini, Omar Ahmed. "CODA CONSONANT CLUSTER PATTERNS IN THE ARABIC NAJDI DIALECT." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/theses/1368.

Full text
Abstract:
This study examines the coda clusters in Classical Arabic and how Najdi speakers, modern inhabitants of the central area of Saudi Arabia, pronounce them. Fourteen Najdi participants were asked to read a list of thirty-one words that took into account falling, equal, and rising sonority clusters, consisting of obstruents, nasals, liquids, and glides. The instrument contained one, two, and three steps of sonority for each level of sonority (falling and rising) to determine the minimal sonority distance used in Najdi Arabic. Specifically, obstruent + nasal, nasal + liquid, and liquid + glide were included for falling sonority clusters of one step, obstruent + liquid and nasal + glide were used for falling sonority clusters of two steps, and only obstruent + glide for falling sonority clusters of three steps. To test the rising sonority clusters, the elements in the clusters were transposed for each combination; for example, instead of using obstruent + nasal, clusters of nasal + obstruent were considered. However, for equal sonority clusters, only obstruent + obstruent and nasal + nasal were examined. Obstruents were dealt with separately in the instrument at first to see whether they caused any difference in the results. The results showed that the subjects added epenthesis in the rising sonority clusters and equal sonority clusters containing sonorants. However, they did not add epenthesis in the falling sonority clusters or equal sonority clusters containing obstruents. Thus, no matter the distance in sonority between the two segments in the rising sonority clusters (one, two, or three steps), the participants always epenthesized them. In addition, no matter how many sonority steps there were between the two segments in the falling sonority clusters, the participants always produced them without modification. In case of equal sonority, when the two segments of the cluster were sonorants, the participants added epenthesis; however, when the two segments of the cluster were obstruents, the participants produced them without modification.
APA, Harvard, Vancouver, ISO, and other styles
41

Galand, Fabien. "Construction de codes Z indice p à la puissance k linéaires de bonne distance minimale et schémas de dissimulation fondés sur les codes de recouvrement." Caen, 2004. http://www.theses.fr/2004CAEN2047.

Full text
Abstract:
This thesis studies two research directions based on codes, each concerned with a particular parameter. The first is error correction, and we are interested in the minimum distance of codes. Our goal is to construct codes over Fp with good minimum distance. To this end we jointly use the Hensel lift and Z_{p^k}-linearity. We give the minimum distance, for small lengths, of a generalization of the Kerdock and Preparata codes, as well as of lifts of the quadratic residue codes. Among these codes, we obtain four that equal the best known linear codes. We also give a construction aimed at increasing the cardinality of Z_{p^k}-linear codes by adding cosets. This construction leads us to an upper bound on the cardinality of Z_{p^k}-linear codes. The second direction, distinct from the first in its objective but related to it through the objects studied, is the construction of data-hiding schemes. We relate this problem, which belongs to steganography, to the construction of covering codes. We consider two models of schemes. These models are shown to be equivalent to covering codes, and we use this equivalence to bring out the structure of the coverings used in previously published work. The equivalence also allows us to derive upper bounds on the capacity of the schemes, and by giving constructions based on linear coverings we obtain lower bounds.
APA, Harvard, Vancouver, ISO, and other styles
42

Vaka, Kranthi, and Karthik Narla. "The impact of maturity, scale and distribution on software quality : An industrial case study." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15626.

Full text
Abstract:
Context. In this ever-changing world of software development, the process of organizations adopting distributed development is gaining prominence. Implementing various development processes in such a distributed environment gives rise to numerous issues which affect the quality of the product. These issues could be due to the involvement of architects across national borders during the process of development. In this research, the focus is to improve software quality by addressing the impact of maturity and scale between teams and their effect on the code review process, and further to identify the issues behind the distribution of teams separated by geographical, temporal and cultural distances. Objectives. The main objective of this research is to identify how different factors, such as team maturity, scale and distribution, impact the code review process and thereby affect software quality. Based on the code review comments in the data set, the factors examined in this research are the evolvability of defects and the difference in the quality of software developed by mature and immature teams within the code review process. Later on, the issues related to the impact of geographical, temporal and cultural distances on the type of defects revealed during distributed development are identified. Methods. To achieve these objectives, a case study was conducted at Ericsson. A mixed approach was chosen that includes archival data and semi-structured interviews to gather useful data for this research. Archival data is one of the data collection methods used for reviewing comments in the data set and gathering quantitative results for the study. We employed approaches such as descriptive statistics, hypothesis testing, and graphical representation to analyze the data. Moreover, to strengthen these results, a semi-structured group interview was conducted to triangulate the data and collect additional insights about the code review process in large-scale organizations. Results. By conducting this research, it is inferred that teams with a lower level of maturity produce a larger number of defects. It was observed that 35.11% functional, 59.03% maintainability, 0.11% compatibility, 0.028% security, 0.73% reliability, 4.96% performance efficiency, and 0.014% portability defects were found in the archival data. The majority of defects were of the functional and maintainability types, which impacts software quality in a distributed environment. In addition to the above-mentioned results, other findings relate to the evolvability of defects within immature teams, which shows no particular trend in the increase or decrease of the number of defects. Issues that occur due to the distribution of teams are also identified in this research. The overall results of this study suggest the impact of maturity and scale on software quality by making numerical assumptions and validating these findings with interviews. Interviews are also used to gather information about the issues in the data set related to the impact of global software engineering (GSE) distances on the code review process. Conclusions. At the end of this research it is concluded that, in these types of projects, immature teams produce a larger number of defects than mature teams. This is because when large-scale projects are distributed globally, it is always harder to share and acquire knowledge between teams, increase group learning and mentor teams located in immature sites. Immature developers have problems understanding the structure of the code, and new architects need to acquire knowledge of the scope and real-time issues in order to improve the quality of the software. Using the results presented in this thesis, researchers can easily find new gaps to extend the research on the various influences on the code review process in a distributed environment.
APA, Harvard, Vancouver, ISO, and other styles
43

Sengupta, Avik. "Redundant residue number system based space-time block codes." Thesis, Kansas State University, 2012. http://hdl.handle.net/2097/14111.

Full text
Abstract:
Master of Science
Department of Electrical and Computer Engineering
Balasubramaniam Natarajan
Space-time coding (STC) schemes for Multiple Input Multiple Output (MIMO) systems have been an area of active research in the past decade. In this thesis, we propose a novel design of Space-Time Block Codes (STBCs) using Redundant Residue Number System (RRNS) codes, which are ideal for high data rate communication systems. Application of RRNS as a concatenated STC scheme to a MIMO wireless communication system is the main motivation for this work. We have optimized the link between residues and complex constellations by incorporating the “Direct Mapping” scheme, where residues are mapped directly to Gray coded constellations. Knowledge of a priori probabilities of residues is utilized to implement a probability based “Distance-Aware Direct Mapping” (DA) scheme, which uses a set-partitioning approach to map the most probable residues such that they are separated by the maximum possible distance. We have proposed an “Indirect Mapping” scheme, where we convert the residues back to bits before mapping them. We have also proposed an adaptive demapping scheme which utilizes the RRNS code structure to reduce the ML decoding complexity and improve the error performance. We quantify the upper bounds on codeword and bit error probabilities of both Systematic and Non-systematic RRNS-STBC and characterize the achievable coding and diversity gains assuming maximum likelihood decoding (MLD). Simulation results demonstrate that the DA Mapping scheme provides performance gain relative to a Gray coded direct mapping scheme. We show that Systematic RRNS-STBC codes provide superior performance compared to Non-systematic RRNS-STBC, for the same code parameters, owing to more efficient binary to residue mapping. When compared to other concatenated STBC and Orthogonal STBC (OSTBC) schemes, the proposed system gives better performance at low SNRs.
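As background for the RRNS construction above, the toy Python sketch below shows the basic residue number system idea: an integer is represented by its residues modulo pairwise coprime moduli and recovered via the Chinese Remainder Theorem. The moduli chosen here are arbitrary examples, and the redundancy and error-correction machinery of the thesis is not shown.

# Requires Python 3.8+ (math.prod and pow(x, -1, m) for modular inverses)
from math import prod

def to_residues(x, moduli):
    # Represent x by its residues modulo each modulus
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    # Chinese Remainder Theorem reconstruction (moduli must be pairwise coprime)
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

moduli = [3, 5, 7, 11]            # in an RRNS, extra moduli would provide redundancy
print(to_residues(52, moduli))    # [1, 2, 3, 8]
print(from_residues([1, 2, 3, 8], moduli))  # 52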
APA, Harvard, Vancouver, ISO, and other styles
44

Bogaerts, Mathieu. "Codes et tableaux de permutations, construction, énumération et automorphismes." Doctoral thesis, Universite Libre de Bruxelles, 2009. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210302.

Full text
Abstract:

A permutation code G(n, d) is a subset C of Sym(n) such that the Hamming distance D between two elements of C is larger than or equal to d. In this thesis, we characterize the isometry group of the metric space (Sym(n), D) and we prove that these isometries are automorphisms of the association scheme induced on Sym(n) by the conjugacy classes. This leads, by linear programming, to new upper bounds for the maximal size of G(n, d) codes for fixed n and d, with n between 11 and 13. We develop generating algorithms with rejection of isomorphic objects. In order to classify the G(n, d) codes up to isometry, we construct invariants and study their efficiency. We generate all G(4,3) and G(5,4) codes up to isometry; there are respectively 61 and 9445 of them. Precisely 139 of the latter are maximal and explicitly described. We also study other classes of G(n, d) codes.
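To make the objects above concrete, here is a small Python sketch (ours, not from the thesis) that computes the Hamming distance between permutations and greedily builds a permutation code G(n, d); the greedy output only gives a lower bound on the maximal size discussed in the abstract.

from itertools import permutations

def perm_hamming(p, q):
    # Number of positions where two permutations differ
    return sum(a != b for a, b in zip(p, q))

def greedy_permutation_code(n, d):
    # Greedily collect permutations of {0,...,n-1} that are pairwise at distance >= d
    code = []
    for p in permutations(range(n)):
        if all(perm_hamming(p, c) >= d for c in code):
            code.append(p)
    return code

print(len(greedy_permutation_code(4, 3)))  # a lower bound on the maximal size of a G(4,3) code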


Doctorate in Sciences, specialization in Mathematics

APA, Harvard, Vancouver, ISO, and other styles
45

Console, Sarah. "Disturbi Specifici di Apprendimento: come la didattica a distanza influenza l'apprendimento della matematica." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20893/.

Full text
Abstract:
This thesis examines Specific Learning Disorders (DSA) and their impact on the acquisition of mathematical skills, also in relation to distance learning, which proved essential in the emergency caused by the Sars-Cov-2 virus. The first chapter analyses specific learning disorders and their classification, with references to diagnostic methods and to interventions in rehabilitation and teaching; particular attention is given to developmental dyscalculia. It also reviews the legislative path that led to the recognition of the right to education of people with specific learning disorders. The second chapter addresses mathematics education, highlighting learning styles and the types of mathematical learning, fundamental aspects for adapting teaching to the characteristics of students. Brousseau's theory of didactical situations is then presented to introduce a-didactical situations, through which it is possible to break the didactical contract and build stable learning. The chapter concludes with some adjustments for students with specific learning disorders. The third chapter opens the discussion on the potential and problems of distance learning and examines the learning of mathematics through this new modality, presenting opportunities and limits that may have emerged for students with specific learning disorders. Finally, the last chapter of the thesis proposes a comparison between teaching paths on the topic of fractions for a first-year class of a lower secondary school. After an introduction on interpretations and misconceptions of fractions, two teaching units for in-person teaching are presented, and the same units are revisited from the perspective of distance learning. In both cases, particular attention is paid to the adjustments and modifications that can be made to support the learning of these concepts for students with specific learning disorders.
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Changlin. "The performance analysis and decoding of high dimensional trellis-coded modulation for spread spectrum communications." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1174616331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Zhen-Dong, and 楊振東. "The Distance of Cyclic Code." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/72140275379208257516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Chu, Ya-chi, and 祝亞琪. "USING NORMALIZED GOOGLE DISTANCE TO REFINE CODE SEARCH RESULTS." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/90838642351194086617.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Information Management
98
With the popularity of open source software, many people are willing to share their projects via the internet. In order to enhance the efficiency of software production, program developers try to search the existing open source software on the web. Therefore a new internet service, the code search engine, emerged. Although search engines provide a convenient way to assist developers in reusing existing Application Programming Interfaces, the search results obtained from the search engines do not always satisfy the requirements of developers. Numerous and complex search results make it hard for developers to reuse code quickly. We propose a system architecture that addresses this problem. First, we store the related data extracted from the search results of Koders in a local repository. Second, we convert every file into abstract syntax tree format to obtain the structural data. Third, we cluster the files and compute every file's normalized Google distance value using the structural data, and then re-rank the search results according to the Google distance value. Fourth, we give semantic tags to each cluster, in the hope that this helps users find the right cluster quickly. Finally, we use precision and recall as indexes to evaluate the clustering performance of the proposed system architecture. Furthermore, we also use a case to explain whether the proposed system architecture can effectively help developers find useful source code, and compare it with related academic research.
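The normalized Google distance (Cilibrasi and Vitányi) used by this thesis has a standard closed form, sketched below in Python; f_x, f_y and f_xy stand for occurrence and co-occurrence counts and n for the total number of indexed documents, all of which the thesis derives from the structural data of the search results. The names and the example counts here are our own.

from math import log

def normalized_google_distance(f_x, f_y, f_xy, n):
    # Standard NGD formula; terms that never (co-)occur are treated as maximally distant
    if f_x == 0 or f_y == 0 or f_xy == 0:
        return float("inf")
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

# Example with made-up counts:
print(normalized_google_distance(f_x=5000, f_y=3000, f_xy=1200, n=10**9))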
APA, Harvard, Vancouver, ISO, and other styles
49

Sun, Shih-Hung, and 孫士烘. "Compare the similarity of C source code using Edit-Distance." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/16239559813427360013.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Engineering
90
The differences between source codes have long been compared by hand, and the most useful tool available is Diff. But Diff does not compare the syntax of the source code or obtain the minimal difference: if we rename a source code's parameters, we get a different result from Diff. Such a small change can be judged by a person but not by the computer, and judging these source codes manually costs a great deal of time. For this reason, we analyse the difference between C source codes using edit distance. In this thesis, we design a comparison system that rewrites syntactic constructs with the same meaning but different expression, because their costs in time and space are the same. We handle whitespace issues and transform the source codes into temporary files. Then we rewrite each file according to the code's data flow, variable dependences, and code relationships to obtain the minimal edit distance. Finally, we use a string matching algorithm to compute the minimal edit distance (the source codes' similarity) and find identical C source code.
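The comparison above rests on the classic dynamic-programming edit distance; a minimal Python version over token sequences is sketched below. Tokenisation and the rewriting of equivalent constructs described in the abstract are outside this sketch, and the example tokens are invented.

def edit_distance(a, b):
    # Minimum number of insertions, deletions and substitutions turning sequence a into b
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

tokens_a = ["int", "x", "=", "0", ";"]
tokens_b = ["int", "y", "=", "0", ";"]
print(edit_distance(tokens_a, tokens_b))  # 1: only the identifier differs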
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Chih-Kang, and 王志剛. "Minimum Distance Optimization on Signature Code for Uplink Non-orthogonal Multiple Access." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/6u6g64.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Communications Engineering
107
This thesis studies the design of signature codes to support non-orthogonal multiple access (NOMA), called signature coded multiple access (SIGMA). In the current orthogonal frequency division multiple access (OFDMA) system, the number of connections is limited by the number of available resource elements, and both the scheduling latency and the power consumption of the OFDMA system are too high. In SIGMA, each user chooses a codeword from the signature code book, and all users share the same time-frequency resources for multiple access. By doing so, SIGMA can allow a high user overloading ratio. The grant-free SIGMA scheme predefines a contention region that users share; with this predefined contention region, the network does not need to send the scheduling grant (SG) signaling and the accessing user does not need to send the scheduling request (SR) signal. The SIGMA scheme constructs the signature code matrix by maximizing the minimum distance in order to improve the error performance on the multiple access channel. Following this design idea, SIGMA uses the suboptimal signature code matrix as the codebook for non-orthogonal multiple access. Our results show that SIGMA can achieve better ABER performance than sparse code multiple access (SCMA) and multi-user shared access (MUSA).
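The design criterion named in the abstract, choosing a signature code matrix that maximises the minimum distance, can be illustrated generically as below. This is our own toy sketch over real-valued signatures with Euclidean distance, not the SIGMA construction itself.

from itertools import combinations

def min_pairwise_distance(codebook):
    # Smallest Euclidean distance between any two codewords in the codebook
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(dist(u, v) for u, v in combinations(codebook, 2))

def best_codebook(candidates):
    # Pick the candidate codebook maximising the minimum pairwise distance
    return max(candidates, key=min_pairwise_distance)

# Toy example with two 3-codeword candidates over length-2 real signatures:
c1 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c2 = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.7)]
print(best_codebook([c1, c2]))  # c2 wins: its closest pair is further apart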
APA, Harvard, Vancouver, ISO, and other styles