Doctoral dissertations on the topic "IS CODE"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles
Browse the top 50 doctoral dissertations on the topic "IS CODE".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, provided the relevant parameters are available in the metadata.
Browse doctoral dissertations from many different fields of study and compile an accurate bibliography.
Kühn, Stefan. "Organic codes and their identification : is the histone code a true organic code". Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86673.
Full text source
ENGLISH ABSTRACT: Codes are ubiquitous in culture and, by implication, in nature. Code biology is the study of these codes. However, the term 'code' has assumed a variety of meanings, sowing confusion and cynicism. The first aim of this study is therefore to define what an organic code is. Following from this, I establish a set of criteria that a putative code has to conform to in order to be recognised as a true code. I then offer an information theoretical perspective on how organic codes present a viable method of dealing with biological information, as a logical extension thereof. Once this framework has been established, I proceed to review several of the current organic codes in an attempt to demonstrate how the definition of and criteria for identifying an organic code may be used to separate the wheat from the chaff. I then introduce the 'regulatory code' in an effort to demonstrate how the code biological framework may be applied to novel codes to test their suitability as organic codes and whether they warrant further investigation. Despite the prevalence of codes in the biological world, only a few have been definitely established as organic codes. I therefore turn to the main aim of this study, which is to cement the status of the histone code as a true organic code in the sense of the genetic or signal transduction codes. I provide a full review and analysis of the major histone post-translational modifications, their biological effects, and which protein domains are responsible for the translation between these two phenomena. Subsequently I show how these elements can be reliably mapped onto the theoretical framework of code biology. Lastly I discuss the validity of an algorithm-based approach to identifying organic codes developed by Görlich and Dittrich. Unfortunately, the current state of this algorithm and the operationalised definition of an organic code is such that the process of identifying codes, without the necessary investigation by a scientist with a biochemical background, is currently not viable. This study therefore demonstrates the utility of code biology as a theoretical framework that provides a synthesis between molecular biology and information theory. It cements the status of the histone code as a true organic code, and criticises Görlich and Dittrich's method for finding codes by an algorithm based on reaction networks and contingency criteria.
AFRIKAANS SUMMARY: Codes are ubiquitous in culture, and by implication also in nature. Code biology is the study of these codes. Yet the term 'code' has a variety of meanings and interpretations that cause considerable confusion. The first aim of this study is therefore to determine what an organic code is and to formulate a set of criteria that a putative code must satisfy in order to be recognised as a true code. I then develop an information-theoretical perspective on how organic codes offer a way of handling biological information as a logical extension thereof. With this framework as background, I review a number of the current organic codes in an attempt to show how the definition of and criteria for an organic code can be used to separate the wheat from the chaff. I introduce the 'regulatory code' in an attempt to show how the code-biological framework can be applied to new codes to test their suitability as organic codes and whether they are worth investigating further. Despite the fact that codes occur widely in the biological world, relatively few of them have been unequivocally confirmed as organic codes. The main aim of this study is to establish whether the histone code is a true organic code in the sense of the genetic or signal transduction codes. I provide a complete overview and analysis of the most important histone post-translational modifications, their biological effects, and which protein domains are responsible for the translation between these two phenomena. I then show how these elements fit neatly into the theoretical framework of code biology. Lastly I discuss the validity of an algorithm-based approach to the identification of organic codes developed by Görlich and Dittrich. It appears that this algorithm and the operationalised definition of an organic code are such that identifying codes without the necessary investigation by a scientist with a biochemical background is currently not feasible. This study thus confirms the usefulness of code biology as a theoretical framework for a synthesis between molecular biology and information theory, confirms the status of the histone code as a true organic code, and criticises Görlich and Dittrich's attempt to identify organic codes with an algorithm based on reaction networks and a contingency criterion.
Borchert, Thomas. "Code Profiling : Static Code Analysis". Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1563.
Full text source
Capturing the quality of software and detecting sections that warrant further scrutiny are of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and to initiate appropriate improvement actions, and teachers feel the need to detect students who need extra attention and help with certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality.
In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and applied to a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the produced measures, quickly and without high expense.
The results of this study indicated that code profiling can be a useful technique for quick program comparisons and continuous quality observations with several application scenarios in both industry and education.
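Code profiling in the sense used above amounts to computing a small set of static measures per program and comparing the resulting profiles. As a rough illustration only (not the measurement tooling used in the thesis), the following Java sketch derives a few such measures from a source file; the chosen metrics and the CodeProfile record are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/** Minimal static "code profile": a few size/complexity proxies for one source file. */
record CodeProfile(int totalLines, int codeLines, int commentLines, int branchKeywords) {}

public class CodeProfiler {

    /** Builds a profile by scanning the file line by line (illustrative heuristics only). */
    static CodeProfile profile(Path sourceFile) throws IOException {
        List<String> lines = Files.readAllLines(sourceFile);
        int code = 0, comments = 0, branches = 0;
        for (String raw : lines) {
            String line = raw.trim();
            if (line.isEmpty()) continue;
            if (line.startsWith("//") || line.startsWith("*") || line.startsWith("/*")) {
                comments++;
            } else {
                code++;
                // Count branching keywords as a crude complexity proxy.
                branches += count(line, "if (") + count(line, "for (")
                          + count(line, "while (") + count(line, "case ");
            }
        }
        return new CodeProfile(lines.size(), code, comments, branches);
    }

    static int count(String haystack, String needle) {
        int n = 0, idx = 0;
        while ((idx = haystack.indexOf(needle, idx)) != -1) {
            n++;
            idx += needle.length();
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(profile(Path.of(args[0])));
    }
}
```

Comparing profiles across the 19 industry and 37 student programs then reduces to ranking or clustering these small vectors of measures.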
Ketkar, Avanti Ulhas. "Code constructions and code families for nonbinary quantum stabilizer code". Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2743.
Full text source
Kim, Han Jo. "Improving turbo codes through code design and hybrid ARQ". [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0012169.
Full text source
Serpa, Matheus da Silva. "Source code optimizations to reduce multi core and many core performance bottlenecks". Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183139.
Full text source
Nowadays, there are several different architectures available not only to industry but also to final consumers. Traditional multi-core processors, GPUs, accelerators such as the Xeon Phi, and even energy-efficiency-driven processors such as the ARM family present very different architectural characteristics. This wide range of characteristics presents a challenge for application developers, who must deal with different instruction sets, memory hierarchies, or even different programming paradigms when programming for these architectures. To optimize an application, it is important to have a deep understanding of how it behaves on different architectures. Related work offers a wide variety of solutions. Most of them focus only on improving memory performance; others focus on load balancing, vectorization, and thread and data mapping, but perform them separately, losing optimization opportunities. In this master's thesis, we propose several optimization techniques to improve the performance of a real-world seismic exploration application provided by Petrobras, a multinational corporation in the petroleum industry. In our experiments, we show that loop interchange is a useful technique to improve the performance of different cache memory levels, improving performance by up to 5.3× and 3.9× on the Intel Broadwell and Intel Knights Landing architectures, respectively. By changing the code to enable vectorization, performance was increased by up to 1.4× and 6.5×. Load balancing improved performance by up to 1.1× on Knights Landing. Thread and data mapping techniques were also evaluated, with a performance improvement of up to 1.6× and 4.4×. We also compared the best version on each architecture and showed that we were able to improve the performance of Broadwell by 22.7× and of Knights Landing by 56.7× compared to a naive version, but, in the end, Broadwell was 1.2× faster than Knights Landing.
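Loop interchange, the first technique named in this abstract, simply swaps the nesting order of two loops so that the innermost loop walks memory contiguously and each fetched cache line is fully reused. The Java sketch below is a generic illustration of that effect on a row-major 2-D array; it is not code from the Petrobras application, and the array size and timing harness are arbitrary.

```java
public class LoopInterchange {
    // Column-major traversal: consecutive iterations of the inner loop jump
    // across rows, touching a different cache line on almost every access.
    static double sumColumnMajor(double[][] a) {
        double sum = 0.0;
        for (int j = 0; j < a[0].length; j++)
            for (int i = 0; i < a.length; i++)
                sum += a[i][j];
        return sum;
    }

    // After loop interchange: the inner loop scans one row contiguously,
    // so each cache line brought from memory is fully used before eviction.
    static double sumRowMajor(double[][] a) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                sum += a[i][j];
        return sum;
    }

    public static void main(String[] args) {
        double[][] a = new double[4096][4096];
        for (double[] row : a) java.util.Arrays.fill(row, 1.0);
        long t0 = System.nanoTime();
        double s1 = sumColumnMajor(a);
        long t1 = System.nanoTime();
        double s2 = sumRowMajor(a);
        long t2 = System.nanoTime();
        System.out.printf("column-major: %.0f in %d ms%n", s1, (t1 - t0) / 1_000_000);
        System.out.printf("row-major:    %.0f in %d ms%n", s2, (t2 - t1) / 1_000_000);
    }
}
```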
Panagos, Adam G., and Kurt Kosbar. "A METHOD FOR FINDING BETTER SPACE-TIME CODES FOR MIMO CHANNELS". International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604782.
Full text source
Multiple-input, multiple-output (MIMO) communication systems can have dramatically higher throughput than single-input, single-output systems. Unfortunately, it can be difficult to find the space-time codes these systems need to achieve their potential. Previously published results located good codes by minimizing the maximum correlation between transmitted signals. This paper shows how this min-max method may produce sub-optimal codes. A new method which sorts codes based on the union bound of pairwise error probabilities is presented. This new technique can identify superior MIMO codes, providing higher system throughput without increasing the transmitted power or bandwidth requirements.
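The ranking criterion described above can be sketched compactly: for every pair of codewords in a candidate code, bound the pairwise error probability by a decreasing function of their distance, sum the bounds, and prefer the code with the smallest sum. The Java sketch below uses a simple Chernoff-style bound exp(-d^2 / (4*N0)) over flattened real-valued codewords; that bound, the noise parameter N0, and the data layout are simplifying assumptions rather than the exact space-time system model of the paper.

```java
public class UnionBoundRanking {
    /** Sum of Chernoff-style pairwise error bounds for one candidate code.
     *  code[k] is the k-th codeword, flattened into a real vector. */
    static double unionBound(double[][] code, double noiseN0) {
        double bound = 0.0;
        for (int a = 0; a < code.length; a++) {
            for (int b = a + 1; b < code.length; b++) {
                double d2 = 0.0;                       // squared Euclidean distance
                for (int i = 0; i < code[a].length; i++) {
                    double diff = code[a][i] - code[b][i];
                    d2 += diff * diff;
                }
                bound += Math.exp(-d2 / (4.0 * noiseN0));
            }
        }
        return bound;
    }

    /** Returns the index of the candidate code with the smallest union bound. */
    static int bestCode(double[][][] candidates, double noiseN0) {
        int best = 0;
        double bestBound = Double.POSITIVE_INFINITY;
        for (int c = 0; c < candidates.length; c++) {
            double b = unionBound(candidates[c], noiseN0);
            if (b < bestBound) { bestBound = b; best = c; }
        }
        return best;
    }
}
```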
Nielsen, Sebastian, and David Tollemark. "Code readability: Code comments OR self-documenting code : How does the choice affect the readability of the code?" Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12739.
Full text source
Reddy, Satischandra B. "Code optimization with stack oriented intermediate code". DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1985. http://digitalcommons.auctr.edu/dissertations/2629.
Full text source
Tixier, Audrey. "Reconnaissance de codes correcteurs". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066554/document.
Full text source
In this PhD thesis, we focus on the code reconstruction problem. This problem mainly arises in a non-cooperative context, when a communication consisting of noisy codewords stemming from an unknown code is observed and its content has to be retrieved by recovering the code used for communicating and decoding the noisy codewords with it. We consider here three possible scenarios and suggest an original method for each case. In the first one, we assume that the code that is used is a turbo-code, and we propose a method for reconstructing the associated interleaver (the other components of the turbo-code can easily be recovered by existing methods). The interleaver is reconstructed step by step by searching for the most probable index at each step and by computing the relevant probabilities with the help of the BCJR decoding algorithm. In the second one, we tackle the problem of reconstructing LDPC codes by suggesting a new method for finding a list of parity-check equations of small weight that generalizes and improves upon all existing methods. Finally, in the last scenario we reconstruct an unknown interleaved convolutional code. In this method we use the previous one to find a list of parity-check equations for this code. Then, by introducing a graph representing how these parity-check equations intersect, we recover the interleaver and the convolutional code at the same time.
Muller, Wayne. "East City Precinct Design Code: Redevelopment through form-based codes". Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/12952.
Full text source
This thesis confines itself to a consideration of urban development opportunity in the East City Precinct through an understanding of its former historical character and memory, which can be implemented through form-based codes. It locates the design process in the sub-regional context and puts forward a notional spatial proposal for the physical area of the East City Precinct and its surrounds. The application of theory is tested at precinct level, and emphasis remains firmly on the public elements ordering the spatial structure. With all these considerations, this dissertation presents a piece of the history of District Six and the importance of memory in relation to the East City. This contested site of memory and heritage informs the area's contextual development amid the often-essentialising multiculturalism of the 'new South Africa'. In turn, an understanding of District Six's urban quality frames the intricacies of a restitution and redevelopment plan. It also illustrates the genuine uniqueness of its principles of urbanism, in contrast to market-oriented urban development, which reproduces spaces of social fragmentation, exclusion and inequality. Indeed, the vision for the East City concerns long-term urban sustainability, an investment in a city of fluid spaces, a city of difference and meaning. This dissertation contends that there is a real role for urban and social sustainability in the redevelopment potential of the study area, with its historical, social, cultural and symbolic significance. It therefore outlines the key elements and principles of a development framework prepared for the study area and discusses the prospects for urban and social sustainability. This will inform where and how to apply form-based codes within the East City context.
Rodriguez, Fernandez Carlos Gustavo. "Machine learning quantum error correction codes : learning the toric code /". São Paulo, 2018. http://hdl.handle.net/11449/180319.
Full text source
Committee member: Alexandre Reily Rocha
Committee member: Juan Felipe Carrasquilla
Summary: We use supervised learning methods to study error decoding in toric codes of different sizes. We study multiple error models and obtain figures of the decoding efficacy as a function of the single-qubit error rate. We also comment on how the size of the decoding neural networks and their training time grow with the size of the toric code.
Abstract: We use supervised learning methods to study the error decoding in toric codes of different sizes. We study multiple error models, and obtain figures of the decoding efficacy as a function of the single qubit error rate. We also comment on how the size of the decoding neural networks and their training time scale with the size of the toric code.
Master's
Yamazato, Takaya, Iwao Sasase and Shinsaku Mori. "Interlace Coding System Involving Data Compression Code, Data Encryption Code and Error Correcting Code". IEICE, 1992. http://hdl.handle.net/2237/7844.
Full text source
Tan, Peter K. S. "A translator from RISC code into MC68020 code". Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14289.
Full text source
Kelly, Patrick. "Demon Code". Digital Commons at Loyola Marymount University and Loyola Law School, 2021. https://digitalcommons.lmu.edu/etd/988.
Full text source
Hawk, Zoe Alaina. "Dress code". Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/980.
Full text source
Hagman, Tobias. "Clean Code vs Dirty Code : Ett fältexperiment för att förklara hur Clean Code påverkar kodförståelse". Thesis, Högskolan Dalarna, Informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:du-18698.
Full text source
Summary: Big and complex codebases with inadequate understandability are a problem which is becoming more common among companies today. Inadequate understandability leads to greater time requirements when maintaining code, which means increased costs for a company. According to some, Clean Code is the solution to this problem. Clean Code is a collection of guidelines and principles for how to write code which is easy to understand and maintain. A knowledge gap was identified, as there is little empirical data investigating how Clean Code affects understandability. This led to the following question: How is understandability affected when modifying source code which has been refactored according to the Clean Code principles regarding names and functions? In order to investigate how Clean Code affects understandability, a field experiment was conducted in collaboration with the company CGM Lab Scandinavia in Borlänge. In the field experiment, data in the form of time and experienced understandability were collected and analyzed. The result of this study doesn't show any clear signs of immediate improvement or worsening when it comes to understandability. This is because even though all participants prefer Clean Code, this doesn't show in the measured times of the experiment. This leads me to the conclusion that the effects of Clean Code aren't immediate, since developers haven't been able to adapt to Clean Code and are therefore not able to utilize its benefits properly. This study gives a hint of the potential Clean Code has to improve understandability.
Ly, Kevin. "Normalizer: Augmenting Code Clone Detectors using Source Code Normalization". DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1722.
Full text source
Ljung, Kevin. "Clean Code in Practice : Developers perception of clean code". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21443.
Full text source
Lawson, John. "Duty specific code driven design methodology : a model for better codes". Thesis, University of Aberdeen, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274818.
Full text source
Veluri, Subrahmanya Pavan Kumar. "Code Verification and Numerical Accuracy Assessment for Finite Volume CFD Codes". Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28715.
Full text source
Ph. D.
Firmanto, Welly T. (Welly Teguh) Carleton University Dissertation Engineering Systems and Computer. "Code combining of Reed-Muller codes in an indoor wireless environment". Ottawa, 1995.
Find full text source
Jie, Cao, and Xie Qiu-cheng. "THE SEARCHING METHOD OF QUASI-OPTIMUM GROUP SYNC CODES ON THE SUBSET OF PN SEQUENCES". International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613438.
Full text source
As the code length increases, the search for optimum group sync codes becomes more and more difficult, even impossible. This paper gives a searching method for quasi-optimum group sync codes on a small subset of PN sequences -- the CVT-TAIL SEARCHING METHOD and the PREFIX-SUFFIX SEARCHING METHOD. We have searched out quasi-optimum group sync codes of lengths N=32-63 by this method and compared them with the corresponding optimum group sync codes of lengths N=32-54. They are very close. The total searching time is only several seconds. This method may solve the trade-off among error sync probability, code length and searching time. So it is a good and practicable searching method for long codes.
Tysell Sundkvist, Leif, and Emil Persson. "Code Styling and its Effects on Code Readability and Interpretation". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209576.
Full text source
Code readability has a significant effect on a product's life cycle. It is important that code is easy to maintain and reuse, and that it is straightforward for a programmer to become familiar with unfamiliar code. Previous studies have been used to show correlations between code readability and code styling. Eye-tracking technology has also been used to study the eye movements and focus of people in a computer-generated environment. Using a combination of eye-tracking technology and code-styling features such as syntax highlighting, logical variable and function names, code indentation and commented code, the correlation between code readability and code styling could be examined and studied further. This report studies a number of test subjects who took part in a series of experiments in which they were assigned problems involving the interpretation of code while their eye movements were studied with eye-tracking equipment and software. The data from the experiments were then summarised with heatmap-based images that plot the eye's movements on the screen. The experiments showed that there are correlations between how code styling is used and how the participants chose to approach and solve the given problems. The conclusion of this report is that code readability improved with the introduction of code-styling features. For indented code and syntax highlighting, no visible improvements were observed. Commented code, on the other hand, led the test subjects to examine the method signatures in the code more carefully, which meant that they also discovered return-type-related errors in the code; a visible improvement. In addition, it turned out that logical variable names spare the programmer from wasting time or energy reading through entire blocks of code that could in fact have been explained by a cleverly chosen method or variable name, which also meant an improvement in code readability.
Lemay, Frédérick. "Instrumentation optimisée de code pour prévenir l'exécution de code malicieux". Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29030/29030.pdf.
Full text source
Han, Sangmok. "Improved source code editing for effective ad-hoc code reuse". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67583.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (p. 111-113).
Code reuse is essential for productivity and software quality. Code reuse based on abstraction mechanisms in programming languages is a standard approach, but programmers also reuse code by taking an ad-hoc approach, in which text of code is reused without abstraction. This thesis focuses on improving two common ad-hoc code reuse approaches-code template reuse and code phrase reuse because they are not only frequent, but also, more importantly, they pose a risk to quality and productivity in software development, the original aims of code reuse. The first ad-hoc code reuse approach, code template reuse refers to programmers reusing an existing code fragment as a structural template for similar code fragments. Programmers use the code reuse approach because using abstraction mechanisms requires extra code and preplanning. When similar code fragments, which are only different by several code tokens, are reused just a couple of times, it makes sense to reuse text of one of the code fragments as a template for others. Unfortunately, code template reuse poses a risk to software quality because it requires repetitive and tedious editing steps. Should a programmer forget to perform any of the editing steps, he may introduce program bugs, which are difficult to detect by visual inspection, code compilers, or other existing bug detection methods. The second ad-hoc code reuse approach, code phrase reuse refers to programmers reusing common code phrases by retyping them, often regularly, using code completion. Programmers use the code reuse approach because no abstraction mechanism is available for reusing short yet common code phrases. Unfortunately, code phrase reuse poses a limitation on productivity because retyping the same code phrases is time-consuming even when a code completion system is used. Existing code completion systems completes only one word at a time. As a result, programmers have to repeatedly invoke code completion, review code completion candidates, and select a correct candidate as many times as the number of words in a code phrase. This thesis presents new models, algorithms, and user interfaces for effective ad-hoc code reuse. First, to address the risk posed by code template reuse, it develops a method for detecting program bugs in similar code fragments by analyzing sequential patterns of code tokens. To proactively reduce program bugs introduced during code template reuse, this thesis proposes an error-preventive code editing method that reduces the number of code editing steps based on cell-based text editing. Second, to address the productivity limitation posed by code phrase reuse, this thesis develops an efficient code phrase completion method. The code phrase completion accelerates reuse of common code phrases by taking non-predefined abbreviated input and expanding it into a full code phrase. The code phrase completion method utilizes a statistical model called Hidden Markov model trained on a corpus of code and abbreviation examples. Finally, the new methods for bug detection and code phrase completion are evaluated through corpus and user studies. In 7 well-maintained open source projects, the bug detection method found 87 previously unknown program bugs. The ratio of actual bugs to bug warnings (precision) was 47% on average, eight times higher than previous similar methods. The code phrase completion method is evaluated on the basis of accuracy and time savings. 
It achieved 99.3% accuracy in a corpus study and achieved 30.4% time savings and 40.8% keystroke savings in a user study when compared to a conventional code completion method. At a higher level, this work demonstrates the power of a simple sequence-based model of source code. Analyzing vertical sequences of code tokens across similar code fragments is found useful for accurate bug detection; learning to infer horizontal sequences of code tokens is found useful for efficient code completion. Ultimately, this work may aid the development of other sequence-based models of source code, as well as different analysis and inference techniques, which can solve previously difficult software engineering problems.
by Sangmok Han.
Ph.D.
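The abbreviation-expansion idea behind the code phrase completion method can be illustrated with a deliberately crude stand-in for the thesis's Hidden Markov model: score every known code phrase by how well its characters match the typed abbreviation in order, and return the best candidates. The phrase list, scoring rule and class names below are invented for illustration and are far simpler than the statistical model trained in the thesis.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Toy abbreviation expander: maps e.g. "fori" to "for (int i = 0; i < n; i++)". */
public class PhraseCompleter {
    private final List<String> knownPhrases;

    public PhraseCompleter(List<String> knownPhrases) {
        this.knownPhrases = knownPhrases;
    }

    /** Returns known phrases ranked by a naive abbreviation-match score (best first). */
    public List<String> complete(String abbreviation, int maxResults) {
        List<String> ranked = new ArrayList<>(knownPhrases);
        ranked.sort(Comparator.comparingInt((String p) -> -score(abbreviation, p)));
        return ranked.subList(0, Math.min(maxResults, ranked.size()));
    }

    /** Counts abbreviation characters matched, in order, against the phrase (greedy subsequence match). */
    private static int score(String abbreviation, String phrase) {
        String a = abbreviation.toLowerCase(), p = phrase.toLowerCase();
        int matched = 0, pos = 0;
        for (char c : a.toCharArray()) {
            int idx = p.indexOf(c, pos);
            if (idx < 0) return matched - 1;   // penalise characters that never match
            matched++;
            pos = idx + 1;
        }
        return matched;
    }

    public static void main(String[] args) {
        PhraseCompleter completer = new PhraseCompleter(List.of(
                "for (int i = 0; i < n; i++)",
                "StringBuilder sb = new StringBuilder();",
                "System.out.println();"));
        System.out.println(completer.complete("fori", 2));
    }
}
```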
Vouk, William F. "Concelebration development from the 1917 Code through the 1983 Code /". Theological Research Exchange Network (TREN), 1992. http://www.tren.com.
Pełny tekst źródłaColeman, Anita Sundaram. "A Code for Classifiers: Whatever Happened to Merrillâ s Code?" Ergon-Verlag, 2004. http://hdl.handle.net/10150/105839.
Pełny tekst źródłaEriksson, Mattias. "Integrated Code Generation". Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67471.
Full text source
Wahab, Matthew. "Object code verification". Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/61068/.
Full text source
Hoffmann, Ceilidh 1969. "Code-division multiplexing". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28746.
Full text source
Includes bibliographical references (p. 395-404).
(cont.) counterpart. Among intra-cell orthogonal schemes, we show that the most efficient broadcast signal is a linear superposition of many binary orthogonal waveforms. The information set is also binary. Each orthogonal waveform is generated by modulating a periodic stream of finite-length chip pulses with a receiver-specific signature code that is derived from a special class of binary antipodal, superimposed recursive orthogonal code sequences. With the imposition of practical pulse shapes for carrier modulation, we show that multi-carrier format using cosine functions has higher bandwidth efficiency than the single-carrier format, even in an ideal Gaussian channel model. Each pulse is shaped via a prototype baseband filter such that when the demodulated signal is detected through a baseband matched filter, the resulting output samples satisfy the Generalized Nyquist criterion. Specifically, we propose finite-length, time overlapping orthogonal pulse shapes that are g-Nyquist. They are derived from extended and modulated lapped transforms by proving the equivalence between Perfect Reconstruction and Generalized Nyquist criteria. Using binary data modulation format, we measure and analyze the accuracy of various Gaussian approximation methods for spread-spectrum modulated (SSM) signalling ...
We study forward link performance of a multi-user cellular wireless network. In our proposed cellular broadcast model, the receiver population is partitioned into smaller mutually exclusive subsets called cells. In each cell an autonomous transmitter with average transmit power constraint communicates to all receivers in its cell by broadcasting. The broadcast signal is a multiplex of independent information from many remotely located sources. Each receiver extracts its desired information from the composite signal, which consists of a distorted version of the desired signal, interference from neighboring cells and additive white Gaussian noise. Waveform distortion is caused by time and frequency selective linear time-variant channel that exists between every transmitter-receiver pair. Under such system and design constraints, and a fixed bandwidth for the entire network, we show that the most efficient resource allocation policy for each transmitter based on information theoretic measures such as channel capacity, simultaneously achievable rate regions and sum-rate is superposition coding with successive interference cancellation. The optimal policy dominates over its sub-optimal alternatives at the boundaries of the capacity region. By taking into account practical constraints such as finite constellation sets, frequency translation via carrier modulation, pulse shaping and real-time signal processing and decoding of finite-length waveforms and fairness in rate distribution, we argue that sub-optimal orthogonal policies are preferred. For intra-cell multiplexing, all orthogonal schemes based on frequency, time and code division are equivalent. For inter-cell multiplexing, non-orthogonal code-division has a larger capacity than its orthogonal
by Ceilidh Hoffmann.
Ph.D.
Lieber, Thomas (Thomas Alan). "Understanding asynchronous code". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82411.
Pełny tekst źródłaCataloged from PDF version of thesis.
Includes bibliographical references (p. 61-64).
JavaScript on the web is difficult to debug due to its asynchronous and dynamic nature. Traditional debuggers are often little help because the language's idioms rely heavily on non-linear control flow via function pointers. The aim of this work is to create a debugging interface that helps users understand complicated control flow in languages like JavaScript. This thesis presents a programming editor extension called Theseus that uses program tracing to provide real-time in-editor feedback so that programmers can answer questions quickly as they write new code and interact with their application. Theseus augments the call graph with semantic edges that allow users to make intuitive leaps through program traces, such as from the start of an asynchronous network request to its response. Participants in lab and classroom studies found Theseus to be a usable replacement for traditional breakpoint and logging tools, though no significant difference was found in their ability to complete programming tasks.
by Thomas Lieber.
S.M.
Arzoglou, Jordan. "Langue contre code". Paris 8, 2008. http://www.theses.fr/2008PA082872.
Pełny tekst źródłaTwo attitudes seem to be possible towards the world : "lato sensu" vision, which may overlook the principle of non-contradiction, and "strito sensu" vision, which is subject to that principle. "Lato sensu" vision corresponds to poetical sensivity and allows deictic knowledge. By definition, language is the means used by poetry. "Stricto sensu" vision leads to a codification of thought. A code has no deictic value, unless such value is conferred to by speech act. Sense is awareness of this dual perception, which consists of "lato sensu" vision and "stricto sensu" vision. Formal logic and Chomskian theory are par excellence examples of "stricto sensu" vision. These disciplines therefore do not allow for a dual perception, whose lack leads to nonsense. This becomes evident in translation, whose purpose is in fact the transmission of sense. In short, sense may be transmitted only through uncodified language
Franková, Anna. "MY CODE/WORLD". Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2017. http://www.nusl.cz/ntk/nusl-316051.
Pełny tekst źródłaLoureiro, Sergio. "Mobile code protection /". Paris : École nationale supérieure des télécommunications, 2001. http://catalogue.bnf.fr/ark:/12148/cb38828305r.
Pełny tekst źródłaRatley, Desirée Page. "Impacts of lateral code changes associated with the 2006 International Building Code and the 2008 California Building Code". Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39276.
Pełny tekst źródłaIncludes bibliographical references (leaves 94-96).
The 2008 California Building Code (CBC) will adopt the structural section of the 2006 International Building Code (IBC), which includes alterations to the procedure to determine earthquake design loading, and a drastic move to a complicated method to determine design wind pressures. The implementation of the revised 2006 International Building Code, and the subsequent California adoption of the structural section will have significant effects on the design and construction of structures not only in California, but also the rest of the country. Through a comparison of the design of a steel moment-resisting frame low-rise structure, it was determined that the new code will result in design values that differ from those resulting from the previous codes. In order to compare the relevant codes in different areas of the country, this thesis considers three design scenarios for the low-rise structure: seismic loading in Southern California to compare the 2001 CBC, the 2003 and the 2006 IBC, seismic loading in the Midwest to compare the 2003 IBC and the 2006 IBC, and wind loading in Northern California to compare the 2001 CBC and the 2006 IBC.
(cont.) In the first case, the change from the 2001 CBC to the 2003 IBC was an 8 percent increase in base shear, but a 2 percent decrease from the 2001 CBC to the 2006 IBC. The second case resulted in a 29 percent increase in base shear from the 2003 IBC to the 2006 IBC. The result of the third case was design wind pressures that decreased 20 percent from the 2001 CBC to the 2006 IBC. These design differences will change the design of the lateral force resisting system, especially the later two cases. In addition, the design engineers in California will have to learn a new, greatly more complicated method to design for wind loading. These combined effects of the code changes will impact both engineers and the resulting building designs in all parts of the country.
by Desirée Page Ratley.
M.Eng.
Želtuchin, Alexander. "Orthographic codes and code-switching : a study in 16th century Swedish orthography". Doctoral thesis, Stockholms universitet, Institutionen för nordiska språk, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-82682.
Pełny tekst źródłaParttimaa, T. (Tuomas). "Test suite optimisation based on response status codes and measured code coverage". Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305201293.
Full text source
A software test suite often consists of thousands of different test cases. Executing the test suite can waste valuable time and resources if the suite is not optimised. The test suite should be optimised so that it contains few test cases, in order to avoid the unnecessary execution of irrelevant test cases. This thesis focuses on attempts to optimise a commercially available fuzz test suite for Hypertext Transfer Protocol (HTTP) servers. A test environment was created for the optimisation, consisting of the fuzz test suite, five HTTP servers used as test subjects, and a code coverage measurement tool. A predefined test suite was used in test runs executed against the test subjects, during which code coverage was measured on the test subjects. Three different test suite optimisation algorithms were implemented in this thesis. The original test suite was optimised by applying the algorithms to the results of the test runs. New test runs were executed with the optimised subset suites, during which code coverage was again measured on the test subjects. All code coverage measurement results are presented and analysed. Based on the coverage analysis, the functionality of the test suite optimisation algorithms was evaluated, and the following research results were obtained. The code coverage analysis confirmed that a change in the response messages indicates which test cases actually exercise the test subject. The analysis also gave reasonable confidence that an optimised subset suite can achieve the same code coverage as the original test suite.
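The selection idea at the heart of this optimisation, keeping only test cases whose server responses (and therefore, presumably, execution paths) differ, can be sketched in a few lines of Java. The sketch below greedily keeps one representative test case per observed HTTP status code; the record and method names are invented, and this is just one plausible reduction strategy, not the specific algorithms implemented in the thesis.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** One executed fuzz test case together with the HTTP status code the server answered with. */
record TestResult(String testCaseId, int responseStatusCode) {}

public class SuiteReducer {
    /**
     * Keeps the first test case seen for each distinct response status code.
     * The working assumption (checked against coverage in the thesis) is that
     * test cases provoking the same response tend to exercise similar behaviour.
     */
    static List<String> reduceByStatusCode(List<TestResult> results) {
        Set<Integer> seenStatusCodes = new HashSet<>();
        List<String> keptTestCases = new ArrayList<>();
        for (TestResult result : results) {
            if (seenStatusCodes.add(result.responseStatusCode())) {
                keptTestCases.add(result.testCaseId());
            }
        }
        return keptTestCases;
    }

    public static void main(String[] args) {
        List<TestResult> run = List.of(
                new TestResult("fuzz-0001", 400),
                new TestResult("fuzz-0002", 400),
                new TestResult("fuzz-0003", 500),
                new TestResult("fuzz-0004", 200));
        System.out.println(reduceByStatusCode(run));  // [fuzz-0001, fuzz-0003, fuzz-0004]
    }
}
```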
Ward, Richard Peter. "Evolution of loosely synchronized spreading codes in code-division multiple-access systems". Thesis, University of South Wales, 2008. https://pure.southwales.ac.uk/en/studentthesis/evolution-of-loosely-synchronized-spreading-codes-in-codedivision-multipleaccess-systems(2bf79128-319c-49c1-8246-91ee0d3533c4).html.
Pełny tekst źródłaLi, Pei. "Unified system of code transformation and execution for heterogeneous multi-core architectures". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0441/document.
Full text source
Heterogeneous architectures have been widely used in the domain of high performance computing. However, developing applications on heterogeneous architectures is time consuming and error-prone, because going from a single accelerator to multiple ones requires dealing with potentially non-uniform domain decomposition, inter-accelerator data movements, and dynamic load balancing. The aim of this thesis is to propose a parallel programming solution for novice developers, to ease the complex coding process and guarantee the quality of code. We highlighted and analysed the shortcomings of existing solutions and proposed a new programming tool called STEPOCL, along with a new domain-specific language designed to simplify the development of an application for heterogeneous architectures. We evaluated both the performance and the usefulness of STEPOCL. The results show that: (i) the performance of an application written with STEPOCL scales linearly with the number of accelerators, (ii) the performance of an application written using STEPOCL competes with a handwritten version, (iii) larger workloads run on multiple devices even when they do not fit in the memory of a single device, and (iv) thanks to STEPOCL, the number of lines of code required to write an application for multiple accelerators is roughly divided by ten.
Zheltukhin, Alexander. "Orthographic codes and code-switching : a study in 16th century Swedish orthography /". Stockholm : Almqvist & Wiksell, 1996. http://catalogue.bnf.fr/ark:/12148/cb37164838m.
Full text source
Moriggl, Irene. "Intelligent Code Inspection using Static Code Features : An approach for Java". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4149.
Pełny tekst źródłaOIZUMI, WILLIAN NALEPA. "SYNTHESIS OF CODE ANOMALIES: REVEALING DESIGN PROBLEMS IN THE SOURCE CODE". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25718@1.
Full text source
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
PROGRAMA DE EXCELENCIA ACADEMICA
Design problems affect almost every software system, making its maintenance expensive and prohibitive. Since design documents are rarely available, developers often need to identify design problems from the source code. However, identifying design problems is not a trivial task, for several reasons. For example, the materialization of a design problem tends to be scattered across several anomalous code elements in the implementation. Unfortunately, previous work has wrongly assumed that each individual code anomaly (popularly known as a code smell) can be used as an accurate indicator of a design problem. However, recent empirical evidence shows that several types of design problems are often related to a set of inter-related code anomalies, known as code-anomaly agglomerations. In this context, this dissertation proposes a new technique for the synthesis of code-anomaly agglomerations. The technique aims to: (i) search for varied forms of agglomeration in a program, and (ii) summarize different types of information about each agglomeration. The evaluation of the synthesis technique was based on the analysis of several industrial software projects and on a controlled experiment with professional developers. Both studies suggest that the use of the synthesis technique helped developers identify more relevant design problems than the use of conventional techniques.
Design problems affect almost all software projects and make their maintenance expensive and impeditive. As design documents are rarely available, programmers often need to identify design problems from the source code. However, the identification of design problems is not a trivial task for several reasons. For instance, the reification of a design problem tends to be scattered through several anomalous code elements in the implementation. Unfortunately, previous work has wrongly assumed that each single code anomaly - popularly known as code smell - can be used as an accurate indicator of a design problem. There is growing empirical evidence showing that several types of design problems are often related to a set of inter-related code anomalies, the so-called code-anomaly agglomerations, rather than individual anomalies only. In this context, this dissertation proposes a new technique for the synthesis of code-anomaly agglomerations. The technique is intended to: (i) search for varied forms of agglomeration in a program, and (ii) summarize different types of information about each agglomeration. The evaluation of the synthesis technique was based on the analysis of several industry-strength software projects and a controlled experiment with professional programmers. Both studies suggest the use of the synthesis technique helped programmers to identify more relevant design problems than the use of conventional techniques.
Ragkhitwetsagul, Chaiyong. "Code similarity and clone search in large-scale source code data". Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10057538/.
Full text source
Lomüller, Victor. "Générateur de code multi-temps et optimisation de code multi-objectifs". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM050/document.
Full text source
Compilation is an essential step in creating efficient applications. This step allows the use of high-level, target-independent languages while maintaining good performance. However, many obstacles prevent compilers from fully optimizing applications. For static compilers, the major obstacle is poor knowledge of the execution context, particularly knowledge of the architecture and the data. This knowledge becomes progressively available during the application's life cycle. Compilers have progressively integrated dynamic code generation techniques in order to use this knowledge. However, those techniques usually focus on improving the use of hardware capabilities and don't take data into account. In this thesis, we investigate the use of data in the process of optimizing applications on Nvidia GPUs. We present a method that uses different moments in the application life cycle to create adaptive libraries able to take data size into account. Those libraries can therefore provide kernels that are better adapted to the input. With the GEMM algorithm, the method is able to provide gains of up to 100% while avoiding code size explosion. The thesis also investigates the gains and costs of runtime code generation from the point of view of execution speed, memory footprint and energy consumption. We present and study two lightweight runtime code generation approaches that can specialize code. We show that those two approaches can obtain gains comparable to, and even better than, LLVM, but at a lower cost.
CAMBIER, JEAN-PIERRE. "Code noir, code de nuremberg, code genetique : de l'esclavage a la nationalisation des corps. essai de decodage du biopouvoir". Toulouse 2, 1993. http://www.theses.fr/1993TOU2A027.
Full text source
Formerly slaves "by nature" and blacks "intermediaries between man and monkey" (Tocqueville), presently "human guinea pigs" situated "between man and the experimental animals" (Pr Milhaud): philosophy, law and ethics have very rarely risen above the demands of their time, whether political, colonial or scientific. Could the present difficulty of French law in guaranteeing the security of the human body be due to the neglect of natural law? Must we re-root human rights in a modernized natural right and practise "biopolitics" (Barret-Kriegel)? However, as an agent of "biopower", medical science often reduces man to a manipulable object, correcting the errors of a nature whose standards and aims it claims to interpret, going as far as contemplating access to the genetic code. In realizing biopolitics, the welfare state certainly aims to guarantee a right to life, but in the midst of an "insured" society which has become "sovereign master of death" (F. Ewald). A tragic drift, which we see at work in the French law of 20 December 1988 regulating the experimental use of the human body, regardless of the free consent of the subjects. We challenge Ewald's optimism, founded on the erroneous assimilation of the sociopolitical norm to the biological norm. Michel Foucault rightly fears a dramatic contemporary resurgence of the archaic "right of death", generating new forms of racism and social exclusion.
Trumble, Brandon. "Using Code Inspection, Code Modification, and Machine Learning to prevent SQL Injection". Thesis, Kutztown University of Pennsylvania, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1590429.
Full text source
Modern day databases store invaluable information about everyone. This information is assumed to be safe, secure, and confidential. However, as technology has become more widespread, more people are able to abuse and exploit this information for personal gain. While the ideal method to combat this issue is the enhanced education of developers, that still leaves a large amount of time during which this information is insecure. This thesis outlines two potential solutions to the problem that SQL injection presents in the context of databases. The first modifies an existing code base to use safe prepared statements rather than unsafe standard queries. The second is a neural network application that sits between the user-facing part of a web application and the application itself. The neural network is designed to analyze data being submitted by a user and detect attempts at SQL injection.
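The first solution rests on a standard technique: replacing string-concatenated SQL with parameterized (prepared) statements, so that user input is always bound as data rather than spliced into the query text. A minimal JDBC illustration of that before/after transformation follows; the table and column names are made up, and this is not code from the thesis.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable: user input is concatenated directly into the query text,
    // so an input like  ' OR '1'='1  changes the meaning of the SQL statement.
    static ResultSet findUserUnsafe(Connection db, String username) throws SQLException {
        Statement stmt = db.createStatement();
        return stmt.executeQuery("SELECT id, email FROM users WHERE name = '" + username + "'");
    }

    // Safe: the query shape is fixed and the input is bound as a parameter,
    // so the driver treats it purely as a value, never as SQL.
    static ResultSet findUserSafe(Connection db, String username) throws SQLException {
        PreparedStatement stmt = db.prepareStatement("SELECT id, email FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }
}
```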
Boije, Niklas, and Kristoffer Borg. "Semi-automatic code-to-code transformer for Java : Transformation of library calls". Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129861.
Pełny tekst źródłaЯворська, Христина Володимирівна, i Khrystyna Yavorska. "Методи і засоби проектування платформ “Low code/No code” в комп’ютерних системах". Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/36706.
Full text source
The qualification work is devoted to research into methods and tools for the development of such environments, which help create products faster and make them more accessible. An analysis of LC/NC development environments and their classification is carried out. The technologies and means of platform design are also studied, and the features of software development are analysed. A formalization of objects and processes is proposed. Prototypes of blocks are developed and arithmetic operations are described. An algorithm for the operation of the if-else operator and the for loop has been developed. The Low code/No code platform was designed and implemented on the basis of the defined functional requirements, algorithms of the application's operation were built, and its verification was provided by various types of software testing.
Tseng, Da-Wen, and 曾大文. "Using MMX code within MPEG Audio Codec". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/05116004993667118681.
National Taiwan University
Graduate Institute of Electrical Engineering
90
The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC in charge of the development of standards for the coded representation of digital audio and video. On the audio side, WAVE files stored on CD take up a great deal of disk space, so the MPEG audio encoder compresses .wav files into .mp2 or .mp3 files: it simply discards the sound components that humans cannot hear, making the files smaller so that more of them can be stored. In this codec, at bit rates from 64 kb/s up to 192 kb/s per channel, MPEG Audio Layer II can provide sound quality that is competitive with any perceptual coding scheme using the same bit rate. Moreover, MMX code can be used in the filter bank of the audio codec, because the filter bank is similar to the DCT and Intel's MMX technology is designed specifically to accelerate multimedia and communications applications. In this thesis, we use MMX to accelerate the MPEG audio codec, covering not only encoding but also decoding.
Das, S. S. "Code Obfuscation using Code Splitting with Self-modifying Code". Thesis, 2014. http://ethesis.nitrkl.ac.in/5736/1/212CS3370-3.pdf.
Full text source