
Dissertations / Theses on the topic 'IS CODE'


Consult the top 50 dissertations / theses for your research on the topic 'IS CODE.'


1

Kühn, Stefan. "Organic codes and their identification : is the histone code a true organic code." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86673.

Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
Codes are ubiquitous in culture, and, by implication, in nature. Code biology is the study of these codes. However, the term 'code' has assumed a variety of meanings, sowing confusion and cynicism. The first aim of this study is therefore to define what an organic code is. Following from this, I establish a set of criteria that a putative code has to conform to in order to be recognised as a true code. I then offer an information-theoretical perspective on how organic codes present a viable method of dealing with biological information, as a logical extension thereof. Once this framework has been established, I proceed to review several of the current organic codes in an attempt to demonstrate how the definition of and criteria for identifying an organic code may be used to separate the wheat from the chaff. I then introduce the 'regulatory code' in an effort to demonstrate how the code-biological framework may be applied to novel codes to test their suitability as organic codes and whether they warrant further investigation. Despite the prevalence of codes in the biological world, only a few have been definitely established as organic codes. I therefore turn to the main aim of this study, which is to cement the status of the histone code as a true organic code in the sense of the genetic or signal transduction codes. I provide a full review and analysis of the major histone post-translational modifications, their biological effects, and which protein domains are responsible for the translation between these two phenomena. Subsequently I show how these elements can be reliably mapped onto the theoretical framework of code biology. Lastly I discuss the validity of an algorithm-based approach to identifying organic codes developed by Görlich and Dittrich. Unfortunately, the current state of this algorithm and the operationalised definition of an organic code is such that identifying codes, without the necessary investigation by a scientist with a biochemical background, is currently not viable. This study therefore demonstrates the utility of code biology as a theoretical framework that provides a synthesis between molecular biology and information theory. It cements the status of the histone code as a true organic code, and criticises Görlich and Dittrich's method for finding codes by an algorithm based on reaction networks and contingency criteria.
2

Borchert, Thomas. "Code Profiling : Static Code Analysis." Thesis, Karlstad University, Faculty of Economic Sciences, Communication and IT, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-1563.

Abstract:

Capturing the quality of software, and detecting sections within it that deserve further scrutiny, is of high interest for industry as well as for education. Project managers request quality reports in order to evaluate the current status and initiate appropriate improvement actions, and teachers need to identify students who require extra attention and help with certain programming aspects. By means of software measurement, software characteristics can be quantified and the produced measures analyzed to gain an understanding of the underlying software quality.

In this study, the technique of code profiling (the activity of creating a summary of distinctive characteristics of software code) was inspected, formalized and applied to a sample group of 19 industry and 37 student programs. When software projects are analyzed by means of software measurements, a considerable amount of data is produced. The task is to organize the data and draw meaningful information from the measures produced, quickly and without great expense.

The results of this study indicated that code profiling can be a useful technique for quick program comparisons and continuous quality observations with several application scenarios in both industry and education.
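As a rough illustration of the idea of a code profile, a summary of distinctive characteristics of a program's code, the sketch below computes a handful of simple line-based metrics in Python. The metrics and the input file name are assumptions chosen for the demo, not the measurement suite used in the thesis:

```python
import re

def profile_code(source: str) -> dict:
    """Summarize distinctive characteristics of a piece of source code.
    The metrics here are illustrative choices, not a standard suite."""
    lines = source.splitlines()
    code_lines = [ln for ln in lines if ln.strip()]
    comments = [ln for ln in code_lines
                if ln.strip().startswith(("//", "/*", "*", "#"))]
    # Crude count of decision points as a stand-in for complexity metrics.
    decisions = len(re.findall(r"\b(?:if|for|while|case|catch)\b", source))
    return {
        "total_lines": len(lines),
        "code_lines": len(code_lines),
        "comment_ratio": round(len(comments) / max(len(code_lines), 1), 2),
        "avg_line_length": round(sum(map(len, code_lines))
                                 / max(len(code_lines), 1), 1),
        "decision_points": decisions,
    }

# Profiles of two programs can then be compared side by side,
# which is the essence of profiling-based comparison.
with open("student_program.c") as f:   # hypothetical input file
    print(profile_code(f.read()))
```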

3

Ketkar, Avanti Ulhas. "Code constructions and code families for nonbinary quantum stabilizer code." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2743.

Abstract:
Stabilizer codes form a special class of quantum error-correcting codes. This thesis studies nonbinary quantum stabilizer codes. While a great deal of work has been done on binary stabilizer codes, their nonbinary counterparts have received much less attention. Various results on binary stabilizer codes, such as code families and general code constructions, are generalized here to the nonbinary case. The lower bound on the minimum distance of a code is simply the minimum distance of the currently best known code, and the focus of this research is to improve these lower bounds. To achieve this goal, various existing quantum codes with good minimum distance are studied. Some new families of nonbinary stabilizer codes, such as quantum BCH codes, are constructed, and different ways of constructing new codes from existing ones are found. Together, these constructions help improve the lower bounds.
4

Kim, Han Jo. "Improving turbo codes through code design and hybrid ARQ." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0012169.

5

Serpa, Matheus da Silva. "Source code optimizations to reduce multi core and many core performance bottlenecks." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/183139.

Abstract:
Nowadays, several different architectures are available not only to industry but also to final consumers. Traditional multi-core processors, GPUs, accelerators such as the Xeon Phi, and even energy-efficiency-driven processors such as the ARM family present very different architectural characteristics. This wide range of characteristics presents a challenge for application developers, who must deal with different instruction sets, memory hierarchies, and even different programming paradigms when programming for these architectures. To optimize an application, it is important to have a deep understanding of how it behaves on different architectures. Related work offers a wide variety of solutions: most of it focuses on improving only memory performance, while other work addresses load balancing, vectorization, and thread and data mapping, but performs them separately, losing optimization opportunities. In this master's thesis, we propose several optimization techniques to improve the performance of a real-world seismic exploration application provided by Petrobras, a multinational corporation in the petroleum industry. In our experiments, we show that loop interchange is a useful technique to improve the performance of different cache memory levels, improving performance by up to 5.3x and 3.9x on the Intel Broadwell and Intel Knights Landing architectures, respectively. By changing the code to enable vectorization, performance was increased by up to 1.4x and 6.5x. Load balancing improved performance by up to 1.1x on Knights Landing. Thread and data mapping techniques were also evaluated, with a performance improvement of up to 1.6x and 4.4x. We also compared the best version on each architecture and showed that we were able to improve the performance of Broadwell by 22.7x and Knights Landing by 56.7x compared to a naive version, but, in the end, Broadwell was 1.2x faster than Knights Landing.
6

Panagos, Adam G., and Kurt Kosbar. "A METHOD FOR FINDING BETTER SPACE-TIME CODES FOR MIMO CHANNELS." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604782.

Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Multiple-input multiple output (MIMO) communication systems can have dramatically higher throughput than single-input, single-output systems. Unfortunately, it can be difficult to find the space-time codes these systems need to achieve their potential. Previously published results located good codes by minimizing the maximum correlation between transmitted signals. This paper shows how this min-max method may produce sub-optimal codes. A new method which sorts codes based on the union bound of pairwise error probabilities is presented. This new technique can identify superior MIMO codes, providing higher system throughput without increasing the transmitted power or bandwidth requirements.
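A minimal sketch of the kind of union-bound ranking the abstract describes, assuming the standard Chernoff-style pairwise-error-probability bound for quasi-static Rayleigh fading (the classic rank/determinant criterion); the paper's exact bound, codebook structure and search procedure may differ, and the random ±1 candidate codebooks below are purely for illustration:

```python
import itertools
import numpy as np

def pep_bound(Ci, Cj, snr, n_rx):
    """Chernoff-style bound on the pairwise error probability of mistaking
    space-time codeword Ci for Cj (n_tx x T matrices) over a quasi-static
    Rayleigh channel: prod_k (1 + snr/4 * lambda_k)^(-n_rx), where the
    lambda_k are eigenvalues of the codeword-difference matrix product."""
    D = Ci - Cj
    eig = np.linalg.eigvalsh(D @ D.conj().T)
    return float(np.prod(1.0 / (1.0 + snr / 4.0 * eig)) ** n_rx)

def union_bound(codebook, snr=10.0, n_rx=2):
    """Union bound on word error probability: sum of PEP bounds over all
    ordered pairs of distinct codewords (up to a constant factor)."""
    return sum(pep_bound(ci, cj, snr, n_rx)
               for ci, cj in itertools.permutations(codebook, 2))

# Rank randomly generated candidate codebooks and keep the one with the
# smallest union bound, i.e. the best code under this criterion.
rng = np.random.default_rng(0)
candidates = [[np.sign(rng.standard_normal((2, 2))) for _ in range(4)]
              for _ in range(100)]
best = min(candidates, key=union_bound)
```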
7

Nielsen, Sebastian, and David Tollemark. "Code readability: Code comments OR self-documenting code : How does the choice affect the readability of the code?" Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12739.

Abstract:
Context: Code readability is something every software developer tackles every day. For efficient maintenance and learning of a program, the documentation needs to be of high quality. Objectives: This thesis attempts to show the general perspective software developers hold: we investigate which method is preferred and why, and we also look at readability and at the similarities and differences between students and IT professionals. Conclusion: We collected data from several sources: first from a literature review in which different papers within the field are presented, and second from two separate surveys conducted with students and IT professionals. From our results we found that documentation is something software developers rely on heavily, and that the need for extensive documentation varies with working experience.
8

Reddy, Satischandra B. "Code optimization with stack oriented intermediate code." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 1985. http://digitalcommons.auctr.edu/dissertations/2629.

9

Tixier, Audrey. "Reconnaissance de codes correcteurs." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066554/document.

Abstract:
In this PhD thesis, we focus on the code reconstruction problem. This problem mainly arises in a non-cooperative context, when a communication consisting of noisy codewords stemming from an unknown code is observed and its content has to be retrieved by recovering the code that was used and decoding the noisy codewords with it. We consider three possible scenarios and suggest an original method for each. In the first, we assume that the code in use is a turbo-code and propose a method for reconstructing the associated interleaver (the other components of the turbo-code can easily be recovered by existing methods). The interleaver is reconstructed step by step, searching for the most probable index at each step and computing the relevant probabilities with the help of the BCJR decoding algorithm. In the second, we tackle the problem of reconstructing LDPC codes by suggesting a new method for finding a list of parity-check equations of small weight that generalizes and improves upon all existing methods. Finally, in the last scenario we reconstruct an unknown interleaved convolutional code. Here we use the previous method to find a list of parity-check equations for this code; then, by introducing a graph representing how these parity-check equations intersect, we recover the interleaver and the convolutional code at the same time.
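The core of the second scenario, finding parity-check equations satisfied by observed codewords, can be illustrated in a deliberately simplified noiseless setting: stack the observed codewords as rows of a binary matrix and compute its nullspace over GF(2); every nullspace vector is a parity-check equation. This sketch is my own toy reduction of the problem and ignores the noise that the thesis's method is actually designed to handle:

```python
import numpy as np

def gf2_nullspace(M):
    """Nullspace over GF(2) of a binary matrix M (rows = codewords).
    Returns a list of parity-check vectors h with M @ h = 0 (mod 2)."""
    M = M.copy() % 2
    rows, cols = M.shape
    pivot_cols = []
    r = 0
    for c in range(cols):
        # Find a pivot for column c at or below row r.
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        # Eliminate column c from every other row (reduced row echelon).
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        pivot_cols.append(c)
        r += 1
        if r == rows:
            break
    free_cols = [c for c in range(cols) if c not in pivot_cols]
    basis = []
    for f in free_cols:
        h = np.zeros(cols, dtype=np.uint8)
        h[f] = 1
        # Back-substitute the pivot variables (over GF(2), -1 == 1).
        for i, c in enumerate(pivot_cols):
            h[c] = M[i, f]
        basis.append(h)
    return basis

# Stacking the 16 noise-free codewords of the [7,4] Hamming code as rows
# would yield its three parity-check equations as the nullspace basis.
```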
10

Muller, Wayne. "East City Precinct Design Code: Redevelopment through form-based codes." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/12952.

Abstract:
Includes bibliographical references.
This thesis confines itself to a consideration of urban development opportunity in the East City Precinct, through an understanding of its former historical character and memory, which can be implemented through form-based codes. It locates the design process in the sub-regional context and puts forward a notional spatial proposal for the physical area of the East City Precinct and its surrounds. The application of theory is tested at precinct level, and emphasis remains firmly on the public elements ordering the spatial structure. With all these considerations, this dissertation presents a piece of the history of District Six and the importance of memory in relation to the East City. This contested site of memory and heritage informs the area's contextual development amid the often-essentialising multiculturalism of the 'new South Africa'. In turn, an understanding of District Six's urban quality frames the intricacies of a restitution and redevelopment plan. It also illustrates the genuine uniqueness of its principles of urbanism, in contrast to market-oriented urban development, which reproduces spaces of social fragmentation, exclusion and inequality. Indeed, the vision for the East City concerns long-term urban sustainability: an investment in a city of fluid spaces, a city of difference and meaning. This dissertation contends that there is a real role for urban and social sustainability in the redevelopment potential of the study area, with its historical, social, cultural and symbolic significance. It therefore outlines the key elements and principles of a development framework prepared for the study area and discusses the prospects for urban and social sustainability. This will inform where and how to apply form-based codes within the East City context.
11

Rodriguez, Fernandez Carlos Gustavo. "Machine learning quantum error correction codes : learning the toric code /." São Paulo, 2018. http://hdl.handle.net/11449/180319.

Abstract:
Advisor: Mario Leandro Aolita
Committee member: Alexandre Reily Rocha
Committee member: Juan Felipe Carrasquilla
We use supervised learning methods to study error decoding in toric codes of different sizes. We study multiple error models and obtain figures of the decoding efficacy as a function of the single-qubit error rate. We also comment on how the size of the decoding neural networks and their training time scale with the size of the toric code.
Master's
12

Yamazato, Takaya, Iwao Sasase, and Shinsaku Mori. "Interlace Coding System Involving Data Compression Code, Data Encryption Code and Error Correcting Code." IEICE, 1992. http://hdl.handle.net/2237/7844.

13

Tan, Peter K. S. "A translator from RISC code into MC68020 code." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/14289.

14

Kelly, Patrick. "Demon Code." Digital Commons at Loyola Marymount University and Loyola Law School, 2021. https://digitalcommons.lmu.edu/etd/988.

Abstract:
Demon Code (One-Hour, Sci-Fi) - After a teenaged hacker summons a sardonic demon through her computer, the two band together to hunt down legions of escaped hellspawn in order to restore her mother's sanity.
15

Hawk, Zoe Alaina. "Dress code." Thesis, University of Iowa, 2011. https://ir.uiowa.edu/etd/980.

16

Hagman, Tobias. "Clean Code vs Dirty Code : Ett fältexperiment för att förklara hur Clean Code påverkar kodförståelse." Thesis, Högskolan Dalarna, Informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:du-18698.

Abstract:
Big and complex codebases with inadequate understandability are becoming more common among companies today. Inadequate understandability means more time is required to maintain the code, which means increased costs for a company. According to some, Clean Code is the solution to this problem: a collection of guidelines and principles for writing code that is easy to understand and maintain. A gap of knowledge was identified, as there is little empirical data on how Clean Code affects understandability. This led to the following question: How is understandability affected when modifying source code that has been refactored according to the Clean Code principles regarding names and functions? To investigate how Clean Code affects understandability, a field experiment was conducted in collaboration with the company CGM Lab Scandinavia in Borlänge, in which data in the form of time taken and experienced understandability was collected and analyzed. The results of this study do not show any clear signs of immediate improvement or worsening of understandability: even though all participants preferred Clean Code, this did not show in the times measured in the experiment. This leads to the conclusion that the effects of Clean Code may not be immediate, since developers have not yet been able to adapt to it and therefore cannot utilize its benefits fully. The study gives a hint of the potential Clean Code has to improve understandability.
17

Ly, Kevin. "Normalizer: Augmenting Code Clone Detectors using Source Code Normalization." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1722.

Abstract:
Code clones are duplicate fragments of code that perform the same task. As software code bases increase in size, the number of code clones also tends to increase. These code clones, possibly created through copy-and-paste methods or unintentional duplication of effort, increase maintenance cost over the lifespan of the software. Code clone detection tools exist to identify clones where a human search would prove infeasible; however, the quality of the clones found may vary. I demonstrate that the performance of such tools can be improved by normalizing the source code before usage. I developed Normalizer, a tool to transform C source code into normalized source code written as consistently as possible. By maintaining the code's function while enforcing a strict format, the variability of the programmer's style is taken out; thus, code clones may be easier for tools to detect regardless of how the code was written. Reordering statements, removing useless code, and renaming identifiers are used to achieve normalized code. Normalizer was used to show that more clones can be found in Introduction to Computer Networks assignments by normalizing the source code versus the original source code, using a small variety of code clone detection tools.
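One of the normalizations named above, renaming identifiers, is easy to sketch: map every identifier to a canonical name in order of first appearance, so that two clones differing only in variable names become textually identical. This toy version operates on a token-level regex rather than a real C parser, so it is only a rough approximation of what a tool like Normalizer must do:

```python
import re

C_KEYWORDS = {"int", "char", "float", "double", "void", "if", "else",
              "for", "while", "return", "struct", "sizeof", "break"}

def normalize_identifiers(source: str) -> str:
    """Rename identifiers to v1, v2, ... in order of first appearance,
    leaving keywords untouched, so clones that differ only in naming
    collapse to the same text."""
    mapping = {}

    def rename(match):
        name = match.group(0)
        if name in C_KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"v{len(mapping) + 1}"
        return mapping[name]

    return re.sub(r"\b[A-Za-z_]\w*\b", rename, source)

# Both fragments normalize to the same text: "int v1 = v2 + v3;"
a = "int total = x + y;"
b = "int sum = left + right;"
assert normalize_identifiers(a) == normalize_identifiers(b)
```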
18

Ljung, Kevin. "Clean Code in Practice : Developers perception of clean code." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21443.

Abstract:
Context. There is a need for developers to write clean code that adheres to a high quality standard, and not to introduce technical debt and code smells. From a business perspective, developers who introduce technical debt make the code more difficult to maintain, meaning that the cost of the project increases. Objectives. The main objective of this study is to gain an understanding of the perception developers have of clean code and how they use it in practice. There is not much information about how clean code is perceived by developers and applied in practice, and this thesis extends the information about those two areas in an effort to understand developers' perception of clean code in practice and what they think about it. Realization (Method). To understand the state of the art in the area of clean code, we first performed a systematic literature review using snowballing. To delve into developers' perception of clean code and how it is used in practice, we developed a questionnaire survey, sent it to developers within companies, and shared it via social networks. We ask whether developers believe that clean code eases the process of reading, modifying, reusing, or maintaining code. We also investigate whether developers write clean code initially, refactor it to become clean code, or do neither. Finally, we ask developers in practice which clean code principles they agree or disagree with; asking this helps identify which principles developers consider helpful and which they do not. Results. The results of the investigation show that developers strongly believe in clean code and that it positively affects reading, modifying, reusing, and maintaining code. Also, developers mostly do not write clean code initially but rather refactor unclean code into clean code; only a small portion write clean code initially, some do what suits the situation, and some do neither. The last result is that developers agree with most of the clean code principles listed in the questionnaire survey and discard only a few. Conclusions. From the first research question, we know that developers strongly believe that clean code makes code more readable, understandable, modifiable, or reusable, and that they check that code is readable using code reviews, peer reviews, or pull requests. Regarding the second research question, we know that developers mostly refactor unclean code rather than writing clean code initially: to write clean code initially, a developer must have a solid understanding of the problem and its obstacles in advance, and a developer will not always know in advance what the code should look like. The last research question showed that most developers agree with most of the clean code principles and that only a small portion disagree with some of them. Static code analysis and code quality gates can help ensure that developers follow these clean code practices and principles.
19

Lawson, John. "Duty specific code driven design methodology : a model for better codes." Thesis, University of Aberdeen, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274818.

Abstract:
The thesis examines Engineering Design and its methodology in general, before examining and comparing the principal differences between Inventive Design and Development Design. The latter branch of design is described in detail, recognising that it is executed in accordance with recognised specifications, design codes and standards. Design Codes and Standards are analysed in terms of the service they provide to the professional design engineer, who will normally work under the procedures and accepted standards of a Professional Design House. Professional design is an important part of all disciplines in the engineering profession. Such design work is executed by specialists, invariably guided in their work by recognised Specifications, Design Standards or Codes of Practice published by reputable bodies, which appoint working parties or independent committees to write and maintain these documents. Design Standards and Codes of Practice are at best unclear and at worst confusing, if not downright contradictory within themselves. Usually more than one such Standard or Code is available to the professional design engineer, often based on geographical location: BSI in the UK, DIN or ISO in Europe, and perhaps ASME/ANSI in the USA. There are of course several others. The professional design process is analysed and described in order to demonstrate the commercial and project constraints associated with professional development design. The model usually adopted in the preparation and presentation of these codes and standards is critiqued, and a better model is proposed for standard adoption.
20

Veluri, Subrahmanya Pavan Kumar. "Code Verification and Numerical Accuracy Assessment for Finite Volume CFD Codes." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28715.

Abstract:
A detailed code verification study of an unstructured finite volume Computational Fluid Dynamics (CFD) code is performed. The Method of Manufactured Solutions is used to generate exact solutions for the Euler and Navier-Stokes equations to verify the correctness of the code through order of accuracy testing. The verification testing is performed on different mesh types which include triangular and quadrilateral elements in 2D and tetrahedral, prismatic, and hexahedral elements in 3D. The requirements of systematic mesh refinement are discussed, particularly in regards to unstructured meshes. Different code options verified include the baseline steady state governing equations, transport models, turbulence models, boundary conditions and unsteady flows. Coding mistakes, algorithm inconsistencies, and mesh quality sensitivities uncovered during the code verification are presented. In recent years, there has been significant work on the development of algorithms for the compressible Navier-Stokes equations on unstructured grids. One of the challenging tasks during the development of these algorithms is the formulation of consistent and accurate diffusion operators. The robustness and accuracy of diffusion operators depends on mesh quality. A survey of diffusion operators for compressible CFD solvers is conducted to understand different formulation procedures for diffusion fluxes. A patch-wise version of the Method of Manufactured Solutions is used to test the accuracy of selected diffusion operators. This testing of diffusion operators is limited to cell-centered finite volume methods which are formally second order accurate. These diffusion operators are tested and compared on different 2D mesh topologies to study the effect of mesh quality (stretching, aspect ratio, skewness, and curvature) on their numerical accuracy. Quantities examined include the numerical approximation errors and order of accuracy associated with face gradient reconstruction. From the analysis, defects in some of the numerical formulations are identified along with some robust and accurate diffusion operators.
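The order-of-accuracy testing at the heart of this kind of verification can be shown in miniature. The sketch below manufactures a solution u(x) = sin(x), applies a second-order central-difference approximation of u'', and confirms that the observed order of accuracy, p = log(e_h / e_{h/2}) / log(2), approaches 2 under systematic mesh refinement. A real CFD verification exercises full Euler/Navier-Stokes discretizations the same way; this is only the one-dimensional skeleton of the idea:

```python
import numpy as np

def discretization_error(n):
    """Error of the central-difference second derivative applied to a
    manufactured solution u(x) = sin(x) on [0, pi], measured against
    the exact u''(x) = -sin(x) at the interior points."""
    x = np.linspace(0.0, np.pi, n + 1)
    h = x[1] - x[0]
    u = np.sin(x)
    u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
    exact = -np.sin(x[1:-1])
    return np.max(np.abs(u_xx - exact))   # discrete L-infinity norm

# Systematic refinement: halve h and watch the observed order approach 2,
# the formal order of the central-difference scheme.
errors = [discretization_error(n) for n in (16, 32, 64, 128)]
for e_coarse, e_fine in zip(errors, errors[1:]):
    print("observed order:", np.log(e_coarse / e_fine) / np.log(2.0))
```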
Ph. D.
21

Firmanto, Welly T. "Code combining of Reed-Muller codes in an indoor wireless environment." Dissertation, Department of Systems and Computer Engineering, Carleton University, Ottawa, 1995.

22

Jie, Cao, and Xie Qiu-cheng. "THE SEARCHING METHOD OF QUASI-OPTIMUM GROUP SYNC CODES ON THE SUBSET OF PN SEQUENCES." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613438.

Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
As code length increases, the search for optimum group sync codes becomes more and more difficult, even impossible. This paper gives a method for searching for quasi-optimum group sync codes on a small subset of PN sequences: the CVT-TAIL SEARCHING METHOD and the PREFIX-SUFFIX SEARCHING METHOD. We have searched out quasi-optimum group sync codes of lengths N=32-63 by this method and compared them with the corresponding optimum group sync codes of lengths N=32-54; they are very close. The total searching time is only several seconds. This method balances the trade-off among error sync probability, code length and searching time, so it is a good and practicable searching method for long codes.
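For illustration, PN sequences of the kind searched here are maximal-length LFSR (m-) sequences. The sketch below generates the length-63 m-sequence from the primitive polynomial x^6 + x + 1 and scores sync-word candidates drawn from it by their worst aperiodic autocorrelation sidelobe. The specific polynomial, window length and sidelobe criterion are my assumptions for the demo, not the paper's exact search rules:

```python
def m_sequence(taps=(0, 1), degree=6, seed=None):
    """Maximal-length PN sequence from a Fibonacci LFSR.
    For x^6 + x + 1 the recurrence is a[n+6] = a[n+1] ^ a[n],
    giving period 2^6 - 1 = 63."""
    state = list(seed or [1] + [0] * (degree - 1))
    out = []
    for _ in range(2 ** degree - 1):
        out.append(state[0])
        fb = state[taps[0]] ^ state[taps[1]]
        state = state[1:] + [fb]
    return out

def max_sidelobe(word):
    """Largest |aperiodic autocorrelation| at nonzero shift for the
    bipolar (+1/-1) version of the word: a common sync-quality metric."""
    w = [2 * b - 1 for b in word]
    return max(abs(sum(w[i] * w[i + s] for i in range(len(w) - s)))
               for s in range(1, len(w)))

# Score every length-15 window of the PN sequence as a sync-word
# candidate and keep the one with the smallest worst-case sidelobe.
pn = m_sequence()
windows = [pn[i:i + 15] for i in range(len(pn) - 14)]
best = min(windows, key=max_sidelobe)
print(best, max_sidelobe(best))
```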
23

Tysell, Sundkvist Leif, and Emil Persson. "Code Styling and its Effects on Code Readability and Interpretation." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209576.

Abstract:
Code readability has a considerable effect on the life-cycle of software products. It is important that code is maintainable and reusable, and that it is easy for a programmer to get acquainted with unfamiliar code. Previous studies have shown correlations between code readability and code styling. Eye-tracking technology has also been used to study the movements of the eye and the focus of a subject in a computer-generated environment. By combining eye-tracking technology with code styling features such as syntax highlighting, logical variable and function names, code indentation and code commenting, the correlations between code readability and code styling are further addressed and examined here. This thesis studies subjects who participated in a series of experiments in which they were given assignments to examine code while their eye movements were tracked using eye-tracking tools and software. The tracked data was assembled into heatmap-based images plotting the movement of the eye on screen. The experiments showed that there are indeed correlations between how code styling is used and how the participants addressed the given assignments. The conclusion of this report is that readability and interpretation of code were improved by the introduction of certain code styling features. For code indentation and syntax highlighting, no visible improvement was observed. Code commenting, however, caused the subjects to examine the method signatures in the code more thoroughly and thus to detect the return-type-related errors hidden within: a visible improvement. Furthermore, logical variable naming rids the programmer of the trouble of having to read entire pieces of code that could otherwise, when used cleverly, be explained by a method or variable name itself, and thus improved readability and interpretation as well.
24

Lemay, Frédérick. "Instrumentation optimisée de code pour prévenir l'exécution de code malicieux." Thesis, Université Laval, 2012. http://www.theses.ulaval.ca/2012/29030/29030.pdf.

25

Han, Sangmok. "Improved source code editing for effective ad-hoc code reuse." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/67583.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 111-113).
Code reuse is essential for productivity and software quality. Code reuse based on abstraction mechanisms in programming languages is a standard approach, but programmers also reuse code by taking an ad-hoc approach, in which text of code is reused without abstraction. This thesis focuses on improving two common ad-hoc code reuse approaches, code template reuse and code phrase reuse, because they are not only frequent but also, more importantly, pose a risk to quality and productivity in software development, the original aims of code reuse. The first ad-hoc code reuse approach, code template reuse, refers to programmers reusing an existing code fragment as a structural template for similar code fragments. Programmers use this approach because using abstraction mechanisms requires extra code and preplanning. When similar code fragments, which differ only in several code tokens, are reused just a couple of times, it makes sense to reuse the text of one of the code fragments as a template for the others. Unfortunately, code template reuse poses a risk to software quality because it requires repetitive and tedious editing steps; should a programmer forget to perform any of the editing steps, he may introduce program bugs, which are difficult to detect by visual inspection, code compilers, or other existing bug detection methods. The second ad-hoc code reuse approach, code phrase reuse, refers to programmers reusing common code phrases by retyping them, often regularly, using code completion. Programmers use this approach because no abstraction mechanism is available for reusing short yet common code phrases. Unfortunately, code phrase reuse limits productivity because retyping the same code phrases is time-consuming even when a code completion system is used: existing code completion systems complete only one word at a time, so programmers have to repeatedly invoke code completion, review code completion candidates, and select a correct candidate as many times as there are words in a code phrase. This thesis presents new models, algorithms, and user interfaces for effective ad-hoc code reuse. First, to address the risk posed by code template reuse, it develops a method for detecting program bugs in similar code fragments by analyzing sequential patterns of code tokens. To proactively reduce program bugs introduced during code template reuse, it proposes an error-preventive code editing method that reduces the number of code editing steps based on cell-based text editing. Second, to address the productivity limitation posed by code phrase reuse, the thesis develops an efficient code phrase completion method, which accelerates reuse of common code phrases by taking non-predefined abbreviated input and expanding it into a full code phrase. The method utilizes a statistical model, a hidden Markov model, trained on a corpus of code and abbreviation examples. Finally, the new methods for bug detection and code phrase completion are evaluated through corpus and user studies. In 7 well-maintained open source projects, the bug detection method found 87 previously unknown program bugs. The ratio of actual bugs to bug warnings (precision) was 47% on average, eight times higher than previous similar methods. The code phrase completion method is evaluated on the basis of accuracy and time savings: it achieved 99.3% accuracy in a corpus study, and 30.4% time savings and 40.8% keystroke savings in a user study when compared to a conventional code completion method. At a higher level, this work demonstrates the power of a simple sequence-based model of source code. Analyzing vertical sequences of code tokens across similar code fragments is found useful for accurate bug detection; learning to infer horizontal sequences of code tokens is found useful for efficient code completion. Ultimately, this work may aid the development of other sequence-based models of source code, as well as different analysis and inference techniques, which can solve previously difficult software engineering problems.
by Sangmok Han.
Ph.D.
26

Vouk, William F. "Concelebration development from the 1917 Code through the 1983 Code /." Theological Research Exchange Network (TREN), 1992. http://www.tren.com.

27

Coleman, Anita Sundaram. "A Code for Classifiers: Whatever Happened to Merrill's Code?" Ergon-Verlag, 2004. http://hdl.handle.net/10150/105839.

Abstract:
This is a preprint of the article published in Knowledge Organization 31 (3): 161-176. The work titled "Code for Classifiers" by William Stetson Merrill is examined. The development of Merrill's Code over a period of 27 years, 1912-1939, is traced by examining bibliographic, attribution, conceptual and contextual differences. The general principles advocated, the differences between variants, and three controversial features of the Code: 1) the distinction between classifying vs. classification, 2) borrowing of the bibliographic principle of authorial intention, and 3) use of Dewey Decimal class numbers for classified sequence of topics, are also discussed. The paper reveals the importance of the Code in its own time, the complexities of its presentation and assessment by its contemporaries, and its status today.
28

Eriksson, Mattias. "Integrated Code Generation." Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67471.

Abstract:
Code generation in a compiler is commonly divided into several phases: instruction selection, scheduling, register allocation, spill code generation and, in the case of clustered architectures, cluster assignment. These phases are interdependent; for instance, a decision in the instruction selection phase affects how an operation can be scheduled. We examine the effect of this separation of phases on the quality of the generated code. To study this we have formulated optimal methods for code generation with integer linear programming, first for acyclic code and then extended to modulo scheduling of loops. In our experiments we compare optimal modulo scheduling, where all phases are integrated, to modulo scheduling where instruction selection and cluster assignment are done in a separate phase. The results show that, for an architecture with two clusters, the integrated method finds a better solution than the non-integrated method for 39% of the instances. Our algorithm for modulo scheduling iteratively considers schedules with an increasing number of schedule slots. A problem with such an iterative method is that if the initiation interval is not equal to the lower bound, there is no way to determine whether the found solution is optimal or not. We have proven that, for a class of architectures that we call transfer free, we can set an upper bound on the schedule length; that is, we can prove when a found modulo schedule with an initiation interval larger than the lower bound is optimal. Another code generation problem that we study is how to optimize the usage of the address generation unit in simple processors that have very limited addressing modes. In this problem the subtasks are scheduling, address register assignment and stack layout. For this problem, too, we compare the results of integrated methods to those of non-integrated methods, and we find that integration is beneficial when only a few (1 or 2) address registers are available.
29

Wahab, Matthew. "Object code verification." Thesis, University of Warwick, 1998. http://wrap.warwick.ac.uk/61068/.

Abstract:
Object code is a program of a processor language and can be directly executed on a machine. Program verification constructs a formal proof that a program correctly implements its specification. Verifying object code therefore ensures that the program which is to be executed on a machine is correct. However, the nature of processor languages makes it difficult to specify and reason about object code programs in a formal system of logic. Furthermore, a proof of the correctness of an object code program will often be too large to construct manually because of the size of object code programs. The presence of pointers and computed jumps in object code programs constrains the use of automated tools to simplify object code verification. This thesis develops an abstract language which is expressive enough to describe any sequential object code program. The abstract language supports the definition of program logics in which to specify and verify object code programs. This allows the object code programs of any processor language to be verified in a single system of logic. The abstract language is expressive enough that a single command is enough to describe the behaviour of any processor instruction. An object code program can therefore be translated to the abstract language by replacing each instruction with the equivalent command of the abstract language. This ensures that the use of the abstract language does not increase the difficulty of verifying an object code program. The verification of an object code program can be simplified by constructing an abstraction of the program and showing that the abstraction correctly implements the program specification. Methods for abstracting programs of the abstract language are developed which consider only the text of a program. These methods are based on describing a finite sequence of commands as a single, equivalent, command of the abstract language. This is used to define transformations which abstract a program by replacing groups of program commands with a single command. The abstraction of a program formed in this way can be verified in the same system of logic as the original program. Because the transformations consider only the program text, they are suitable for efficient mechanisation in an automated proof tool. By reducing the number of commands which must be considered, these methods can reduce the manual work needed to verify a program. The use of an abstract language allows object code programs to be specified and verified in a system of logic while the use of abstraction to simplify programs makes verification practical. As examples, object code programs for two different processors are modelled, abstracted and verified in terms of the abstract language. Features of processor languages and of object code programs which affect verification and abstraction are also summarised.
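The abstraction step described here, describing a finite sequence of commands as a single equivalent command, can be mimicked by modelling commands as functions from machine state to machine state and composing them; equivalence of the composite with the original sequence can then be spot-checked on sample states. This is of course a toy semantic model of my own, not the thesis's formal abstract language:

```python
from functools import reduce

# A machine state is a mapping from register names to integer values;
# a command is a function from state to state.
def set_reg(r, value):
    return lambda s: {**s, r: value}

def add_reg(dst, src1, src2):
    return lambda s: {**s, dst: s[src1] + s[src2]}

def compose(*commands):
    """Collapse a finite sequence of commands into one equivalent
    command: the left-to-right composition of their state transformers."""
    return reduce(lambda f, g: (lambda s: g(f(s))), commands)

# A three-command program and its single-command abstraction.
program = [set_reg("r1", 2), set_reg("r2", 3), add_reg("r0", "r1", "r2")]
abstracted = compose(*program)

# Spot-check equivalence on a sample initial state.
state = {"r0": 0, "r1": 0, "r2": 0}
step_by_step = state
for cmd in program:
    step_by_step = cmd(step_by_step)
assert abstracted(state) == step_by_step  # both give r0 == 5
```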
30

Hoffmann, Ceilidh 1969. "Code-division multiplexing." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28746.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (p. 395-404).
We study forward link performance of a multi-user cellular wireless network. In our proposed cellular broadcast model, the receiver population is partitioned into smaller mutually exclusive subsets called cells. In each cell an autonomous transmitter with average transmit power constraint communicates to all receivers in its cell by broadcasting. The broadcast signal is a multiplex of independent information from many remotely located sources. Each receiver extracts its desired information from the composite signal, which consists of a distorted version of the desired signal, interference from neighboring cells and additive white Gaussian noise. Waveform distortion is caused by the time- and frequency-selective linear time-variant channel that exists between every transmitter-receiver pair. Under such system and design constraints, and a fixed bandwidth for the entire network, we show that the most efficient resource allocation policy for each transmitter, based on information-theoretic measures such as channel capacity, simultaneously achievable rate regions and sum-rate, is superposition coding with successive interference cancellation. The optimal policy dominates over its sub-optimal alternatives at the boundaries of the capacity region. By taking into account practical constraints such as finite constellation sets, frequency translation via carrier modulation, pulse shaping, real-time signal processing and decoding of finite-length waveforms, and fairness in rate distribution, we argue that sub-optimal orthogonal policies are preferred. For intra-cell multiplexing, all orthogonal schemes based on frequency, time and code division are equivalent. For inter-cell multiplexing, non-orthogonal code-division has a larger capacity than its orthogonal counterpart. Among intra-cell orthogonal schemes, we show that the most efficient broadcast signal is a linear superposition of many binary orthogonal waveforms. The information set is also binary. Each orthogonal waveform is generated by modulating a periodic stream of finite-length chip pulses with a receiver-specific signature code that is derived from a special class of binary antipodal, superimposed recursive orthogonal code sequences. With the imposition of practical pulse shapes for carrier modulation, we show that the multi-carrier format using cosine functions has higher bandwidth efficiency than the single-carrier format, even in an ideal Gaussian channel model. Each pulse is shaped via a prototype baseband filter such that when the demodulated signal is detected through a baseband matched filter, the resulting output samples satisfy the Generalized Nyquist criterion. Specifically, we propose finite-length, time-overlapping orthogonal pulse shapes that are g-Nyquist. They are derived from extended and modulated lapped transforms by proving the equivalence between the Perfect Reconstruction and Generalized Nyquist criteria. Using a binary data modulation format, we measure and analyze the accuracy of various Gaussian approximation methods for spread-spectrum modulated (SSM) signalling ...
by Ceilidh Hoffmann.
Ph.D.
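The centerpiece of the abstract above, superposition coding with successive interference cancellation, is easy to demonstrate numerically for two users: the strong user's symbol is decoded first while the weak signal is treated as noise, then subtracted so the weak user's symbol can be decoded cleanly. The power split, noise level and BPSK modulation below are arbitrary choices for the demo, not parameters from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                      # symbols per user
p1, p2 = 0.8, 0.2                # power split (p1 + p2 = 1), user 1 strong
noise_std = 0.1

# Two independent BPSK streams, superposed with unequal powers.
x1 = rng.choice([-1.0, 1.0], n)
x2 = rng.choice([-1.0, 1.0], n)
y = np.sqrt(p1) * x1 + np.sqrt(p2) * x2 + noise_std * rng.standard_normal(n)

# Successive interference cancellation at the receiver:
# 1) decode the strong user, treating the weak user as noise;
x1_hat = np.sign(y)
# 2) subtract the decoded strong signal and decode the weak user.
x2_hat = np.sign(y - np.sqrt(p1) * x1_hat)

print("BER user 1:", np.mean(x1_hat != x1))
print("BER user 2:", np.mean(x2_hat != x2))
```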
31

Lieber, Thomas (Thomas Alan). "Understanding asynchronous code." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/82411.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 61-64).
JavaScript on the web is difficult to debug due to its asynchronous and dynamic nature. Traditional debuggers are often little help because the language's idioms rely heavily on non-linear control flow via function pointers. The aim of this work is to create a debugging interface that helps users understand complicated control flow in languages like JavaScript. This thesis presents a programming editor extension called Theseus that uses program tracing to provide real-time in-editor feedback so that programmers can answer questions quickly as they write new code and interact with their application. Theseus augments the call graph with semantic edges that allow users to make intuitive leaps through program traces, such as from the start of an asynchronous network request to its response. Participants in lab and classroom studies found Theseus to be a usable replacement for traditional breakpoint and logging tools, though no significant difference was found in their ability to complete programming tasks.
by Thomas Lieber.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
32

Arzoglou, Jordan. "Langue contre code." Paris 8, 2008. http://www.theses.fr/2008PA082872.

Full text
Abstract:
Deux attitudes envers le monde apparaissent possibles : la vision "lato sensu", qui peut ignorer le principe de la non-contradiction, et la vision "stricto sensu", qui est soumise à ce principe. La vision "lato sensu" correspond à la sensibilité poétique et rend possible la connaissance déictique. Par définition, le moyen utilisé par la poésie est la langue. La vision "stricto sensu" conduit à la codification de la pensée. Un code n'a pas de valeur déictique à moins qu'elle lui soit conférée par un acte de parole. Le sens est la conscience de cette perception duelle, constituée par la vision "lato sensu" et la vision "stricto sensu". La logique formelle et la théorie Chomskienne sont des exemples éminents de la vision "stricto sensu". Elles ne permettent donc pas la perception duelle. L'absence de cette perception conduit au non-sens. Ceci devient évident dans le domaine de la traduction, où l'objectif est justement de transmettre le sens. En somme, le sens ne peut être transmis que par une langue non codifiée
Two attitudes seem possible towards the world: "lato sensu" vision, which may overlook the principle of non-contradiction, and "stricto sensu" vision, which is subject to that principle. "Lato sensu" vision corresponds to poetic sensitivity and allows deictic knowledge. By definition, language is the means used by poetry. "Stricto sensu" vision leads to a codification of thought. A code has no deictic value unless such value is conferred on it by a speech act. Sense is the awareness of this dual perception, which consists of "lato sensu" vision and "stricto sensu" vision. Formal logic and Chomskyan theory are examples par excellence of "stricto sensu" vision. These disciplines therefore do not allow for dual perception, whose absence leads to nonsense. This becomes evident in translation, whose purpose is precisely the transmission of sense. In short, sense may be transmitted only through uncodified language.
APA, Harvard, Vancouver, ISO, and other styles
33

Franková, Anna. "MY CODE/WORLD." Master's thesis, Vysoké učení technické v Brně. Fakulta výtvarných umění, 2017. http://www.nusl.cz/ntk/nusl-316051.

Full text
Abstract:
My Code/World is a piece of personal artistic research into the environment in which I work as a programmer: not a physical environment, but the virtual environment of a computer interface. This research has been under way since roughly October 2016, and its result is a collection of loosely connected pieces (sketches, experiments) that will be presented as an installation within the studio space of Studio Graphic Design 2, Faculty of Fine Arts, BUT.
APA, Harvard, Vancouver, ISO, and other styles
34

Loureiro, Sergio. "Mobile code protection /." Paris : École nationale supérieure des télécommunications, 2001. http://catalogue.bnf.fr/ark:/12148/cb38828305r.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Ratley, Desirée Page. "Impacts of lateral code changes associated with the 2006 International Building Code and the 2008 California Building Code." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39276.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2007.
Includes bibliographical references (leaves 94-96).
The 2008 California Building Code (CBC) will adopt the structural section of the 2006 International Building Code (IBC), which includes alterations to the procedure for determining earthquake design loading and a drastic move to a more complicated method of determining design wind pressures. The implementation of the revised 2006 International Building Code, and the subsequent California adoption of its structural section, will have significant effects on the design and construction of structures not only in California but also in the rest of the country. Through a comparison of the design of a low-rise steel moment-resisting frame structure, it was determined that the new code will result in design values that differ from those resulting from the previous codes. In order to compare the relevant codes in different areas of the country, this thesis considers three design scenarios for the low-rise structure: seismic loading in Southern California to compare the 2001 CBC, the 2003 IBC and the 2006 IBC; seismic loading in the Midwest to compare the 2003 IBC and the 2006 IBC; and wind loading in Northern California to compare the 2001 CBC and the 2006 IBC.
(cont.) In the first case, the change from the 2001 CBC to the 2003 IBC was an 8 percent increase in base shear, but there was a 2 percent decrease from the 2001 CBC to the 2006 IBC. The second case resulted in a 29 percent increase in base shear from the 2003 IBC to the 2006 IBC. The result of the third case was design wind pressures that decreased 20 percent from the 2001 CBC to the 2006 IBC. These design differences will change the design of the lateral force resisting system, especially in the latter two cases. In addition, design engineers in California will have to learn a new, considerably more complicated method of designing for wind loading. The combined effects of these code changes will impact both engineers and the resulting building designs in all parts of the country.
by Desirée Page Ratley.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
36

Želtuchin, Alexander. "Orthographic codes and code-switching : a study in 16th century Swedish orthography." Doctoral thesis, Stockholms universitet, Institutionen för nordiska språk, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-82682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Parttimaa, T. (Tuomas). "Test suite optimisation based on response status codes and measured code coverage." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305201293.

Full text
Abstract:
A software test suite often comprises thousands of distinct test cases. The execution of an unoptimised test suite can therefore waste valuable time and resources. In order to avoid the unnecessary execution of redundant test cases, the test suite should be optimised to contain fewer test cases. This thesis focuses on the optimisation of a commercially available Hypertext Transfer Protocol (HTTP) fuzzing test suite. A test setup was created for the optimisation work, consisting of the given fuzzing test suite, five different HTTP server implementations used as test subjects, and a code coverage measurement tool. Test runs were executed against the five test subjects with the test suite, and at the same time code coverage was measured from the test subjects. In this thesis, three different types of test suite optimisation algorithms were implemented. The original test suite was optimised by applying the optimisation algorithms to the results of the test runs. Another set of test runs was performed with the optimised subset suites, again measuring code coverage from the same test subjects. All of the coverage measurement results are presented and analysed. Based on the code coverage analysis, the test suite optimisation algorithms were assessed and the following research results were obtained. Code coverage analysis demonstrated with a strong degree of certainty that variation in the response messages indicates which test cases actually exercise the test subject. The analysis also showed, with a fairly strong degree of certainty, that an optimised test suite can achieve the same level of code coverage as the original test suite.
Ohjelmistojen testisarja koostuu usein tuhansista erilaisista testitapauksista. Testisarjan suorittamisessa voi mennä hukkaan arvokasta aikaa ja resursseja, jos testisarjaa ei optimoida. Testisarja tulee optimoida siten, että se sisältää vähän testitapauksia, jotta epäolennaisten testitapausten tarpeeton suoritus vältetään. Tämä diplomityö keskittyy kaupallisesti saatavilla olevan Hypertext Transfer Protocol (HTTP) -palvelimien fuzz-testisarjan optimointiyrityksiin. Optimointia varten luotiin testiympäristö, joka koostuu fuzz-testaussarjasta, testikohteina käytetyistä viidestä HTTP-palvelimesta ja koodikattavuuden mittaustyökalusta. Ennalta määriteltyä testisarjaa käytettiin testikohteille suoritetuissa testiajoissa, joiden aikana mitattiin testikohteista koodikattavuus. Diplomityössä toteutettiin kolme erilaista testisarjan optimointialgoritmia. Alkuperäinen testisarja optimoitiin käyttämällä algoritmeja testiajojen tuloksiin. Optimoiduilla osajoukkosarjoilla suoritettiin uudet testiajot, joiden aikana mitattiin testikohteista koodikattavuus. Kaikki koodikattavuusmittaustulokset on esitetty ja analysoitu. Kattavuusanalyysin perusteella arvioitiin testisarjan optimointialgoritmien toimivuus ja saavutettiin seuraavat tutkimustulokset. Koodikattavuusanalyysin avulla saatiin varmuus siitä, että muutos vastausviesteissä ilmaisee, mitkä testitapaukset oikeasti käyttävät testikohdetta. Analyysilla saatiin myös kohtalainen varmuus siitä, että optimoidulla osajoukkosarjalla voidaan saavuttaa sama koodikattavuus, joka saavutettiin alkuperäisellä testisarjalla
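The thesis does not name its three algorithms here, but a common baseline for this kind of reduction is greedy coverage-based minimisation: keep a test case only if it adds coverage that the already-kept cases lack. The sketch below is that generic baseline under an assumed data structure (a mapping from test case to the set of covered code units), not the thesis's actual implementation.

```python
def greedy_minimise(coverage):
    """coverage: dict mapping test-case id -> set of covered code units."""
    kept, covered = [], set()
    remaining = dict(coverage)
    # Repeatedly pick the test adding the most not-yet-covered units.
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining[best] - covered
        if not gain:
            break                      # nothing left adds new coverage
        kept.append(best)
        covered |= gain
        del remaining[best]
    return kept

suite = {"t1": {1, 2, 3}, "t2": {2, 3}, "t3": {4}, "t4": {1, 4}}
print(greedy_minimise(suite))          # ['t1', 't3']: covers units 1-4
```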
APA, Harvard, Vancouver, ISO, and other styles
38

Ward, Richard Peter. "Evolution of loosely synchronized spreading codes in code-division multiple-access systems." Thesis, University of South Wales, 2008. https://pure.southwales.ac.uk/en/studentthesis/evolution-of-loosely-synchronized-spreading-codes-in-codedivision-multipleaccess-systems(2bf79128-319c-49c1-8246-91ee0d3533c4).html.

Full text
Abstract:
Loosely Synchronized (LS) codes can be used as spreading codes in quasi-synchronous code-division multiple-access (QS-CDMA) systems. In such CDMA systems, close control of synchronization is achieved at the chip level, intermediate between that in synchronous CDMA and that in asynchronous CDMA. The LS code can then capitalize on zero correlation in a limited synchronization window to reduce code correlations and so reduce interference. LS codes are {0, +1, -1} codes constructed using Hadamard matrices and Golay pairs. A variation of LS codes inserts short strings of zeros between the components of the Golay pairs to increase the number of codewords, with only limited deterioration in the correlations. These strings of zeros are known as internal padding. One of the advantages normally claimed for CDMA systems is resistance to eavesdropping and jamming. It might appear at first sight that the structure of LS codes is rather predictable in comparison with codes constructed using linear feedback shift registers, such as m-sequences or Gold codes. One way to overcome any such difficulty would be to evolve the code very quickly, in such a way that by the time a generation of the code is determined (or determined to a moderate correlation value) it is too late to exploit it. This thesis explores the way that LS codes can be evolved in order to achieve resistance to eavesdropping and jamming. The thesis starts with a detailed account of the necessary background and of the construction of Loosely Synchronized codes. The early part of the thesis then concentrates on showing that many generations of LS code can be constructed in such a way that the correlation between distinct generations is small. This prevents one observed generation of the code from being used for jamming or prediction in another generation. Specifically:
• The construction of Golay pairs is investigated and a search is carried out over all possible Golay pairs and their mates to find a set of pairs that leads to the satisfaction of a suitable correlation criterion;
• Bent functions, almost bent functions and other second-order Boolean functions are used to create sets of Hadamard matrices that are guaranteed to satisfy the same correlation criterion;
• A sequential search method to generate a set of arrangements of the internal padding that satisfies the same correlation criterion is described. Later in the thesis this approach is replaced by a recency-list approach, which ensures that the correlation criterion is satisfied against recently used generations of the code, in place of all generations of the code;
• The way in which these evolutions of the components combine together is also explored.
Attention turns in the second part of the thesis to the mechanisms for evolution and the way that these might be predicted by a third-party observer. Transform methods that the third party might use are described. Detailed simulations quantify the ability of the third party to identify the code during the transmission of a single bit. It is shown that theoretical resistance to early code prediction is not possible, although it might be possible to demonstrate security arising from the relative speed of the necessary computations for the user and the observer. This would require a detailed hardware study, which is listed as future work. In fact it is shown here that LS codes fare better than linear feedback shift register codes in this respect, as a result of the Berlekamp-Massey algorithm.
Attention is also focused on the scenario in which details of the algorithms of one user are obtained by the third party. Only the Hadamard matrix provides protection in this scenario, as all other components of the construction are shared between all users. From this second viewpoint the true weakness of LS codes becomes apparent. Although the Hadamard matrix constructions are satisfactory if the order of the Hadamard matrix is not too small, it seems that the sequence of Hadamard matrix rows of each user must be computed centrally and distributed to users as private keys if this scenario is not to remain a major concern. The volume of private key distribution necessary may seem unattractive to operators. Ultimately it seems that evolution of the Golay pairs may have little real role except to increase the workload of the observer. The recency-list based evolution of internal padding can take the main role in ensuring low correlation between close generations of the code. The evolution of the Hadamard matrix should be designed with the second viewpoint in mind, where the third party has obtained details of the algorithms of one user.
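For readers unfamiliar with one of the building blocks, the sketch below checks the defining property of a Golay complementary pair: the aperiodic autocorrelations of the two sequences sum to zero at every nonzero shift. The pair is built by the classic doubling construction; the length 8 is an arbitrary choice for illustration.

```python
import numpy as np

def golay_pair(n):
    # Classic doubling construction: (a, b) -> (a|b, a|-b).
    a, b = np.array([1]), np.array([1])
    while len(a) < n:
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def acorr(x):
    # Aperiodic autocorrelation at shifts 0 .. len(x)-1.
    return np.array([np.sum(x[:len(x) - s] * x[s:]) for s in range(len(x))])

a, b = golay_pair(8)
print(acorr(a) + acorr(b))   # [16  0  0  0  0  0  0  0]: a delta, as required
```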
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Pei. "Unified system of code transformation and execution for heterogeneous multi-core architectures." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0441/document.

Full text
Abstract:
Les architectures hétérogènes sont largement utilisées dans le domaine du calcul haute performance. Cependant, le développement d'applications sur des architectures hétérogènes est indéniablement fastidieux et sujet à erreur, même pour un programmeur expérimenté. Pour porter une application sur des architectures multi-cœurs hétérogènes, les développeurs doivent décomposer les données d'entrée, gérer les échanges de valeurs intermédiaires à l'exécution et garantir l'équilibrage de charge du système. L'objectif de cette thèse est de proposer une solution de programmation parallèle pour les programmeurs novices, qui permet de faciliter le processus de codage et de garantir la qualité du code. Nous avons comparé et analysé les défauts des solutions existantes, puis nous proposons un nouvel outil de programmation, STEPOCL, avec un nouveau langage dédié conçu pour simplifier la programmation sur les architectures hétérogènes. Nous avons évalué la performance de STEPOCL sur trois cas d'application classiques : un stencil 2D, une multiplication de matrices et un problème à N corps. Les résultats montrent que : (i) avec l'aide de STEPOCL, la performance d'une application varie linéairement selon le nombre d'accélérateurs, (ii) la performance du code généré par STEPOCL est comparable à celle de la version manuscrite, (iii) les charges de travail, qui sont trop grandes pour la mémoire d'un seul accélérateur, peuvent être exécutées en utilisant plusieurs accélérateurs, (iv) grâce à STEPOCL, le nombre de lignes de code manuscrites est considérablement réduit.
Heterogeneous architectures have been widely used in the domain of high performance computing. However, developing applications for heterogeneous architectures is time consuming and error-prone, because going from a single accelerator to multiple ones requires dealing with potentially non-uniform domain decomposition, inter-accelerator data movements, and dynamic load balancing. The aim of this thesis is to propose a parallel programming solution for novice developers that eases the complex coding process and guarantees the quality of the code. We highlighted and analysed the shortcomings of existing solutions and proposed a new programming tool called STEPOCL, along with a new domain-specific language designed to simplify the development of applications for heterogeneous architectures. We evaluated both the performance and the usefulness of STEPOCL. The results show that: (i) the performance of an application written with STEPOCL scales linearly with the number of accelerators, (ii) the performance of an application written using STEPOCL competes with a handwritten version, (iii) workloads that do not fit in the memory of a single device can run across multiple devices, and (iv) thanks to STEPOCL, the number of lines of code required to write an application for multiple accelerators is roughly divided by ten.
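The core chore such a tool automates can be pictured in plain Python: splitting one workload across devices in proportion to their relative throughput, so each finishes at roughly the same time. The device speeds and the row-wise split below are illustrative assumptions, not STEPOCL's actual policy.

```python
import numpy as np

def partition_rows(n_rows, speeds):
    # Give each device a contiguous row band proportional to its speed.
    shares = np.floor(n_rows * np.array(speeds) / sum(speeds)).astype(int)
    shares[-1] = n_rows - shares[:-1].sum()   # absorb rounding remainder
    bounds = np.concatenate([[0], np.cumsum(shares)])
    return list(zip(bounds[:-1], bounds[1:]))

A = np.random.rand(1000, 64)
B = np.random.rand(64, 64)
# Two hypothetical accelerators, one twice as fast as the other.
bands = partition_rows(A.shape[0], speeds=[2.0, 1.0])
C = np.vstack([A[lo:hi] @ B for lo, hi in bands])  # per-device kernels
assert np.allclose(C, A @ B)                        # same result as one device
```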
APA, Harvard, Vancouver, ISO, and other styles
40

Zheltukhin, Alexander. "Orthographic codes and code-switching : a study in 16th century Swedish orthography /." Stockholm : Almqvist & Wiksell, 1996. http://catalogue.bnf.fr/ark:/12148/cb37164838m.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Moriggl, Irene. "Intelligent Code Inspection using Static Code Features : An approach for Java." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4149.

Full text
Abstract:
Effective defect detection is still a hot issue when it comes to software quality assurance. Static source code analysis plays an important role here, since it offers the possibility of automated defect detection in early stages of development. As detecting defects can be seen as a classification problem, machine learning has recently been investigated for this purpose. This study presents a new model for automated defect detection by means of machine learners based on static Java code features. The model comprises the extraction of the necessary features as well as the application of suitable classifiers to them. It is realized by a prototype for the feature extraction and a study of the prototype's output in order to identify the most suitable classifiers. Finally, the overall approach is evaluated using an open source project. The suitability study and the evaluation show that several classifiers are suitable for the model, and that the Rotation Forest, Multilayer Perceptron and JRip classifiers make the approach most effective: they detect defects with an accuracy higher than 96%. Although the approach comprises only a prototype, it shows the potential to become an effective alternative to today's defect detection methods.
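As a flavour of the classification step, the sketch below trains one of the classifier families the thesis found effective (a multilayer perceptron) on tabular static-code features. The feature names and the synthetic data are assumptions for illustration; the thesis's own pipeline, features and evaluation are its own.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy static-code features per class: [lines_of_code,
# cyclomatic_complexity, number_of_methods]; label 1 = defective.
rng = np.random.default_rng(0)
X = rng.integers(1, 200, size=(400, 3)).astype(float)
y = (X[:, 1] + 0.1 * X[:, 0] > 60).astype(int)   # synthetic labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```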
APA, Harvard, Vancouver, ISO, and other styles
42

OIZUMI, WILLIAN NALEPA. "SYNTHESIS OF CODE ANOMALIES: REVEALING DESIGN PROBLEMS IN THE SOURCE CODE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=25718@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
PROGRAMA DE EXCELENCIA ACADEMICA
Problemas de projeto afetam quase todo sistema de software, fazendo com que a sua manutenção seja cara e impeditiva. Como documentos de projeto raramente estão disponíveis, desenvolvedores frequentemente precisam identificar problemas de projeto a partir do código fonte. Entretanto, a identificação de problemas de projeto não é uma tarefa trivial por diversas razões. Por exemplo, a materialização de problemas de projeto tende a ser espalhada por diversos elementos de código anômalos na implementação. Infelizmente, trabalhos prévios assumiram erroneamente que cada anomalia de código individual – popularmente conhecida como code smell – pode ser usada como um indicador preciso de problema de projeto. Porém, evidências empíricas recentes mostram que diversos tipos de problemas de projeto são frequentemente relacionados a um conjunto de anomalias de código inter-relacionadas, conhecidas como aglomerações de anomalias de código. Neste contexto, esta dissertação propõe uma nova técnica para a síntese de aglomerações de anomalias de código. A técnica tem como objetivo: (i) buscar formas variadas de aglomeração em um programa, e (ii) sumarizar diferentes tipos de informação sobre cada aglomeração. A avaliação da técnica de síntese baseou-se na análise de diversos projetos de software da indústria e em um experimento controlado com desenvolvedores profissionais. Ambos estudos sugerem que o uso da técnica de síntese ajudou desenvolvedores a identificar problemas de projeto mais relevantes do que o uso de técnicas convencionais.
Design problems affect almost all software projects and make their maintenance expensive and impeditive. As design documents are rarely available, programmers often need to identify design problems from the source code. However, the identification of design problems is not a trivial task, for several reasons. For instance, the reification of a design problem tends to be scattered through several anomalous code elements in the implementation. Unfortunately, previous work has wrongly assumed that each single code anomaly - popularly known as a code smell - can be used as an accurate indicator of a design problem. There is growing empirical evidence showing that several types of design problems are often related to a set of inter-related code anomalies, the so-called code-anomaly agglomerations, rather than to individual anomalies only. In this context, this dissertation proposes a new technique for the synthesis of code-anomaly agglomerations. The technique is intended to: (i) search for varied forms of agglomeration in a program, and (ii) summarize different types of information about each agglomeration. The evaluation of the synthesis technique was based on the analysis of several industry-strength software projects and a controlled experiment with professional programmers. Both studies suggest that the use of the synthesis technique helped programmers to identify more relevant design problems than the use of conventional techniques.
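One simple way to make "agglomerations" concrete is to view anomalous code elements as graph nodes, connect those that structurally reference each other, and report connected components as candidate agglomerations. The sketch below does that with a stdlib-only union-find; the anomaly names and references are made-up placeholders, not the dissertation's algorithm.

```python
# Anomalous code elements and structural references between them.
anomalies = ["OrderGod", "OrderHelper", "ReportBlob", "ReportFmt", "Misc"]
references = [("OrderGod", "OrderHelper"), ("ReportBlob", "ReportFmt")]

parent = {a: a for a in anomalies}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

for a, b in references:
    parent[find(a)] = find(b)           # union the two components

groups = {}
for a in anomalies:
    groups.setdefault(find(a), []).append(a)
# Components with more than one anomaly are candidate agglomerations.
print([g for g in groups.values() if len(g) > 1])
```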
APA, Harvard, Vancouver, ISO, and other styles
43

Ragkhitwetsagul, Chaiyong. "Code similarity and clone search in large-scale source code data." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10057538/.

Full text
Abstract:
Software development has benefited tremendously from the Internet through online code corpora that enable instant sharing of source code, and through online developer guides and documentation. Nowadays, duplicated code (i.e., code clones) exists not only within or across software projects but also between online code repositories and websites. We call these "online code clones." They can lead to license violations, bug propagation, and reuse of outdated code, just like classic code clones between software systems. Unfortunately, they are difficult to locate and fix, since the search space in online code corpora is large and no longer confined to a local repository. This thesis presents a combined study of code similarity and online code clones. We empirically show that many code snippets on Stack Overflow are cloned from open source projects. Several of them have become outdated or violate their original license and are potentially harmful to reuse. To develop a solution for finding online code clones, we study various code similarity techniques to gain insights into their strengths and weaknesses. A framework, called OCD, for evaluating code similarity and clone search tools is introduced and used to compare 34 state-of-the-art techniques on pervasively modified code and boiler-plate code. We also found that clone detection techniques can be enhanced by compilation and decompilation. Using the knowledge from the comparison of code similarity analysers, we create and evaluate Siamese, a scalable token-based clone search technique using multiple code representations. Our evaluation shows that Siamese scales to large-scale source code data of 365 million lines of code and offers high search precision and recall. Its clone search precision is comparable to seven state-of-the-art clone detection tools on the OCD framework. Finally, we demonstrate the usefulness of Siamese by applying the tool to find online code clones, automatically analyse clone licenses, and recommend tests for reuse.
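Token-based clone search of the kind Siamese performs can be illustrated in a few lines: normalise code into tokens, take token n-grams, and rank candidates by Jaccard similarity. This is a generic baseline under an assumed crude tokeniser, not Siamese's multi-representation engine.

```python
import re

def ngrams(code, n=3):
    toks = re.findall(r"[A-Za-z_]\w*|\S", code)   # crude tokeniser
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

query = "for (int i = 0; i < n; i++) sum += a[i];"
corpus = {
    "clone":     "for (int j = 0; j < n; j++) total += a[j];",
    "unrelated": "while (queue.isEmpty()) queue.poll();",
}
q = ngrams(query)
for name, snippet in corpus.items():
    print(name, round(jaccard(q, ngrams(snippet)), 2))  # clone ranks higher
```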
APA, Harvard, Vancouver, ISO, and other styles
44

Lomüller, Victor. "Générateur de code multi-temps et optimisation de code multi-objectifs." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM050/document.

Full text
Abstract:
La compilation est une étape indispensable dans la création d'applications performantes. Cette étape autorise l'utilisation de langages de haut niveau et indépendants de la cible tout en permettant d'obtenir de bonnes performances. Cependant, de nombreux freins empêchent les compilateurs d'optimiser au mieux les applications. Pour les compilateurs statiques, le frein majeur est la faible connaissance du contexte d'exécution, notamment sur l'architecture et les données utilisées. Cette connaissance du contexte se fait progressivement pendant le cycle de vie de l'application. Pour tenter d'utiliser au mieux les connaissances du contexte d'exécution, les compilateurs ont progressivement intégré des techniques de génération de code dynamique. Cependant ces techniques ne se focalisent que sur l'utilisation optimale du matériel et n'utilisent que très peu les données. Dans cette thèse, nous nous intéressons à l'utilisation des données dans le processus d'optimisation d'applications pour GPU Nvidia. Nous proposons une méthode utilisant différents moments pour créer des bibliothèques adaptatives capables de prendre en compte la taille des données. Ces bibliothèques peuvent alors fournir les noyaux de calcul les plus adaptés au contexte. Sur l'algorithme de la GEMM, la méthode permet d'obtenir des gains pouvant atteindre 100 % tout en évitant une explosion de la taille du code. La thèse s'intéresse également aux gains et coûts de la génération de code lors de l'exécution, et ce du point de vue de la vitesse d'exécution, de l'empreinte mémoire et de la consommation énergétique. Nous proposons et étudions deux approches de génération de code à l'exécution permettant la spécialisation de code avec un faible surcoût. Nous montrons que ces deux approches permettent d'obtenir des gains en vitesse et en consommation comparables, voire supérieurs, à LLVM mais avec un coût moindre.
Compilation is an essential step in creating efficient applications. It allows the use of high-level, target-independent languages while maintaining good performance. However, many obstacles prevent compilers from fully optimizing applications. For static compilers, the major obstacle is poor knowledge of the execution context, particularly knowledge of the architecture and the data. This knowledge becomes progressively available during the application's life cycle. Compilers have progressively integrated dynamic code generation techniques in order to use it. However, those techniques usually focus on making better use of hardware capabilities and do not take data into account. In this thesis, we investigate the use of data in the optimization process for applications on Nvidia GPUs. We present a method that uses different moments in the application life cycle to create adaptive libraries able to take data size into account. Those libraries can therefore provide better-adapted kernels. With the GEMM algorithm, the method provides gains of up to 100% while avoiding code-size explosion. The thesis also investigates the gains and costs of runtime code generation from the points of view of execution speed, memory footprint and energy consumption. We present and study two lightweight runtime code generation approaches that can specialize code. We show that both approaches can obtain gains comparable to, and even greater than, LLVM, but at a lower cost.
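The "adaptive library" idea can be pictured as size-keyed kernel dispatch: at call time, pick the variant specialised for the observed data size. The sketch below is a schematic Python stand-in whose kernels and threshold are invented for illustration, not the thesis's GPU code generator.

```python
# Hypothetical kernel variants, each tuned for a size regime.
def gemm_small(A, B):   # e.g. a fully unrolled variant
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gemm_large(A, B):   # e.g. a blocked/tiled variant
    return gemm_small(A, B)   # same result; tuning differs in practice

# The adaptive library chooses a variant from the runtime data size.
def gemm(A, B, threshold=64):
    kernel = gemm_small if len(A) < threshold else gemm_large
    return kernel(A, B)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(gemm(A, B))   # [[19, 22], [43, 50]]
```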
APA, Harvard, Vancouver, ISO, and other styles
45

CAMBIER, JEAN-PIERRE. "Code noir, code de nuremberg, code genetique : de l'esclavage a la nationalisation des corps. essai de decodage du biopouvoir." Toulouse 2, 1993. http://www.theses.fr/1993TOU2A027.

Full text
Abstract:
Esclaves "par nature" des anciens noirs "intermediaires entre l'homme et le singe" (toqueville), modernes "cobayes humains" situes "entre l'homme et l'animal d'experience" (pr milhaud) : la philosophie, le droit, ou l'ethique se sont bien rarement eleves au dessus des exigences de leur temps, politiques, coloniales, ou scientifiques. La difficulte actuelle du droit francais a garantir la surete du corps humain serait-elle due a l'abandon de la loi naturelle ? faut-il reenraciner les droits de l'homme dans un droit naturel modernise et pratiquer une "biopolitique" (barretkriegel) ? pourtant, comme agent du "biopouvoir", la medecine reduit souvent l'homme a un objet manipulable, pour redresser les erreurs d'une nature dont elle pretend interpreter les normes et les fins, jusqu'a envisager l'acces au code genetique. Realisation de cette biopolitique, l'etat-providence vise, certes, a garantir un droit a la vie, mais au sein d'une societe "assurantielle" devenue "souveraine maitresse de la mort" (f. Ewald). Derive tragique que nous voyons a l'oeuvre dans la loi francaise du 20 12 88 reglementant l'usage experimental du corps humain au mepris du libre consentement des "sujets". Nous recusons l'optimisme d'ewald, fonde sur l'assimilation erronee de la norme socio-politique a la norme biologique. C'est a juste titre que michel foucault redoutait la dramatique resurgence contemporaine de l'archaique "droit de mort", generatrice de formes nouvelles de racisme et d'exclusion sociale
Formerly slaves "by nature", blacks "intermediaries between man and monkey" (Tocqueville), presently "human guinea pigs" situated "between man and the experimental animal" (Pr Milhaud): philosophy, law or ethics have very rarely risen above the demands of their time, political, colonial or scientific. Could the present difficulty of French law in guaranteeing the security of the human body be due to the neglect of natural law? Must we re-root human rights in a modernized natural right and practise "biopolitics" (Barret-Kriegel)? However, as an agent of "biopower", medical science often reduces man to a manipulable object, to correct the errors of a nature whose standards and aims it claims to interpret, going as far as contemplating access to the genetic code. In realizing biopolitics, the welfare state certainly aims to guarantee a right to life, but within an "insured" society which has become "sovereign master of death" (F. Ewald). This is a tragic drift, which we see at work in the French law of 20 12 88 regulating the experimental use of the human body regardless of the free consent of the subjects. We challenge Ewald's optimism, founded on the erroneous assimilation of the socio-political norm to the biological norm. Michel Foucault rightly feared a dramatic contemporary resurgence of the archaic "right of death", generating new forms of racism and social exclusion.
APA, Harvard, Vancouver, ISO, and other styles
46

Trumble, Brandon. "Using Code Inspection, Code Modification, and Machine Learning to prevent SQL Injection." Thesis, Kutztown University of Pennsylvania, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1590429.

Full text
Abstract:

Modern day databases store invaluable information about everyone. This information is assumed to be safe, secure, and confidential. However, as technology has become more widespread, more people are able to abuse and exploit this information for personal gain. While the ideal method to combat this issue is the enhanced education of developers, that still leaves a large amount of time where this information is insecure. This thesis outlines two potential solutions to the problem that SQL Injection presents in the context of databases. The first modifies an existing code base to use safe prepared statements rather than unsafe standard queries. The second is a neural network application that sits between the user-facing part of a web application and the application itself. The neural network is designed to analyze data being submitted by a user and detect attempts at SQL injection.
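The first solution, replacing string-built queries with prepared statements, looks like this in miniature (Python's sqlite3 is used here as a stand-in for whatever stack the thesis targets):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"   # classic injection payload

# Unsafe: string concatenation lets the payload rewrite the query.
unsafe = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())       # leaks every row

# Safe: a prepared statement treats the payload as a literal value.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # []
```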

APA, Harvard, Vancouver, ISO, and other styles
47

Boije, Niklas, and Kristoffer Borg. "Semi-automatic code-to-code transformer for Java : Transformation of library calls." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129861.

Full text
Abstract:
Having the ability to perform large automatic software changes in a code base opens new possibilities for software restructuring and cost savings. The possibility of replacing software libraries in a semi-automatic way has been studied. String metrics are used to find equivalents between two libraries by looking at class and method names. Rules based on the equivalents are then used to describe how to apply the transformation to the code base. Using the abstract syntax tree, locations for replacements are found and the transformations are performed. After the transformations have been performed, an evaluation is made of the effort saved by doing the replacement automatically rather than manually. It shows that a large part of the cost can be saved. An additional evaluation, calculating the maintenance cost saved annually by changing libraries, is also performed in order to support the claim that an exchange can reduce the annual cost of the project.
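The matching step can be pictured with a standard string metric: pair each method in the old library with the most similar name in the new one, then use the pairs as rewrite rules. The sketch below uses difflib's similarity ratio as the metric; the two API name lists are invented examples, not the thesis's subject libraries.

```python
import difflib

old_api = ["readFile", "writeFile", "closeStream"]
new_api = ["loadFile", "saveFile", "shutStream", "flushAll"]

def best_match(name, candidates, cutoff=0.5):
    # get_close_matches ranks candidates by SequenceMatcher ratio.
    hits = difflib.get_close_matches(name, candidates, n=1, cutoff=cutoff)
    return hits[0] if hits else None

rules = {m: best_match(m, new_api) for m in old_api}
print(rules)
# e.g. {'readFile': 'loadFile', 'writeFile': 'saveFile',
#       'closeStream': 'shutStream'}
```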
APA, Harvard, Vancouver, ISO, and other styles
48

Яворська, Христина Володимирівна, and Khrystyna Yavorska. "Методи і засоби проектування платформ “Low code/No code” в комп’ютерних системах." Master's thesis, Тернопільський національний технічний університет імені Івана Пулюя, 2021. http://elartu.tntu.edu.ua/handle/lib/36706.

Full text
Abstract:
Кваліфікаційна робота присвячена питанню дослідження методів і засобів розробки середовища, яке допоможе швидше та доступніше створювати продукти. Проведено аналіз середовищ LC/NC розробки та їх класифікацію. Також досліджено технології і засоби розробки проектування платформи та проаналізовано особливості розробки програм. Запропоновано формалізацію об'єктів та процесів. Розроблено прототипи блоків, а також описано арифметичні операції. Розроблено алгоритм роботи оператора if else і циклу for. Спроектовано та реалізовано Low code/No code платформу на основі визначених функціональних вимог, побудовано алгоритми роботи додатку і забезпечено його верифікацію шляхом різних видів тестування ПЗ.
The qualification work is devoted to researching methods and tools for developing an environment that helps create products faster and more accessibly. An analysis of LC/NC development environments and their classification is carried out. The technologies and tools for designing the platform are also studied, and the particular features of application development are analysed. A formalization of objects and processes is proposed. Prototypes of blocks are developed and arithmetic operations are described. Algorithms for the if-else operator and the for loop have been developed. The Low code/No code platform was designed and implemented on the basis of the defined functional requirements, the application's algorithms were built, and its verification was ensured by various types of software testing.
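To make the "blocks" idea concrete: a low-code platform typically represents programs as data structures that an engine interprets. The sketch below is a deliberately tiny interpreter for arithmetic, if-else and for blocks, invented here for illustration; it does not reproduce the thesis's own block design.

```python
# Each block is a dict; the engine dispatches on its "op" field.
def run(block, env):
    op = block["op"]
    if op == "num":
        return block["value"]
    if op == "var":
        return env[block["name"]]
    if op in ("+", "-", "*", "<"):                 # arithmetic/comparison
        a, b = run(block["a"], env), run(block["b"], env)
        return {"+": a + b, "-": a - b, "*": a * b, "<": a < b}[op]
    if op == "if":                                 # if-else block
        branch = "then" if run(block["cond"], env) else "else"
        return run(block[branch], env)
    if op == "for":                                # for block: repeat body
        for i in range(run(block["times"], env)):
            env[block["index"]] = i
            run(block["body"], env)
        return env
    if op == "set":                                # assignment block
        env[block["name"]] = run(block["value"], env)

# Equivalent to: sum = 0; for i in range(5): sum = sum + i
prog = {"op": "for", "times": {"op": "num", "value": 5}, "index": "i",
        "body": {"op": "set", "name": "sum",
                 "value": {"op": "+", "a": {"op": "var", "name": "sum"},
                           "b": {"op": "var", "name": "i"}}}}
print(run(prog, {"sum": 0})["sum"])   # 10
```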
APA, Harvard, Vancouver, ISO, and other styles
49

Tseng, Da-Wen, and 曾大文. "Using MMX code within MPEG Audio Codec." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/05116004993667118681.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
90
The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC in charge of developing standards for the coded representation of digital audio and video. On the audio side, uncompressed WAVE files of the kind stored on CDs take up a great deal of disk space, so the MPEG Audio Encoder compresses .wav files into .mp2 or .mp3 files. It discards only the sounds that humans cannot hear, making the files smaller so that more of them can be stored. In this codec, at bit rates from 64 kb/s up to 192 kb/s per channel, MPEG Audio Layer II can provide sound quality competitive with any perceptual coding scheme at the same bit rate. MMX code can be used in the filter bank of the audio codec because the filter bank is similar to a DCT, and Intel's MMX technology is designed specifically to accelerate multimedia and communications applications. In this thesis, we use MMX to accelerate the MPEG Audio Codec, covering not only encoding but also decoding.
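The filter bank referred to is the 32-band polyphase analysis stage, whose matrixing step is a DCT-like dense transform; dot products of this kind are exactly what SIMD instruction sets such as MMX accelerate. Below is a numpy sketch of the matrixing using the cosine kernel commonly given for the MPEG-1 analysis stage; treat the exact indexing as an assumption here, and note that the windowing stage and all fixed-point/SIMD detail are omitted.

```python
import numpy as np

# MPEG-1 style analysis matrixing: 64 windowed samples Y[k] are
# folded into 32 subband samples S[i] by a dense cosine matrix.
i = np.arange(32)[:, None]
k = np.arange(64)[None, :]
M = np.cos((2 * i + 1) * (k - 16) * np.pi / 64)

Y = np.random.randn(64)   # stand-in for the windowed, partially summed input
S = M @ Y                 # the 32x64 product is the hot loop SIMD vectorises
print(S.shape)            # (32,)
```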
APA, Harvard, Vancouver, ISO, and other styles
50

Das, S. S. "Code Obfuscation using Code Splitting with Self-modifying Code." Thesis, 2014. http://ethesis.nitrkl.ac.in/5736/1/212CS3370-3.pdf.

Full text
Abstract:
Code obfuscation is a protection technique that transforms software into a semantically equivalent form that is hard to reverse-engineer. As a part of software protection and security, code obfuscation has drawn commercial interest both from vendors, who want to keep their proprietary code secret, and from customers, who want trusted software that does not leak or destroy their personal information. Today most software distributions contain the complete program logic in the form of machine code, which is easy to decompile and so increases the risk of malicious reverse engineering. The basic idea of the obfuscating technique described in this research work is to hide the proprietary code section through preventive design obfuscation and the insertion of self-modifying code at the binary level. In the proposed technique the two mechanisms, while complementing each other, provide protection against all kinds of reverse engineering.
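The flavour of self-decoding code can be shown even in a high-level language: keep a sensitive routine out of the static program image (here XOR-masked in a byte string) and materialise it only at run time. This toy is an invented illustration of the general idea, not the binary-level technique of the thesis.

```python
KEY = 0x5A

# The "protected" section is stored only in masked form, so a static
# inspection of the program never sees the plain routine.
masked = bytes(b ^ KEY for b in b"def secret(x):\n    return x * 42\n")

def run_protected(arg):
    source = bytes(b ^ KEY for b in masked).decode()   # unmask at run time
    scope = {}
    exec(source, scope)            # materialise the hidden routine
    return scope["secret"](arg)

print(run_protected(2))            # 84
```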
APA, Harvard, Vancouver, ISO, and other styles