Dissertations / Theses on the topic "Source code quality"
Create a reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 19 dissertations / theses for your research on the topic "Source code quality".
Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Lee, Young Chang Kai-Hsiung. "Automated source code measurement environment for software quality." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Lee_Young_28.pdf.
Thummalapenta, Suresh. "Improving Software Productivity and Quality via Mining Source Code." North Carolina State University, 2011. http://pqdtopen.proquest.com/#viewpdf?dispub=3442531.
Hrynko, Alina. "Source code quality in connection to self-admitted technical debt." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97977.
Tévar Hernández, Helena. "Evolution of Software Documentation Over Time: An analysis of the quality of software documentation." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97561.
Ribeiro, Athos Coimbra. "Ranking source code static analysis warnings for continuous monitoring of free/libre/open source software repositories." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-20082018-170140/.
Although there is a wide variety of source code static analyzers available, under both proprietary and free licenses, each of these tools performs best on a small, distinct set of problems, making it hard to choose a single static analysis tool to analyze a program. Combining the analyses of different tools may reduce the number of false negatives, but it increases the number of false positives (which is already high for many of these tools). An interesting solution is to filter these results to identify the issues least likely to be false positives. This work presents kiskadee, a system that promotes the use of static source code analysis during the software development cycle by providing ranked static analysis reports. First, kiskadee runs several static analyzers on the source code. Then, using a classification model, the potential bugs detected by the static analyzers are ranked by importance, with critical flaws placed at the top of a list and potential false positives at the bottom. To train kiskadee's classification model, we performed a post-analysis on the reports generated by three static analyzers when run on synthetic test cases made available by the US National Institute of Standards and Technology (NIST). To make the presented technique as general as possible, we limited our data to the information contained in the static analysis reports of the three tools, discarding other information such as change histories or metrics extracted from the source code of the inspected programs.
The features extracted from these reports were used to train a set of decision trees with the AdaBoost algorithm to produce a stronger classifier, reaching a classification accuracy of 0.8 (the false positive rate of the tools used was 0.61 when combined). Finally, we used this classifier to rank the static analyzers' alarms according to the probability that a given alarm is in fact a bug in the source code. Experimental results show that, on average, when inspecting alarms ranked by kiskadee, one finds 5.2 times fewer false positives before finding each bug than when the same inspection is performed over a randomly ordered list.
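The ranking idea described above can be sketched in a few lines. The following toy example is not the thesis's code: the feature names, training data, and alarm names are invented, and the stump-based AdaBoost below is a minimal textbook version. It trains an ensemble on per-alarm features (number of tools reporting the alarm, severity level) and then orders alarms by classifier confidence, so likely false positives sink to the bottom of the list:

```python
import math

# Toy sketch of kiskadee-style alarm ranking (data and names invented):
# AdaBoost over decision stumps, trained on features taken from
# static analysis reports, then used to sort alarms by confidence.

def stump(f, t, pol, x):
    """Decision stump: +pol if feature f exceeds threshold t, else -pol."""
    return pol if x[f] > t else -pol

def train_adaboost(X, y, rounds=5):
    """Classic AdaBoost with exhaustive stump search; labels y are +/-1."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(len(X[0])):                 # every feature
            for t in sorted({x[f] for x in X}):    # every observed threshold
                for pol in (1, -1):
                    err = sum(wi for x, yi, wi in zip(X, y, w)
                              if stump(f, t, pol, x) != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol)
        err, f, t, pol = best
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, f, t, pol))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * stump(f, t, pol, x))
             for x, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def alarm_score(ensemble, x):
    """Higher score = more likely a real bug; used as the ranking key."""
    return sum(a * stump(f, t, pol, x) for a, f, t, pol in ensemble)

# Features per alarm: (number of tools reporting it, severity level).
train_X = [[3, 2], [1, 0], [2, 2], [1, 1], [3, 1], [0, 0]]
train_y = [1, -1, 1, -1, 1, -1]        # +1 = confirmed bug, -1 = false positive
model = train_adaboost(train_X, train_y, rounds=3)

alarms = [("style nit", [0, 0]), ("buffer overflow", [3, 2]), ("null deref", [2, 1])]
ranked = sorted(alarms, key=lambda a: -alarm_score(model, a[1]))
```

Inspecting `ranked` from the top mimics the workflow the thesis evaluates: likely bugs are visited before likely false positives.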
Vetro', Antonio. "Empirical assessment of the impact of using automatic static analysis on code quality." Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506350.
Come, David. "Analyse de la qualité de code via une approche logique et application à la robotique." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30008.
The quality of source code depends not only on its functional correctness but also on its readability, intelligibility, and maintainability. This is currently an important problem in robotics, where many open-source frameworks do not spread well in industry because of uncertainty about the quality of the code. Code analysis and search tools are effective in improving these aspects. It is important that they let the user specify what she is looking for, in order to take into account the specific features of the project and of the domain. There exist two main representations of source code: its Abstract Syntax Tree (AST) and the Control Flow Graph (CFG) of its functions. Existing specification mechanisms use only one of these representations, which is unfortunate because they offer complementary information. The objective of this work is therefore to develop a method for verifying code compliance with user rules that can benefit from both the AST and the CFG. The method is underpinned by a new logic developed in this work: FO++, a temporal extension of first-order logic. Relying on this logic has several advantages. First of all, it is independent of any programming language and has a formal semantics. Then, once instantiated for a given programming language, it can be used as a means to formalize user-provided properties. Finally, the study of its model-checking problem provides a mechanism for the automatic and correct verification of code compliance. These different concepts have been implemented in Pangolin, a tool for the C++ language. Given the code to be checked and a specification (an FO++ formula, written in Pangolin's language), the tool indicates whether or not the code meets the specification. It also offers a summary of the evaluation, making it possible to locate the code that violates the property, as well as a certificate of the correctness of the result.
Pangolin and FO++ have been applied to the field of robotics through the analysis of the quality of ROS packages and the formalization of a ROS-specific design pattern. As a second, more general application to the development of programs in C++, we have formalized various good-practice rules for this language. Finally, we have shown how it is possible to specify and verify rules that are closely tied to a specific project by checking properties on the source code of Pangolin itself.
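Pangolin itself targets C++ and evaluates FO++ formulas over ASTs and CFGs, which this listing cannot reproduce; but the general idea of checking a user-defined rule against a program's syntax tree can be illustrated with Python's standard `ast` module. The rule below is my own toy example, not one from the thesis: flag every function definition that lacks a docstring, reporting its name and line.

```python
import ast

# Toy rule check over an AST (illustration of the principle only;
# Pangolin checks FO++ formulas over C++ ASTs and CFGs).

RULE = "every function definition must carry a docstring"

def check_rule(source):
    """Return (function name, line number) for each function violating RULE."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if ast.get_docstring(node) is None:
                violations.append((node.name, node.lineno))
    return violations

sample = '''
def documented():
    """Say hello."""
    return "hello"

def undocumented():
    return 42
'''
report = check_rule(sample)
```

A real system would, like Pangolin, also emit evidence for why each violation holds, so reviewers can audit the verdict rather than trust the tool blindly.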
Lissy, Alexandre. "Utilisation de méthodes formelles pour garantir des propriétés de logiciels au sein d'une distribution : exemple du noyau Linux." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4019/document.
In this thesis we are interested in integrating into the Linux distribution produced by Mandriva a level of quality assurance that allows ensuring user-defined properties on the source code used. The core work of a distribution and its producer is to create a meaningful aggregate from the software available. That software is free and open source, hence it is possible to adapt it to improve the end user's experience, but there is correspondingly less control over the source code. Manual audits can of course be used to make sure it has good properties. Examples of such properties often relate to security, but one could think of others. However, more and more software is being integrated into distributions, and each package is growing in source code volume: tools are needed to make quality assurance achievable. We start by providing a study of the distribution itself to document its current status. We use it to select some packages that we consider critical and for which we can improve things, with the condition that packages similar enough to the rest of the distribution are considered first. This leads us to concentrate on the Linux kernel: we provide a state-of-the-art overview of code verification applied to this piece of the distribution. We identify a need for a better understanding of the structure of the source code. To address this need, we propose to use a graph as a representation of the source code and use it to help document and understand its structure. Specifically, we study applying state-of-the-art community detection algorithms to help handle the combinatorial explosion. We also propose an architecture, integrated into the distribution's build system, for executing, collecting, and handling the analysis of data produced by verification tools.
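The graph-plus-community-detection idea in the abstract above can be sketched on a toy call graph. The graph, the function names, and the algorithm choice here are all illustrative (the thesis studies state-of-the-art algorithms on the Linux kernel's much larger graph); a simple label-propagation pass already shows how densely connected groups of functions converge to a shared community label:

```python
from collections import Counter

# Toy call graph: nodes are functions, edges are call relations.
# Two obvious modules joined by a single bridging call (invented example).
call_graph = {
    "parse_config": ["read_file", "validate", "render"],
    "read_file":    ["parse_config", "validate"],
    "validate":     ["parse_config", "read_file"],
    "render":       ["draw_rect", "flush", "parse_config"],
    "draw_rect":    ["render", "flush"],
    "flush":        ["render", "draw_rect"],
}

def label_propagation(graph, rounds=5):
    """Each node repeatedly adopts the most common label among its
    neighbours; dense clusters converge to one label per community."""
    labels = {node: node for node in graph}
    for _ in range(rounds):
        for node in sorted(graph):   # fixed visit order keeps this deterministic
            counts = Counter(labels[nbr] for nbr in graph[node])
            labels[node] = counts.most_common(1)[0][0]
    return labels

communities = label_propagation(call_graph)
```

On the kernel-scale graphs the thesis considers, the resulting communities serve as a map of the code base: they suggest module boundaries and tame the combinatorial explosion a verifier would otherwise face.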
Pavlíčková, Jarmila. "Model zralosti zdrojového kódu objektových aplikací." Doctoral thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-191816.
Lavesson, Alexander, and Christina Luostarinen. "OAuth 2.0 Authentication Plugin for SonarQube." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-67526.
Habchi, Sarra. "Understanding mobile-specific code smells." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I089.
Object-oriented code smells are well-known concepts in software engineering. They refer to bad design and development practices commonly observed in software systems. With the emergence of mobile apps, new classes of code smells have been identified that refer to bad development practices specific to mobile platforms. These mobile-specific code smells differ from object-oriented ones by focusing on performance issues reported in the documentation or developer guidelines. Since their identification, many research works have approached mobile-specific code smells to propose detection tools and study them. Nonetheless, most of these studies only focused on measuring the performance impact of such code smells and did not provide any insights about their motives and potential solutions. In particular, we lack knowledge about (i) the rationales behind the accrual of mobile code smells, (ii) developers' perception of mobile code smells, and (iii) the generalizability of code smells across different mobile platforms. These gaps hinder the understanding of mobile code smells and consequently prevent the design of adequate solutions for them. Therefore, we conduct in this thesis a series of empirical studies with the aim of understanding mobile code smells. First, we study the expansion of code smells across different mobile platforms. Then, we conduct a large-scale study to analyze the change history of mobile apps and discern the factors that favor the introduction and survival of code smells. To consolidate these studies, we also perform a user study to investigate developers' perception of code smells and the adequacy of static analyzers as a solution for coping with them. Finally, we perform a qualitative study to question the established foundation about the definition and detection of mobile code smells. The results of these studies revealed important research findings.
Notably, we showed that pragmatism, prioritization, and individual attitudes are not relevant factors in the accrual of mobile code smells. The problem is rather caused by ignorance and oversight, which are prevalent among mobile developers. Furthermore, we highlighted several flaws in the code smell definitions currently adopted by the research community. These results allowed us to elaborate some recommendations for researchers and tool makers willing to design detection and refactoring tools for mobile code smells. On top of that, our results opened perspectives for research works on the identification of mobile code smells and development practices in general.
Boulanger, Isabelle. "Lillgrund Wind Farm Modelling and Reactive Power Control." Thesis, KTH, Elektriska energisystem, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119256.
Hecht, Geoffrey. "Détection et analyse de l'impact des défauts de code dans les applications mobiles." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10133/document.
Mobile applications are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. However, addressing these constraints may result in poor low-level design choices, known as code smells. The presence of code smells within software systems may incidentally degrade their quality and performance, and hinder their maintenance and evolution. Thus, it is important not only to know these smells, but also to detect and correct them. While code smells are well known in object-oriented applications, their study in mobile applications is still in its infancy. Moreover, there is a lack of tools to detect and correct them. That is why we present a classification of 17 code smells that may appear in Android applications, as well as a tool to detect and correct code smells on Android. We apply and validate our approach on a large number of applications (over 3000) in two studies evaluating the presence and evolution of the number of code smells in popular applications. In addition, we also present two approaches to assess the impact of the correction of code smells on performance and energy consumption. These approaches have allowed us to observe that the correction of code smells is beneficial in most cases.
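To give a flavour of what a smell detector does, here is a minimal "long method" check for Python sources. This is an illustration of the principle only, not the thesis's tool: mobile-specific detectors analyze Android applications and cover the 17 smell kinds mentioned above, and the threshold below is deliberately tiny.

```python
import ast

# Minimal 'long method' smell detector for Python code (illustrative only;
# real mobile smell detectors inspect Android apps and many smell kinds).

LONG_METHOD_THRESHOLD = 5   # toy value; practical detectors use larger ones

def long_methods(source, threshold=LONG_METHOD_THRESHOLD):
    """Flag functions whose definition spans more lines than the threshold."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > threshold:
                smells.append((node.name, length))
    return smells

sample = '''
def short():
    return 1

def sprawling():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
'''
flagged = long_methods(sample)
```

Detection is the cheap half of the problem; as the abstract notes, measuring whether correcting the flagged smells actually improves performance and energy consumption is where the empirical work lies.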
Kiendys, Petrus, and Shadi Al-Zara. "Minimumkrav för ett CI-system." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20216.
When a group of developers works on the same code base, conflicts may arise regarding the implementation of modules or subsystems that developers individually work on. These conflicts have to be resolved quickly in order for the project to advance at a steady pace. Developers who do not communicate changes or other necessary deviations may find themselves in a situation where new or modified modules or subsystems are impossible or very difficult to integrate into the mainline code base. This often leads to so-called "integration hell", where it can take huge amounts of time to adapt new code to the current state of the code base. One strategy which can be deployed to counteract this trend is called "continuous integration". This practice offers a wide range of advantages when a group of developers collaborates on writing clean and stable code. Continuous integration can be put into practice without the use of any tools, as it is a "way to do things" rather than an actual tool. With that said, it is possible to support the practice with a tangible tool called a CI-system. This study aims to give insight into the makings of a CI-system: what it fundamentally consists of and what it has to be able to do. A review of contemporary research reports on the subject and a survey were performed in order to substantiate claims and conclusions. Core characteristics of CI-systems are grouped into "functional requirements" and "non-functional requirements (quality attributes)". By doing this, it is possible to quantify and categorize various core components and functionalities of a typical CI-system. This study also contains an attachment which provides instructions on how to get started with implementing your own CI-server using the CI-system software "TeamCity". The conclusion of this study is that a CI-system is an important tool that enables a more efficient software development process.
By making use of CI-systems, developers can refine the development process by preventing integration problems, automating parts of the work process (build, test, feedback, deployment), and quickly finding and solving integration issues.
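The build-test-feedback loop such a CI-system automates can be reduced to a small runner. The sketch below uses invented step names and stand-in step bodies, and is in no way TeamCity's actual mechanism: it simply executes named steps in order, stops at the first failure, and returns a report so the developer gets fast feedback.

```python
# Minimal sketch of a CI pipeline runner (step names invented; real
# CI-systems such as TeamCity add triggers, build agents, history, a UI).

def run_pipeline(steps):
    """Run (name, callable) steps in order; stop at the first failure so
    feedback reaches the developer as quickly as possible."""
    report = []
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            report.append((name, f"failed: {exc}"))
            break                    # later steps are skipped on failure
        report.append((name, "passed"))
    return report

def fake_build():
    pass                             # stand-in for compiling the project

def fake_tests():
    raise AssertionError("2 of 40 unit tests failed")   # simulated failure

def fake_deploy():
    pass                             # never reached in this run

outcome = run_pipeline([("build", fake_build),
                        ("test", fake_tests),
                        ("deploy", fake_deploy)])
```

Stopping at the first failure is a deliberate design choice: it matches the fast-feedback goal the study names, at the cost of not learning whether later steps would also have failed.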
Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.
This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.
In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264 with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder-decoder framework is designed for an arbitrary number of quality layers, thereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to perform superior to other systems.
The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source-channel coding scheme to the channel conditions. An almost constant image quality is achieved, even in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets.
A double-layer constellation of the framework clearly outperforms other schemes with a substantial margin.
Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection
MOURA, JOAO A. "Desenvolvimento e construção de sistema automatizados para controle de qualidade na produção de sementes de iodo-125." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26454.
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.
This work is devoted to improving the coding/decoding performance of a transmission scheme over noisy and realistic channels. For this purpose, we propose the development of optimized image transmission methods by focusing on both the application and physical layers of wireless networks. In order to ensure a better quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission on wireless channels with a minimum BER at reception, transmission, coding, and advanced modulation techniques are used in the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, the optimization of an image transmission scheme on a realistic MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding, and joint decoding strategies, in order to improve the visual quality of the image at reception. Then, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism offering an unequal error protection strategy. The novelty of this study thus consists in proposing efficient solutions for the global optimization of the wireless communication system to improve transmission quality.
Hu, Chao-Hsiang, and 胡朝翔. "An Instructional Design that Improves Students'' Source Code Quality by Reducing Bad Smells." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/87rwb7.
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 100 of the ROC calendar (2011-2012)
A program with correct functionality does not necessarily have good code quality as well. Therefore, for teaching, it is desirable to guide students to pay more attention to code quality when writing programming assignments. Consequently, the grading policy for students' programming assignments should consider both correctness and quality. However, when grading code quality, if the teaching assistant (TA) simply assigns an objective score level instead of providing students with in-depth feedback, students will have difficulty improving their code quality. This thesis proposes a pedagogical method that examines the existence of bad smells as part of the grading policy. Students are provided with detailed feedback that reveals the location and kind of bad smells within the code, along with a peer code review that reduces bad smells. This thesis takes the students of the 2011 Windows Programming course of National Taipei University of Technology as the subject of research, assesses the bad smell density of students' programming assignments, and surveys the class participants with a designed questionnaire to verify the effectiveness of the proposed method in improving code quality. The proposed method requires the TA to spend a large amount of time grading programming assignments. Though demanding, the grading can be done within the working hours of a half-time TA.
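The "bad smell density" metric used in the study can be stated simply. The sketch below normalizes per 1000 lines of code (KLOC), which is the conventional choice but an assumption on my part; the thesis may define the denominator differently, and the example counts are invented.

```python
# Bad smell density as smells per 1000 lines of code (KLOC). The per-KLOC
# normalization is conventional, assumed here; counts below are invented.

def smell_density(smell_count, lines_of_code):
    """Smells per KLOC, so assignments of different sizes are comparable."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return 1000.0 * smell_count / lines_of_code

# Comparing two submissions of different sizes:
before = smell_density(12, 800)    # early assignment: 12 smells in 800 LOC
after = smell_density(4, 1000)     # later assignment: 4 smells in 1000 LOC
```

Normalizing by size is what makes the before/after comparison in the study meaningful: a longer later assignment with the same raw smell count would otherwise look no better.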
Rangarajan, Sarathkumar. "QOS-aware Web service discovery, selection, composition and application." Thesis, 2020. https://vuir.vu.edu.au/42153/.