Dissertations / Theses on the topic 'Source code quality'



Consult the top 19 dissertations / theses for your research on the topic 'Source code quality.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Young. "Automated source code measurement environment for software quality." Auburn, Ala., 2007. http://repo.lib.auburn.edu/2007%20Fall%20Dissertations/Lee_Young_28.pdf.

2

Thummalapenta, Suresh. "Improving Software Productivity and Quality via Mining Source Code." North Carolina State University, 2011. http://pqdtopen.proquest.com/#viewpdf?dispub=3442531.

3

Hrynko, Alina. "Source code quality in connection to self-admitted technical debt." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97977.

Abstract:
The importance of software code quality is increasing rapidly. With more code being written every day, its maintenance and support become harder and more expensive. New automatic code review tools, such as SonarQube, have been developed to reach quality goals. However, people keep the leading role in the development process, and they sometimes sacrifice quality to speed up development; this is called technical debt. In some cases the developer explicitly admits this trade-off, which is called self-admitted technical debt (SATD). Code quality can also be measured by static code analysis tools such as SonarQube, which detect different kinds of issues. The purpose of this study is to find a connection between the code quality issues reported by SonarQube and those marked as SATD. The research questions are: 1) Is there a connection between the size of the project and the SATD percentage? 2) Which types of issues are the most widespread in code marked by SATD? 3) Did the introduction of SATD influence bug fixing time? The study found SATD percentages ranging from 0% to 20.83%, and no connection between project size and the percentage of SATD. Certain issues seem to relate to SATD, such as “Duplicated code”, “Unused method parameters should be removed”, and “Cognitive Complexity of methods should not be too high”. The introduction of SATD has a minor positive effect on bug fixing time. We hope that these findings can help improve code quality evaluation approaches and development policies.
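
As an aside for readers unfamiliar with how SATD is usually identified: it is typically found by scanning source comments for markers with which developers admit a shortcut. The sketch below illustrates that general idea in Python; the marker list and the single-line comment handling are simplifying assumptions for illustration, not the detection procedure used in this thesis.

    import re
    from pathlib import Path

    # Illustrative SATD markers; real studies use larger, curated keyword lists.
    SATD_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX|WORKAROUND)\b", re.IGNORECASE)
    # Single-line Java/C-style comments only, to keep the sketch short.
    COMMENT = re.compile(r"//(.*)|/\*(.*?)\*/")

    def find_satd(path: Path):
        """Return (line_number, comment_text) pairs whose comments self-admit debt."""
        hits = []
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            for match in COMMENT.finditer(line):
                text = (match.group(1) or match.group(2) or "").strip()
                if SATD_MARKERS.search(text):
                    hits.append((lineno, text))
        return hits

    if __name__ == "__main__":
        for java_file in Path("src").rglob("*.java"):
            for lineno, comment in find_satd(java_file):
                print(f"{java_file}:{lineno}: possible SATD -> {comment}")
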
4

Tévar Hernández, Helena. "Evolution of Software Documentation Over Time: An analysis of the quality of software documentation." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-97561.

Abstract:
Software developers, maintainers, and testers rely on documentation to understand the code they are working with. However, software documentation is perceived as a waste of effort because it is usually outdated. How documentation evolves through a set of releases may show whether there is any relationship between time and quality. The results could help future developers and managers to improve the quality of their documentation and decrease the time developers use to analyze code. Previous studies showed that documentation used to be scarce and low in quality; thus, this research has investigated different variables to check if the quality of the documentation changes over time. Therefore, we have created a tool that extracts and calculates the quality of the comments in code blocks, classes, and methods. The results agree with the previous studies. The quality of the documentation is affected to some extent through the releases, with a tendency to decrease.
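
As a hedged illustration of the kind of measurement such a tool could perform, the sketch below computes one simple documentation indicator (the share of functions and classes carrying a docstring) per release directory; the metric, the Python target language, and the releases/ layout are assumptions for illustration, not the quality model used in the thesis.

    import ast
    from pathlib import Path

    def doc_ratio(source: str) -> float:
        """Fraction of functions, methods, and classes that carry a docstring."""
        tree = ast.parse(source)
        nodes = [n for n in ast.walk(tree)
                 if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))]
        if not nodes:
            return 1.0
        return sum(1 for n in nodes if ast.get_docstring(n)) / len(nodes)

    # Tracking the ratio across releases (e.g., tags checked out into releases/<tag>/)
    # is one way to observe how documentation quality evolves over time.
    for release_dir in sorted(Path("releases").iterdir()):
        ratios = [doc_ratio(p.read_text(errors="ignore")) for p in release_dir.rglob("*.py")]
        if ratios:
            print(release_dir.name, round(sum(ratios) / len(ratios), 3))
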
5

Ribeiro, Athos Coimbra. "Ranking source code static analysis warnings for continuous monitoring of free/libre/open source software repositories." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-20082018-170140/.

Abstract:
While there is a wide variety of both open source and proprietary source code static analyzers available in the market, each of them usually performs better in a small set of problems, making it hard to choose one single tool to rely on when examining a program. Combining the analysis of different tools may reduce the number of false negatives, but yields a corresponding increase in the number of false positives (which is already high for many tools). An interesting solution, then, is to filter these results to identify the issues least likely to be false positives. This work presents kiskadee, a system to support the usage of static analysis during software development by providing carefully ranked static analysis reports. First, it runs multiple static analyzers on the source code. Then, using a classification model, the potential bugs detected by the static analyzers are ranked based on their importance, with critical flaws ranked first and potential false positives ranked last. To train kiskadee's classification model, we post-analyze the reports generated by three tools on synthetic test cases provided by the US National Institute of Standards and Technology. To make our technique as general as possible, we limit our data to the reports themselves, excluding other information such as change histories or code metrics. The features extracted from these reports are used to train a set of decision trees using AdaBoost to create a stronger classifier, achieving 0.8 classification accuracy (the combined false positive rate of the tools used was 0.61). Finally, we use this classifier to rank static analyzer alarms based on the probability of a given alarm being an actual bug. Our experimental results show that, on average, when inspecting warnings ranked by kiskadee, one hits 5.2 times fewer false positives before each bug than when using a randomly sorted warning list.
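
To make the ranking idea concrete, here is a minimal sketch of the general approach the abstract describes: train an AdaBoost ensemble of decision trees on features extracted from static analysis reports, then sort new warnings by the predicted probability of being a real bug. The feature encoding and the tiny data set are invented placeholders, not kiskadee's actual pipeline or training data.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    # Placeholder feature vectors, one row per warning, encoding report-only
    # information (e.g., tool id, checker id, warning category, severity).
    X_train = np.array([[0, 3, 1, 2], [1, 7, 0, 1], [2, 2, 1, 3], [0, 5, 0, 1]])
    y_train = np.array([1, 0, 1, 0])  # 1 = confirmed bug on the labelled benchmark

    # AdaBoost over decision trees, as in the abstract (default base estimator).
    model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Rank unseen warnings: likely-real bugs first, likely false positives last.
    X_new = np.array([[1, 3, 1, 2], [2, 7, 0, 1], [0, 2, 1, 3]])
    scores = model.predict_proba(X_new)[:, 1]
    for idx in np.argsort(-scores):
        print(f"warning {idx}: estimated probability of a real bug = {scores[idx]:.2f}")
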
6

Vetrò, Antonio. "Empirical Assessment of the Impact of Using Automatic Static Analysis on Code Quality." Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506350.

Abstract:
Automatic static analysis (ASA) tools analyze source or compiled code looking for violations of recommended programming practices (called issues) that might cause faults or degrade some dimensions of software quality. Antonio Vetrò focused his PhD on studying how applying ASA impacts software quality, taking as a reference point the quality dimensions specified by the ISO/IEC 25010 standard. The epistemological approach he used is that of empirical software engineering. During his three-year PhD he conducted experiments and case studies in three main areas: functionality/reliability, performance, and maintainability. He empirically showed that specific ASA issues had an impact on these quality characteristics in the contexts under study: removing them from the code thus resulted in a quality improvement. Vetrò also investigated and proposed new research directions for this field: using ASA to improve software energy efficiency and to detect problems deriving from the interaction of multiple languages. The contribution is enriched with a final recommendation of a generalized process for researchers and practitioners with a twofold goal: improve software quality through ASA, and create a body of knowledge on the impact of using ASA on specific software quality dimensions, based on empirical evidence. This thesis represents a first step towards this goal.
7

Come, David. "Analyse de la qualité de code via une approche logique et application à la robotique." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30008.

Abstract:
The quality of source code depends not only on its functional correctness but also on its readability, intelligibility, and maintainability. This is currently an important problem in robotics, where many open-source frameworks do not spread well in industry because of uncertainty about the quality of the code. Code analysis and search tools are effective in improving these aspects. It is important that they let the user specify what she is looking for, in order to take into account the specific features of the project and of the domain. There exist two main representations of source code: its Abstract Syntax Tree (AST) and the Control Flow Graph (CFG) of its functions. Existing specification mechanisms use only one of these representations, which is unfortunate because they offer complementary information. The objective of this work is therefore to develop a method for verifying code compliance with user rules that can benefit from both the AST and the CFG. The method is underpinned by a new logic developed in this work: FO++, a temporal extension of first-order logic. Relying on this logic has several advantages. First of all, it is independent of any programming language and has a formal semantics. Then, once instantiated for a given programming language, it can be used as a means to formalize user-provided properties. Finally, the study of its model-checking problem provides a mechanism for the automatic and correct verification of code compliance. These concepts have been implemented in Pangolin, a tool for the C++ language. Given the code to be checked and a specification (which corresponds to an FO++ formula, written in Pangolin's user language), the tool indicates whether or not the code meets the specification. It also offers a summary of the evaluation in order to locate the code that violates the property, as well as a certificate of the correctness of the result. Pangolin and FO++ have been applied to the field of robotics through the analysis of the quality of ROS packages and the formalization of a ROS-specific design pattern. As a second, more general application to the development of programs in C++, we formalized various good-practice rules for this language. Finally, we showed how it is possible to specify and verify rules that are closely tied to a specific project by checking properties on the source code of Pangolin itself.
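
Pangolin itself checks FO++ rules over the AST and CFG of C++ programs; purely to illustrate the general shape of rule checking over a syntax tree, the sketch below verifies one made-up user rule against Python code using the standard ast module. It is an analogy under stated assumptions, not FO++ or Pangolin.

    import ast

    RULE = "functions should not declare more than 4 parameters"  # example user rule

    def violations(source: str, max_params: int = 4):
        """Yield (line, function name, parameter count) for nodes breaking the rule."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                n_params = len(node.args.args) + len(node.args.kwonlyargs)
                if n_params > max_params:
                    yield node.lineno, node.name, n_params

    code = "def ok(a, b): ...\ndef too_wide(a, b, c, d, e, f): ...\n"
    for lineno, name, count in violations(code):
        print(f"line {lineno}: {name} has {count} parameters; rule: {RULE}")
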
8

Lissy, Alexandre. "Utilisation de méthodes formelles pour garantir des propriétés de logiciels au sein d'une distribution : exemple du noyau Linux." Thesis, Tours, 2014. http://www.theses.fr/2014TOUR4019/document.

Abstract:
In this thesis we are interested in integrating into the Linux distribution produced by Mandriva a quality assurance process that allows ensuring user-defined properties on the source code used. The core work of a distribution and its producer is to create a meaningful aggregate from the software available. This software is free and open source, so it can be adapted to improve the end user's experience; as a consequence, there is less control over the source code. A manual audit can of course be used to make sure it has good properties. Such properties often concern security, but one could think of others. However, more and more software is being integrated into distributions, and each component shows an increase in source code volume: tools are needed to make quality assurance achievable. We start by providing a study of the distribution itself to document its current status. We use it to select packages that we consider critical and for which we can improve things, on the condition that packages similar enough to the rest of the distribution are considered first. This leads us to concentrate on the Linux kernel: we provide a state-of-the-art overview of code verification applied to this part of the distribution. We identify a need for a better understanding of the structure of the source code. To address these needs we propose to represent the source code as a graph and use it to help document and understand its structure. Specifically, we study applying state-of-the-art community detection algorithms to help handle the combinatorial explosion. We also propose an architecture integrated into the distribution's build system for executing verification tools and for collecting and handling the data they produce.
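
As a rough sketch of the graph-based idea described above, the snippet below builds a small file-dependency graph and applies an off-the-shelf community detection algorithm from networkx. The edges are invented placeholders; extracting the real graph from the Linux kernel sources is, of course, the substantial part of the thesis.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Invented dependencies between source files, standing in for the graph
    # that would be extracted from the kernel's call/include relations.
    edges = [
        ("fs/ext4/inode.c", "fs/ext4/super.c"),
        ("fs/ext4/super.c", "fs/ext4/balloc.c"),
        ("fs/ext4/inode.c", "mm/filemap.c"),
        ("drivers/net/phy.c", "drivers/net/mdio.c"),
        ("drivers/net/e1000.c", "drivers/net/phy.c"),
    ]
    graph = nx.Graph(edges)

    # Community detection groups tightly coupled files, which helps document the
    # architecture and bound the scope of later analyses.
    for i, community in enumerate(greedy_modularity_communities(graph)):
        print(f"community {i}: {sorted(community)}")
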
9

Pavlíčková, Jarmila. "Model zralosti zdrojového kódu objektových aplikací." Doctoral thesis, Vysoká škola ekonomická v Praze, 2007. http://www.nusl.cz/ntk/nusl-191816.

Abstract:
The goal of this dissertation was to develop a maturity model for the source code of object-oriented applications and to use this model to verify the quality of the source code of students' applications. The starting point of the thesis was an analysis of existing standards, norms, methodologies, and summaries of best practice for assessing the quality of software products, together with an analysis of the factors that affect the quality of the source code of object-oriented applications. To validate the results of the analysis of these factors, it was complemented with a field survey conducted among specialists with programming experience. The model for determining the maturity of the source code of object-oriented applications was designed on the basis of this analysis and the questionnaire, using the statistical method of cluster analysis. The model and the procedure for its use in the evaluation of source code are described. The model was pilot tested in an education program at the University of Economics in Prague.
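
For illustration only, the sketch below shows the kind of cluster analysis the abstract mentions: grouping submissions by a few code metrics with k-means and then reading the clusters as candidate maturity levels. The metrics, values, and number of clusters are assumptions, not the model developed in the dissertation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Illustrative per-submission metrics:
    # [average method length, duplication %, comment ratio]
    metrics = np.array([
        [12.0, 3.0, 0.25],
        [45.0, 18.0, 0.05],
        [15.0, 5.0, 0.20],
        [60.0, 25.0, 0.02],
        [10.0, 2.0, 0.30],
    ])

    scaled = StandardScaler().fit_transform(metrics)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
    # Each cluster can then be interpreted as a maturity level of the source code.
    print("cluster per submission:", labels)
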
10

Lavesson, Alexander, and Christina Luostarinen. "OAuth 2.0 Authentication Plugin for SonarQube." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-67526.

Abstract:
Many web services today give users the opportunity to sign in using an account belonging to a different service. Letting users authenticate themselves using another service eliminates the need for a user to create a new identity for each service they use. Redpill Linpro uses the open source platform SonarQube for code quality inspection. Since developers in the company are registered users of another open source platform named OpenShift, they would like to authenticate themselves to SonarQube using their OpenShift identity. Our task was to create a plugin that offers users the functionality to authenticate themselves to SonarQube using OpenShift as their identity provider by applying the authentication framework OAuth. The project resulted in a plugin of high code quality according to SonarQube's assessment. Redpill Linpro will use the plugin to easily access SonarQube's functionality when using the application in their developer platform.
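
SonarQube authentication plugins are implemented in Java against SonarQube's plugin API, so the following is only a language-agnostic sketch (in Python) of the core step of the OAuth 2.0 authorization code flow that such a plugin performs: exchanging the authorization code for an access token. The endpoint URL and client credentials are placeholders.

    import requests

    # Placeholder values; a real deployment uses the identity provider's
    # registered client id/secret and its published token endpoint.
    TOKEN_URL = "https://openshift.example.com/oauth/token"
    CLIENT_ID = "sonarqube"
    CLIENT_SECRET = "change-me"
    REDIRECT_URI = "https://sonarqube.example.com/oauth2/callback/openshift"

    def exchange_code_for_token(authorization_code: str) -> str:
        """Swap the authorization code returned by the provider for an access token."""
        response = requests.post(TOKEN_URL, timeout=10, data={
            "grant_type": "authorization_code",
            "code": authorization_code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        })
        response.raise_for_status()
        return response.json()["access_token"]
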
11

Habchi, Sarra. "Understanding mobile-specific code smells." Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I089.

Abstract:
Object-oriented code smells are well-known concepts in software engineering. They refer to bad design and development practices commonly observed in software systems. With the emergence of mobile apps, new classes of code smells have been identified to refer to bad development practices that are specific to mobile platforms. These mobile-specific code smells differ from object-oriented ones by focusing on performance issues reported in the documentation or developer guidelines. Since their identification, many research works have addressed mobile-specific code smells, proposing detection tools and studying them. Nonetheless, most of these studies only focused on measuring the performance impact of such code smells and did not provide any insights about their motives and potential solutions. In particular, we lack knowledge about (i) the rationales behind the accrual of mobile code smells, (ii) the developers' perception of mobile code smells, and (iii) the generalizability of code smells across different mobile platforms. These gaps hinder the understanding of mobile code smells and consequently prevent the design of adequate solutions for them. Therefore, we conduct in this thesis a series of empirical studies with the aim of understanding mobile code smells. First, we study the expansion of code smells across different mobile platforms. Then, we conduct a large-scale study to analyze the change history of mobile apps and discern the factors that favor the introduction and survival of code smells. To consolidate these studies, we also perform a user study to investigate developers' perception of code smells and the adequacy of static analyzers as a solution for coping with them. Finally, we perform a qualitative study to question the established foundation about the definition and detection of mobile code smells. The results of these studies revealed important research findings. Notably, we showed that pragmatism, prioritization, and individual attitudes are not relevant factors for the accrual of mobile code smells. The problem is rather caused by ignorance and oversight, which are prevalent among mobile developers. Furthermore, we highlighted several flaws in the code smell definitions currently adopted by the research community. These results allowed us to elaborate some recommendations for researchers and tool makers willing to design detection and refactoring tools for mobile code smells. On top of that, our results opened perspectives for research works on the identification of mobile code smells and development practices in general.
12

Boulanger, Isabelle. "Lillgrund Wind Farm Modelling and Reactive Power Control." Thesis, KTH, Elektriska energisystem, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-119256.

Abstract:
The installation of wind power plants has increased significantly in recent years due to the need for renewable and clean energy sources. Before a wind power project is carried out, many pre-studies are required in order to verify the possibility of integrating a wind power plant into the electrical network. The creation of models in different software tools and their simulation can provide assurance of secure operation that meets the numerous requirements imposed by the electrical system. Hence, this Master's thesis work consists of the creation of a wind turbine model. This model represents the turbines installed at Lillgrund wind farm, the biggest wind power plant in Sweden. The objectives of this project are first to develop an accurate model of the wind turbines installed at Lillgrund wind farm and then to use it in different kinds of simulations. Those simulations test the wind turbine operating according to different control modes. Also, a power quality analysis is carried out, studying in particular two power quality phenomena, namely the response to voltage sags and harmonic distortion. The model is created in the software PSCAD, which enables dynamic and static simulations of electromagnetic and electromechanical systems. The model of the wind turbine contains the electrical machine, the power electronics (converters), and the controls of the wind turbine. In particular, three different control modes, namely voltage control, reactive power control, and power factor control, are implemented, tested, and compared. The model is tested in different cases of voltage sag, and the study verifies the fault ride-through capability of the turbine. Moreover, a harmonics analysis is done. Finally, the work draws conclusions about two power quality parameters.
13

Hecht, Geoffrey. "Détection et analyse de l'impact des défauts de code dans les applications mobiles." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10133/document.

Abstract:
Mobile applications are becoming complex software systems that must be developed quickly and evolve continuously to fit new user requirements and execution contexts. However, addressing these constraints may result in poor low-level design choices, known as code smells. The presence of code smells within software systems may incidentally degrade their quality and performance, and hinder their maintenance and evolution. Thus, it is important to know these smells but also to detect and correct them. While code smells are well known in object-oriented applications, their study in mobile applications is still in its infancy. Moreover, there is a lack of tools to detect and correct them. That is why we present a classification of 17 code smells that may appear in Android applications, as well as a tool to detect and correct code smells on Android. We apply and validate our approach on large numbers of applications (over 3000) in two studies evaluating the presence and evolution of the number of code smells in popular applications. In addition, we also present two approaches to assess the impact of correcting code smells on performance and energy consumption. These approaches have allowed us to observe that the correction of code smells is beneficial in most cases.
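
Purely as flavour, here is a toy detector for one Android-specific smell often discussed in this line of work, the use of HashMap where the platform's ArrayMap or SparseArray is usually preferable for small collections. The regex-based matching is a deliberate simplification; the detectors built in this research area work on the AST or bytecode.

    import re
    from pathlib import Path

    # Toy check for the "HashMap usage" smell; real tools analyse the AST/bytecode.
    HASHMAP_NEW = re.compile(r"new\s+HashMap\s*<")

    def detect(java_root: str):
        for path in Path(java_root).rglob("*.java"):
            lines = path.read_text(errors="ignore").splitlines()
            for lineno, line in enumerate(lines, start=1):
                if HASHMAP_NEW.search(line):
                    print(f"{path}:{lineno}: consider ArrayMap/SparseArray instead of HashMap")

    detect("app/src/main/java")  # hypothetical Android project layout
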
14

Kiendys, Petrus, and Shadi Al-Zara. "Minimumkrav för ett CI-system." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20216.

Abstract:
When a group of developers work on the same code base, conflicts may arise regarding the implementation of modules or subsystems that developers individually work on. These conflicts have to be resolved quickly in order for the project to advance at a steady pace. Developers who do not communicate changes or other necessary deviations may find themselves in a situation where new or modified modules or subsystems are impossible or very difficult to integrate into the mainline code base. This often leads to so-called "integration hell", where it can take huge amounts of time to adapt new code to the current state of the code base. One strategy that can be deployed to counteract this trend is called "continuous integration". This practice offers a wide range of advantages when a group of developers collaborates on writing clean and stable code. Continuous integration can be put into practice without the use of any tools, as it is a "way to do things" rather than an actual tool. With that said, it is possible to support the practice with a tangible tool called a CI-system. This study aims to give insight into the makings of a CI-system and what it fundamentally consists of and has to be able to do. A study of contemporary research reports regarding the subject and a survey were performed in order to substantiate claims and conclusions. Core characteristics of CI-systems are grouped into "functional requirements" and "non-functional requirements (quality attributes)". By doing this, it is possible to quantify and categorize various core components and functionalities of a typical CI-system. This study also contains an attachment which provides instructions on how to get started with implementing your own CI-server using the CI-system software "TeamCity". The conclusion of this study is that a CI-system is an important tool that enables a more efficient software development process. By making use of CI-systems, developers can refine the development process by preventing integration problems, automating some parts of the work process (build, test, feedback, deployment), and quickly finding and solving integration issues.
15

Halbach, Till. "Error-robust coding and transformation of compressed hybered hybrid video streams for packet-switched wireless networks." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-136.

Abstract:

This dissertation considers packet-switched wireless networks for transmission of variable-rate layered hybrid video streams. Target applications are video streaming and broadcasting services. The work can be divided into two main parts.

In the first part, a novel quality-scalable scheme based on coefficient refinement and encoder quality constraints is developed as a possible extension to the video coding standard H.264. After a technical introduction to the coding tools of H.264 with the main focus on error resilience features, various quality scalability schemes in previous research are reviewed. Based on this discussion, an encoder-decoder framework is designed for an arbitrary number of quality layers, thereby also enabling region-of-interest coding. After that, the performance of the new system is exhaustively tested, showing that the bit rate increase typically encountered with scalable hybrid coding schemes is, for certain coding parameters, only small to moderate. The double- and triple-layer constellations of the framework are shown to outperform other systems.

The second part considers layered code streams as generated by the scheme of the first part. Various error propagation issues in hybrid streams are discussed, which leads to the definition of a decoder quality constraint and a segmentation of the code stream to transmit. A packetization scheme based on successive source rate consumption is drafted, followed by the formulation of the channel code rate optimization problem for an optimum assignment of available codes to the channel packets. Proper MSE-based error metrics are derived, incorporating the properties of the source signal, a terminate-on-error decoding strategy, error concealment, inter-packet dependencies, and the channel conditions. The Viterbi algorithm is presented as a low-complexity solution to the optimization problem, showing a great adaptivity of the joint source channel coding scheme to the channel conditions. An almost constant image quality is achieved, also in mismatch situations, while the overall channel code rate decreases only as little as necessary as the channel quality deteriorates. It is further shown that the variance of code distributions is only small, and that the codes are assigned irregularly to all channel packets.

A double-layer constellation of the framework clearly outperforms other schemes by a substantial margin.

Keywords — Digital lossy video compression, visual communication, variable bit rate (VBR), SNR scalability, layered image processing, quality layer, hybrid code stream, predictive coding, progressive bit stream, joint source channel coding, fidelity constraint, channel error robustness, resilience, concealment, packet-switched, mobile and wireless ATM, noisy transmission, packet loss, binary symmetric channel, streaming, broadcasting, satellite and radio links, H.264, MPEG-4 AVC, Viterbi, trellis, unequal error protection

16

Moura, Joao A. "Desenvolvimento e construção de sistema automatizados para controle de qualidade na produção de sementes de iodo-125." Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26454.

Thesis (Doctorate in Nuclear Technology). Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN/SP.
17

Mhamdi, Maroua. "Méthodes de transmission d'images optimisées utilisant des techniques de communication numériques avancées pour les systèmes multi-antennes." Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2281/document.

Abstract:
This work is devoted to improving the coding/decoding performance of still image transmission schemes over noisy, realistic channels. For this purpose, we propose the development of optimized image transmission methods focusing on both the application and physical layers of wireless networks. In order to ensure a good quality of service, efficient compression algorithms (JPEG2000 and JPWL) are used at the application layer, enabling the receiver to reconstruct the images with maximum fidelity. Furthermore, to ensure transmission over wireless channels with a minimum BER at reception, advanced transmission, coding, and modulation techniques are used at the physical layer (MIMO-OFDM system, adaptive modulation, FEC, etc.). First, we propose a robust transmission system for JPWL-encoded images integrating a joint source-channel decoding scheme based on soft-input decoding techniques. Next, the optimization of an image transmission scheme over a realistic MIMO-OFDM channel is considered. The optimized image transmission strategy is based on soft-input decoding techniques and a link adaptation approach. The proposed transmission scheme offers the possibility of jointly implementing UEP, UPA, adaptive modulation, adaptive source coding, and joint decoding strategies, in order to improve the visual quality of the image at reception. Then, we propose a robust transmission system for embedded bit streams based on a concatenated block coding mechanism offering an unequal error protection strategy. Thus, the novelty of this study consists in proposing efficient solutions for the global optimization of the wireless communication system to improve transmission quality.
18

Hu, Chao-Hsiang (胡朝翔). "An Instructional Design that Improves Students' Source Code Quality by Reducing Bad Smells." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/87rwb7.

Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer Science and Information Engineering
Academic year 100 (2011/2012)
Programs with correct functionality do not necessarily have good code quality as well. Therefore, for teaching, it is desirable to guide students to pay more attention to code quality when writing programming assignments. Consequently, the grading policy for students' programming assignments should consider both correctness and quality. However, when grading code quality, if the teaching assistant (TA) simply assigns a score level instead of providing students with in-depth feedback, students will have difficulty improving their code quality. This thesis proposes a pedagogical method that examines the existence of bad smells as part of the grading policy. Students are provided with detailed feedback that reveals the location and the kind of bad smells within the code, along with a peer code review that reduces bad smells. The thesis takes the students of the 2011 Windows Programming course at National Taipei University of Technology as the research subjects, assesses the bad smell density of students' programming assignments, and surveys the class participants with a designed questionnaire to verify the effectiveness of the proposed method in improving code quality. The proposed method requires the TA to spend a large amount of time grading programming assignments. Though demanding, the grading can be done within the working hours of a half-time TA.
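
The 'bad smell density' mentioned above can be read as the number of detected smells normalised by code size; a minimal sketch of such a grading aid follows, with smells per thousand lines of code as an assumed definition.

    def bad_smell_density(smell_count: int, lines_of_code: int) -> float:
        """Bad smells per thousand lines of code (one plausible density definition)."""
        return 1000.0 * smell_count / max(lines_of_code, 1)

    # Example: comparing two submissions of the same programming assignment.
    submissions = {"student_a": (7, 820), "student_b": (3, 910)}
    for student, (smells, loc) in submissions.items():
        print(student, round(bad_smell_density(smells, loc), 2), "smells/KLOC")
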
19

Rangarajan, Sarathkumar. "QoS-aware Web service discovery, selection, composition and application." Thesis, 2020. https://vuir.vu.edu.au/42153/.

Abstract:
Since the beginning of the 21st century, service-oriented architecture (SOA) has emerged as an advancement of distributed computing. SOA is a framework in which software modules are developed with straightforward interfaces, and each module serves a specific array of functions. It delivers enterprise applications individually or integrated into a larger composite Web service. However, SOA implementation faces several challenges that hinder its broader adoption. This thesis highlights three significant challenges in the implementation of SOA. The abundance of functionally similar Web services and the lack of integration with non-functional features such as Quality of Service (QoS) lead to difficulties in QoS prediction. Thus, the first challenge to be addressed is to find an efficient scheme for QoS prediction. The use of software source code metrics is a widely accepted alternative solution. Source code metrics are measured at the micro level and aggregated at the macro level to represent the software adequately. However, the effect of aggregation schemes on QoS prediction using source code metrics remains unexplored. An inequality distribution model, the Theil index, is proposed in this research to aggregate micro-level source code metrics for three different datasets and compare the quality of the resulting QoS prediction. The experimental results show that the Theil index is a practical solution for effective QoS prediction. The second challenge is to search for and compose suitable Web services without the need for expertise in composition tools. Existing approaches require system engineers with extensive knowledge of SOA techniques. A keyword-based search is a common approach for information retrieval that does not require an understanding of a query language or the underlying data structure. The proposed framework uses a schema-based keyword search over a relational database for efficient Web service search and composition. Experiments are conducted with the WS-Dream data set to evaluate the Web service search and composition framework using adequate performance parameters. The results of the quality-constraint experiments show that the schema-based keyword search can achieve a better success rate than existing approaches. Building an efficient data architecture for SOA applications is the third challenge, as real-world SOA applications are required to process a vast quantity of data to produce a valuable service on demand. Contemporary SOA data processing systems such as the Enterprise Data Warehouse (EDW) lack scalability. A data lake, a productive data environment, is proposed to improve data ingestion for SOA systems. The data lake architecture stores both structured and unstructured data using the Hadoop Distributed File System (HDFS). Experimental results compare the data ingestion times of the data lake and the EDW. In the evaluation, the data lake-based architecture is implemented for a personalized medication suggestion system, and the data lake generates patient clusters more concisely than current EDW-based approaches. In summary, this research addresses three significant challenges to the broader adoption of SOA. The Theil index-based data aggregation model enables QoS prediction without depending on the Web service registry. Service engineers with limited knowledge of SOA techniques can exploit a schema-based keyword search for Web service search and composition. The data lake shows its potential to act as a data architecture for SOA applications.
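
For reference, the Theil index used for aggregation is the inequality measure T = (1/n) * sum_i (x_i / mu) * ln(x_i / mu), where mu is the mean of the values. The sketch below applies it to a list of method-level metric values to produce one macro-level number; skipping zero-valued entries is an assumption of this illustration, not necessarily the convention used in the thesis.

    import math

    def theil_index(values):
        """Theil's T index: 0 for perfectly even values, growing with inequality."""
        xs = [v for v in values if v > 0]   # zero entries skipped here by assumption
        if not xs:
            return 0.0
        mean = sum(xs) / len(xs)
        return sum((x / mean) * math.log(x / mean) for x in xs) / len(xs)

    # Aggregating a micro-level metric (e.g., cyclomatic complexity per method)
    # into one macro-level value describing the whole class or service.
    method_complexities = [1, 1, 2, 3, 5, 21]
    print(round(theil_index(method_complexities), 4))
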
