Dissertations / Theses on the topic 'Genetic software engineering'

Consult the top 50 dissertations / theses for your research on the topic 'Genetic software engineering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hulse, Paul. "A study of topical applications of genetic programming and genetic algorithms in physical and engineering systems." Thesis, University of Salford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391313.

2

Nettelblad, Carl. "Using Markov models and a stochastic Lipschitz condition for genetic analyses." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120295.

Abstract:
A proper understanding of biological processes requires an understanding of genetics and evolutionary mechanisms. The vast amounts of genetic information that can routinely be extracted with modern technology have so far not been accompanied by an equally extended understanding of the corresponding processes. The relationship between a single gene and the resulting properties, the phenotype, of an individual is rarely clear. This thesis addresses several computational challenges regarding identifying and assessing the effects of quantitative trait loci (QTL), genomic positions where variation affects a trait. The genetic information available for each individual is rarely complete, meaning that the unknown genotype at the loci modelled also needs to be addressed. This thesis presents new tools for employing the available information in a way that maximizes the information used, by using hidden Markov models (HMMs), resulting in a change in algorithm runtime complexity from exponential to log-linear in terms of the number of markers. It also proposes the introduction of inferred haplotypes to further increase the power to assess these unknown variables for pedigrees of related, genetically diverse individuals. Modelling the consequences of partial genetic information is also treated. Furthermore, genes do not affect traits directly, but are rather expressed in the environment of, and in concordance with, other genes. Therefore, significant interactions can be expected between genes, where some combination of genetic variation gives a pronounced, or even opposite, effect compared to when occurring separately. This thesis addresses how to perform efficient scans for multiple interacting loci, as well as how to derive highly accurate empirical significance tests in these settings. This is done by analyzing the mathematical properties of the objective function describing the quality of model fits, and reformulating it through a simple transformation. Combined with the presented prototype of a problem-solving environment, these developments can make multi-dimensional searches for QTL routine, allowing the pursuit of new biological insight.
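
The central computational device here is the forward recursion of an HMM over genetic markers. Below is a minimal sketch of that recursion, with made-up states and probabilities rather than anything from the thesis software; it illustrates why inference needs only a single pass over the markers instead of enumerating all genotype combinations.

import numpy as np

def forward(init, trans, emit, observations):
    """Return P(observations) under an HMM in O(markers * states^2) time."""
    alpha = init * emit[:, observations[0]]      # state probabilities at marker 0
    for obs in observations[1:]:                 # one update per marker
        alpha = (alpha @ trans) * emit[:, obs]
    return alpha.sum()

init = np.array([0.5, 0.5])                      # two hidden genotype states
trans = np.array([[0.99, 0.01], [0.01, 0.99]])   # recombination between markers is rare
emit = np.array([[0.9, 0.1], [0.2, 0.8]])        # observed allele given hidden state
print(forward(init, trans, emit, [0, 0, 1, 1]))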
3

Haq, Zia Ul. "Application of genetic algorithms for irrigation water scheduling." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72987/.

Abstract:
A typical irrigation scheduling problem is one of preparing a schedule to service a group of outlets. These outlets may either be serviced sequentially or simultaneously. This problem has an analogy with the classical earliness/tardiness machine scheduling problems in operations research (OR). In previously published work, integer programming was used to solve such problems; however, such scheduling problems belong to a class of combinatorial problems known to be computationally demanding (NP-hard), as is widely reported in OR. Hence integer programming can only be used to solve relatively small problems, usually in a research environment where considerable computational resources and time can be allocated to solving a single schedule. For practical applications, meta-heuristics such as genetic algorithms, simulated annealing or tabu search methods need to be used. However, as reported in the literature, these need to be formulated carefully and tested thoroughly. This thesis demonstrates how arranged-demand irrigation scheduling problems can be correctly formulated and solved using genetic algorithms (GA). By interpreting arranged-demand irrigation scheduling problems as single or multi-machine scheduling problems, the wealth of information accumulated over decades in OR is capitalized on. The objective is to schedule irrigation supplies as close as possible to the requested supply time of the farmers to provide a better level of service. This is in line with the concept of Service Oriented Management (SOM), described as the central goal of irrigation modernization in recent literature. This thesis also emphasizes the importance of rigorous evaluation of heuristics such as GA. First, a series of single-machine models is presented that models the warabandi (rotation) type of irrigation distribution systems, where farmers are supplied water sequentially. Next, the multimachine models are presented, which model irrigation water distribution systems where several farmers may be supplied water simultaneously. Two types of multimachine models are defined: simple multimachine models, where all the farmers are supplied with identical discharges, and complex multimachine models, where the farmers are allowed to demand different discharges. Two different approaches, the stream tube approach and the time block approach, are used to develop the multimachine models. These approaches are evaluated and compared to determine the suitability of either for irrigation scheduling problems, which is one of the significant contributions of this thesis. The multimachine models are further enhanced by incorporating travel times, which are an important part of the surface irrigation canal system and need to be taken into account when determining irrigation schedules. The models presented in this thesis are unique in many aspects. The potential of GA for a wide range of irrigation scheduling problems under arranged-demand irrigation systems is fully explored through a series of computational experiments.
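
To make the machine-scheduling analogy concrete, here is a minimal sketch (with invented durations and requested times, not the thesis models) of a GA searching for the single-machine service order that minimises total deviation from the farmers' requested start times:

import random

durations = [4, 2, 6, 3, 5]        # hours of water each outlet needs
requested = [0, 4, 5, 12, 15]      # start times requested by each farmer

def deviation(order):
    """Earliness/tardiness penalty of servicing outlets in the given order."""
    t, penalty = 0, 0
    for outlet in order:
        penalty += abs(t - requested[outlet])
        t += durations[outlet]
    return penalty

def crossover(a, b):
    """Order crossover: keep a prefix of a, fill the rest in b's order."""
    cut = random.randrange(len(a))
    return a[:cut] + [g for g in b if g not in a[:cut]]

pop = [random.sample(range(5), 5) for _ in range(30)]
for _ in range(100):
    pop.sort(key=deviation)
    pop = pop[:10] + [crossover(*random.sample(pop[:10], 2)) for _ in range(20)]
best = min(pop, key=deviation)
print(best, deviation(best))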
4

Jayawardena, Mahen. "Parallel algorithms and implementations for genetic analysis of quantitative traits." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-85815.

Abstract:
Many important traits in plants, animals and humans are quantitative, and most such traits are generally believed to be regulated by multiple genetic loci. Standard computational tools for analysis of quantitative traits use linear regression models for relating the observed phenotypes to the genetic composition of individuals in a population. However, using these tools to simultaneously search for multiple genetic loci is very computationally demanding. The main reason for this is the complex nature of the optimization landscape for the multidimensional global optimization problems that must be solved. This thesis describes parallel algorithms and implementation techniques for such optimization problems. The new computational tools will eventually enable genetic analysis exploiting new classes of multidimensional statistical models, potentially resulting in interesting results in genetics. We first describe how the algorithm used for global optimization in the standard, serial software is parallelized and implemented on a grid system. Then, we also describe a parallelized version of the more elaborate global optimization algorithm DIRECT and show how this can be deployed on grid systems and other loosely-coupled architectures. The parallel DIRECT scheme is further developed to exploit both coarse-grained parallelism in grids or clusters and fine-grained, tightly-coupled parallelism in multi-core nodes. The results show that excellent speedup and performance can be achieved on grid systems and clusters, even when using a tightly-coupled algorithm such as DIRECT. Finally, a pilot grid portal providing a graphical front-end for our code is implemented. After some further development, this portal can be utilized by geneticists for performing multidimensional genetic analysis of quantitative traits on a regular basis.
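
The coarse-grained layer of such a scheme is essentially a master-worker pattern: candidate positions are generated centrally and the expensive model fits are evaluated in parallel. A minimal sketch of that pattern (a toy objective standing in for the regression fit; not the thesis implementation):

from multiprocessing import Pool

def objective(x):
    # stand-in for an expensive QTL model fit at genome position x
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2

def evaluate_batch(points):
    with Pool() as pool:               # one worker process per core
        return pool.map(objective, points)

if __name__ == "__main__":
    candidates = [(i / 10, j / 10) for i in range(10) for j in range(10)]
    scores = evaluate_batch(candidates)
    print(min(zip(scores, candidates)))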
5

Vázquez, Vilar Marta. "DESIGN OF GENETIC ELEMENTS AND SOFTWARE TOOLS FOR PLANT SYNTHETIC BIOLOGY." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/68483.

Abstract:
Synthetic Biology is an emerging interdisciplinary field that aims to apply the engineering principles of modularity, abstraction and standardization to genetic engineering. The nascent branch of Synthetic Biology devoted to plants, Plant Synthetic Biology (PSB), offers new breeding possibilities for crops, potentially leading to enhanced resistance, higher yield, or increased nutritional quality. To this end, the molecular tools in the PSB toolbox need to be adapted accordingly, to become modular, standardized and more precise. Thus, the overall objective of this Thesis was to adapt, expand and refine DNA assembly tools for PSB to enable the incorporation of functional specifications into the description of standard genetic elements (phytobricks) and to facilitate the construction of increasingly complex and precise multigenic devices, including genome editing tools. The starting point of this Thesis was the modular DNA assembly method known as GoldenBraid (GB), based on type IIS restriction enzymes. To further optimize the GB construct-making process and to better catalog the phytobricks collection, a database and a set of software tools were developed as described in Chapter 1. The final web-based software package, released as GB2.0, was made publicly available at www.gbcloning.upv.es. A detailed description of the functioning of GB2.0, exemplified with the building of a multigene construct for anthocyanin overproduction, was also provided in Chapter 1. As the number and complexity of GB constructs increased, the next step forward consisted in the refinement of the standards with the incorporation of experimental information associated with each genetic element (described in Chapter 2). To this end, the GB package was reshaped into an improved version (GB3.0), which is a self-contained, fully traceable assembly system where the experimental data describing the functionality of each DNA element is displayed in the form of a standard datasheet. The utility of the technical specifications to anticipate the behavior of composite devices was exemplified with the combination of a chemical switch with a prototype of an anthocyanin overproduction module equivalent to the one described in Chapter 1, resulting in a dexamethasone-responsive anthocyanin device. Furthermore, Chapter 3 describes the adaptation and functional characterization of CRISPR/Cas9 genome engineering tools to the GB technology. The performance of the adapted tools for gene editing, transcriptional activation and repression was successfully validated by transient expression in N. benthamiana. Finally, Chapter 4 presents a practical implementation of GB technology for precision plant breeding. An intragenic construct comprising an intragenic selectable marker and a master regulator of flavonoid biosynthesis was stably transformed in tomato, resulting in fruits with enhanced flavonol content. Altogether, this Thesis shows the implementation of increasingly complex and precise genetic designs in plants using standard elements and modular tools following the principles of Synthetic Biology.
Vázquez Vilar, M. (2016). DESIGN OF GENETIC ELEMENTS AND SOFTWARE TOOLS FOR PLANT SYNTHETIC BIOLOGY [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68483
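
The software side of such an assembly standard largely amounts to bookkeeping over part boundaries: in type IIS methods like GoldenBraid, each phytobrick carries fixed 4-nt overhangs, and parts can only ligate where adjacent overhangs match. A minimal sketch of that check (the overhang sequences are illustrative, not the actual GB standard):

parts = [
    ("promoter",   "GGAG", "AATG"),   # (name, 5' overhang, 3' overhang)
    ("cds",        "AATG", "GCTT"),
    ("terminator", "GCTT", "CGCT"),
]

def assembles(parts):
    """True if every part's 3' overhang matches the next part's 5' overhang."""
    return all(p[2] == q[1] for p, q in zip(parts, parts[1:]))

print(assembles(parts))   # True: promoter -> cds -> terminator is the only valid order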
6

Bueno, Paulo Marcos Siqueira. "Geração de dados de teste orientada à diversidade com o uso de meta-heurísticas." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260996.

Abstract:
Advisor: Mario Jino
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Software testing techniques and criteria establish required elements to be exercised during testing. Test data generation aims at selecting test data from the software's multidimensional input domain to satisfy a given criterion. A line of work on test data generation applies metaheuristics to search the space of possible software inputs for those that satisfy a given criterion; this field is named Search-Based Software Testing. This thesis proposes a new technique, Diversity Oriented Test Data Generation (DOTG). The technique embodies the intuition, which can be found in good testers, that the variety, or diversity, of the test data used to test a software system has some relation to the completeness, or quality, of the testing performed. We propose different perspectives for the test diversity concept; each one takes into account a different kind of information to evaluate the diversity. A metamodel is also defined to guide the development of the DOTG perspectives. We developed the input domain perspective for diversity (DOTG-ID), which considers the positions of the test data in the software input domain to compute a diversity value for the test sets. We propose a measure of distance between test data and a measure of diversity of test sets. For the automatic generation of high-diversity test sets, three metaheuristics were developed: the SA-DOTG, based on Simulated Annealing; the GA-DOTG, based on Genetic Algorithms; and the SR-DOTG, based on the dynamics of electrically charged particle systems. The empirical evaluation of DOTG-ID includes a Monte Carlo simulation, performed to study the influence of factors on the technique's effectiveness, and an experiment with programs, carried out to evaluate the effect of test set diversity on the attained coverage values, measured with respect to data-flow coverage and mutation coverage. The evaluation results are statistically significant, indicating that in most cases test sets with high diversity reach higher effectiveness and coverage values than randomly generated test sets of the same size.
Doctorate
Computer Engineering
Doctor in Electrical Engineering
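
A minimal sketch of the input-domain intuition (not the thesis' actual DOTG-ID distance and diversity definitions): score a candidate test set by the sum of pairwise distances between its inputs, so that more spread-out sets score higher, and keep the most diverse set found.

import itertools, math, random

def diversity(test_set):
    """Sum of pairwise Euclidean distances in the input domain."""
    return sum(math.dist(a, b) for a, b in itertools.combinations(test_set, 2))

def random_input():
    return (random.uniform(0, 100), random.uniform(0, 100))   # 2-D input domain

candidates = [[random_input() for _ in range(10)] for _ in range(200)]
best = max(candidates, key=diversity)
print(round(diversity(best), 1))

In the thesis this search is driven by simulated annealing, genetic algorithms, or the charged-particle dynamics mentioned above rather than by sampling candidate sets.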
7

Krüger, Franz David, and Mohamad Nabeel. "Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.

Abstract:
As machine learning (ML) becomes more and more frequent in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally use several rules of thumb to avoid having to spend a decent amount of time tuning the hyperparameters (parameters that control the learning process) of an ML algorithm to gain a high accuracy score. The proposal in this report is to conduct an approach that systematically optimizes the ML algorithms using genetic algorithms (GA) and to evaluate if and how the model should be constructed to find global solutions for a specific data set. By implementing a GA approach on two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set, the model is evaluated by its accuracy scores as well as the computational time, which is then compared against a search method, specifically exhaustive search. The results show that GA works well in finding high accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance may vary between ML algorithms.
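
A minimal sketch of the approach under stated assumptions (scikit-learn for the model and data, a toy GA over two KNN hyperparameters; the report's exact encoding and operators may differ):

import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

def fitness(genes):
    k, weights = genes
    model = KNeighborsClassifier(n_neighbors=k, weights=weights)
    return cross_val_score(model, X, y, cv=5).mean()    # accuracy, as in grid search

def random_genes():
    return (random.randint(1, 30), random.choice(["uniform", "distance"]))

pop = [random_genes() for _ in range(10)]
for _ in range(5):                                      # a few generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]
    children = [(random.choice(parents)[0], random.choice(parents)[1])  # crossover
                for _ in range(6)]
    pop = parents + [(max(1, k + random.randint(-2, 2)), w)             # mutation
                     for k, w in children]
print(max(pop, key=fitness))

Compared with exhaustive search over the same grid, the GA evaluates far fewer configurations, which is the time saving the report measures.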
8

Pečínka, Zdeněk. "Gramatická evoluce v optimalizaci software." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363820.

Abstract:
This master's thesis offers a brief introduction to evolutionary computation. It describes and compares genetic programming and grammar-based genetic programming and their potential use in automatic software repair, and studies possible applications of grammar-based genetic programming to automatic software repair. Grammar-based genetic programming is then used in the design and implementation of a new method for automatic software repair. An experimental evaluation of the implemented automatic repair was performed on a set of test programs.
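
The appeal of the grammar-based variant is that candidates are derived from a grammar, so every individual is syntactically valid by construction. A minimal sketch of grammatical-evolution-style derivation (an illustrative expression grammar, not the thesis' repair grammar):

import random

grammar = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["-"], ["*"]],
}

def derive(symbol, codons, depth=0):
    """Expand symbol by letting integer codons pick productions."""
    if symbol not in grammar:
        return symbol
    rules = grammar[symbol]
    rule = rules[1] if depth > 4 else rules[codons.pop(0) % len(rules)]  # depth cap
    return "".join(derive(s, codons, depth + 1) for s in rule)

codons = [random.randrange(256) for _ in range(50)]
print(derive("<expr>", codons))   # e.g. "x*1-x"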
9

Haraldsson, Saemundur Oskar. "Genetic improvement of software : from program landscapes to the automatic improvement of a live system." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/26007.

Abstract:
In today's technology-driven society, software is becoming increasingly important in more areas of our lives. The domain of software extends beyond the obvious domain of computers, tablets, and mobile phones. Smart devices and the internet-of-things have inspired the integration of digital and computational technology into objects that some of us would never have guessed could be possible or even necessary. Fridges and freezers connected to social media sites, a toaster activated with a mobile phone, physical buttons for shopping, and verbally asking smart speakers to order a meal to be delivered. This is the world we live in and it is an exciting time for software engineers and computer scientists. The sheer volume of code that is currently in use has long since grown beyond any hope of proper manual maintenance. The rate at which mobile application stores such as Google's and Apple's have expanded is astounding. The research presented here aims to shed light on an emerging field of research, called Genetic Improvement (GI) of software. It is a methodology that changes program code to improve existing software. This thesis details a framework for GI that is then applied to explore the fitness landscape of bug fixing in Python software, reduce execution time in a C++ program, and integrate into a live system. We show that software is generally not fragile, and although fitness landscapes for GI are flat, they are not impossible to search in. This conclusion applies equally to bug fixing in small programs and to execution time improvements. The framework's application is shown to be transportable between programming languages with minimal effort. Additionally, it can be easily integrated into a system that runs a live web service.
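
A minimal sketch of how GI frameworks commonly represent and score patches (an assumption about the general technique, not this thesis' exact framework): a patch is a short list of statement-level edits applied to the program's lines and scored by running tests.

source = ["a = x", "a = a + 1", "a = a * 2", "return_value = a"]

def apply_patch(lines, patch):
    lines = list(lines)
    for op, i, j in patch:
        if op == "delete":
            lines[i] = "pass"                    # delete = neutralise a statement
        elif op == "replace":
            lines[i] = lines[j]                  # replace with another existing line
        elif op == "copy":
            lines.insert(i, lines[j])            # reuse code from elsewhere
    return lines

def fitness(patch, x=3, expected=6):
    """One 'test case': the patched program should map x=3 to 6."""
    env = {"x": x}
    try:
        exec("\n".join(apply_patch(source, patch)), {}, env)
        return 1 if env.get("return_value") == expected else 0
    except Exception:
        return -1                                # broken variants score worst

patch = [("delete", 1, 0)]                       # drop the buggy "a = a + 1"
print(apply_patch(source, patch), fitness(patch))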
10

Hacoupian, Yourik. "Mining Aspects through Cluster Analysis Using Support Vector Machines and Genetic Algorithms." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/170.

Abstract:
The main purpose of object-oriented programming is to use encapsulation to reduce the amount of coupling within each object. However, object-oriented programming has some weaknesses in this area. To address this shortcoming, researchers have proposed an approach known as aspect-oriented programming (AOP). AOP is intended to reduce the amount of tangled code within an application by grouping similar functions into an aspect. To demonstrate the powerful aspects of AOP, it is necessary to extract aspect candidates from current object-oriented applications. Many different approaches have been proposed to accomplish this task. One such approach utilizes vector-based clustering to identify possible aspect candidates. In this study, two different types of vectors are applied to two different vector-based clustering techniques. In this approach, each method in a software system S is represented by a d-dimensional vector. These vectors take into account the Fan-in values of the methods as well as the number of calls made to individual methods within the classes in software system S. Then a semi-supervised clustering approach known as Support Vector Clustering is applied to the vectors. In addition, an improved K-means clustering approach based on Genetic Algorithms is also applied to these vectors. The results obtained from these two approaches are then evaluated using standard metrics for aspect mining. In addition to introducing two new clustering-based approaches to aspect mining, this research investigates the effectiveness of the currently known metrics used in aspect mining to evaluate a given vector-based approach. Many of the metrics currently used for aspect mining evaluations are singleton metrics. Such metrics evaluate a given approach by taking into account only one aspect of a clustering technique. This study introduces two different sets of metrics by combining these singleton measures. The iDIV metric combines the Diversity of a partition (DIV), the Intra-cluster distance of a partition (IntraD), and the percentage of the number of methods analyzed (PAM) to measure the overall effectiveness of the diversity of the partitions. The iDISP metric combines the Dispersion of crosscutting concerns (DISP) along with the Inter-cluster distance of a partition (InterD) and the PAM values to measure the quality of the clusters formed by a given method. Lastly, the oDIV and oDISP metrics introduced take into account the complexity of the algorithms in relation to the DIV and DISP values. By comparing the obtained values for each of the approaches, this study is able to identify the best performing method as it pertains to these metrics.
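
A minimal sketch of the vector-based idea (plain k-means via scikit-learn standing in for the dissertation's Support Vector Clustering and GA-improved k-means; feature values are invented): each method becomes a small vector of usage features, and clustering groups methods whose pattern suggests a crosscutting concern.

import numpy as np
from sklearn.cluster import KMeans

# illustrative rows: [fan-in, number of distinct calling classes]
methods = {
    "log":       [40, 12],
    "checkAuth": [35, 10],
    "getName":   [3, 1],
    "setName":   [2, 1],
}
X = np.array(list(methods.values()))
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
for name, label in zip(methods, labels):
    print(name, "-> cluster", label)   # high fan-in methods land together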
11

Mehrmand, Arash. "A Factorial Experiment on Scalability of Search-based Software Testing." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4224.

Abstract:
Software testing is an expensive process, which is vital in the industry. Constructing the test data in software testing incurs the major cost, and knowing which method to use in order to generate the test data is very important. This paper discusses the performance of search-based algorithms (preferably genetic algorithms) versus random testing in software test-data generation. A factorial experiment is designed so that we have more than one factor for each experiment we make. Although much research has been done in the area of automated software testing, this research differs from all of it due to the sample programs (SUTs) that are used. Since the program generation is automatic as well, Grammatical Evolution is used to guide the program generation. The programs are not goal based, but generated according to the grammar we provide, with different levels of complexity. The genetic algorithm is first applied to the programs; then we apply random testing. Based on the results, this paper recommends one method to use for software testing, if the SUT has the same conditions as we had in this study. The SUTs are unlike the sample programs provided by other studies, since they are generated using a grammar.
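
What gives the GA an edge over random testing, where it has one, is a fitness signal such as the branch distance. A minimal sketch of that standard search-based-testing ingredient (an illustration of the general technique, not this thesis' exact setup):

def branch_distance(x, y):
    # target branch: if x == 2 * y: ...
    return abs(x - 2 * y)          # 0 means the branch is taken

def fitness(individual):
    x, y = individual
    return -branch_distance(x, y)  # the GA maximises fitness

print(branch_distance(10, 5))      # 0 -> input (10, 5) covers the branch
print(branch_distance(10, 3))      # 4 -> a near miss, better than a distant one

Random testing gets no such gradient: an input either hits the branch or it does not, which is why its relative performance depends heavily on the structure of the SUT.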
12

Lei, Celestino. "Using genetic algorithms and boosting for data preprocessing." Thesis, University of Macau, 2002. http://umaclib3.umac.mo/record=b1447848.

13

Naik, Apoorv. "Orchestra Framework: Protocol Design for Ad Hoc and Delay Tolerant Networks using Genetic Algorithms." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/43409.

Abstract:
Protocol designs targeted at a specific network scenario or performance metric appear promising on paper, but the complexity and cost of implementing and tuning a routing protocol from scratch present a major bottleneck in the protocol design process. A unique framework called 'Orchestra' is proposed in the literature to support the testing and development of novel routing designs. The idea of the Orchestra framework is to create generic and reusable routing functional components which can be combined to create unique protocol designs customized for a specific performance metric or network setting. The first contribution of this thesis is the development of a generic, modular, scalable and extensible architecture for the Orchestra framework. With the architecture and implementation of the framework completed, the second contribution of this thesis is the development of functional components and strategies to design and implement routing protocols for delay-tolerant networks (DTNs). DTNs are a special type of ad hoc network characterized by intermittent connectivity, long propagation delays and high loss rates. Thus, traditional ad hoc routing approaches cannot be used in DTNs, and special features must be developed for the Orchestra framework to support the design of DTN routing protocols. The component-based architecture of Orchestra can capture a variety of modules that can be used to assemble a routing protocol. However, manually assembling these components may result in suboptimal designs, because it is difficult to determine what the best combination is for a particular set of performance objectives and network characteristics. The third contribution of the thesis addresses this problem: a genetic algorithm based approach to automate the process of routing protocol design is developed and its performance is evaluated in the context of the Orchestra framework.
Master of Science
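
A minimal sketch of the shape of that third contribution (component names, choices and the scoring function are hypothetical placeholders, not Orchestra's actual modules): a chromosome assigns one implementation to each routing functional component, and a GA searches the space of combinations.

import random

components = {
    "forwarding": ["epidemic", "single_copy", "spray_and_wait"],
    "queueing":   ["fifo", "drop_oldest"],
    "beaconing":  ["periodic", "adaptive"],
}

def simulate(design):
    # stand-in for a network simulation returning, e.g., delivery ratio
    rnd = random.Random(hash(tuple(sorted(design.items()))))
    return rnd.random()

def mutate(design):
    key = random.choice(list(components))
    return {**design, key: random.choice(components[key])}

pop = [{k: random.choice(v) for k, v in components.items()} for _ in range(8)]
for _ in range(20):
    pop.sort(key=simulate, reverse=True)
    pop = pop[:4] + [mutate(random.choice(pop[:4])) for _ in range(4)]
print(max(pop, key=simulate))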
14

Eaves, Hugh L. "Evaluating and Improving the Efficiency of Software and Algorithms for Sequence Data Analysis." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4295.

Abstract:
With the ever-growing size of sequence data sets, data processing and analysis are an increasingly large portion of the time and money spent on nucleic acid sequencing projects. Correspondingly, the performance of the software and algorithms used to perform that analysis has a direct effect on the time and expense involved. Although the analytical methods are widely varied, certain types of software and algorithms are applicable to a number of areas. Targeting improvements to these common elements has the potential for wide-reaching rewards. This dissertation research consisted of several projects to characterize and improve upon the efficiency of several common elements of sequence data analysis software and algorithms. The first project sought to improve the efficiency of the short read mapping process, as mapping is the most time-consuming step in many data analysis pipelines. The result was a new short read mapping algorithm and software, demonstrated to be more computationally efficient than existing software and enabling more of the raw data to be utilized. While developing this software, it was discovered that a widely used bioinformatics software library introduced a great deal of inefficiency into the application. Given the potential impact of similar libraries on other applications, and because little research had been done to evaluate library efficiency, the second project evaluated the efficiency of seven of the most popular bioinformatics software libraries, written in C++, Java, Python, and Perl. This evaluation showed that two of the libraries written in the most popular language, Java, were an order of magnitude slower and used more memory than expected based on the language in which they were implemented. The third and final project, therefore, was the development of a new general-purpose bioinformatics software library for Java. This library, known as BioMojo, incorporated a new design approach resulting in vastly improved efficiency. Assessing the performance of this new library using the benchmark methods developed for the second project showed that BioMojo outperformed all of the other libraries across all benchmark tasks, being up to 30 times more CPU efficient than existing Java libraries.
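
A minimal sketch of the kind of micro-benchmark such an evaluation rests on (the task, data size and timing harness here are illustrative, not the dissertation's benchmark suite): time a FASTA-style parse over repeated runs.

import time

def parse_fasta(text):
    records, name, seq = {}, None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if name is not None:
                records[name] = "".join(seq)
            name, seq = line[1:], []
        else:
            seq.append(line)
    if name is not None:
        records[name] = "".join(seq)
    return records

data = "".join(f">read{i}\nACGTACGTACGT\n" for i in range(50_000))
start = time.perf_counter()
for _ in range(5):
    parse_fasta(data)
print(f"{(time.perf_counter() - start) / 5:.3f} s per parse of 50,000 records")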
15

Olofsson, Fredrik, and Johan W. Andersson. "Human-like Behaviour in Real-Time Strategy Games : An Experiment With Genetic Algorithms." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3814.

Abstract:
If a computer game company wants to stay competitive they must offer something extra. For many years, this extra has often been synonymous with better graphics. Lately, and thanks to the Internet, the focus has shifted in favour of more multi-player support. This also means that the requirements on one-player games increase. Our proposal, to meet these new requirements, is that future game AI be made more human-like. One way to achieve this is believed to be the use of learning AI techniques, such as genetic algorithms and neural networks. In this thesis we present the results from an experiment aiming at testing strategy game AI. Test persons played against traditional strategy game AI, a genetic algorithm AI, and other humans to see if they experienced any differences in the behaviour of the opponents.
16

Frier, Jason Ross. "Genetic Algorithms as a Viable Method of Obtaining Branch Coverage." UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/722.

Abstract:
Finding a way to automate the generation of test data is a crucial aspect of software testing. Testing comprises 50% of all software development costs [Korel90]. Finding a way to automate testing would greatly reduce the cost and labor involved in the task of software testing. One of the ways to automate software testing is to automate the generation of test data inputs. For example, in statement coverage, creating test cases that cover all of the conditions required when testing a program would be costly and time-consuming if undertaken manually. Therefore, a way must be found to automate the creation of test data inputs that satisfy all test requirements for a given test. One such way of automating test data generation is the use of genetic algorithms. Genetic algorithms create generations of test inputs and then choose the fittest test inputs, those most likely to satisfy the test requirement, as the ones passed on to the next generation. In this way, the solution to the test requirement problem can be found in an evolutionary fashion. Current research suggests that comparison of genetic algorithms with random test input generation produces varied results. While the results of these studies show promise for the future use of genetic algorithms as an answer to the issue of discovering test inputs that satisfy branch coverage, what is needed is additional experimental research to validate the performance of genetic algorithms in a test environment. This thesis makes use of the EvoSuite plugin tool, a software plugin for the IntelliJ IDEA integrated development environment that runs a genetic algorithm as its main component. The EvoSuite tool is run against 22 Java classes; it automatically generates unit tests and executes them while simultaneously measuring branch coverage of the unit tests against the Java classes under test. The results of this thesis' experimental research are that, just as the literature indicates, the EvoSuite tool performed with varied results. In particular, Fraser's study of the EvoSuite tool as an Eclipse plugin was accurate in depicting how the EvoSuite tool would come to perform as an IntelliJ plugin, namely that the EvoSuite tool would perform poorly for a large number of the classes tested.
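
For readers who want to reproduce this kind of run outside an IDE, EvoSuite also has a command-line interface. A minimal sketch of driving it from a script (the -class, -projectCP and -criterion options follow EvoSuite's documented CLI, but the jar path and class under test are assumptions for illustration):

import subprocess

subprocess.run([
    "java", "-jar", "evosuite.jar",    # assumed location of the EvoSuite jar
    "-class", "com.example.Stack",     # hypothetical class under test
    "-projectCP", "target/classes",    # classpath of the compiled classes
    "-criterion", "branch",            # steer the GA toward branch coverage
], check=True)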
17

Hallier, Andrea Rae. "Variant-curation and database instantiation (Variant-CADI): an integrated software system for the automation of collection, annotation and management of variations in clinical genetic testing." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2218.

Abstract:
One of the tools a clinician has in disease diagnosis and treatment is genetic testing. To generate value in genetic testing, the link between genetic variants and disease must be discovered, documented, and shared within the community. Working with two existing genomic variation tools, Kafeen and Cordova, a new set of features referred to as Variant-Curation and Database Instantiation (Variant-CADI) was identified, designed, implemented and integrated into the existing Cordova system to unite data collection, management and distribution into one cohesive tool accessible through user interfaces. This eliminates the need for users to have specialized knowledge of the underlying implementation, data pipeline or data management to collect the desired disease-specific genetic variations. Using this tool, new disease-specific variation database instances have been initialized and created as demonstrations of the utility of these applications.
18

Shahdad, Mir Abubakr. "Engineering innovation (TRIZ based computer aided innovation)." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3317.

Abstract:
This thesis describes the approach and results of research to create TRIZ-based computer-aided innovation tools. The research centres on two tools: AEGIS (Accelerated Evolutionary Graphics Interface System) and Design for Wow. Both tools are discussed in this thesis in detail, along with the test data, design methodology, test cases, and research. Design for Wow (http://www.designforwow.com) is an attempt to summarize successful inventions/designs from all over the world on a web portal with multiple capabilities. These designs/innovations are then linked to the TRIZ Principles in order to determine whether the innovative aspects of these successful innovations are fully covered by the forty TRIZ Principles. In Design for Wow, a framework is created which is implemented through a review tool. The Design for Wow website includes this tool, which has been used by the researcher, the users of the site, and reviewers to analyse the uploaded data in terms of the strength of the TRIZ Principles linked to it. AEGIS (Accelerated Evolutionary Graphics Interface System) is a software tool developed under this research aimed at helping graphic designers make innovative graphic designs. Again it uses the forty TRIZ Principles as a set of guiding rules in the software. AEGIS creates graphic design prototypes according to the user input and uses the TRIZ Principles framework as a guide to generate innovative graphic design samples. The AEGIS tool is based on a subset of the TRIZ Principles discussed in Chapter 3. In AEGIS, the TRIZ Principles are used to create innovative graphic design effects. The literature review on innovative graphic design (in Chapter 3) has been analysed for links with TRIZ Principles, and the DNA of AEGIS has been built on the basis of this study. Results from various surveys/questionnaires were used to collect innovative graphic design samples, to which TRIZ was then mapped (see section 3.2). The TRIZ effects were mapped to the basic graphic design elements, and the anatomy of the graphic design letters was studied to analyse the TRIZ effects in the collected samples. This study was used to build the TRIZ-based AEGIS tool. Hence, the AEGIS tool applies innovative TRIZ effects to basic graphic design elements (as described in section 3.3). The workings of AEGIS are based on Genetic Algorithms coded specifically to implement TRIZ Principles specialized for graphic design; Chapter 4 discusses the process followed to apply TRIZ Principles to graphic design and to code them using Genetic Algorithms, resulting in the AEGIS tool. Similarly, in Design for Wow, the uploaded content has been analysed for its links with TRIZ Principles (see section 3.1 for the TRIZ Principles). The tool created in Design for Wow is based on the framework of analysing the TRIZ links in the uploaded content. The 'Wow' concept discussed in sections 5.1 and 5.2 is the basis of the Design for Wow website, whereby users upload content they classify as 'Wow'. This content is then further analysed for the 'Wow factor' and mapped to TRIZ Principles as the TRIZ tagging methodology is framed (section 5.5). From the results of the research, it appears that the TRIZ Principles are a comprehensive set of basic innovation building blocks.
Some surveys suggest that, amongst other tools, the TRIZ Principles were the first choice and the most used. They thus have the potential to be used in other innovation domains, to help in their analysis, understanding and potential development.
19

Buzzo, André Vinicius. "Estudo de algoritmo evolutivo com codificação real na geração de dados de teste estrutural e implementação de protótipo de ferramenta de apoio." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275761.

Abstract:
Advisor: Eliane Martins
Dissertation (Master's) - Universidade Estadual de Campinas, Instituto de Computação
Automatic test data generation can be approached as an optimization problem, and evolutionary algorithms have become a focus of much research in this area. Recently a new type of evolutionary algorithm called GEO (Generalized Extremal Optimization) has been explored on a large class of optimization problems. This work presents the use of the GEO evolutionary algorithm with real coding, GEOreal, in test data generation. The performance of this algorithm is compared with several other algorithms, and to better compare the results, two objective functions, which map the data generation problem into an optimization problem, were used. The GEOreal algorithm combined with the Bueno and Jino objective function obtained the best results on the problems addressed. A prototype was developed implementing all the concepts involved in this work, and its performance was compared with other tools already available. The results showed that this prototype outperformed the compared tools, minimizing the time spent in the effort to generate test data.
Master's
Computer Science
Master in Computer Science
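
A minimal sketch of the extremal-optimization idea behind GEO with real coding (an illustration of the published GEO scheme with invented parameters, not the dissertation's implementation): each variable is perturbed in turn, the perturbations are ranked by objective value, and a move of rank k is accepted with probability proportional to k^-tau.

import random

def objective(x):
    return sum(v * v for v in x)           # stand-in for a coverage-oriented fitness

def geo_real_step(x, tau=1.5, step=0.1):
    trials = []
    for i in range(len(x)):
        y = list(x)
        y[i] += random.gauss(0, step)      # perturb one variable
        trials.append((objective(y), y))
    trials.sort(key=lambda t: t[0])        # best perturbation gets rank 1
    while True:
        rank = random.randrange(len(trials))
        if random.random() < (rank + 1) ** -tau:   # accept with P = k^-tau
            return trials[rank][1]

x = [random.uniform(-5, 5) for _ in range(4)]
for _ in range(500):
    x = geo_real_step(x)
print([round(v, 3) for v in x], round(objective(x), 4))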
20

Moskowitz, David. "Automatically Defined Templates for Improved Prediction of Non-stationary, Nonlinear Time Series in Genetic Programming." NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/953.

Abstract:
Soft methods of artificial intelligence are often used in the prediction of non-deterministic time series that cannot be modeled using standard econometric methods. These series, such as occur in finance, often undergo changes to their underlying data generation process, resulting in inaccurate approximations or requiring additional human judgment and input, hindering the potential for automated solutions. Genetic programming (GP) is a class of nature-inspired algorithms that aims to evolve a population of computer programs to solve a target problem. GP has been applied to time series prediction in finance and other domains. However, most GP-based approaches to these prediction problems do not consider regime change. This work introduces two new genetic programming modularity techniques, collectively referred to as automatically defined templates, which better enable prediction of time series involving regime change. These methods, based on earlier established GP modularity techniques, take inspiration from software design patterns and are more closely modeled after the way humans actually develop software. Specifically, a regime detection branch is incorporated into the GP paradigm. Regime-specific behavior evolves in a separate program branch, implementing the template method pattern. A system was developed to test, validate, and compare the proposed approach with earlier approaches to GP modularity. Prediction experiments were performed on synthetic time series and on the S&P 500 index. The performance of the proposed approach was evaluated by comparing prediction accuracy with existing methods. One of the two techniques proposed is shown to significantly improve performance of time series prediction in series undergoing regime change. The second proposed technique did not show any improvement and performed generally worse than existing methods or the canonical approaches. The difference in relative performance was shown to be due to a decoupling of reusable modules from the evolving main program population. This observation also explains earlier results regarding the inferior performance of genetic programming techniques using a similar, decoupled approach. Applied to financial time series prediction, the proposed approach beat a buy-and-hold return on the S&P 500 index as well as the return achieved by other regime-aware genetic programming methodologies. No approach tested beat the benchmark return when factoring in transaction costs.
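
A minimal sketch of the evolved program structure being described (fixed toy branches instead of evolved trees; the regime rule and forecasts are invented): prediction dispatches through a regime-detection branch to a regime-specific behavior branch, mirroring the template method pattern.

def regime_detector(window):
    return "volatile" if max(window) - min(window) > 5 else "calm"

branches = {
    "calm":     lambda window: window[-1],                  # persistence forecast
    "volatile": lambda window: sum(window) / len(window),   # mean reversion
}

def predict(window):
    return branches[regime_detector(window)](window)

print(predict([10, 10.5, 10.2]))   # calm regime -> last value
print(predict([10, 18, 7]))        # volatile regime -> window mean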
21

Monção, Ana Claudia Bastos Loureiro. "Uma abordagem evolucionária para o teste de instruções select SQL com o uso da análise de mutantes." Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/3346.

Abstract:
Software Testing is an important area of Software Engineering for ensuring software quality. It consists of activities that involve significant time and high costs, but which need to be carried out throughout the process of building software. As in other areas of software engineering, there are problems in Software Testing activities whose solution is not trivial. For these problems, several optimization and search techniques have been explored in the attempt to find an optimal or near-optimal solution, giving rise to the research lines Search-Based Software Engineering (SBSE) and Search-Based Software Testing (SBST). This work is part of this context and aims to solve the problem of selecting test data for the execution of tests on SQL statements. Given the number of potential solutions to this problem, the proposed approach combines SQL Mutation Analysis techniques with Evolutionary Computation to find a reduced data set that is able to detect a large number of defects in the SQL statements of a particular application. Based on a heuristic perspective, the proposal uses Genetic Algorithms (GA) to select tuples from an existing (production) database, trying to reduce it to a relevant and effective data set. During the evolutionary process, Mutation Analysis is used to evaluate each test data set selected by the GA. The results obtained from the experiments showed good performance using the Genetic Algorithm metaheuristic and its variations.
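
A minimal sketch of the fitness idea under stated assumptions (a bit mask over production tuples, a stand-in kill predicate instead of actually running mutated SQL, and an invented size penalty):

import random

TUPLES, MUTANTS = 100, 40

def kills(tuple_id, mutant_id):
    # stand-in for: run the mutated SQL statement against this tuple
    rnd = random.Random(tuple_id * 1000 + mutant_id)
    return rnd.random() < 0.05

def fitness(mask):
    selected = [t for t in range(TUPLES) if mask[t]]
    killed = {m for m in range(MUTANTS) for t in selected if kills(t, m)}
    return len(killed) - 0.1 * len(selected)   # reward kills, penalise set size

mask = [random.random() < 0.2 for _ in range(TUPLES)]
print(round(fitness(mask), 1))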
APA, Harvard, Vancouver, ISO, and other styles
22

Amorim, Lucas Benevides Viana de. "Um método para descoberta automática de regras para a detecção de Bad Smells." Universidade Federal de Alagoas, 2014. http://www.repositorio.ufal.br/handle/riufal/1757.

Full text
Abstract:
One technique for maintaining software quality is code refactoring, but to benefit from it one must know where in the code it should be applied. A catalog of bad smells in code has been proposed in the literature as a way of knowing when a piece of code should be refactored and what kind of refactoring should be applied, and this catalog has since been extended by other researchers. However, detecting such bad smells is far from trivial, mainly because of the lack of a precise and consensual definition of each bad smell. In this research work, we propose a solution to the problem of automatic detection of bad smells by means of the automatic discovery of metrics-based rules. To evaluate the effectiveness of the technique, we used a dataset containing software metrics computed for four open-source software systems written in Java (ArgoUML, Eclipse, Mylyn, and Rhino) and, by means of the C5.0 decision tree induction algorithm, generated rules for the detection of the 12 bad smells analyzed in our study. Our experiments show that the generated rules performed very satisfactorily when tested against a separate test dataset. Furthermore, to optimize the proposed approach, a genetic algorithm was implemented to preselect the most informative software metrics for each bad smell, and we show that it is possible to reduce classification error and, in many cases, the size of the generated rules. Compared with existing bad smell detection tools, we show evidence that the proposed technique has advantages.
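The study uses C5.0; as an illustration only, the sketch below shows the shape of the pipeline (metrics in, readable rules out) using scikit-learn's CART tree as a stand-in, with hypothetical metric names and synthetic labels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

metric_names = ["LOC", "WMC", "CBO", "LCOM", "NOM"]   # illustrative metrics
rng = np.random.default_rng(0)
X = rng.random((200, len(metric_names)))              # one row per class/method
y = (X[:, 0] > 0.7) & (X[:, 1] > 0.5)                 # synthetic "God Class" label

# A GA-style preselection would search subsets of columns; here we just
# use the subset it might return, e.g. {LOC, WMC}.
selected = [0, 1]
tree = DecisionTreeClassifier(max_depth=3).fit(X[:, selected], y)
print(export_text(tree, feature_names=[metric_names[i] for i in selected]))
```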
APA, Harvard, Vancouver, ISO, and other styles
23

Crawford, Alistair. "Bad Behaviour: The Prevention of Usability Problems Using GSE Models." Griffith University. School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061108.154141.

Full text
Abstract:
The aim of Human-Computer Interaction (HCI) is to understand and improve the quality of users' experience with the systems and technology they interact with. Recent HCI research requirements have stated a need for a unified predictive approach to system design that consolidates system engineering, cognitive modelling, and design principles into a single 'total system approach.' At present, few methods seek to integrate all three of these aspects into a single method, and of those that do, many are extensions to existing engineering techniques. This thesis, however, proposes a new behaviour-based approach designed to identify usability problems early in the design process, before testing the system with actual users. To address the research requirements, this model uses a new design notation called Genetic Software Engineering (GSE) in conjunction with aspects of a cognitive modelling technique called NGOMSL (Natural GOMS Language). GSE's behaviour tree notation and NGOMSL's goal-oriented format are integrated using a set of simple conversion rules defined in this study. Several well-established design principles, believed to contribute to the eventual usability of a product, are then modelled in GSE. The thesis addresses both the design of simple interfaces and the design of complex ubiquitous technology: the new GSE approach is used to model and predict usability problems in an extensive range of tasks, from programming a VCR to making a video recording on a modern mobile phone. The validity of these findings is tested against actual user tests on the same tasks and devices to demonstrate the effectiveness of the GSE approach. Ultimately, the aim of the study is to demonstrate the effectiveness of the new cognitive- and engineering-based approach at predicting usability problems based on tangible representations of established design principles. This both fulfils the HCI research requirements for a 'total system approach' and establishes a novel approach to user interface and system design.
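As a toy rendering of the conversion idea only (the node fields and the rule shown are illustrative, not the thesis's actual rule set), an NGOMSL method, a goal plus ordered steps, can be mapped onto behaviour-tree-style nodes:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorNode:
    component: str                 # the entity exhibiting the behaviour
    behavior: str                  # what it does / the state it reaches
    children: list = field(default_factory=list)

def ngomsl_to_behavior_tree(goal, steps):
    # Illustrative rule: the goal becomes the root node; each step becomes
    # a child node attributed to the actor named in the step.
    root = BehaviorNode("User", f"accomplish goal: {goal}")
    for actor, action in steps:
        root.children.append(BehaviorNode(actor, action))
    return root

tree = ngomsl_to_behavior_tree(
    "record a programme on the VCR",
    [("User", "press the record button"), ("VCR", "enter recording mode")],
)
print(tree.behavior, "->", [c.behavior for c in tree.children])
```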
APA, Harvard, Vancouver, ISO, and other styles
24

Crawford, Alistair. "Bad Behaviour: The Prevention of Usability Problems Using GSE Models." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366051.

Full text
Abstract:
The aim of Human-Computer Interaction (HCI) is to understand and improve the quality of users' experience with the systems and technology they interact with. Recent HCI research requirements have stated a need for a unified predictive approach to system design that consolidates system engineering, cognitive modelling, and design principles into a single 'total system approach.' At present, few methods seek to integrate all three of these aspects into a single method, and of those that do, many are extensions to existing engineering techniques. This thesis, however, proposes a new behaviour-based approach designed to identify usability problems early in the design process, before testing the system with actual users. To address the research requirements, this model uses a new design notation called Genetic Software Engineering (GSE) in conjunction with aspects of a cognitive modelling technique called NGOMSL (Natural GOMS Language). GSE's behaviour tree notation and NGOMSL's goal-oriented format are integrated using a set of simple conversion rules defined in this study. Several well-established design principles, believed to contribute to the eventual usability of a product, are then modelled in GSE. The thesis addresses both the design of simple interfaces and the design of complex ubiquitous technology: the new GSE approach is used to model and predict usability problems in an extensive range of tasks, from programming a VCR to making a video recording on a modern mobile phone. The validity of these findings is tested against actual user tests on the same tasks and devices to demonstrate the effectiveness of the GSE approach. Ultimately, the aim of the study is to demonstrate the effectiveness of the new cognitive- and engineering-based approach at predicting usability problems based on tangible representations of established design principles. This both fulfils the HCI research requirements for a 'total system approach' and establishes a novel approach to user interface and system design.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
APA, Harvard, Vancouver, ISO, and other styles
25

Freitas, Diogo Machado de. "Geração evolucionária de heurísticas para localização de defeitos de software." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9010.

Full text
Abstract:
Fault localization is a stage of the software life cycle that demands significant project resources, such as time and effort. There are several initiatives toward automating the fault localization process and reducing the associated resources. Many techniques are based on heuristics that use information (spectra) obtained from the execution of test cases to measure the suspiciousness of each program element, i.e., how likely it is to be defective. Spectrum data generally comprise code coverage and test results (passing or failing). The present work presents two approaches based on genetic programming for the fault localization problem: a method to compose a new heuristic from a set of existing ones, and a method to construct heuristics from program mutation analysis data. The innovative aspects of both methods lie in the joint investigation of: (i) specialization of heuristics for particular programs; (ii) application of an evolutionary approach to generate heuristics with non-linear equations; (iii) creation of heuristics by combining traditional heuristics; (iv) use of coverage and mutation spectra extracted from the testing activity; (v) analysis and comparison of the efficacy of methods that use coverage and mutation spectra for fault localization; and (vi) analysis of the quality of mutation spectra as a data source for fault localization. The results showed that both approaches are competitive in their contexts.
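A minimal sketch of what evaluating such heuristics looks like: a spectrum row holds, per program element, ef/ep (failing/passing tests that execute it) and nf/np (those that do not); the "evolved" lambda below merely stands in for one GP-composed, possibly non-linear combination and is not a heuristic from the thesis.

```python
import math

def tarantula(ef, ep, nf, np_):
    # Classic hand-made suspiciousness heuristic, for comparison.
    fail_rate = ef / (ef + nf) if ef + nf else 0.0
    pass_rate = ep / (ep + np_) if ep + np_ else 0.0
    return fail_rate / (fail_rate + pass_rate) if fail_rate + pass_rate else 0.0

# A GP individual is an expression tree over {ef, ep, nf, np}; this lambda
# stands in for one evolved combination (it happens to resemble Ochiai).
evolved = lambda ef, ep, nf, np_: ef / math.sqrt((ef + nf) * (ef + ep) + 1)

# Toy spectra: (ef, ep, nf, np) per statement.
spectrum = {"stmt1": (9, 1, 0, 10), "stmt2": (3, 7, 6, 4), "stmt3": (1, 9, 8, 2)}
for name, heur in [("tarantula", tarantula), ("evolved", evolved)]:
    ranked = sorted(spectrum, key=lambda s: heur(*spectrum[s]), reverse=True)
    print(name, "ranking:", ranked)
```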
APA, Harvard, Vancouver, ISO, and other styles
26

Bruneliere, Hugo. "Generic Model-based Approaches for Software Reverse Engineering and Comprehension." Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4040/document.

Full text
Abstract:
Nowadays, companies increasingly face the problem of managing, maintaining, evolving, or replacing their existing software systems. Reverse Engineering is the phase required to obtain various representations of these systems and provide a better comprehension of their purposes and states. Model Driven Engineering (MDE) is a Software Engineering paradigm relying on intensive model creation, manipulation, and use within design, development, deployment, integration, maintenance, and evolution tasks. Model Driven Reverse Engineering (MDRE) has been proposed to enhance traditional Reverse Engineering approaches via the application of MDE. It aims at obtaining models from an existing system according to various aspects, and then possibly federating them via coherent views for further comprehension. However, existing solutions are limited, as they quite often rely on case-specific integrations of different tools. Moreover, they can sometimes be (very) heterogeneous, which may hinder their practical deployment. Generic and extensible solutions are still missing for MDRE to be combined with model view / federation capabilities. In this thesis, we propose two complementary, generic, and extensible model-based approaches and their Eclipse/EMF-based open source implementations: (i) to facilitate the elaboration of MDRE solutions in many different contexts, by obtaining different kinds of models from existing systems (e.g. their source code, data); and (ii) to specify, build, and manipulate views federating different models (e.g. resulting from MDRE) according to comprehension objectives (e.g. for different stakeholders).
APA, Harvard, Vancouver, ISO, and other styles
27

Patney, Vikas. "Software Engineering Best Practices for Parallel Computing Development." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23803.

Full text
Abstract:
In today's computer age, numerical simulations are replacing traditional laboratory experiments. Researchers around the world use advanced computer software and multiprocessor computer technology to perform experiments and analyse the simulation results to advance their respective endeavours. With a wide variety of tools and technologies available, choosing appropriate methodologies for developing simulation software can be a tedious and time-consuming task for a non-computer-science researcher. The research in this thesis addresses the use of the Message Passing Interface (MPI) with object-oriented programming techniques, discusses the methodologies suitable for scientific computing, and proposes a customized software engineering development model.
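A minimal sketch of the style discussed, an object-oriented wrapper around MPI communication using the mpi4py bindings; the class structure and workload are invented for illustration.

```python
# Run with e.g.: mpiexec -n 4 python simulate.py
from mpi4py import MPI

class Simulation:
    def __init__(self):
        self.comm = MPI.COMM_WORLD
        self.rank = self.comm.Get_rank()
        self.size = self.comm.Get_size()

    def local_work(self):
        # Each rank processes its own slice of the domain (toy workload).
        return sum(x * x for x in range(self.rank * 1000, (self.rank + 1) * 1000))

    def run(self):
        # Combine the per-rank partial results on rank 0.
        total = self.comm.reduce(self.local_work(), op=MPI.SUM, root=0)
        if self.rank == 0:
            print(f"global result from {self.size} ranks: {total}")

Simulation().run()
```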
APA, Harvard, Vancouver, ISO, and other styles
28

Anderson, Steven E. "Functional specification for a Generic C3I Workstation." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA241377.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 1990.
Thesis Advisor(s): Luqi. Second Reader: Shimeall, Timothy. "September 1990." DTIC Descriptor(s): Communications intelligence, work stations, command control communications, embedded systems, models, combat readiness, specifications, tools, computers, theses, prototypes, costs, evolution (general), fleets (ships), naval operations, budgets, economic impact, combat effectiveness, requirements, computer programs, software engineering. Author(s) subject terms: Software specification, hard real-time software, embedded systems, generic C3I workstation, next generation computer resources. Includes bibliographical references (p. 253-255). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
29

Uzuncaova, Engin. "A generic software architecture for deception-based intrusion detection and response systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FUzuncaova.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science and M.S. in Software Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): James Bret Michael, Richard Riehle. Includes bibliographical references (p. 63-66). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
30

Kritzinger, Chris (Cornelis Christiaan). "The development of generic modelling software for citrus packing processes." Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/21669.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2007.
ENGLISH ABSTRACT: This study was initiated in October 2004 when Vizier Systems (Pty) Ltd approached the Department of Industrial Engineering at the University of Stellenbosch with a concept. They proposed that a fruit packing line be represented as a series of unit operations and suggested that the concept could be used to create a generic model that can be used to represent any packing line. After further discussions with Vizier about the concept and their reasons for requiring a generic model, a formal client requirement was formulated. It was decided that the generic modelling concept had to be tested in the citrus industry. Modelling theory was investigated and a generic modelling methodology was formulated by adapting an existing modelling methodology. The first few steps of the developed methodology led to industry data being gathered and several role-players in the citrus export industry being visited. An analysis of the data enabled the development of the necessary techniques to do distribution estimation and forecasting of the system input, which is fruit. The various processes were grouped into generic groups and detailed capacity calculations were developed for each process. The fruit parameter estimation techniques and capacity calculations were integrated into a five step modelling procedure. Once the generic model was set up to represent a specific packing line, the modelling procedure provided optimum flow rates, equipment setups and personnel allocations for defined production runs. The modelling procedure was then translated into a computer model. This allowed a complete capacity analysis of a packing line by incrementally varying the characteristics of the fruit input. The developed generic model was validated by comparing its predictions to the results of two production runs at an existing packing line. It was found that the generic model is able to adequately represent the packing line and that the fruit inputs and outputs can be accurately estimated. The concept proposed by Vizier, that a packing line can be generically modelled as a series of unit operations, was shown to be valid.
APA, Harvard, Vancouver, ISO, and other styles
31

Heunis, Andre Emile. "Design and implementation of generic flight software for a CubeSat." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95911.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: The main on-board computer in a satellite is responsible for ensuring the correct operation of the entire system, a task it performs using flight software. In order to reduce future development costs, it is desirable to develop generic software that can be re-used on subsequent missions. This thesis details the design and implementation of a generic flight software application for CubeSats. A generic, modular framework is used to increase the re-usability of the flight software architecture. To simplify the management of the various on-board processes, the software is built upon the FreeRTOS real-time operating system. The Consultative Committee for Space Data Systems' telemetry and telecommand packet definitions are used to interface with ground stations. In addition, a number of services defined in the European Cooperation for Space Standardisation's Packet Utilisation Standard are used to perform the functions required of the flight software. The final application contains all the command and data handling functionality required in a standard CubeSat mission. Mechanisms for the collection, storage, and transmission of housekeeping data are included, as well as implementations of basic fault tolerance techniques. Testing shows that the FreeRTOS scheduler can be used to ensure the software meets hard real-time requirements.
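As a compact illustration of the packet format such flight software uses to talk to ground stations, the sketch below builds a CCSDS Space Packet primary header; the field layout follows the CCSDS standard, while the APID and other values are made up.

```python
import struct

def ccsds_primary_header(apid, seq_count, data_length, is_telemetry=True):
    # CCSDS Space Packet primary header: 6 bytes, big-endian.
    version = 0
    pkt_type = 0 if is_telemetry else 1       # 0 = telemetry, 1 = telecommand
    sec_hdr_flag = 0
    word1 = (version << 13) | (pkt_type << 12) | (sec_hdr_flag << 11) | (apid & 0x7FF)
    seq_flags = 0b11                           # unsegmented packet
    word2 = (seq_flags << 14) | (seq_count & 0x3FFF)
    word3 = data_length - 1                    # CCSDS stores length minus one
    return struct.pack(">HHH", word1, word2, word3)

header = ccsds_primary_header(apid=0x42, seq_count=7, data_length=12)
print(header.hex())  # six-byte primary header
```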
APA, Harvard, Vancouver, ISO, and other styles
32

Bredenkamp, F. v. B. "The development of a generic just-in-time supply chain optimisation software tool." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/1920.

Full text
Abstract:
The demand from modern-day customers for quality products, supplied in any quantity and within a short lead time, forces organisations to stock the correct amount of inventory in the correct locations in their supply chains. Establishing the correct inventory levels within an organisation's supply chain is complicated by the various stochastic processes occurring in the chain. This thesis is aimed at the development of a generic Just-In-Time (JIT) supply chain optimisation software tool with which the correct inventory levels for an organisation can be determined. These inventory levels ensure that the organisation achieves a predefined customer service level at minimum cost to the company. The tool was developed and satisfactory results were obtained using the Harmony Search Algorithm (HSA) to optimise the inventory levels.
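A minimal Harmony Search sketch for choosing integer inventory levels; the cost function below is a made-up stand-in for the service-level versus holding-cost trade-off such a tool would evaluate.

```python
import random

def harmony_search(cost, n_items, max_level=100, hms=10, hmcr=0.9, par=0.3, iters=2000):
    # Harmony memory: hms candidate inventory-level vectors.
    memory = [[random.randint(0, max_level) for _ in range(n_items)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for i in range(n_items):
            if random.random() < hmcr:                    # recall from memory
                value = random.choice(memory)[i]
                if random.random() < par:                 # pitch adjustment
                    value = min(max_level, max(0, value + random.choice((-1, 1))))
            else:                                         # random improvisation
                value = random.randint(0, max_level)
            new.append(value)
        worst = max(range(hms), key=lambda k: cost(memory[k]))
        if cost(new) < cost(memory[worst]):               # replace worst harmony
            memory[worst] = new
    return min(memory, key=cost)

# Toy cost: holding cost per unit plus a heavy penalty for stocking under demand.
demand = [30, 55, 12]
cost = lambda levels: sum(l + 50 * max(0, d - l) for l, d in zip(levels, demand))
print(harmony_search(cost, n_items=3))
```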
APA, Harvard, Vancouver, ISO, and other styles
33

Cousseau, Philippe. "Traitement d'image et traitement graphique en genie genetique : application a l'analyse d'images autoradiographiques et a la representation de cartes de molecules d'adn recombinant." Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13037.

Full text
Abstract:
The first application concerns the automatic reading of autoradiographic images of two-dimensional protein electrophoresis gels. The second application involves the development of interactive computer-aided design software specifically dedicated to the construction and graphical representation of maps of recombinant DNA molecules.
APA, Harvard, Vancouver, ISO, and other styles
34

Wadhwani, Vickey, and Shoain Ahmed. "Evaluating Evolutionary Prototyping for Customizable Generic Products in Industry." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2402.

Full text
Abstract:
Software products can be categorized into three types: bespoke, market-driven, and customizable generic products. Each of these product types faces different development problems, and different software process models have been introduced to address them. The use and validation of software process models for bespoke and market-driven products have been discussed in earlier work; customizable generic products, by contrast, have received less attention. This thesis fills that gap by conducting a case study on evolutionary prototyping (EP) for customizable generic products. The main aim of the thesis is to make an initial validation of EP for customizable generic products. To fulfil this aim, we performed a literature study on prototyping and EP, together with the development of two customizable generic products using the EP approach. The results of our investigation provide researchers and practitioners with deep insight into EP and guidance for deciding on its use. The main findings are as follows. EP is not used standalone as a software process model; rather, it is a concept that can be augmented with an iterative software process model. Negative and positive aspects of EP were highlighted by discussing situations where it could be a better choice, along with its advantages and disadvantages. An initial validation of EP for customizable generic products was performed; the reported results show that the selected approach is a good choice when one wants to build an innovative product, clarify ambiguous and sketchy requirements, discover new requirements, save software testing resources, and involve and satisfy the customer. EP shows weaknesses in product documentation and code quality.
APA, Harvard, Vancouver, ISO, and other styles
35

Iqbal, Ilyas. "Development of a Generic Integration Layer for an ERP system." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-88555.

Full text
Abstract:
Enterprise Resource Planning (ERP) software is used by organizations to control accounting, manufacturing, and customer processes. It is common practice for organizations to use more than a single ERP system and various modules from multiple ERP vendors in their business. Hence, there is a need to integrate systems and find technical solutions that allow the ERPs to communicate. This thesis presents the development of a Generic Integration Layer (GIL) that allows various available ERP systems to exchange data. The goal was to design and develop a module that could integrate different ERP modules and fill the data communication gaps among them, minimizing the need to use several commercial products for data integration purposes. The developed GIL allows flexible import from different file formats, provides verification and updates in a database management server, and transforms data into a generic XML-based format that can be read by many modern ERPs.
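A small sketch of the transformation step only: rows from one ERP export are mapped into a generic XML record format that another module could import. The element and field names are illustrative, not the thesis's actual schema.

```python
import csv
import io
import xml.etree.ElementTree as ET

def rows_to_generic_xml(csv_text):
    # Map each CSV row onto a generic <record> with named <field> elements.
    root = ET.Element("records")
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = ET.SubElement(root, "record")
        for field_name, value in row.items():
            ET.SubElement(record, "field", name=field_name).text = value
    return ET.tostring(root, encoding="unicode")

export = "id,customer,amount\n1,ACME,120.50\n2,Globex,89.00\n"
print(rows_to_generic_xml(export))
```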
APA, Harvard, Vancouver, ISO, and other styles
36

Langer, Samridhi. "Concept to store variant information gathered from different artifacts in an existing specification interchange format." Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-212606.

Full text
Abstract:
Any software development process deals with four main artifacts, namely requirements, design, implementation, and test. Depending on the functionality of a particular product, variants may be present in these artifacts, and these variants influence every artifact involved in the software development process. Data in a higher-level artifact affects the data present in subsequent artifacts and is refined as we move toward lower levels of abstraction. This thesis deals with the handling of the variant information present in all the artifacts; verification and consistency checks on this information were to be automated to ease the development process. The results achieved during this thesis address the problem of inconsistent variant information across artifacts: by defining an extension of the intermediate format used at Vector Informatik GmbH to support variant information, this problem has been resolved. The generic intermediate format has been extended so that it can support a variety of further use cases. Along with the formulation of the format, the documentation of variant information and methods to extract variant information from C source code are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
37

Mansfield, Martin F. "Design of a generic parse tree for imperative languages." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834617.

Full text
Abstract:
Since programs are written in many languages and design documents are often not maintained (if they ever existed), there is a need to extract the design and other information that programs represent. To do this without writing a separate program for each language, a common representation of the symbol table and parse tree is required. The purpose of the parse tree and symbol table is not to generate object code but to provide a platform for analysis tools. In this way the tool designer develops only one version instead of a separate version for each language. The generic symbol table and generic parse tree may not be as detailed as the same structures in a specific compiler, but the parse tree must include all structures for imperative languages.
Department of Computer Science
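A toy slice of what such a language-neutral parse tree might look like: every construct is reduced to a small set of generic node kinds that analysis tools can traverse uniformly. The node kinds and fields here are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # e.g. "if", "loop", "assign", "call"
    attrs: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# `while (i < n) i = i + 1;` in C and `WHILE i < n DO i := i + 1` in
# Pascal could both map onto the same generic tree:
loop = Node("loop", {"test": "i < n"}, [
    Node("assign", {"target": "i", "value": "i + 1"}),
])

def count_kind(node, kind):
    # Example analysis tool: count constructs of one kind in a tree.
    return (node.kind == kind) + sum(count_kind(c, kind) for c in node.children)

print(count_kind(loop, "assign"))  # -> 1
```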
APA, Harvard, Vancouver, ISO, and other styles
38

Pandikow, Asmus. "A Generic Principle for Enabling Interoperability of Structured and Object-Oriented Analysis and Design Tools." Doctoral thesis, Linköpings universitet, RTSLAB - Laboratoriet för realtidssystem, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4991.

Full text
Abstract:
In the 1980s, the evolution of engineering methods and techniques yielded the object-oriented approaches. Specifically, object orientation was established in software engineering, gradually relieving structured approaches. In other domains, e.g. systems engineering, object orientation is not well established. As a result, different domains employ different methods and techniques, which makes it difficult to exchange information between the domains, e.g. passing systems engineering information on to software engineering for further refinement. This thesis presents a generic principle for bridging the gap between structured and object-oriented specification techniques. The principle enables interoperability of structured and object-oriented analysis and design tools through mutual information exchanges. To this end, the concepts and elements of representative structured and object-oriented specification techniques are identified and analyzed. Then, a meta-model for each specification technique is created. From these meta-models, a common meta-model is synthesized. Finally, mappings between the meta-models and the common meta-model are created. Used in conjunction, the meta-models, the common meta-model, and the mappings enable tool interoperability by transforming specification information under one meta-model, via the common meta-model, into a representation under another meta-model. Example transformations that illustrate the proposed principle using fragments of an aircraft's landing gear specification are provided. The work presented in this thesis is based on the achievements of the SEDRES (ESPRIT 20496), SEDEX (NUTEK IPII-98-6292) and SEDRES-2 (IST 11953) projects, which strove to integrate different systems engineering tools in the forthcoming ISO-10303-233 (AP-233) standard for systems engineering design data. This thesis extends the SEDRES / SEDEX and AP-233 achievements, focusing specifically on integrating structured and modern UML-based object-oriented specification techniques, which was only performed schematically in the SEDRES / SEDEX and AP-233 work.
APA, Harvard, Vancouver, ISO, and other styles
39

Pathni, Charu. "Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects." Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187153.

Full text
Abstract:
The product development process begins at a very abstract level with understanding the requirements; the resulting data must be passed on to the next phase of development, and this hand-over happens after every stage until finally a product is made. This thesis deals specifically with the data exchange process in software development. The problem lies in the handling of data in terms of redundancy and the versions of the data to be handled. Moreover, once data is passed on to the next stage, there is no evident way to exchange it in the reverse direction. The results found during this thesis discuss solutions to this problem based on bringing all the data to the same level in terms of its format. Having such a concept ready provides an opportunity to use the data according to our requirements. In this research, the problems of data consistency and data verification are dealt with; the data in question is the variant information used during development and when merging data from various sources. The concept that is formulated can be expanded to a wide variety of applications within the development process: wherever the process involves an exchange of data, scalability and generalization are the main foundations contained within the concept.
APA, Harvard, Vancouver, ISO, and other styles
40

Mazo, Raul. "A Generic Approach for Automated Verification of Product Line Models." Phd thesis, Université Panthéon-Sorbonne - Paris I, 2011. http://tel.archives-ouvertes.fr/tel-00707351.

Full text
Abstract:
This thesis explores the subject of automatic verification of product line models. The approach is based on the hypothesis that to automatically verify product line models, they should first be transformed into a language that makes them computable. In this thesis, product line models are transformed into constraint (logic) programs and then verified against a typology of verification criteria. The typology enumerates, classifies, and formalizes a collection of generic verification criteria, i.e. criteria that can be applied (with or without adaptation) to any product line formalism. The typology distinguishes two categories of criteria: criteria that deal with the formalism in which models are represented, and formalism-independent criteria. To identify defects in the first category, the thesis proposes a conformance checking approach directly related to verification of the abstract syntactic aspects of a model. To identify defects in the second category, the thesis proposes a domain-specific verification approach. An optimal algorithm is specified and implemented as a constraint logic program for each criterion in the typology; these can be used independently or in combination to verify individual product line models. The thesis also supports the verification of multiple product line models using an integration approach, and proposes a series of integration strategies that can be applied before verification, as for individual models. The product line verification approach proposed in this thesis is generic in the sense that it can be reused for any kind of product line model that instantiates the generic meta-model on which it was developed, and general in the sense that it supports the verification of a comprehensive collection of criteria defined in the typology. The approach was implemented in a prototype tool that supports the specification, transformation, integration, configuration, analysis, and verification of product line models via constraint (logic) programming. A benchmark gathering a corpus of 54 product line models was developed and used in a series of experiments. The experiments showed that (i) the implementation of the domain-specific verification approach is fast and scalable to product line models of up to 2000 artefacts; (ii) the implementation of the conformance checking approach is fast and scalable to product line models of up to 10000 artefacts; and (iii) both approaches are correct and useful for industrial-size models.
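A tiny rendering of the transformation idea: a feature model becomes a set of boolean constraints, and criteria such as "void model" or "dead feature" can be checked against the satisfying configurations. Brute-force enumeration, as below, only works at toy scale; the thesis relies on constraint logic programming to scale far beyond this. The features and rules are invented.

```python
from itertools import product

features = ["root", "gui", "cli"]
constraints = [
    lambda c: c["root"],                          # root is always selected
    lambda c: c["gui"] or c["cli"],               # or-group under root
    lambda c: not (c["gui"] and c["cli"]),        # alternative (xor) group
]

configs = [dict(zip(features, bits)) for bits in product([False, True], repeat=3)]
valid = [c for c in configs if all(rule(c) for rule in constraints)]

print("void model:", not valid)                   # criterion: no valid product
for f in features:                                # criterion: dead features
    if valid and not any(c[f] for c in valid):
        print("dead feature:", f)
```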
APA, Harvard, Vancouver, ISO, and other styles
41

Löfstrand, Sebastian, and Lundvall Jonas Fredén. "Utvärdering av Movesense för användning vid biomekaniska studier." Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252802.

Full text
Abstract:
There is a need for a user-friendly system for interaction with body-worn sensors in teaching and research at the School of Chemistry, Biotechnology and Health at the Royal Institute of Technology. The persons responsible for the programme therefore commissioned a Bachelor of Science (BSc) degree project to investigate whether a specific sensor system, Movesense, can serve as a user-friendly tool for studying biomechanical movements in education and research. A preliminary study was carried out to examine the sensor system's potential, and a system prototype was developed for configuring the sensor system and retrieving sensor data. A quantitative evaluation of data collected from the sensor system, together with video analysis, was performed to determine whether it is possible to perform motion analysis using the system prototype. The investigation resulted in a functioning system prototype and the conclusion that Movesense can be used as a tool for studying certain types of movements. The prototype has great development potential, and the sensor system has potential applications in education and research.
APA, Harvard, Vancouver, ISO, and other styles
42

Bessinger, Zachary. "An Automatic Framework for Embryonic Localization Using Edges in a Scale Space." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1262.

Full text
Abstract:
Localization of Drosophila embryos in images is a fundamental step in an automatic computational system for the exploration of gene-gene interaction in Drosophila. Contour extraction of embryonic images is challenging due to the many variations in such images. In this thesis work, we develop a localization framework based on the analysis of connected components of edge pixels in a scale space. We propose criteria for selecting optimal scales for embryonic localization. Furthermore, we propose a scale mapping strategy to compress the range of a scale space in order to improve the efficiency of the localization framework. The effectiveness of the proposed framework and the scale mapping strategy is validated in our experiments.
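A condensed sketch of the idea: extract edge pixels at several Gaussian scales, take connected components, and keep the scale whose largest component best matches an expected embryo size. The threshold and selection criterion here are placeholders for the ones proposed in the thesis.

```python
import numpy as np
from scipy import ndimage

def localize(image, sigmas=(1, 2, 4), expected_area=400):
    best = None
    for sigma in sigmas:
        smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
        gy, gx = np.gradient(smoothed)
        magnitude = np.hypot(gx, gy)
        edges = magnitude > magnitude.mean() * 2        # crude edge threshold
        labels, n = ndimage.label(edges)                # connected components
        if n == 0:
            continue
        sizes = ndimage.sum(edges, labels, range(1, n + 1))
        score = abs(sizes.max() - expected_area)        # placeholder criterion
        if best is None or score < best[0]:
            best = (score, sigma, labels == (sizes.argmax() + 1))
    return best  # (score, chosen scale, mask of the embryo component)

img = np.zeros((64, 64))
img[20:44, 18:42] = 1.0                                 # toy "embryo"
print("chosen sigma:", localize(img)[1])
```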
APA, Harvard, Vancouver, ISO, and other styles
43

Junior, Edison Kicho Shimabukuro. "Um gerador de aplicações configurável." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-03022007-002615/.

Full text
Abstract:
Application generators are tools that receive a software specification as input, validate it, and automatically generate artifacts based on it. Application generators can bring several benefits in terms of productivity, as they automatically generate low-level artifacts from higher-abstraction-level specifications. A major concern with application generators is their high development cost. Configurable application generators are generators that can be adapted to give support in specific domains, i.e., they are meta-generators through which it is possible to obtain specific application generators. This work presents an approach for software development supported by configurable application generators. It defines the architecture and main features of a configurable application generator and presents Captor, a configurable application generator developed to ease the creation of specific generators. Three case studies were conducted to show the configuration of the Captor tool for different application domains: object persistence, business resource management, and floating weather stations.
APA, Harvard, Vancouver, ISO, and other styles
44

Caballé, Llobet Santi. "A Computational Model for the Construction of Knowledge-based Collaborative Learning Distributed Applications." Doctoral thesis, Universitat Oberta de Catalunya, 2008. http://hdl.handle.net/10803/9127.

Full text
Abstract:
An important research topic in Computer Supported Collaborative Learning (CSCL) is to explore the importance of efficient management of event information generated from group activity in collaborative learning practices for its further use in extracting and providing knowledge on interaction behavior.
The essential issue here is, first, how to design a CSCL platform that can be used for real, long-term, complex collaborative problem-solving situations and that enables the instructor both to analyze group interaction effectively and to provide adequate support when needed. Secondly, how to extract relevant knowledge from the collaboration in order to provide learners with efficient awareness and feedback regarding individual and group performance and assessment. Achieving these tasks involves designing a conceptual framework of collaborative learning interaction that structures and classifies the information generated in a collaborative application at several levels of description. Computational models then realize this conceptual approach for efficient management of the knowledge produced by individual and group activity, as well as the possibility of exploiting this knowledge further as a metacognitive tool for real-time coaching and regulation of the collaborative learning process.
In addition, CSCL needs have been evolving over recent years in line with increasingly demanding pedagogical and technological requirements. On-line collaborative learning environments no longer depend on homogeneous groups, static content and resources, and single pedagogies; high customization and flexibility are a must in this context. As a result, modern educational organizations need to extend and move to highly customized learning and teaching forms in a timely fashion, each incorporating its own pedagogical approach, each targeting a specific learning goal, and each incorporating its specific resources.
All of these issues certainly represent a great challenge for current and future research in this field. Therefore, further efforts need to be made to help developers, technologists, and pedagogists overcome the demanding requirements currently found in the CSCL domain, as well as to provide modern educational organizations with fast, flexible, and effective solutions for the enhancement and improvement of collaborative learning performance and outcomes. This thesis proposes a first step toward these goals.

The main contribution in this thesis is the exploration of the importance of an efficient management of information generated from group activity in Computer-Supported Collaborative Learning (CSCL) practices for its further use in extracting and providing knowledge on interaction behavior. To this end, the first step is to investigate a conceptual model for data analysis and management so as to identify the many kinds of indicators that describe collaboration and learning and classify them into high-level potential categories of effective collaboration. Indeed, there are more evident key discourse elements and aspects than those shown by the literature, which play an important role both for promoting student participation and enhancing group and individual performance, such as, the impact and effectiveness of students' contributions, among others, that are explored in this work. By making these elements explicit, the discussion model proposed accomplishes high students' participation rates and contribution quality in a more natural and effective way. This approach goes beyond a mere interaction analysis of asynchronous discussion in the sense that it builds a multi-functional model that fosters knowledge sharing and construction, develops a strong sense of community among students, provides tutors with a powerful tool for students' monitoring, discussion regulation, while it allows for peer facilitation through self, peer and group awareness and assessment.
The results of the research described so far motivate the development of a computational system that translates the conceptual model into a computer system implementing the management of the information and knowledge acquired from group activity, so that it can be efficiently fed back into the collaboration. The achievement of a generic, robust, flexible, interoperable and reusable computational model that meets the fundamental functional needs shared by any collaborative learning experience is investigated at length in this thesis. Systematic reuse of this computational model permits fast adaptation to new learning and teaching requirements, such as learning by discussion, by relying on advanced software engineering processes and methodologies from the field of software reuse; important benefits are thus expected in terms of productivity, quality and cost.
Another important contribution is therefore to explore and extend suitable software reuse techniques, such as Generic Programming, so that the computational model can be successfully particularized to as many situations as possible without losing efficiency in the process. In particular, based on domain analysis techniques, a high-level computational description and formalization of the CSCL domain are identified and modeled. Different platform-specific developments that realize the conceptual description are then provided. A certain degree of automation in passing from the conceptual specification to the desired realization is also explored by means of advanced techniques based on Service-Oriented Architectures and Web services, which greatly facilitates the development of CSCL applications using this computational model.
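To convey the flavour of this generic-programming approach, the following minimal Python sketch (illustrative only; the GenericActivity and DiscussionPost names and the particularization to a discussion forum are invented, not the thesis's code) shows one reusable component particularized to a specific learning situation:

from dataclasses import dataclass, field
from typing import Generic, List, TypeVar

E = TypeVar("E")  # the event type a concrete activity produces

@dataclass
class GenericActivity(Generic[E]):
    """Reusable component: stores and reports group-activity events
    regardless of the concrete event type it is particularized with."""
    name: str
    events: List[E] = field(default_factory=list)

    def log(self, event: E) -> None:
        self.events.append(event)

    def event_count(self) -> int:
        return len(self.events)

@dataclass
class DiscussionPost:
    author: str
    text: str

# Particularization: the same generic component reused for a discussion forum.
forum = GenericActivity[DiscussionPost]("learning-by-discussion")
forum.log(DiscussionPost("alice", "I think the model should..."))
print(forum.name, forum.event_count())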
Based on the outcomes of these investigations, this thesis contributes computational collaborative learning systems capable of managing both qualitative and quantitative information and transforming it into useful knowledge for all parties involved, in an efficient and clear way. This is achieved both through the specific assessment of each contribution by the tutor who supervises the discussion and through rich statistical information about students' participation. The statistical data are provided automatically by the system; for instance, they shed light on a student's engagement in the discussion forum, or on how much interest a student's intervention attracted, in the form of participation impact, level of passivity, proactivity, reactivity, and so on. The aim is to provide both a deeper understanding of the actual discussion process and a more objective assessment of individual and group activity.
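As a rough illustration of how such indicators could be computed from a discussion log (the indicator definitions below are simplified assumptions for the sketch, not the thesis's actual measures):

from collections import Counter
from typing import List, Optional, Tuple

# Each post: (author, post_id, replied_to_id or None for a new thread)
Post = Tuple[str, int, Optional[int]]

def participation_indicators(posts: List[Post]):
    """Compute simple per-student activity indicators from a forum log."""
    started = Counter()   # threads opened (proactivity)
    replies = Counter()   # replies written (reactivity)
    received = Counter()  # replies a student's posts attracted (impact)
    author_of = {pid: author for author, pid, _ in posts}
    for author, pid, parent in posts:
        if parent is None:
            started[author] += 1
        else:
            replies[author] += 1
            received[author_of[parent]] += 1
    students = {author for author, _, _ in posts}
    return {s: {"proactivity": started[s],
                "reactivity": replies[s],
                "impact": received[s]} for s in students}

log = [("alice", 1, None), ("bob", 2, 1), ("carol", 3, 1), ("bob", 4, None)]
print(participation_indicators(log))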
This information is then processed and analyzed by means of a multivariate statistical model in order to extract useful knowledge about the collaboration. The knowledge acquired is communicated back to the members of the learning group and their tutor in appropriate formats, providing valuable awareness and feedback on group interaction and performance, and it may also help identify and assess the real skills and intentions of participants. The most important benefit expected from the conceptual model for interaction data analysis and management is a substantial improvement of the collaborative learning and teaching experience.
Finally, the possibilities of using distributed and Grid technology to support real CSCL environments are also extensively explored in this thesis. The results of this investigation lead to the conclusion that the features provided by these technologies form an ideal context for supporting and meeting the demanding requirements of collaborative learning applications. This approach is taken one step further to enhance the possibilities of the computational model in the CSCL domain, and it is successfully adopted on an empirical and application basis. The results achieved demonstrate the feasibility of distributed technologies to considerably enhance the collaborative learning experience. In particular, Grid computing is successfully applied for the specific purpose of increasing the efficiency of processing large amounts of information from group activity log files.
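In miniature, the speed-up sought from Grid resources resembles ordinary data-parallel processing of log files, as in this Python sketch (a simplified stand-in for real Grid middleware; the one-event-per-line log format and file layout are assumptions):

import glob
from multiprocessing import Pool

def count_events(path: str) -> int:
    """Count group-activity events in one log file (one event per line)."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

if __name__ == "__main__":
    files = glob.glob("activity_logs/*.log")
    # Distribute log files across workers, much as a Grid scheduler
    # would distribute them across nodes.
    with Pool() as pool:
        counts = pool.map(count_events, files)
    print("total events:", sum(counts))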
APA, Harvard, Vancouver, ISO, and other styles
45

Ramraj, Varun. "Exploiting whole-PDB analysis in novel bioinformatics applications." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6c59c813-2a4c-440c-940b-d334c02dd075.

Full text
Abstract:
The Protein Data Bank (PDB) is the definitive electronic repository for experimentally derived protein structures, composed mainly of those determined by X-ray crystallography. Approximately 200 new structures are added weekly to the PDB, and at the time of writing it contains approximately 97,000 structures. This represents an expanding wealth of high-quality information, but there seem to be few bioinformatics tools that consider and analyse these data as an ensemble. This thesis explores the development of three efficient, fast algorithms and software implementations to study protein structure using the entire PDB. The first project is a crystal-form matching tool that takes a unit cell and quickly (< 1 second) retrieves the most related matches from the PDB. The unit-cell matches are combined with sequence alignments using a novel Family Clustering Algorithm to display the results in a user-friendly way. The software tool, Nearest-cell, has been incorporated into the X-ray data collection pipeline at the Diamond Light Source, and is also available as a public web service. The bulk of the thesis is devoted to the study and prediction of protein disorder. Initially, an attempt to update and extend an existing predictor, RONN, exposed the limitations of that method, and a novel predictor (called MoreRONN) was developed that incorporates a novel sequence-based clustering approach to disorder data inferred from the PDB and DisProt. MoreRONN is now clearly the best-in-class disorder predictor and will soon be offered as a public web service. The third project explores the development of a clustering algorithm for protein structural fragments that can work on the scale of the whole PDB. While protein structures have long been clustered into loose families, there has to date been no comprehensive analytical clustering of short (~6 residue) fragments. A novel fragment clustering tool was built that is now leading to a public database of fragment families and representative structural fragments that should prove extremely helpful for both basic understanding and experimentation. Together, these three projects exemplify how cutting-edge computational approaches applied to extensive protein structure libraries can provide user-friendly tools that address critical everyday issues for structural biologists.
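By way of illustration, crystal-form matching can be viewed as a nearest-neighbour search over the six unit-cell parameters; the naive Python sketch below (the Euclidean metric, tolerance-free ranking and example data are assumptions, not Nearest-cell's actual algorithm) conveys the idea:

from math import sqrt
from typing import Dict, List, Tuple

# A unit cell: lengths a, b, c in angstroms; angles alpha, beta, gamma in degrees.
Cell = Tuple[float, float, float, float, float, float]

def cell_distance(q: Cell, c: Cell) -> float:
    """Euclidean distance between two cells' parameter vectors (a real
    matcher would weight lengths vs. angles and handle alternative
    cell settings)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(q, c)))

def nearest_cells(query: Cell, library: Dict[str, Cell], k: int = 5) -> List[str]:
    """Return the identifiers of the k most similar unit cells."""
    return sorted(library, key=lambda pdb_id: cell_distance(query, library[pdb_id]))[:k]

library = {"1abc": (78.2, 78.2, 37.1, 90.0, 90.0, 90.0),
           "2xyz": (52.4, 60.1, 71.3, 90.0, 92.5, 90.0)}
print(nearest_cells((78.0, 78.5, 37.0, 90.0, 90.0, 90.0), library, k=1))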
APA, Harvard, Vancouver, ISO, and other styles
46

Bokhari, Mahmoud Abdulwahab K. "Genetic Improvement of Software for Energy Efficiency in Noisy and Fragmented Eco-Systems." Thesis, 2020. http://hdl.handle.net/2440/130174.

Full text
Abstract:
Software has made its way into every aspect of our daily life. Users of smart devices expect almost continuous availability and uninterrupted service. However, such devices operate on restricted energy resources. As the energy efficiency of software is a relatively new concern for software practitioners, there is a lack of knowledge and tools to support the development of energy-efficient software. Optimising the energy consumption of software requires measuring or estimating its energy use and then optimising it. Generalised models of energy behaviour suffer from heterogeneous and fragmented eco-systems (i.e. diverse hardware and operating systems). The nature of such optimisation environments favours in-vivo optimisation, which provides the ground truth for the energy behaviour of an application on a given platform. One key challenge in in-vivo energy optimisation is noisy energy readings, because complete isolation of the effects of software optimisation is simply infeasible, owing to random and systematic noise from the platform. In this dissertation we explore in-vivo optimisation using Genetic Improvement of Software (GI) for energy efficiency in noisy and fragmented eco-systems. First, we document expected and unexpected technical challenges, and their solutions, encountered when conducting energy optimisation experiments; these can serve as guidelines for software practitioners conducting energy-related experiments. Second, we demonstrate the technical feasibility of in-vivo energy optimisation using GI on smart devices, and implement a new approach for mitigating noisy readings based on simple code rewrites. Third, we propose a new conceptual framework to determine the minimum number of samples required to show significant differences between software variants competing in tournaments. We demonstrate that the number of samples can vary drastically between platforms, as well as over time within a single platform; it is crucial to take these observations into consideration when optimising in the wild or across several devices in a controlled environment. Finally, we implement a new validation approach for energy optimisation experiments. Through experiments, we demonstrate that current validation approaches can mislead software practitioners into drawing wrong conclusions. Our approach outperforms current validation techniques in terms of specificity and sensitivity in distinguishing differences between validation solutions.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
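The tournament-sampling framework described in the abstract can be illustrated with a small Python sketch (hypothetical throughout: the noisy measure_energy stand-in, the Mann-Whitney U test from SciPy, the significance level and the sample cap are assumptions for the sketch, not the dissertation's actual protocol):

import random
from scipy.stats import mannwhitneyu

def measure_energy(variant: str) -> float:
    # Stand-in for a real on-device energy measurement (noisy).
    base = {"original": 10.0, "improved": 9.5}[variant]
    return random.gauss(base, 0.8)

def compare_variants(a: str, b: str, alpha: float = 0.05,
                     start: int = 10, cap: int = 200):
    """Sample both variants until the difference is significant or the
    cap is hit. Returns (winner or None, samples used per variant)."""
    xs = [measure_energy(a) for _ in range(start)]
    ys = [measure_energy(b) for _ in range(start)]
    while len(xs) < cap:
        _, p = mannwhitneyu(xs, ys, alternative="two-sided")
        if p < alpha:
            winner = a if sum(xs) / len(xs) < sum(ys) / len(ys) else b
            return winner, len(xs)
        xs.append(measure_energy(a))
        ys.append(measure_energy(b))
    return None, len(xs)

print(compare_variants("original", "improved"))

Because the noise level differs between platforms and over time, the number of samples this loop ends up drawing will differ too, which is the point the abstract makes.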
APA, Harvard, Vancouver, ISO, and other styles
47

Nathoo, Kirti. "Establish a generic railway electronic interlocking solution using software engineering methods." Thesis, 2015. http://hdl.handle.net/10539/17639.

Full text
Abstract:
A research investigation has been undertaken to establish a generic software interlocking solution for electronic railway systems. The system is intended to be independent of the physical station layout and easily adaptable in any country of application. Railway signalling principles and regulated safety standards are incorporated into the system design. A literature review has been performed to investigate existing interlocking methods and to identify common aspects among them; existing methods for the development of electronic interlocking systems are evaluated, and the application of software engineering techniques to interlocking systems is also considered. Thereafter a model of the generic solution is provided. The solution is designed following an agile life-cycle development process. The structure of the interlocking is based on an MVC (Model-View-Controller) architecture, which provides a modular foundation upon which the system is developed. The interlocking system is modelled using Boolean interlocking functions and UML (Unified Modelling Language) statecharts; statecharts are used to graphically represent the procedures of interlocking operations. The Boolean interlocking functions and statechart models collectively represent a proof of concept for a generic interlocking software solution. The theoretical system models are used to simulate the interlocking software in TIA (Totally Integrated Automation) Portal. The behaviour of the interlocking during element faults and safety-critical events is validated through graphical software simulations. Test cases are derived based on software engineering test techniques to validate the behaviour and completeness of the software. The software simulations indicate that the general algorithms defined for the system model can easily be instantiated for a specific station layout. The model does not depend on the physical signalling elements: the generic algorithms defined for determining the availability of the signalling element types and the standard interlocking functions are easily adaptable to a physical layout. The generic solution encompasses interlocking principles and rail safety standards, which enables the interlocking to respond in a fail-safe manner during hazardous events. The incorporation of formal software engineering methods assists in guaranteeing the safety of the system, as safety components are built into the system at various stages. The use of development life-cycle models and design patterns supports the development of a modular and flexible system architecture, allowing new additions or amendments to be incorporated easily. The application of software engineering techniques thus assists in developing a generic, maintainable interlocking solution for railways.
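As an illustration of what a Boolean interlocking function looks like, here is a minimal Python sketch (a generic textbook-style example with invented element names, not the thesis's actual model):

from dataclasses import dataclass
from typing import List

@dataclass
class TrackSection:
    occupied: bool

@dataclass
class Points:
    set_normal: bool
    locked: bool

def route_may_be_set(sections: List[TrackSection], points: List[Points],
                     conflicting_route_set: bool) -> bool:
    """Boolean interlocking function: a route may only be set when every
    track section is clear, all points are locked in the required
    position, and no conflicting route is set (fail-safe: defaults to False)."""
    sections_clear = all(not s.occupied for s in sections)
    points_ready = all(p.set_normal and p.locked for p in points)
    return sections_clear and points_ready and not conflicting_route_set

sections = [TrackSection(occupied=False), TrackSection(occupied=False)]
points = [Points(set_normal=True, locked=True)]
print(route_may_be_set(sections, points, conflicting_route_set=False))  # True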
APA, Harvard, Vancouver, ISO, and other styles
48

Smith, Jacob N. "Techniques in Active and Generic Software Libraries." 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7823.

Full text
Abstract:
Reusing code from software libraries can reduce the time and effort needed to construct software systems and also enable the development of larger systems. However, the benefits that come from the use of software libraries may not be realized due to limitations in the way traditional software libraries are constructed. Libraries come equipped with application programming interfaces (APIs) that help enforce the correct use of the abstractions in those libraries. Writing new components and adapting existing ones to conform to library APIs may require substantial amounts of "glue" code that potentially affects software's efficiency, robustness, and ease of maintenance. If, as a result, the idea of reusing functionality from a software library is rejected, no benefits of reuse will be realized. This dissertation explores and develops techniques that support the construction of software libraries with abstraction layers that do not impede efficiency. In many situations, glue code can be expected to have very low (or zero) performance overhead. In particular, we describe advances in the design and development of active libraries - software libraries that take an active role in the compilation of the user's code. Common to the presented techniques is that they may "break" a library API (in a controlled manner) to adapt the functionality of the library to a particular use case. The concrete contributions of this dissertation are: a library API that supports iterator selection in the Standard Template Library, allowing generic algorithms to find the most suitable traversal through a container and yielding (in one case) a 30-fold improvement in performance; the development of techniques, idioms, and best practices for concepts and concept maps in C++, allowing the construction of algorithms for one domain entirely in terms of formalisms from a second domain; the construction of generic algorithms for algorithmic differentiation, implemented as an active library in Spad, the language of the Open Axiom computer algebra system, allowing algorithmic differentiation to be applied to the appropriate mathematical object and not just concrete data types; and the description of a static analysis framework for the generic programming notion of local specialization within Spad, allowing more sophisticated (value-based) control over algorithm selection and specialization in categories and domains. We find that active libraries simultaneously increase the expressivity of the underlying language and the performance of software using those libraries.
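Transposed from C++ into a Python analogue for brevity (purely illustrative: the dissertation's mechanism works through STL iterator categories resolved at compile time, not runtime checks), the iterator-selection idea is roughly:

from collections.abc import Sequence
from typing import Iterable

def nth_element(container: Iterable, n: int):
    """Pick the cheapest traversal the container supports:
    O(1) random access when available, a linear scan otherwise."""
    if isinstance(container, Sequence):      # random-access path
        return container[n]
    it = iter(container)                     # forward-traversal fallback
    for _ in range(n):
        next(it)
    return next(it)

print(nth_element([10, 20, 30], 1))          # uses indexing
print(nth_element(iter([10, 20, 30]), 1))    # falls back to scanning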
APA, Harvard, Vancouver, ISO, and other styles
49

Dietrich, David. "A Mode-Based Pattern for Feature Requirements, and a Generic Feature Interface." Thesis, 2013. http://hdl.handle.net/10012/7922.

Full text
Abstract:
Feature-oriented requirements decompose a system's requirements into individual bundles of functionality called features, where each feature's behaviour can be expressed as a state-machine model. However, state machines are difficult to write: determining how to decompose behaviour into states is not obvious, different stakeholders will have different opinions on how to structure the state machine, and state machines can easily become too complex. This thesis proposes a pattern for decomposing and structuring the model of a feature's behavioural requirements, based on modes of operation (e.g., Active, Inactive, Failed) that are common to features in multiple domains. Interestingly, the highest-level modes of the pattern can serve as a generic behavioural interface for all features that adhere to the pattern. The thesis also proposes several pattern extensions that provide guidance on how to structure the Active and Inactive behaviour of a feature. The pattern was applied to model the behavioural requirements of 21 automotive features specified in 7 production-grade requirements documents. The pattern was applicable to all 21 features, and the proposed generic feature interface was applicable to 50 out of 58 inter-feature references. A user study with 18 participants evaluated whether use of the pattern makes it easier to write state machines for features, and whether feature state machines written with the help of the pattern are more readable than those written without it. The results of the study indicate that use of the pattern facilitates the writing of feature state machines.
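A minimal sketch of the pattern's highest-level modes serving as a generic feature interface might look as follows in Python (the Active/Inactive/Failed mode names come from the abstract; the transition rules and the cruise-control example are invented for illustration):

from enum import Enum, auto

class Mode(Enum):
    INACTIVE = auto()
    ACTIVE = auto()
    FAILED = auto()

class Feature:
    """Generic behavioural interface: every feature exposes the same
    top-level modes, so other features can reference it uniformly."""
    def __init__(self):
        self.mode = Mode.INACTIVE

    def activate(self):
        if self.mode is Mode.INACTIVE:
            self.mode = Mode.ACTIVE

    def deactivate(self):
        if self.mode is Mode.ACTIVE:
            self.mode = Mode.INACTIVE

    def fail(self):
        self.mode = Mode.FAILED  # failure dominates the other modes

cruise_control = Feature()
cruise_control.activate()
print(cruise_control.mode)  # Mode.ACTIVE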
APA, Harvard, Vancouver, ISO, and other styles
50

Ahumada, Pardo Dania I. "Una aproximación evolucionista para la generación automática de sentencias SQL a partir de ejemplos." Thèse, 2015. http://hdl.handle.net/1866/12454.

Full text
Abstract:
Nowadays, the use of technology has been essential to the advancement of society, and it has drawn in people without computing knowledge, so-called "non-expert" users; this has led researchers to produce studies that adapt systems to the problems existing within the computing domain. A recurrent need of every user of a system is the management of information, which can be handled through a database and a specific language such as SQL (Structured Query Language); however, this forces the non-expert user to turn to a specialist for its design and construction, which translates into costs and complex methods. This raises a question: what can be done when projects are small and resources and processes are limited? Building on research carried out at the University of Washington [39], which synthesizes SQL statements from input/output examples, this thesis automates the process and applies a different learning technique, namely an evolutionary approach, in which an adapted genetic algorithm generates valid SQL statements that satisfy the conditions established by the input/output examples given by the user. The result of this approach is a tool called EvoSQL, which was validated in this study. Of the 28 exercises used in [39], 23 produced perfect results and 5 were unsuccessful, which represents an effectiveness of 82.1%. This effectiveness is 10.7% higher than that of the SQLSynthesizer tool developed in [39], and 75% higher than the next closest tool, Query by Output (QBO) [31]. The average execution time per exercise was 3 minutes and 11 seconds, which is longer than that of SQLSynthesizer; however, since a genetic algorithm involves phases that widen the range of running times, the time obtained is acceptable for applications of this kind. In conclusion, an automatic tool with an evolutionary approach was obtained, with good results and a simple process for the "non-expert" user. (Abstract translated from the Spanish original; the record also carries French and English versions of the same text.)
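The evolutionary core of such an approach can be sketched in a few lines of Python (a schematic illustration only: the tiny WHERE-clause grammar, the fitness definition and the replacement strategy below are invented for the sketch, and a real tool such as EvoSQL must generate and execute full SQL statements against the example tables):

import random

COLUMNS = ["name", "age", "city"]
OPS = ["=", ">", "<"]
VALUES = ["30", "'Paris'", "'Ana'"]

def random_query():
    # Candidate: a WHERE clause assembled from a tiny grammar.
    return (random.choice(COLUMNS), random.choice(OPS), random.choice(VALUES))

def fitness(query, examples):
    """Fraction of input/output examples the candidate reproduces.
    `examples` pairs an input row with the expected keep/drop decision."""
    col, op, val = query
    ok = 0
    for row, expected in examples:
        lhs, rhs = row[col], val.strip("'")
        # Crude lexicographic comparison; adequate for a sketch.
        matched = {"=": str(lhs) == rhs, ">": str(lhs) > rhs, "<": str(lhs) < rhs}[op]
        ok += (matched == expected)
    return ok / len(examples)

def evolve(examples, pop_size=50, generations=100):
    pop = [random_query() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda q: fitness(q, examples), reverse=True)
        if fitness(pop[0], examples) == 1.0:
            break
        # Replace the weaker half with fresh random candidates
        # (a stand-in for real crossover and mutation operators).
        survivors = pop[: pop_size // 2]
        pop = survivors + [random_query() for _ in survivors]
    return pop[0]

examples = [({"name": "Ana", "age": 34, "city": "Paris"}, True),
            ({"name": "Luis", "age": 25, "city": "Lyon"}, False)]
best = evolve(examples)
print("SELECT * FROM t WHERE %s %s %s" % best)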
APA, Harvard, Vancouver, ISO, and other styles