Theses on the topic "Genetic software engineering"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Genetic software engineering".
Next to each source in the reference list there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Hulse, Paul. "A study of topical applications of genetic programming and genetic algorithms in physical and engineering systems". Thesis, University of Salford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391313.
Nettelblad, Carl. "Using Markov models and a stochastic Lipschitz condition for genetic analyses". Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120295.
Haq, Zia Ul. "Application of genetic algorithms for irrigation water scheduling". Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/72987/.
Jayawardena, Mahen. "Parallel algorithms and implementations for genetic analysis of quantitative traits". Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-85815.
Vázquez Vilar, Marta. "DESIGN OF GENETIC ELEMENTS AND SOFTWARE TOOLS FOR PLANT SYNTHETIC BIOLOGY". Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/68483.
Synthetic Biology is an emerging interdisciplinary field based on applying the engineering principles of modularity, abstraction and standardization to genetic engineering. A new branch of Synthetic Biology applied to plants, Plant Synthetic Biology (PSB), offers new possibilities for crop improvement that could lead to better resistance, higher productivity or increased nutritional quality. However, to reach this goal, the molecular tools currently available for PSB must be adapted to become modular, standard and more precise. The general objective of this Thesis was therefore to adapt, expand and refine the DNA assembly tools of PSB to allow the incorporation of functional specifications into the description of standard genetic elements (phytobricks) and to facilitate the construction of increasingly complex and precise multigene structures, including genome-editing tools. The starting point of this Thesis was the GoldenBraid (GB) modular DNA assembly method based on type IIS restriction enzymes. To optimize the assembly process and catalogue the collection of phytobricks generated, a database and a set of software tools were developed, as described in Chapter 1. The final software package was presented in web format as GB2.0, making it publicly accessible through www.gbcloning.upv.es. Chapter 1 also provides a detailed description of how GB2.0 works, exemplifying its use with the assembly of a multigene construct for anthocyanin production.
As GB constructs grew in number and complexity, the next necessary step was refining the standard by incorporating the experimental information associated with each genetic element (described in Chapter 2). To this end, the GB software package was reformulated in a new version (GB3.0), a self-contained and fully traceable assembly system in which the experimental data describing the functionality of each genetic element are displayed as a standard datasheet. The usefulness of technical specifications for anticipating the behaviour of composite biological devices was exemplified by combining a chemical switch with a prototype of an anthocyanin-overproduction module equivalent to the one described in Chapter 1, resulting in a dexamethasone-responsive anthocyanin-production device. In addition, Chapter 3 describes the adaptation of the CRISPR/Cas9 genetic engineering tools to the GB technology, as well as their functional characterization. The functionality of these tools for gene editing and for transcriptional activation and repression was validated with the transient expression system in N. benthamiana. Finally, Chapter 4 presents a practical implementation of the GB technology for precise plant breeding. Stable transformation in tomato of an intragenic construct comprising an intragenic selection marker and a regulator of flavonoid biosynthesis resulted in fruits with a higher flavonol content. Overall, this Thesis shows the implementation of increasingly complex and precise genetic designs in plants using standard elements and modular tools following the principles of Synthetic Biology.
Vázquez Vilar, M. (2016). DESIGN OF GENETIC ELEMENTS AND SOFTWARE TOOLS FOR PLANT SYNTHETIC BIOLOGY [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/68483
Award-winning thesis
Bueno, Paulo Marcos Siqueira. "Geração de dados de teste orientada à diversidade com o uso de meta-heurísticas". [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260996.
Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: Software testing techniques and criteria establish required elements to be exercised during testing. Test data generation aims at selecting test data from the multidimensional software input domain to satisfy a given criterion. A body of work on test data generation applies metaheuristics to search the space of possible software inputs for those that satisfy a given criterion; this field is named Search-Based Software Testing. This thesis proposes a new technique, Diversity Oriented Test Data Generation (DOTG). The technique embodies the intuition, found in good testers, that the variety, or diversity, of the test data used to test a software system is related to the completeness, or quality, of the testing performed. We propose different perspectives for the test diversity concept, each taking into account a different kind of information to evaluate diversity. A metamodel is also defined to guide the development of the DOTG perspectives. We developed the Input Domain perspective for diversity (DOTG-ID), which considers the positions of the test data in the software input domain to compute a diversity value for test sets. We propose a measure of distance between test data and a measure of diversity of test sets. For the automatic generation of high-diversity test sets, three metaheuristics were developed: SA-DOTG, based on Simulated Annealing; GA-DOTG, based on Genetic Algorithms; and SR-DOTG, based on the dynamics of electrically charged particle systems. The empirical evaluation of DOTG-ID includes a Monte Carlo simulation, performed to study the influence of several factors on the technique's effectiveness, and an experiment with programs, carried out to evaluate the effect of test-set diversity on the attained coverage values, measured with respect to data-flow coverage and mutation coverage.
The evaluation results are statistically significant and indicate that in most cases test sets with high diversity reach effectiveness and coverage values higher than those reached by randomly generated test sets of the same size.
Doctorate
Computer Engineering
Doctor of Electrical Engineering
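As a rough illustration of the input-domain diversity idea in the abstract above: the Python sketch below measures a test set's diversity as the average pairwise Euclidean distance between test data points and grows a set greedily. The distance and diversity formulas, the greedy search, and all names here are simplifying assumptions made for illustration; the thesis defines its own measures and uses SA, GA and particle-dynamics metaheuristics instead.

```python
import itertools
import math
import random

def euclidean(a, b):
    # Distance between two test data points in the input domain.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diversity(test_set):
    # Average pairwise distance; higher values mean the set is
    # spread more evenly over the input domain.
    pairs = list(itertools.combinations(test_set, 2))
    return sum(euclidean(a, b) for a, b in pairs) / len(pairs)

def generate_diverse(n_points, dim, candidates_per_step=50, rng=None):
    # Greedy stand-in for the SA/GA search: repeatedly add the random
    # candidate that maximizes the resulting set's diversity.
    rng = rng or random.Random(0)
    rand_point = lambda: [rng.uniform(0.0, 1.0) for _ in range(dim)]
    test_set = [rand_point()]
    while len(test_set) < n_points:
        best = max((rand_point() for _ in range(candidates_per_step)),
                   key=lambda c: diversity(test_set + [c]))
        test_set.append(best)
    return test_set
```

A greedy loop like this cannot escape local choices the way the thesis's metaheuristics can, but it shows how a diversity score turns test-set construction into an optimization problem.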
Krüger, Franz David and Mohamad Nabeel. "Hyperparameter Tuning Using Genetic Algorithms : A study of genetic algorithms impact and performance for optimization of ML algorithms". Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-42404.
As machine learning (ML) becomes more and more frequent in the business world, information gathering through data mining (DM) is on the rise, and DM practitioners generally rely on rules of thumb to avoid spending a considerable amount of time tuning the hyperparameters (the parameters that control the learning process) of an ML algorithm to reach a high accuracy score. The proposal in this report is an approach that systematically optimizes ML algorithms using genetic algorithms (GA) and evaluates if and how the model should be constructed to find global solutions for a specific data set. By applying a GA to two ML algorithms, K-nearest neighbors and Random Forest, on two numerical data sets, the Iris data set and the Wisconsin breast cancer data set, the model is evaluated by its accuracy scores as well as its computational time, which is then compared against a search method, specifically exhaustive search. The results suggest that GA works well in finding high accuracy scores in a reasonable amount of time. There are some limitations, as a parameter's significance for an ML algorithm may vary.
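To make the idea concrete, here is a minimal, hedged sketch of GA-style hyperparameter search in Python. The accuracy surface is mocked (a made-up function peaking at k = 7) rather than computed from k-NN on Iris as in the thesis, and the operators (tournament selection, ±1 mutation, no crossover) are illustrative choices, not the authors' setup:

```python
import random

def mock_accuracy(k):
    # Stand-in for cross-validated accuracy of, e.g., k-NN;
    # a smooth surface peaking at k = 7 (purely illustrative).
    return 1.0 - 0.002 * (k - 7) ** 2

def tune_k(pop_size=10, generations=20, k_range=(1, 30), seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(*k_range) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the fitter of two random parents.
        parents = [max(rng.sample(pop, 2), key=mock_accuracy)
                   for _ in range(pop_size)]
        # Mutation: nudge k by ±1, clipped to the allowed range.
        pop = [min(max(p + rng.choice((-1, 0, 1)), k_range[0]), k_range[1])
               for p in parents]
    return max(pop, key=mock_accuracy)
```

Replacing `mock_accuracy` with a real cross-validation score is what makes such a search expensive, and is why the thesis compares its runtime against exhaustive search.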
Pečínka, Zdeněk. "Gramatická evoluce v optimalizaci software". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363820.
Haraldsson, Saemundur Oskar. "Genetic improvement of software : from program landscapes to the automatic improvement of a live system". Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/26007.
Hacoupian, Yourik. "Mining Aspects through Cluster Analysis Using Support Vector Machines and Genetic Algorithms". NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/170.
Mehrmand, Arash. "A Factorial Experiment on Scalability of Search-based Software Testing". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4224.
Lei, Celestino. "Using genetic algorithms and boosting for data preprocessing". Thesis, University of Macau, 2002. http://umaclib3.umac.mo/record=b1447848.
Naik, Apoorv. "Orchestra Framework: Protocol Design for Ad Hoc and Delay Tolerant Networks using Genetic Algorithms". Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/43409.
Texto completoMaster of Science
Eaves, Hugh L. "Evaluating and Improving the Efficiency of Software and Algorithms for Sequence Data Analysis". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4295.
Olofsson, Fredrik and Johan W. Andersson. "Human-like Behaviour in Real-Time Strategy Games : An Experiment With Genetic Algorithms". Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3814.
Frier, Jason Ross. "Genetic Algorithms as a Viable Method of Obtaining Branch Coverage". UNF Digital Commons, 2017. http://digitalcommons.unf.edu/etd/722.
Hallier, Andrea Rae. "Variant-curation and database instantiation (Variant-CADI): an integrated software system for the automation of collection, annotation and management of variations in clinical genetic testing". Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2218.
Shahdad, Mir Abubakr. "Engineering innovation (TRIZ based computer aided innovation)". Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3317.
Buzzo, André Vinicius. "Estudo de algoritmo evolutivo com codificação real na geração de dados de teste estrutural e implementação de protótipo de ferramenta de apoio". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275761.
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Automatic test data generation can be approached as an optimization problem, and evolutionary algorithms have become a focus of much research in this area. Recently a new type of evolutionary algorithm called Generalized Extremal Optimization (GEO) has been explored for a large class of optimization problems. This work presents the use of the real-coded evolutionary algorithm GEOreal in test data generation. The performance of this algorithm is compared with several other algorithms, and to better assess the results, two objective functions, which map the data generation problem into an optimization problem, were used. The GEOreal algorithm combined with the Bueno and Jino objective function obtained the best results on the problems addressed. A prototype implementing all the concepts involved in this work was developed, and its performance was compared with other tools already available. The results showed that this prototype outperformed the compared tools by minimizing the time spent on generating test data.
Master's
Computer Science
Master in Computer Science
Moskowitz, David. "Automatically Defined Templates for Improved Prediction of Non-stationary, Nonlinear Time Series in Genetic Programming". NSUWorks, 2016. http://nsuworks.nova.edu/gscis_etd/953.
Monção, Ana Claudia Bastos Loureiro. "Uma abordagem evolucionária para o teste de instruções select SQL com o uso da análise de mutantes". Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/3346.
Texto completoApproved for entry into archive by Jaqueline Silva (jtas29@gmail.com) on 2014-10-16T17:59:00Z (GMT) No. of bitstreams: 2 Dissertacao - Ana Claudia Bastos Loureiro Monção - 2013.pdf: 4213405 bytes, checksum: 3bbe190ae0f4a45a2f8b4e71026f5d2e (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5)
Made available in DSpace on 2014-10-16T17:59:00Z (GMT). No. of bitstreams: 2 Dissertacao - Ana Claudia Bastos Loureiro Monção - 2013.pdf: 4213405 bytes, checksum: 3bbe190ae0f4a45a2f8b4e71026f5d2e (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5) Previous issue date: 2013-08-02
Software Testing is an area of Software Engineering of fundamental importance for ensuring software quality. Its activities involve long times and high costs, but they need to be carried out throughout the process of building software. As in other areas of Software Engineering, there are problems in Software Testing activities whose solution is not trivial. For these problems, several search and optimization techniques have been explored in an attempt to find an optimal or near-optimal solution, giving rise to the research lines of Search-Based Software Engineering (SBSE) and Search-Based Software Testing (SBST). This work falls within this context and aims to solve the problem of selecting test data for test execution on SQL statements. Given the number of potential solutions to this problem, the proposed approach combines SQL Mutation Analysis techniques with Evolutionary Computation to find a reduced data set able to detect a large number of defects in the SQL statements of a particular application. Based on a heuristic perspective, the proposal uses Genetic Algorithms (GA) to select tuples from an existing (production) database, trying to reduce it to a relevant and effective data set. During the evolutionary process, Mutation Analysis is used to evaluate each set of test data selected by the GA. The results obtained from the experiments showed good performance using the Genetic Algorithm metaheuristic and its variations.
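A toy sketch of the core loop described in this abstract: a genetic algorithm selects a subset of database tuples whose fitness is the mutation score it achieves, with a small size penalty so smaller data sets win ties. The kill matrix, penalty weight and GA operators below are all invented for illustration; the thesis works against real SQL mutants and production data:

```python
import random

# Hypothetical kill matrix: which SQL mutants each database tuple
# helps detect (indices are illustrative, not from the thesis).
KILLS = {0: {0, 1}, 1: {1, 2}, 2: {2, 3}, 3: {0, 3, 4}, 4: {4}}
N_TUPLES, N_MUTANTS = 5, 5

def fitness(subset):
    # Mutation score of the selected tuples, minus a small
    # penalty so smaller data sets are preferred.
    killed = set().union(*(KILLS[i] for i in subset)) if subset else set()
    return len(killed) / N_MUTANTS - 0.01 * len(subset)

def select_tuples(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    rand_subset = lambda: frozenset(i for i in range(N_TUPLES)
                                    if rng.random() < 0.5)
    pop = [rand_subset() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        # Mutation: toggle one tuple's membership in an elite subset.
        children = [parent ^ {rng.randrange(N_TUPLES)} for parent in elite]
        pop = elite + children
    return max(pop, key=fitness)
```

In the thesis, evaluating `fitness` means actually running the mutants against the candidate data, which is the expensive step the GA tries to make worthwhile.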
Amorim, Lucas Benevides Viana de. "Um método para descoberta automática de regras para a detecção de Bad Smells". Universidade Federal de Alagoas, 2014. http://www.repositorio.ufal.br/handle/riufal/1757.
One technique for maintaining software quality is code refactoring, but for this practice to bring benefits it is necessary to know in which parts of the code it should be applied. A catalog of the most common structural problems (Bad Smells) was proposed in the literature as a way of knowing when a code fragment should be refactored, and what kind of refactoring should be applied. This catalog has been extended by other researchers. However, detecting these Bad Smells is far from trivial, mainly due to the lack of a precise and consensual definition of each Bad Smell. In this research work, we propose a solution to the problem of automatic Bad Smell detection through the automatic discovery of rules based on software metrics. To evaluate the effectiveness of the technique, we used a data set with software metrics computed for 4 open-source software systems written in Java (ArgoUML, Eclipse, Mylyn and Rhino) and, by means of the C5.0 decision-tree classifier, we were able to generate rules for detecting the 12 Bad Smells analyzed in our studies. Our experiments show that the generated rules achieved quite satisfactory results when tested on a separate data set (test set). Furthermore, to optimize the performance of the proposed solution, we implemented a Genetic Algorithm to pre-select the most informative software metrics for each Bad Smell, and we show that it is possible to reduce the classification error and, often, also the size of the generated rules. In comparison with existing Bad Smell detection tools, we show evidence that the proposed technique offers advantages.
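As a toy illustration of rule discovery from metrics (a stand-in for the C5.0 induction described in this abstract, not the author's implementation), the following sketch learns a one-level "metric > threshold implies smell" rule by exhaustive search; the metric names and values are invented:

```python
def learn_stump(samples):
    # Induce a one-level decision rule by exhaustive search over
    # every (metric, threshold) pair, keeping the one with the
    # fewest misclassifications.
    # samples: list of (metrics_dict, is_smelly) pairs.
    best = None  # (errors, metric, threshold)
    for m in samples[0][0]:
        for t in sorted({s[0][m] for s in samples}):
            errors = sum((s[0][m] > t) != s[1] for s in samples)
            if best is None or errors < best[0]:
                best = (errors, m, t)
    return best[1], best[2]
```

A real decision tree stacks many such splits; a GA pre-selecting which metrics enter this search is the optimization the abstract describes.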
Crawford, Alistair. "Bad Behaviour: The Prevention of Usability Problems Using GSE Models". Griffith University. School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20061108.154141.
Crawford, Alistair. "Bad Behaviour: The Prevention of Usability Problems Using GSE Models". Thesis, Griffith University, 2006. http://hdl.handle.net/10072/366051.
Texto completoThesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Information and Communication Technology
Freitas, Diogo Machado de. "Geração evolucionária de heurísticas para localização de defeitos de software". Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/9010.
Texto completoApproved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2018-10-30T13:41:38Z (GMT) No. of bitstreams: 2 Dissertação - Diogo Machado de Freitas - 2018.pdf: 1477764 bytes, checksum: 73759c5ece96bf48ffd4d698f14026b9 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Made available in DSpace on 2018-10-30T13:41:38Z (GMT). No. of bitstreams: 2 Dissertação - Diogo Machado de Freitas - 2018.pdf: 1477764 bytes, checksum: 73759c5ece96bf48ffd4d698f14026b9 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Previous issue date: 2018-09-24
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Fault localization is a stage of the software life cycle that demands important resources, such as the time and effort spent on a project. There are several initiatives towards automating the fault localization process and reducing the associated resources. Many techniques are based on heuristics that use information (a spectrum) obtained from the execution of test cases in order to measure the suspiciousness of each program element, i.e. how likely it is to be defective. Spectrum data generally refer to code coverage and test results (positive or negative). The present work presents two approaches based on Genetic Programming for the fault localization problem: a method to compose a new heuristic from a set of existing ones, and a method for constructing heuristics based on data from program mutation analysis. The innovative aspects of both methods lie in the joint investigation of: (i) specialization of heuristics for certain programs; (ii) application of an evolutionary approach to the generation of heuristics with non-linear equations; (iii) creation of heuristics based on the combination of traditional heuristics; (iv) use of coverage and mutation spectra extracted from the testing activity; (v) analysis and comparison of the efficacy of methods that use coverage and mutation spectra for fault localization; and (vi) analysis of the quality of mutation spectra as a data source for fault localization. The results point to the competitiveness of both approaches in their contexts.
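The "traditional heuristics" such a Genetic Programming approach combines are spectrum-based suspiciousness formulas. Two classic ones, Tarantula and Ochiai, can be sketched as follows; these are the standard formulas from the fault localization literature, and the abstract does not say which heuristics the thesis actually uses:

```python
import math

def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    # Suspiciousness of a program element from how often failing
    # vs. passing test cases cover it.
    f = failed_cov / total_failed if total_failed else 0.0
    p = passed_cov / total_passed if total_passed else 0.0
    return f / (f + p) if f + p else 0.0

def ochiai(failed_cov, passed_cov, total_failed):
    # Cosine-like variant; tends to rank elements covered mostly
    # by failing tests higher than Tarantula does.
    denom = math.sqrt(total_failed * (failed_cov + passed_cov))
    return failed_cov / denom if denom else 0.0
```

A GP-evolved heuristic would be an expression tree over inputs like `failed_cov` and `passed_cov`, with formulas like these as seed building blocks.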
Bruneliere, Hugo. "Generic Model-based Approaches for Software Reverse Engineering and Comprehension". Thesis, Nantes, 2018. http://www.theses.fr/2018NANT4040/document.
Nowadays, companies increasingly face the problem of managing, maintaining, evolving or replacing their existing software systems. Reverse Engineering is the phase required to obtain various representations of these systems and provide a better comprehension of their purposes and states. Model Driven Engineering (MDE) is a Software Engineering paradigm relying on intensive model creation, manipulation and use within design, development, deployment, integration, maintenance and evolution tasks. Model Driven Reverse Engineering (MDRE) has been proposed to enhance traditional Reverse Engineering approaches via the application of MDE. It aims at obtaining models from an existing system according to various aspects, and then possibly federating them via coherent views for further comprehension. However, existing solutions are limited, as they quite often rely on case-specific integrations of different tools. Moreover, they can sometimes be (very) heterogeneous, which may hinder their practical deployment. Generic and extensible solutions that combine MDRE with model view/federation capabilities are still missing. In this thesis, we propose to rely on two complementary, generic and extensible model-based approaches and their Eclipse/EMF-based open-source implementations: (i) to facilitate the elaboration of MDRE solutions in many different contexts, by obtaining different kinds of models from existing systems (e.g. their source code or data); and (ii) to specify, build and manipulate views federating different models (e.g. resulting from MDRE) according to comprehension objectives (e.g. for different stakeholders).
Patney, Vikas. "Software Engineering Best Practices for Parallel Computing Development". Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23803.
Anderson, Steven E. "Functional specification for a Generic C3I Workstation". Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA241377.
Thesis Advisor(s): Luqi. Second Reader: Shimeall, Timothy. "September 1990." Description based on title screen viewed on December 16, 2009. DTIC Descriptor(s): Communications intelligence, work stations, command control communications, embedded systems, models, combat readiness, specifications, tools, computers, theses, prototypes, costs, evolution(general), fleets(ships), naval operations, budgets, economic impact, combat effectiveness, requirements, computer programs, software engineering. Author(s) subject terms: Software specification, hard real time software, embedded systems, generic C3I workstation, next generation computer resources. Includes bibliographical references (p. 253-255). Also available in print.
Uzuncaova, Engin. "A generic software architecture for deception-based intrusion detection and response systems". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FUzuncaova.pdf.
Thesis advisor(s): James Bret Michael, Richard Riehle. Includes bibliographical references (p. 63-66). Also available online.
Kritzinger, Chris (Cornelis Christiaan). "The development of generic modelling software for citrus packing processes". Thesis, Stellenbosch : Stellenbosch University, 2007. http://hdl.handle.net/10019.1/21669.
ENGLISH ABSTRACT: This study was initiated in October 2004 when Vizier Systems (Pty) Ltd approached the Department of Industrial Engineering at the University of Stellenbosch with a concept. They proposed that a fruit packing line be represented as a series of unit operations and suggested that the concept could be used to create a generic model that can be used to represent any packing line. After further discussions with Vizier about the concept and their reasons for requiring a generic model, a formal client requirement was formulated. It was decided that the generic modelling concept had to be tested in the citrus industry. Modelling theory was investigated and a generic modelling methodology was formulated by adapting an existing modelling methodology. The first few steps of the developed methodology led to industry data being gathered and several role-players in the citrus export industry being visited. An analysis of the data enabled the development of the necessary techniques to do distribution estimation and forecasting of the system input, which is fruit. The various processes were grouped into generic groups and detailed capacity calculations were developed for each process. The fruit parameter estimation techniques and capacity calculations were integrated into a five step modelling procedure. Once the generic model was set up to represent a specific packing line, the modelling procedure provided optimum flow rates, equipment setups and personnel allocations for defined production runs. The modelling procedure was then translated into a computer model. This allowed a complete capacity analysis of a packing line by incrementally varying the characteristics of the fruit input. The developed generic model was validated by comparing its predictions to the results of two production runs at an existing packing line.
It was found that the generic model is able to adequately represent the packing line and that the fruit inputs and outputs can be accurately estimated. The concept proposed by Vizier, that a packing line can be generically modelled as a series of unit operations, was shown to be valid.
Heunis, Andre Emile. "Design and implementation of generic flight software for a CubeSat". Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95911.
ENGLISH ABSTRACT: The main on-board computer in a satellite is responsible for ensuring the correct operation of the entire system. It performs this task using flight software. In order to reduce future development costs, it is desirable to develop generic software that can be re-used on subsequent missions. This thesis details the design and implementation of a generic flight software application for CubeSats. A generic, modular framework is used in order to increase the re-usability of the flight software architecture. In order to simplify the management of the various on-board processes, the software is built upon the FreeRTOS real-time operating system. The Consultative Committee for Space Data Systems’ telemetry and telecommand packet definitions are used to interface with ground stations. In addition, a number of services defined in the European Cooperation for Space Standardisation’s Packet Utilisation Standard are used to perform the functions required from the flight software. The final application contains all the command and data handling functionality required in a standard CubeSat mission. Mechanisms for the collection, storage and transmission of housekeeping data are included as well as the implementation of basic fault tolerance techniques. Through testing it is shown that the FreeRTOS scheduler can be used to ensure the software meets hard real-time requirements.
Bredenkamp, F. v. B. "The development of a generic just-in-time supply chain optimisation software tool". Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/1920.
Cousseau, Philippe. "Traitement d'image et traitement graphique en genie genetique : application a l'analyse d'images autoradiographiques et a la representation de cartes de molecules d'adn recombinant". Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13037.
Texto completoWadhwani, Vickey y Shoain Ahmed. "Evaluating Evolutionary Prototyping for Customizable Generic Products in Industry". Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2402.
Iqbal, Ilyas. "Development of a Generic Integration Layer for an ERP system". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-88555.
Langer, Samridhi. "Concept to store variant information gathered from different artifacts in an existing specification interchange format". Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-212606.
Mansfield, Martin F. "Design of a generic parse tree for imperative languages". Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834617.
Department of Computer Science
Pandikow, Asmus. "A Generic Principle for Enabling Interoperability of Structured and Object-Oriented Analysis and Design Tools". Doctoral thesis, Linköpings universitet, RTSLAB - Laboratoriet för realtidssystem, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4991.
Pathni, Charu. "Round-trip engineering concept for hierarchical UML models in AUTOSAR-based safety projects". Master's thesis, Universitätsbibliothek Chemnitz, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-187153.
Mazo, Raul. "A Generic Approach for Automated Verification of Product Line Models". PhD thesis, Université Panthéon-Sorbonne - Paris I, 2011. http://tel.archives-ouvertes.fr/tel-00707351.
Löfstrand, Sebastian and Lundvall Jonas Fredén. "Utvärdering av Movesense för användning vid biomekaniska studier". Thesis, KTH, Hälsoinformatik och logistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252802.
There is a need to be able to utilize a user-friendly system for interaction with body-worn sensors in teaching and research at the School for Chemistry, Biotechnology and Health at the Royal Institute of Technology. Responsible persons at the program have therefore assigned a Bachelor of Science (BSc) degree project to investigate whether a specific sensor system, Movesense, can serve as a user-friendly tool for studying biomechanical movements within education and research. A preliminary study is carried out to examine the sensor system's potential. A system prototype is developed for configuring the sensor system and retrieving sensor data. A quantitative evaluation of collected data from the sensor system and video analysis is performed to determine whether it is possible to perform motion analysis using the system prototype. The investigation resulted in a functioning system prototype and showed that Movesense can be used as a tool for studying certain types of movements. The prototype has great development potential, and the sensor system has potential opportunities in education and research.
Bessinger, Zachary. "An Automatic Framework for Embryonic Localization Using Edges in a Scale Space". TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1262.
Junior, Edison Kicho Shimabukuro. "Um gerador de aplicações configurável". Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-03022007-002615/.
Application generators are tools that receive as input a software specification, validate it and automatically generate artifacts based on it. Application generators can bring several benefits in terms of productivity, as they automatically generate low-level artifacts based on higher abstraction level specifications. A major concern of application generators is their high development cost. Configurable application generators are those generators that can be adapted to give support in specific domains, i.e., they are considered as meta-generators through which it is possible to obtain specific application generators. This work presents an approach for software development supported by configurable application generators. It defines the architecture and main features of a configurable application generator and presents Captor, which is a configurable application generator developed to ease the creation of specific generators. Three case studies were conducted to show the configuration of the Captor tool to different application domains: objects persistence, business resource management and floating weather stations.
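The generator idea summarized above — validate a specification, then expand it into lower-level artifacts — can be sketched in a few lines. The spec format and template below are invented for illustration and are not Captor's actual formats:

```python
from string import Template

# Minimal template-based generator: a declarative spec is validated,
# then expanded into source code, mirroring how application generators
# turn high-level specifications into low-level artifacts.
CLASS_TEMPLATE = Template(
    "class ${name}:\n"
    "    def __init__(self${params}):\n"
    "${body}"
)

def generate(spec):
    if not spec.get("name"):
        raise ValueError("specification must name the entity")  # validation step
    fields = spec.get("fields", [])
    params = "".join(f", {f}" for f in fields)
    body = "".join(f"        self.{f} = {f}\n" for f in fields) or "        pass\n"
    return CLASS_TEMPLATE.substitute(name=spec["name"], params=params, body=body)

code = generate({"name": "Station", "fields": ["ident", "latitude"]})
print(code)
```

A configurable generator adds one more level: the templates and the specification grammar are themselves inputs, so the same engine can be retargeted at, say, persistence code or weather-station scaffolding.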
Caballé, Llobet Santi. "A Computational Model for the Construction of Knowledge-based Collaborative Learning Distributed Applications". Doctoral thesis, Universitat Oberta de Catalunya, 2008. http://hdl.handle.net/10803/9127.
An important research topic in Computer Supported Collaborative Learning (CSCL) is to explore the importance of efficient management of event information generated from group activity in collaborative learning practices for its further use in extracting and providing knowledge on interaction behavior.
The essential issue here is first how to design a CSCL platform that can be used for real, long-term, complex collaborative problem solving situations and which enables the instructor to both analyze group interaction effectively and provide an adequate support when needed. Secondly, how to extract relevant knowledge from collaboration in order to provide learners with efficient awareness and feedback as regards individual and group performance and assessment. The achievement of these tasks involves the design of a conceptual framework of collaborative learning interaction that structures and classifies the information generated in a collaborative application at several levels of description. Computational models then realize this conceptual approach for an efficient management of the knowledge produced by the individual and group activity, as well as the possibility of exploiting this knowledge further as a metacognitive tool for real-time coaching and regulating the collaborative learning process.
In addition, CSCL needs have been evolving over recent years in line with more and more demanding pedagogical and technological requirements. On-line collaborative learning environments no longer depend on homogeneous groups, static content and resources, and single pedagogies, but high customization and flexibility are a must in this context. As a result, current educational organizations' needs involve extending and moving to highly customized learning and teaching forms in a timely fashion, each incorporating its own pedagogical approach, each targeting a specific learning goal, and each incorporating its specific resources.
All of these issues certainly represent a great challenge for current and future research in this field. Therefore, further efforts need to be made to help developers, technologists and pedagogues overcome the demanding requirements currently found in the CSCL domain, as well as provide modern educational organizations with fast, flexible and effective solutions for the enhancement and improvement of collaborative learning performance and outcomes. This thesis proposes a first step toward these goals.
The main contribution in this thesis is the exploration of the importance of an efficient management of information generated from group activity in Computer-Supported Collaborative Learning (CSCL) practices for its further use in extracting and providing knowledge on interaction behavior. To this end, the first step is to investigate a conceptual model for data analysis and management so as to identify the many kinds of indicators that describe collaboration and learning and classify them into high-level potential categories of effective collaboration. Indeed, there are more evident key discourse elements and aspects than those shown by the literature, which play an important role both for promoting student participation and enhancing group and individual performance, such as, the impact and effectiveness of students' contributions, among others, that are explored in this work. By making these elements explicit, the discussion model proposed accomplishes high students' participation rates and contribution quality in a more natural and effective way. This approach goes beyond a mere interaction analysis of asynchronous discussion in the sense that it builds a multi-functional model that fosters knowledge sharing and construction, develops a strong sense of community among students, provides tutors with a powerful tool for students' monitoring, discussion regulation, while it allows for peer facilitation through self, peer and group awareness and assessment.
The results of the research described so far motivates the development of a computational system as the translation from the conceptual model into a computer system that implements the management of the information and knowledge acquired from the group activity, so as to be efficiently fed back to the collaboration. The achievement of a generic, robust, flexible, interoperable, reusable computational model that meets the fundamental functional needs shared by any collaborative learning experience is largely investigated in this thesis. The systematic reuse of this computational model permits a fast adaptation to new learning and teaching requirements, such as learning by discussion, by relying on the most advanced software engineering processes and methodologies from the field of software reuse, and thus important benefits are expected in terms of productivity, quality, and cost.
Therefore, another important contribution is to explore and extend suitable software reuse techniques, such as Generic Programming, so as to allow the computational model to be successfully particularized in as many situations as possible without losing efficiency in the process. In particular, based on domain analysis techniques, a high-level computational description and formalization of the CSCL domain are identified and modeled. Then, different specific-platform developments that realize the conceptual description are provided. A certain level of automation is also explored by means of advanced techniques based on Service-Oriented Architectures and Web-services while passing from the conceptual specification to the desired realization, which greatly facilitates the development of CSCL applications using this computational model.
Based on the outcomes of these investigations, this thesis contributes with computational collaborative learning systems, which are capable of managing both qualitative and quantitative information and transforming it into useful knowledge for all the implicated parties in an efficient and clear way. This is achieved by both the specific assessment of each contribution by the tutor who supervises the discussion and by rich statistical information about student's participation. This statistical data is automatically provided by the system; for instance, statistical data sheds light on the students' engagement in the discussion forum or how much interest drew the student's intervention in the form of participation impact, level of passivity, proactivity, reactivity, and so on. The aim is to provide both a deeper understanding of the actual discussion process and a more objective assessment of individual and group activity.
This information is then processed and analyzed by means of a multivariate statistical model in order to extract useful knowledge about the collaboration. The knowledge acquired is communicated back to the members of the learning group and their tutor in appropriate formats, thus providing valuable awareness and feedback of group interaction and performance as well as may help identify and assess the real skills and intentions of participants. The most important benefit expected from the conceptual model for interaction data analysis and management is a great improvement and enhancement of the learning and teaching collaborative experiences.
Finally, the possibilities of using distributed and Grid technology to support real CSCL environments are also extensively explored in this thesis. The results of this investigation lead to conclude that the features provided by these technologies form an ideal context for supporting and meeting demanding requirements of collaborative learning applications. This approach is taken one step further for enhancing the possibilities of the computational model in the CSCL domain and it is successfully adopted on an empirical and application basis. From the results achieved, it is proved the feasibility of distributed technologies to considerably enhance and improve the collaborative learning experience. In particular, the use of Grid computing is successfully applied for the specific purpose of increasing the efficiency of processing a large amount of information from group activity log files.
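The final point above — using Grid resources to process large group-activity log files — is essentially a partition-and-merge pattern. Below is a single-machine sketch of that pattern with a thread pool; the log format is invented, and a Grid deployment would distribute the partitions across nodes instead:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Map step: each worker counts events per student in one log partition.
def count_events(partition):
    counts = Counter()
    for record in partition:
        student, _action, _timestamp = record.split(";")
        counts[student] += 1
    return counts

# Hypothetical activity log of "student;action;timestamp" records.
log = [f"student{i % 3};post;t{i}" for i in range(30)]
partitions = [log[i::4] for i in range(4)]  # split into 4 partitions

# Reduce step: merge the partial counts from all workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    totals = sum(pool.map(count_events, partitions), Counter())
print(totals.most_common())
```

Because each partition is processed independently and the partial results merge associatively, the same shape scales from threads to separate Grid nodes.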
Ramraj, Varun. "Exploiting whole-PDB analysis in novel bioinformatics applications". Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6c59c813-2a4c-440c-940b-d334c02dd075.
Bokhari, Mahmoud Abdulwahab K. "Genetic Improvement of Software for Energy Efficiency in Noisy and Fragmented Eco-Systems". Thesis, 2020. http://hdl.handle.net/2440/130174.
Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2020
Nathoo, Kirti. "Establish a generic railway electronic interlocking solution using software engineering methods". Thesis, 2015. http://hdl.handle.net/10539/17639.
Smith, Jacob N. "Techniques in Active and Generic Software Libraries". 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-05-7823.
Dietrich, David. "A Mode-Based Pattern for Feature Requirements, and a Generic Feature Interface". Thesis, 2013. http://hdl.handle.net/10012/7922.
Texto completoAhumada, Pardo Dania I. "Una aproximación evolucionista para la generación automática de sentencias SQL a partir de ejemplos". Thèse, 2015. http://hdl.handle.net/1866/12454.
At present, the use of technology is essential to the advance of society, and it has allowed people without computing knowledge, so-called "non-expert" users, to take an interest in using it. Researchers have therefore seen the need to produce studies that adapt systems to the problems existing within the technology domain. A recurring need of every user of a system is the management of information, which can be handled through a database and a language specific to it, such as SQL (Structured Query Language). This, however, forces the user to turn to a specialist for its design and construction, which entails costs and complex methods. But what can be done for small investigations where resources and processes are limited? Taking as a basis the research of the University of Washington [32], this work automates the process and applies a different learning technique with an evolutionary approach, where an adapted genetic algorithm generates valid SQL queries that satisfy the conditions established by the input and output examples given by the user. The result of this approach is a tool named EvoSQL, which was validated on the same 28 exercises used in the investigation [32]: 23 exercises produced perfect results and 5 were unsuccessful, representing 82.1% effectiveness, 10.7% higher than the SQLSynthesizer tool developed in [32] and 75% higher than the next closest tool, Query by Output (QBO) [26]. The average execution time per exercise was 3 min 11 s, which is longer than that of SQLSynthesizer; nevertheless, since the phased structure of a genetic algorithm widens the range of running times, the time obtained is acceptable for applications of this type.
In conclusion, and according to what was previously stated, we obtained an automatic tool with an evolutionary approach, good results, and a simple process for the "non-expert" user.
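The evolutionary search EvoSQL performs can be caricatured in miniature. Here a genetic algorithm evolves a single WHERE predicate over one numeric column until its result set matches the example output; the toy table, the tiny query representation and all parameters are invented, and the real tool searches a vastly larger space of SQL statements:

```python
import random

random.seed(7)

# Toy table and the example output the user supplied: rows with 18 <= age < 40.
rows = list(range(51))
wanted = {age for age in rows if 18 <= age < 40}

OPS = ("<", ">=")

def run_query(q):
    """Evaluate WHERE age op1 v1 AND age op2 v2 over the toy table."""
    op1, v1, op2, v2 = q
    def holds(age, op, v):
        return age < v if op == "<" else age >= v
    return {age for age in rows if holds(age, op1, v1) and holds(age, op2, v2)}

def fitness(q):
    # Fewer mismatches against the example output = fitter query.
    return -len(run_query(q) ^ wanted)

def random_query():
    return (random.choice(OPS), random.randrange(51),
            random.choice(OPS), random.randrange(51))

def mutate(q):
    q = list(q)
    i = random.randrange(4)
    q[i] = random.choice(OPS) if i % 2 == 0 else random.randrange(51)
    return tuple(q)

population = [random_query() for _ in range(40)]
for _generation in range(200):
    survivors = sorted(population, key=fitness, reverse=True)[:20]  # selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))
```

With selection and mutation alone this tends to converge on a predicate equivalent to `age >= 18 AND age < 40`; a tool like EvoSQL additionally needs crossover and a genome rich enough to express projections, joins and aggregates.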