Dissertations on the topic "Software algorithm"

To view other types of publications on this topic, follow the link: Software algorithm.

Create a reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Software algorithm".

Next to every work in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the relevant details are available in the metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Dementiev, Roman. "Algorithm engineering for large data sets hardware, software, algorithms." Saarbrücken VDM, Müller, 2006. http://d-nb.info/986494429/04.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
2

Dementiev, Roman. "Algorithm engineering for large data sets : hardware, software, algorithms /." Saarbrücken : VDM-Verl. Dr. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3029033&prov=M&dok_var=1&dok_ext=htm.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
3

Ramage, Stephen Edward Andrew. "Advances in meta-algorithmic software libraries for distributed automated algorithm configuration." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52809.

Full text of the source
Abstract:
A meta-algorithmic procedure is a computer procedure that operates upon another algorithm and its associated design space to produce another algorithm with desirable properties (e.g., faster runtime, better solution quality, ...; see e.g., Hoos [2008]). Many meta-algorithmic procedures have runtimes that are dominated by the runtime of the algorithm being operated on. This holds in particular for automatic algorithm configurators, such as ParamILS, SMAC, and GGA, which serve to optimize the design (expressed through user settable parameters) of an algorithm under certain use cases. Consequently, one can gain improved performance of the meta-algorithm if evaluations of the algorithm under study can be done in parallel. In this thesis, we explore a distributed version of the automatic configurator, SMAC, called pSMAC, and the library, AEATK, that it was built upon, which has proved general and versatile enough to support many other meta-algorithmic procedures.
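The speed-up pSMAC targets comes from the fact that target-algorithm runs dominate the configurator's cost and are mutually independent. A minimal sketch of that idea in Python (all names here, such as `evaluate`, are illustrative assumptions, not SMAC or AEATK code):

```python
import concurrent.futures
import random
import time

def evaluate(config):
    """Hypothetical stand-in for one run of the target algorithm.
    A real configurator would launch the tuned solver with this
    parameter setting and measure its runtime."""
    time.sleep(0.01)                      # simulate the dominating cost
    return config["a"] ** 2 + config["b"] + random.random()

# A toy design space: each configuration is one point in it.
configs = [{"a": random.randint(0, 9), "b": random.randint(0, 9)}
           for _ in range(32)]

# Target runs are independent, so they parallelise across workers;
# this is exactly where a distributed configurator gains its speed-up.
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(evaluate, configs))

best = min(range(len(configs)), key=scores.__getitem__)
print("best configuration:", configs[best], "score: %.3f" % scores[best])
```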
Faculty of Science, Department of Computer Science, Graduate
APA, Harvard, Vancouver, ISO, and other styles
4

Berry, Thomas. "Algorithm engineering : string processing." Thesis, Liverpool John Moores University, 2002. http://researchonline.ljmu.ac.uk/4973/.

Full text of the source
Abstract:
The string matching problem has attracted a lot of interest throughout the history of computer science and is crucial to the computing industry. The theoretical community in computer science has developed a rich literature on the design and analysis of string matching algorithms. To date, most of this work has been based on the asymptotic analysis of the algorithms. This analysis rarely tells us how an algorithm will perform in practice, and considerable experimentation and fine-tuning is typically required to get the most out of a theoretical idea. In this thesis, promising string matching algorithms discovered by the theoretical community are implemented, tested, and refined to the point where they can be usefully applied in practice. In the course of this work we present the following new algorithms, prove that their average-case time complexity is linear, and compare them with existing algorithms by experimentation. We implemented the existing one-dimensional string matching algorithms for English texts and, from the experimental results, identified the best two algorithms; we combined these two algorithms and introduced a new one. We developed a new two-dimensional string matching algorithm that uses the structure of the pattern to reduce the number of comparisons required to search for the pattern. We described a method for efficiently storing text; although this reduces the size of the storage space, it is not a compression method in the usual sense, since our aim is to improve both the space and the time taken by a string matching algorithm, and our new algorithm searches for patterns in the efficiently stored text without decompressing it. We illustrated that pre-processing the text can improve the speed of string matching when searching for a large number of patterns in a given text. Finally, we proposed a hardware solution for searching in an efficiently stored DNA text.
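The abstract does not spell out which matchers were combined; as a flavour of the shift-based, average-case-linear algorithms the thesis benchmarks, here is a sketch of the classic Horspool matcher (a standard textbook algorithm, not the thesis's new one):

```python
def horspool(pattern: str, text: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1.

    Horspool's simplification of Boyer-Moore: on a mismatch, shift the
    window by the distance from the window's last character to its
    rightmost occurrence inside the pattern.
    """
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return 0 if m == 0 else -1
    # Default shift is m; characters occurring in pattern[:-1] shift less.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            return pos
        pos += shift.get(text[pos + m - 1], m)
    return -1

assert horspool("needle", "haystack with a needle inside") == 16
```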
APA, Harvard, Vancouver, ISO, and other styles
5

Couto, Rafael Carvalho. "Desenvolvimento e aplicação do software MGA (Molecular Genetic Algorithm)." Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/7512.

Full text of the source
Abstract:
This work focuses on the development of the MGA software, which aims to determine the lowest-energy structures of a given molecular system using a Genetic Algorithm (GA). The GA is an artificial intelligence method developed to search for the solutions that best satisfy specified conditions, i.e., an algorithm that seeks the best possible answer, an optimal result. MGA uses three techniques: Random Search (RS), Non-inclusive Genetic Algorithm (NGA), and Inclusive Genetic Algorithm (IGA). The last is characterized by a new type of evolutionary strategy that makes it possible to obtain several minima of the potential energy surface in a single calculation and a single evolutionary cycle. For optimal operation of the algorithm, the parameters used in MGA were optimized through response surface methodology. Using the RS, IGA, and NGA techniques, 141 distinct molecular structures of the amino acid asparagine were determined. The electronic structure calculations considered the semi-empirical methods PM3, AM1, and RM1, and DFT potentials with the 6-311G** and PC1 basis sets. RS determined the Global Minimum (GM) with ease for the different potentials used, and proved quite useful for determining molecular geometries when no strict energy ordering of the local minima is required. NGA is efficient in determining the GM, doing so in less time than RS and IGA. IGA proved to be more robust than the other methods because, in addition to determining the GM, it can find the local minima in order of energy. Running in a time intermediate between RS and NGA, IGA determined the same GM as NGA and found structures that RS did not. The GMs of asparagine determined using the PC1, PM3, AM1, and RM1 potentials differ substantially in structure. This demonstrates that different potentials used in electronic structure calculations may lead to different results. Analyzing the structures obtained with the PC1, PM3, AM1, and RM1 potentials using IGA shows that the topologies of their potential energy surfaces differ.
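MGA's internals are not given in the abstract, but the evolutionary loop it describes — a population of candidates ranked by energy, with selection, crossover, and mutation — follows the standard GA pattern. A toy sketch (the energy function and all parameters are illustrative assumptions, not MGA code):

```python
import random

def energy(x):
    # Toy stand-in for an electronic-structure energy evaluation;
    # a real run would call a PM3/AM1/DFT calculation here.
    return sum((xi - 1.0) ** 2 for xi in x)

def evolve(dim=3, pop_size=30, generations=100, mut_rate=0.2):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                 # rank by energy (lower is better)
        survivors = pop[: pop_size // 2]     # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, dim)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:   # Gaussian mutation
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=energy)

best = evolve()
print("approximate global minimum:", [round(v, 3) for v in best])
```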
APA, Harvard, Vancouver, ISO, and other styles
6

Panella, Nicola. "Software implementation of a BMS algorithm for automotive application." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find the full text of the source
Abstract:
This work aims to provide a solid understanding of the Simulink Model-Based Design development process for a BMS algorithm, with the help of an example implementation of a Software Component (SWC) on the Microcontroller Unit (MCU). All SWCs are developed in the Simulink environment using a specific workflow and toolset: Simulink Embedded Coder then generates the C code which is implemented in the MCU programming software. The introduction covers the basic control functions required by the BMS to effectively monitor the battery pack of an automotive application, implementing a variety of safety procedures. The following section explains in detail the actual development of the sample SWC according to the Model-Based Design workflow presented: key factors of the development are requirement-based verification and testing and the implementation of unit testing. This workflow's toolset allows the development team to access otherwise scattered information by providing a consistent linkage between multiple documented requirements and the programming environment. As a result, all the required information regarding every unit or "module" is accessed within the Simulink interface, and error detection during development is greatly enhanced. This aspect is extremely important with regard to the debugging process: the later in its life cycle a project is (towards the client acceptance phase), the higher the cost of fixing a bug. Furthermore, some useful Simulink add-ons can perform compliance tests against a collection of selected regulations, inter alia the ISO 26262 "Road Vehicles – Functional Safety" standard. The constraints imposed by the integrated regulations during component development can ultimately facilitate the required certification and approval by the competent authorities.
APA, Harvard, Vancouver, ISO, and other styles
7

Gross, Hans-Gerhard. "Measuring evolutionary testability of real-time software." Thesis, University of South Wales, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365087.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
8

McLoone, M. P. "Generic silicon architectures for encryption algorithm." Thesis, Queen's University Belfast, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269123.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Jung, Young Je. "Data compression and archiving software implementation and their algorithm comparison." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/26958.

Full text of the source
Abstract:
Although data compression has been studied for over 30 years, many new techniques are still evolving. Considerable software is available that incorporates compression schemes and archiving techniques, and the U.S. Navy is interested in knowing the performance of this software. This thesis studies and compares such software. The test files consist of the file types specified by the U.S. Naval Security Detachment at Pensacola, Florida.
APA, Harvard, Vancouver, ISO, and other styles
10

Benage, William Fred. "A fault-tolerant software algorithm for a network of transputers." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27051.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
11

Reiss, Agnieszka. "Software implementation of a video resampling algorithm on the TMS320C80." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/43283.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Carroll, Kevin M., John P. Wagle, Kimitake Sato, Brad H. DeWeese, Satoshi Mizuguchi, and Michael H. Stone. "Reliability of a Commercially Available and Algorithm-Based Kinetic Analysis Software Compared to Manual-Based Software." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etsu-works/4654.

Full text of the source
Abstract:
There is a need for reliable analysis techniques for kinetic data among coaches and sport scientists who employ athlete monitoring practices. The purpose of the study was: (1) to determine intra- and inter-rater reliability within a manual-based kinetic analysis program; and (2) to determine test-retest reliability of an algorithm-based kinetic analysis program. Five independent raters used a manual analysis program to analyse 100 isometric mid-thigh pull (IMTP) trials obtained from previously collected data. Each trial was analysed three times. The same IMTP trials were then analysed using algorithm-based analysis software. Variables measured were peak force, rate of force development from 0 to 50 ms (RFD50), and RFD from 0 to 200 ms (RFD200). Intraclass correlation coefficients (ICC) and coefficients of variation (CV) were used to assess intra- and inter-rater reliability. Nearly perfect reliability was observed for the manual-based method (ICC > 0.92). However, poor intra- and inter-rater CVs were observed for RFD (CV > 16.25% and CV > 32.27%, respectively). The algorithm-based method resulted in perfect reliability in all measurements (ICC = 1.0, CV = 0%). While manual methods of kinetic analysis may provide sufficient reliability, the perfect reliability observed with the algorithm-based method in the current study suggests it is a superior method for use in athlete monitoring programs.
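The quantities being compared are simple functions of the force-time trace, which is what makes algorithmic analysis attractive. A sketch of peak force, windowed RFD, and the coefficient of variation on synthetic data (illustrative only, not the authors' software):

```python
import statistics

def peak_force(forces):
    return max(forces)

def rfd(times_s, forces, t_end_s):
    """Average rate of force development from onset (t = 0) to t_end_s,
    i.e. delta-force / delta-time over the window, in N/s."""
    # Take the last sample at or before the window end.
    idx = max(i for i, t in enumerate(times_s) if t <= t_end_s)
    return (forces[idx] - forces[0]) / (times_s[idx] - times_s[0])

def coefficient_of_variation(values):
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Synthetic IMTP trace sampled at 1 kHz: force ramps up over 300 ms.
times = [i / 1000.0 for i in range(301)]
forces = [100.0 + 8.0 * i for i in range(301)]

print("peak force:", peak_force(forces), "N")
print("RFD 0-50 ms:", rfd(times, forces, 0.050), "N/s")
print("RFD 0-200 ms:", rfd(times, forces, 0.200), "N/s")

# Three raters' values for the same trial; the CV expresses their spread.
print("inter-rater CV: %.2f%%" % coefficient_of_variation([2400.0, 2550.0, 2610.0]))
```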
APA, Harvard, Vancouver, ISO, and other styles
13

Hopson, Benjamin Thomas Ken. "Techniques of design optimisation for algorithms implemented in software." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/20435.

Full text of the source
Abstract:
The overarching objective of this thesis was to develop tools for parallelising, optimising, and implementing algorithms on parallel architectures, in particular General Purpose Graphics Processors (GPGPUs). Two projects were chosen from different application areas in which GPGPUs are used: a defence application involving image compression, and a modelling application in bioinformatics (computational immunology). Each project had its own specific objectives, as well as supporting the overall research goal. The defence / image compression project was carried out in collaboration with the Jet Propulsion Laboratories. The specific questions were: to what extent an algorithm designed for bit-serial hardware implementation of the lossless compression of hyperspectral images on board unmanned aerial vehicles (UAVs) could be parallelised, whether GPGPUs could be used to implement that algorithm, and whether a software implementation with or without GPGPU acceleration could match the throughput of a dedicated hardware (FPGA) implementation. The dependencies within the algorithm were analysed and the algorithm parallelised. The algorithm was implemented in software for GPGPU and optimised. During the optimisation process, profiling revealed less than optimal device utilisation, but no further optimisations resulted in an improvement in speed: the design had hit a local maximum of performance. Analysis of the arithmetic intensity and data flow exposed flaws in kernel occupancy, the standard optimisation metric used for GPU optimisation. Redesigning the implementation with revised criteria (fused kernels, lower occupancy, and greater data locality) led to a new implementation with 10x higher throughput. GPGPUs were shown to be viable for on-board implementation of the CCSDS lossless hyperspectral image compression algorithm, exceeding the performance of the hardware reference implementation and providing sufficient throughput for the next generation of image sensors as well. The second project was carried out in collaboration with biologists at the University of Arizona and involved modelling a complex biological system: VDJ recombination, involved in the formation of T-cell receptors (TCRs). Generation of immune receptors (T-cell receptors and antibodies) by VDJ recombination is an enormously complex process, which can theoretically synthesize more than 10^18 variants. Originally thought to be a random process, the underlying mechanisms clearly have a non-random nature that preferentially creates a small subset of immune receptors in many individuals. Understanding this bias is a longstanding problem in the field of immunology. Modelling the process of VDJ recombination to determine the number of ways each immune receptor can be synthesized, previously thought to be untenable, is a key first step in determining how this special population is made. The computational tools developed in this thesis have allowed immunologists for the first time to comprehensively test and invalidate a longstanding theory (convergent recombination) for how this special population is created, while generating the data needed to develop novel hypotheses.
APA, Harvard, Vancouver, ISO, and other styles
14

Poggiolini, Mario. "The feature detection rule and its application within the negative selection algorithm." Diss., Pretoria : [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-06262009-112502/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
15

Lynch, Michael Andrew. "Algorithm to layout (ATL) systems for VLSI design." Thesis, University of Newcastle Upon Tyne, 1986. http://hdl.handle.net/10443/2060.

Full text of the source
Abstract:
The complexities involved in custom VLSI design, together with the failure of CAD techniques to keep pace with advances in fabrication technology, have resulted in a design bottleneck. Powerful tools are required to exploit the processing potential offered by the densities now available. Describing a system in a high-level algorithmic notation makes writing, understanding, modifying, and verifying a design description easier. It also removes some of the emphasis on the physical issues of VLSI design and focuses attention on formulating a correct and well-structured design. This thesis examines how current trends in CAD techniques might influence the evolution of advanced Algorithm To Layout (ATL) systems. The envisaged features of an example system are specified. Particular attention is given to the implementation of one of its features, COPTS (Compilation Of Occam Programs To Schematics). COPTS is capable of generating schematic diagrams from which an actual layout can be derived. It takes a description written in a subset of Occam and generates a high-level schematic diagram depicting its realisation as a VLSI system. This diagram provides the designer with feedback on the relative placement and interconnection of the operators used in the source code. It also gives a visual representation of the parallelism defined in the Occam description. Such diagrams are a valuable aid in documenting the implementation of a design. Occam has also been selected as the input to the design system that COPTS is a feature of. The choice of Occam was made on the assumption that the most appropriate algorithmic notation for such a design system is a suitable high-level programming language. This is in contrast to current automated VLSI design systems, which typically use a hardware description language for input. These special-purpose languages currently concentrate on handling structural/behavioural information and have a limited ability to express algorithms. Using a language such as Occam allows a designer to write a behavioural description that can be compiled and executed as a simulator, or prototype, of the system. The programmability introduced into the design process enables designers to concentrate on a design's underlying algorithm. The choice of this algorithm is the most crucial decision, since it determines the performance and area of the silicon implementation. The thesis is divided into four sections, each of several chapters. The first section considers VLSI design complexity, compares the expert systems and silicon compilation approaches to tackling it, and examines its parallels with software complexity. The second section reviews the advantages of using a conventional programming language for VLSI system descriptions. A number of alternative high-level programming languages are considered for application in VLSI design. The third section defines the overall ATL system COPTS is envisaged to be part of, and considers the schematic representation of Occam programs. The final section presents a summary of the overall project and suggestions for future work on realising the full ATL system.
APA, Harvard, Vancouver, ISO, and other styles
16

Martins, Wellington Santos. "Algorithm performance on a general purpose parallel computer." Thesis, University of East Anglia, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296870.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

Starefors, Henrik, and Rasmus Persson. "MLID : A multilabelextension of the ID3 algorithm." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13667.

Full text of the source
Abstract:
Machine learning is a subfield within artificial intelligence that revolves around constructing algorithms that can learn from, and make predictions on, data. Instead of following strict and static instructions, the system operates by adapting and learning from input data in order to make predictions and decisions. This work focuses on a subcategory of machine learning called multilabel classification, the concept whereby items introduced to the system are categorized by an analytical model, learned through supervised learning, where each instance of the dataset can belong to multiple labels, or classes. This paper presents the task of implementing a multilabel classifier based on the ID3 algorithm, which we call MLID (Multilabel Iterative Dichotomiser). The solution is presented both in a sequentially executed version and in a parallelized one. We also present a comparison based on accuracy and execution time, performed against algorithms of a similar nature, in order to evaluate the viability of using ID3 as a base to further expand and build upon with regard to multilabel classification. In order to evaluate the performance of the MLID algorithm, we have measured execution time and accuracy, and summarized precision and recall into what is called the F-measure, the harmonic mean of the precision and sensitivity of the algorithm. These results are then compared to already defined and established algorithms on a range of datasets of varying sizes, in order to assess the viability of the MLID algorithm. The results produced when comparing MLID against other multilabel algorithms such as Binary Relevance, Classifier Chains, and Random Trees show that MLID can compete with other classifiers in terms of accuracy and F-measure, but in terms of training the algorithm, the time required proves inferior. Through these results, we can conclude that MLID is a viable option to use as a multilabel classifier. Although some constraints inherited from the original ID3 algorithm impede the full utility of the algorithm, we are certain that following the same path of development and improvement as ID3 experienced would allow MLID to develop into a suitable choice of algorithm for a diverse range of multilabel classification problems.
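Two ingredients named above are easy to make concrete: ID3's information-gain split criterion and the F-measure as the harmonic mean of precision and recall. A minimal single-label sketch (MLID itself extends this to multiple labels per instance):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting (rows, labels) on attribute attr."""
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys)
                    for ys in by_value.values())
    return entropy(labels) - remainder

def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

rows = [{"windy": "yes"}, {"windy": "yes"}, {"windy": "no"}, {"windy": "no"}]
labels = ["play", "stay", "play", "play"]
print("gain on 'windy':", round(information_gain(rows, labels, "windy"), 3))
print("F-measure:", round(f_measure(tp=8, fp=2, fn=4), 3))
```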
APA, Harvard, Vancouver, ISO, and other styles
18

Srivastava, Anurag. "Generalized Event Tree Algorithm and Software for Dam Safety Risk Analysis." DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/32.

Full text of the source
Abstract:
Event tree analysis is one of the most commonly used methods in dam safety risk analysis modeling. Available software tools for performing event tree analyses lack the flexibility to efficiently address many important factors in dam safety risk analysis. As a result of these practical limitations, spreadsheets have been used, sometimes including Visual Basic macros, to perform these analyses. However, this approach lacks generality and can require significant effort to apply to a specific dam or to modify the event tree structure. In response to these limitations, a generalized event tree analysis tool, DAMRAE (DAM safety Risk Analysis Engine), has been developed here. It includes a graphical interface for developing and populating an event tree, and a tool for calculating and post-processing an event tree risk model for dam safety risk assessment in a highly flexible manner. This thesis describes the underlying theoretical and computational logic employed in the current version of DAMRAE, and provides a detailed example of its calculations in an application to a US Army Corps of Engineers (USACE) dam. The thesis closes with some conclusions about the capabilities of DAMRAE and a summary of plans for its further development.
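The core computation of any event tree risk model is folding branch probabilities along every path from the initiating event to an outcome. A toy evaluator showing just that step (a sketch; DAMRAE itself adds a graphical interface, post-processing, and much more):

```python
# Each node is either a leaf outcome (a string) or a dict of branches:
# {branch_name: (probability, subtree)}. Sibling probabilities sum to 1.
tree = {
    "flood_load": (0.01, {
        "gate_fails": (0.3, "uncontrolled_release"),
        "gate_holds": (0.7, {
            "overtopping": (0.2, "dam_breach"),
            "no_overtopping": (0.8, "no_failure"),
        }),
    }),
    "no_flood": (0.99, "no_failure"),
}

def outcome_probabilities(node, p=1.0, acc=None):
    """Fold branch probabilities down every path of the event tree."""
    if acc is None:
        acc = {}
    if isinstance(node, str):                  # leaf: an end state
        acc[node] = acc.get(node, 0.0) + p
        return acc
    for prob, subtree in node.values():
        outcome_probabilities(subtree, p * prob, acc)
    return acc

for outcome, prob in sorted(outcome_probabilities(tree).items()):
    print(f"{outcome}: {prob:.6f}")
```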
APA, Harvard, Vancouver, ISO, and other styles
19

Ouerd, Messaouda. "An algorithm directed computer aided software engineering (CASE) environment for C." Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5964.

Full text of the source
Abstract:
The objectives of computer aided software engineering (CASE) systems are to improve productivity during the software development process and the quality of software, applying software engineering concepts via automation of the software development life cycle. This results in reusable software and decreases the cost and time of software development and maintenance. The main concern of this thesis is to describe the features of a particular software understanding environment for C. An algorithm-directed computer aided software engineering environment for the C language has been developed and implemented. The system runs on a Sun workstation using the SunView window interface. It provides computer aided software engineering tools which: (1) assist the user in developing structured algorithms for procedural languages; (2) automatically transform a structured algorithm into a corresponding program; and (3) redocument the resulting C program (or any C program developed using any other technique) in an organized representation.
APA, Harvard, Vancouver, ISO, and other styles
20

Stenlund, Sebastian. "Testing Safety Critical Avionics Software Using LBTest." Thesis, Linköpings universitet, Programvara och system, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133645.

Full text of the source
Abstract:
A case study of the tool LBTest, illustrating its benefits and limitations in terms of usability, results, and costs. The study shows the use of learning-based testing on a safety-critical application in the avionics industry. While it requires the user to have theoretical knowledge of the tool's inner workings, the process of using the tool has benefits in terms of requirements analysis and the possibility of finding design and implementation errors in both the early and late stages of development.
APA, Harvard, Vancouver, ISO, and other styles
21

Li, Xiaoli. "A map-growing localization algorithm for ad-hoc sensor networks /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418044.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
22

Kressner, Daniel. "Numerical Methods for Structured Matrix Factorizations." [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10047770.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Johnson, Donald C. "Application of a genetic algorithm to optimize staffing levels in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA293725.

Full text of the source
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, December 1994.
"December 1994." Thesis advisor(s): B. Ramesh, T. Hamid. Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
24

Elliott, Donald M. "Application of a genetic algorithm to optimize quality assurance in software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA273193.

Full text of the source
Abstract:
Thesis (M.S. in Information Technology Management) Naval Postgraduate School, September 1993.
Thesis advisor(s): Ramesh, B. ; Abdel-Hamid, Tarek K. "September 1993." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
25

Gdura, Youssef Omran. "C++ software for computing and visualizing 2-D manifolds using Henderson's algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ64078.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
26

Shi, Shengxian. "Development of a bootstrap filter PTV algorithm and a smart PIV software." Thesis, University of Liverpool, 2009. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.533939.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

Zarate, Orozco Ismael. "Software and Hardware-In-The-Loop Modeling of an Audio Watermarking Algorithm." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc33221/.

Full text of the source
Abstract:
Due to the accelerated growth in digital music distribution, it becomes easy to modify, intercept, and distribute material illegally. To overcome the urgent need for copyright protection against piracy, several audio watermarking schemes have been proposed and implemented. These digital audio watermarking schemes have the purpose of embedding inaudible information within the host file to cover copyright and authentication issues. This thesis proposes an audio watermarking model using MATLAB® and Simulink® software for 1K and 2K fast Fourier transform (FFT) lengths. The watermark insertion process is performed in the frequency domain to guarantee the imperceptibility of the watermark to the human auditory system. Additionally, the proposed audio watermarking model was implemented in a Cyclone® II FPGA device from Altera® using the Altera® DSP Builder tool and MATLAB/Simulink® software. To evaluate the performance of the proposed audio watermarking scheme, effectiveness and fidelity performance tests were conducted for the proposed software and hardware-in-the-loop based audio watermarking model.
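The abstract's key point — embedding happens in the frequency domain so the change stays inaudible — can be sketched generically with NumPy. The bin range and strength below are assumed values for illustration, not the thesis's Simulink parameters:

```python
import numpy as np

FFT_LEN = 1024          # one of the two lengths studied (1K)
ALPHA = 0.05            # embedding strength: small, to stay inaudible
BINS = range(100, 164)  # assumed mid-frequency bins for 64 watermark bits

def embed(frame, bits):
    """Embed one bit per chosen bin by nudging the magnitude up or down."""
    spectrum = np.fft.rfft(frame, n=FFT_LEN)
    for bin_idx, bit in zip(BINS, bits):
        factor = 1 + ALPHA if bit else 1 - ALPHA
        spectrum[bin_idx] *= factor
    return np.fft.irfft(spectrum, n=FFT_LEN)

rng = np.random.default_rng(0)
audio_frame = rng.standard_normal(FFT_LEN)      # stand-in for real audio
watermark = rng.integers(0, 2, size=64)
marked = embed(audio_frame, watermark)

# The perturbation is tiny relative to the signal itself.
print("max sample change:", float(np.max(np.abs(marked - audio_frame))))
```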
APA, Harvard, Vancouver, ISO, and other styles
28

Czerny, Maximilian. "Automated software testing : Evaluation of Angluin's L* algorithm and applications in practice." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146018.

Full text of the source
Abstract:
Learning-based testing can ensure software quality without a formal documentation or maintained specification of the system under test. An automaton learning algorithm is therefore the key component for automatically generating efficient test cases for black-box systems. In the present report, Angluin's automaton learning algorithm L* and its extension L* Mealy are examined and evaluated in the application area of learning-based software testing. The purpose of this work is to estimate the applicability of the L* algorithm for learning real-world software and to describe the constraints of this approach. To achieve this, a framework to test the L* implementation on various deterministic finite automata (DFAs) was written, and an adaptation called L* Mealy was integrated into the learning-based testing platform LBTest. To follow the learning process, the queries that the learner needs to pose to the system under learning are tracked and measured. Both algorithms show polynomial growth in these queries in case studies from real-world business software and on randomly generated DFAs. The test data indicate a better learning performance in practice than the theoretical predictions imply. In contrast to other existing learning algorithms, L* Mealy performs slowly in LBTest due to a polynomially growing interval between the types of queries that the learner needs to derive a hypothesis.
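L* interacts with the system under learning through two kinds of queries, membership queries ("does the system accept this word?") and equivalence queries ("is this hypothesis correct?"), and it is exactly these queries that the report tracks. A toy sketch of the two interfaces against a hidden automaton (not LBTest's implementation):

```python
from itertools import product

# Hidden system under learning: a DFA over {0, 1} accepting strings
# with an even number of 1s.
def hidden_accepts(word):
    state = 0
    for symbol in word:
        if symbol == "1":
            state ^= 1
    return state == 0

membership_queries = 0

def membership_query(word):
    """One call to the black box: does it accept this word?"""
    global membership_queries
    membership_queries += 1
    return hidden_accepts(word)

def equivalence_query(hypothesis_accepts, max_len=8):
    """Approximate equivalence check: search for a counterexample among
    all words up to max_len (practical learners sample instead)."""
    for n in range(max_len + 1):
        for word in map("".join, product("01", repeat=n)):
            if hypothesis_accepts(word) != membership_query(word):
                return word
    return None

# A deliberately wrong hypothesis ("accept everything") is refuted quickly.
print("counterexample:", equivalence_query(lambda w: True))
print("membership queries used:", membership_queries)
```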
APA, Harvard, Vancouver, ISO, and other styles
29

Sánchez-Rey, Roberto. "Algorithm and related software to detect human bodies in an indoor environment." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3190.

Full text of the source
Abstract:
During the last decade, human body detection and tracking has been a very extensive research field within computer vision. There are many potential applications of people tracking, such as security monitoring, anthropomorphic analysis, or biometrics. In this thesis we present an algorithm and related software to detect human bodies in an indoor environment. It is part of a wider project which aims to estimate human height. The proposed algorithm detects people in real time. The algorithm is developed using the free OpenCV library in the C++ programming language. As this algorithm is the first part of a wider system, our software produces two outputs. The principal one is the coordinates of the detected object; with these coordinates, the aforementioned measuring system can calculate the height by itself. The other output is the video sequence with the detected person bounded by a rectangle, which provides visual feedback to the user. This software is able to communicate with Matlab Engine, which is important since the subsequent height estimation system works in Matlab®.
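The abstract does not name the detection technique used; purely as an illustration of producing the two described outputs (object coordinates plus a frame with a bounding rectangle), here is a sketch using OpenCV's stock HOG people detector, in Python rather than the thesis's C++:

```python
import cv2

# Stock pedestrian detector shipped with OpenCV (HOG features + linear SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

capture = cv2.VideoCapture(0)          # assumed camera index
while True:
    ok, frame = capture.read()
    if not ok:
        break
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        # Output 1: coordinates for the downstream height-estimation stage.
        print("person at", (x, y, w, h))
        # Output 2: visual feedback for the user.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```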
APA, Harvard, Vancouver, ISO, and other styles
30

Roio, Denis. "Algorithmic sovereignty." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11101.

Full text of the source
Abstract:
This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, to highlight the governance implications of the design choices made in them. The research provides answers, and documents methodologies, to address the urgent need for more awareness of decisions made by algorithms about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art, and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are, and what agency is left for the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted. This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, led to the realisation of a new, worldwide-renowned operating system. Another project, "sup", consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. Another project, D-CENT, consisted of a three-year path of cutting-edge research funded by the EU Commission on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organizations. My original contribution to knowledge lies in the function that the research underpinning these projects has in enabling a better understanding of the sociopolitical aspects connected to the design and management of algorithms. It suggests that we can improve the design and regulation of future public, private, and common spaces, which are increasingly governed by algorithms, by understanding not only the economic and legal implications but also the connections between design choices and the sociopolitical context of their development and execution.
APA, Harvard, Vancouver, ISO, and other styles
31

Silva, Ricardo João Besteiro e. "A behavioral analysis tool for models of software systems." Master's thesis, Faculdade de Ciências e Tecnologia, 2010. http://hdl.handle.net/10362/4023.

Full text of the source
Abstract:
Work presented in the context of the Master's programme in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
Process calculi are simple languages which permit modeling of concurrent systems so that they can be verified for correctness. We can analyze concurrent systems based on process calculi either by comparing a representation of the actual implementation with a simpler specification for equivalence, or by verifying whether desired properties described in an adequate logic hold. Strong bisimulation equivalence is one of many equivalence relations defined on process calculi to aid in the verification of concurrent software. This equivalence relation treats processes which exhibit the same behavior, i.e. perform the same transitions, as equivalent, regardless of internal implementation details. Logics to reason about processes range over those which describe temporal properties (how properties evolve during the course of a process' life), behavioral properties (which actions a process is capable of performing), and spatial properties (what components compose a process and how they are connected). Model checking consists of verifying whether a model, in our case a process, satisfies a given property. Model checking techniques are quite popular in conjunction with process calculi to aid in the verification of the correctness of concurrent systems. In this thesis we address the problem of checking bisimilarity between processes using characteristic formulae, which are formulae that fully describe a process' behavior. We implement facilities for bisimilarity verification in the Spatial Logic Model Checker tool. As a result of adding these facilities we also extend the SLMC tool with an extra modality in the logic it uses to reason about processes. We have also added the possibility of defining mutually recursive properties in the tool, and enhanced the model checking algorithm with a cache to prevent redundant, time-consuming checks from being performed.
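Strong bisimilarity on finite labelled transition systems can be decided by partition refinement: start with all states in one block and repeatedly split blocks whose states reach different blocks under the same action. A compact illustrative sketch (the SLMC tool itself works via characteristic formulae and spatial logic, not this direct algorithm):

```python
def bisimulation_blocks(states, transitions):
    """Partition states into strong-bisimilarity classes.

    transitions: set of (source, action, target) triples.
    """
    blocks = [set(states)]
    while True:
        block_of = {s: i for i, b in enumerate(blocks) for s in b}
        def signature(s):
            # Which blocks can s reach, and by which actions?
            return frozenset((a, block_of[t])
                             for (src, a, t) in transitions if src == s)
        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) == len(blocks):   # no block was split: stable
            return new_blocks
        blocks = new_blocks

# Two processes: p loops on 'a'; q alternates between two states on 'a'.
# Both can always perform 'a', so all three states are strongly bisimilar.
ts = {("p", "a", "p"), ("q1", "a", "q2"), ("q2", "a", "q1")}
print(bisimulation_blocks(["p", "q1", "q2"], ts))
```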
APA, Harvard, Vancouver, ISO, and other styles
32

Sen, Caner. "Tsunami Source Inversion Using Genetic Algorithm." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612939/index.pdf.

Full text of the source
Abstract:
The tsunami forecasting methodology developed by the United States National Oceanic and Atmospheric Administration's Center for Tsunami Research is based on the concept of a pre-computed tsunami database which includes tsunami model results from Mw 7.5 earthquakes, called tsunami source functions. Tsunami source functions are placed along the subduction zones of the oceans of the world in several rows. The linearity of tsunami propagation in an open ocean allows scaling and/or combination of the pre-computed tsunami source functions. An offshore scenario is obtained by inverting scaled and/or combined tsunami source functions against Deep-ocean Assessment and Reporting of Tsunami (DART) buoy measurements. A graphical user interface called Genetic Algorithm for INversion (GAIN) was developed in MATLAB using the general optimization toolbox to perform the inversion. The 15 November 2006 Kuril and 27 February 2010 Chile tsunamis are chosen as case studies. One or several DART buoy measurements are used to test different error minimization functions, with or without earthquake magnitude as a constraint. The inversion results are discussed by comparing the forecasting model results with the tide gage measurements.
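Because open-ocean tsunami propagation is linear, a DART record can be modelled as a weighted sum of pre-computed unit source functions and the weights recovered by inversion; GAIN searches for them with a genetic algorithm, but the underlying formulation can be sketched as a least-squares fit on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pre-computed tsunami source functions: modelled sea-level time series
# at a DART buoy for each Mw 7.5 unit source (rows: time, cols: source).
n_samples, n_sources = 200, 3
unit_sources = rng.standard_normal((n_samples, n_sources))

# A synthetic "measured" record: a hidden combination of the sources.
true_weights = np.array([1.8, 0.0, 0.6])
measurement = unit_sources @ true_weights + 0.01 * rng.standard_normal(n_samples)

# Invert: find the scaling of each source that best explains the record.
weights, *_ = np.linalg.lstsq(unit_sources, measurement, rcond=None)
print("recovered weights:", np.round(weights, 3))
```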
APA, Harvard, Vancouver, ISO, and other styles
33

Buys, Stefan. "Genetic algorithm for Artificial Neural Network training for the purpose of Automated Part Recognition." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/d1008356.

Full text of the source
Abstract:
Object or part recognition is of major interest in industrial environments. Current methods implement expensive camera-based solutions. There is a need for a cost-effective alternative to be developed. One of the proposed methods is to overcome the hardware (camera) problem by implementing a software solution. Artificial Neural Networks (ANNs) are to be used as the underlying intelligent software, as they have a high tolerance for noise and the ability to generalize. A colleague has implemented a basic ANN-based system comprising an ANN and three cost-effective laser distance sensors. However, the system is only able to identify 3 different parts and needed hard-coded changes made by trial and error. This is not practical for industrial use in a production environment where there is a large quantity of different parts to be identified that change relatively regularly. The ability to easily train more parts is required. Difficulties associated with traditional mathematically guided training methods are discussed, which leads to the development of a Genetic Algorithm (GA) based evolutionary training method that overcomes these difficulties and makes accurate part recognition possible. An ANN hybridised with GA training is introduced, together with a general solution encoding scheme used to encode the required ANN connection weights. Experimental tests were performed in order to determine the ideal GA performance and control parameters, as studies have indicated that different GA control parameters can lead to large differences in training accuracy. After performing these tests, the training accuracy was analyzed by investigating GA performance as well as hardware-based part recognition performance. This analysis identified the ideal GA control parameters when training an ANN for the purpose of part recognition and showed that the ANN generally trained well and could generalize well on data not presented to it during training.
APA, Harvard, Vancouver, ISO, and other styles
34

Mahdavi, Kiarash. "A clustering genetic algorithm for software modularisation with a multiple hill climbing approach." Thesis, Brunel University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425197.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Makai, Matthew Charles. "Incorporating Design Knowledge into Genetic Algorithm-based White-Box Software Test Case Generators." Thesis, Virginia Tech, 2008. http://hdl.handle.net/10919/32029.

Full text of the source
Abstract:
This thesis shows how to incorporate Unified Modeling Language sequence diagrams into genetic algorithm-based automated test case generators to increase the code coverage of their resulting test cases. Automated generation of test data through evolutionary testing was proven feasible in prior research studies. In those previous investigations, the metrics used for determining the test generation method effectiveness were the percentages of testing statement and branch code coverage achieved. However, the code coverage realized in those preceding studies often converged at suboptimal percentages due to a lack of guidance in conditional statements. This study compares the coverage percentages of 16 different Java programs when test cases are automatically generated with and without incorporating associated UML sequence diagrams. It introduces a tool known as the Evolutionary Test Case Generator, or ETCG, an automatic test case generator based on genetic algorithms that provides the ability to incorporate sequence diagrams to direct the heuristic search process and facilitate evolutionary testing. When the generator uses sequence diagrams, the resulting test cases showed an average improvement of 21% in branch coverage and 8% in statement coverage over test cases produced without using sequence diagrams.
Master of Science
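The fitness signal in this kind of evolutionary testing is coverage: each candidate input is run against an instrumented unit, and inputs exercising more branches score higher. A toy sketch of that loop (not ETCG itself, which additionally steers the search with sequence diagrams):

```python
import random

covered = set()

def unit_under_test(x, y):
    """Instrumented toy unit: records the IDs of the branches it executes."""
    if x > 100:
        covered.add("b1")
        if y % 2 == 0:
            covered.add("b2")
        else:
            covered.add("b3")
    else:
        covered.add("b4")

def fitness(suite):
    """Fraction of the 4 branches exercised by a candidate test suite."""
    covered.clear()
    for x, y in suite:
        unit_under_test(x, y)
    return len(covered) / 4.0

population = [[(random.randint(0, 200), random.randint(0, 9))
               for _ in range(3)] for _ in range(20)]
for _ in range(30):                                # simple generational GA
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # crossover
        if random.random() < 0.3:                             # mutation
            child[random.randrange(len(child))] = (random.randint(0, 200),
                                                   random.randint(0, 9))
        children.append(child)
    population = parents + children

print("best branch coverage:", fitness(max(population, key=fitness)))
```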
APA, Harvard, Vancouver, ISO, and other styles
36

Pradhan, Pushkar P. "Efficient group membership algorithm for ad hoc networks." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000593.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
37

Mahmood, Qazafi. "LC - an effective classification based association rule mining algorithm." Thesis, University of Huddersfield, 2014. http://eprints.hud.ac.uk/id/eprint/24274/.

Full text of the source
Abstract:
Classification using association rules is a research field in data mining that primarily uses association rule discovery techniques in classification benchmarks. It has been confirmed by many research studies in the literature that classification using association tends to generate more predictive classification systems than traditional classification data mining techniques such as probabilistic, statistical, and decision tree methods. In this thesis, we introduce a novel data mining algorithm based on classification using association called "Looking at the Class" (LC), which can be used for mining a range of classification data sets. Unlike known algorithms in the classification-using-association approach, such as the Classification Based on Association rules (CBA) system and the Classification based on Predictive Association Rules (CPAR) system, which merge disjoint items in the rule learning step without anticipating class label similarity, the proposed algorithm merges only items with identical class labels. This avoids many unnecessary item combinations during the rule learning step and consequently results in large savings in computational time and memory. Furthermore, the LC algorithm uses a novel prediction procedure that employs multiple rules to make the prediction decision instead of a single rule. The proposed algorithm has been evaluated thoroughly on real-world security data sets collected using an automated tool developed at Huddersfield University. The security application considered in this thesis is about categorizing websites, based on their features, as legitimate or fake, which is a typical binary classification problem. Experiments on a number of UCI data sets have also been conducted, and the measures used for evaluation are classification accuracy, memory usage, and others. The results show that the LC algorithm outperformed traditional classification algorithms such as C4.5, PART, and Naïve Bayes, as well as known classification-based association algorithms like CBA, with respect to classification accuracy, memory usage, and execution time on most data sets considered.
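The distinguishing step in LC is that rule items are merged only when their class labels agree, pruning the candidate space early. A minimal sketch of that idea over (itemset, class) rule seeds (hypothetical data and structures, not the LC implementation):

```python
from itertools import combinations

# Training rows: (set of website features, class label).
rows = [({"http", "long_url"}, "fake"),
        ({"http", "padlock"}, "legitimate"),
        ({"long_url", "ip_host"}, "fake"),
        ({"http", "long_url", "ip_host"}, "fake")]

def support(itemset, label):
    return sum(1 for items, y in rows if itemset <= items and y == label)

# Frequent single items, each paired with the class label it supports.
seeds = {(frozenset([i]), y) for items, y in rows for i in items
         if support(frozenset([i]), y) >= 2}

# LC-style merging: combine seeds ONLY if their class labels are identical,
# skipping the cross-class combinations that CBA-like learners generate.
candidates = {(a | b, ya) for (a, ya), (b, yb) in combinations(seeds, 2)
              if ya == yb}

for itemset, label in sorted(candidates, key=lambda c: -support(*c)):
    print(set(itemset), "->", label, "support:", support(itemset, label))
```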
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Xiaoning. "A Modular Model Checking Algorithm for Cyclic Feature Compositions." Digital WPI, 2005. https://digitalcommons.wpi.edu/etd-theses/64.

Full text of the source
Abstract:
Feature-oriented software architecture is a way of organizing code around the features that the program provides instead of around the program's objects and components. In the development of a feature-oriented software system, the developers, supplied with a set of features, select and organize features to construct the desired system. This approach, by better aligning the implementation of a system with the external view of users, is believed to have many potential benefits, such as feature reuse and easy maintenance. However, there are challenges in the formal verification of feature-oriented systems: first, the product may grow very large and complicated, so it is intractable to apply traditional formal verification techniques such as model checking to such systems directly; second, since the number of feature-oriented products the developers can build is exponential in the number of features available, there may be redundant verification work if verification is done on each product separately. For example, developers may have shared specifications across different products built from the same set of features, and hence verifying these features many times is really unnecessary. All of this drives the need for modular verification of feature-oriented architectures. Assume-guarantee reasoning, as a modular verification technique, is believed to be an effective solution. In this thesis, I compare two verification methods of this category on feature-oriented systems and analyze the results. Based on their pros and cons, I propose a new modular model checking method to verify sequential feature compositions with cyclic connections between the features. This method first builds an abstract finite state machine, which summarizes the information related to checking the property/specification from the concrete feature design, and then applies a revised CTL model checker to decide whether the system design preserves the property or not. Proofs of the soundness of my method are also given in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Xiaoning. "A modular model checking algorithm for cyclic feature compositions." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-201156/.

Full text of the source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: modular verification; feature-oriented software development; model checking; assume-guarantee reasoning. Includes bibliographical references (p. 72-73).
APA, Harvard, Vancouver, ISO, and other styles
40

Peschl, Gabriel. "Comparing restricted propagation graphs for the similarity flooding algorithm." Repositório Institucional da UFPR, 2015. http://hdl.handle.net/1884/40436.

Full text of the source
Abstract:
Advisor: Prof. Dr. Marcos Didonet Del Fabro
Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defended: Curitiba, 26/05/2015
Includes references: f. 51-54
In Model-Driven Software Engineering (MDSE), different approaches can be used to establish links between elements of different models for distinct purposes, such as serving as specifications for model transformations. Once the links have been established, it is common to set up a similarity value to indicate equivalence (or the lack of it) between the elements. Similarity Flooding (SF) is one of the best-known algorithms for enhancing the similarity of structurally similar elements. The algorithm is generic and has proven to be efficient. However, it depends on a graph-based structure and a less generic encoding. We created nine generic methods to propagate the similarities between links of model elements. These elements comprise classes, attributes, references, instances, and the type of an element, e.g., Integer, String, or Boolean. In order to verify the viability of these methods, two case studies are discussed. In the first case study, we execute our methods on the metamodels and models of Mantis and Bugzilla. In the second, the metamodels and models of AccountOwner and Customer are used. Finally, a comparative study of the metamodel-based encoding is presented, for the purpose of verifying whether a less generic implementation, involving a smaller number of model elements and based on the metamodel and model structures, might be a viable implementation and adaptation of the SF algorithm. We compare these methods with an implementation comprising all the propagation structures (non-restricted propagation), which is more similar (though not equivalent) to the original SF implementation. According to the results, using the restricted propagation graphs of SF, the similarity values between the links increased in relation to the non-restricted algorithm. This is because reducing the number of links improves the propagation of values between the links of elements.
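The fixpoint iteration at the heart of Similarity Flooding is short: each pair's similarity is repeatedly incremented by the similarity flowing in from neighbouring pairs and then renormalised. A generic sketch on a toy pairwise-connectivity graph (not any of the dissertation's nine restricted variants):

```python
def similarity_flooding(pairs, neighbours, iterations=10):
    """pairs: initial similarity per (left, right) element pair.
    neighbours: for each pair, the pairs it propagates similarity to."""
    sigma = dict(pairs)
    for _ in range(iterations):
        nxt = {}
        for p, value in sigma.items():
            # Each pair keeps its value and floods it to connected pairs,
            # split equally among them (a simple propagation coefficient).
            nxt[p] = nxt.get(p, 0.0) + value
            out = neighbours.get(p, [])
            for q in out:
                nxt[q] = nxt.get(q, 0.0) + value / len(out)
        top = max(nxt.values())            # normalise into [0, 1]
        sigma = {p: v / top for p, v in nxt.items()}
    return sigma

# Toy model-matching instance: classes A/B vs A'/B', attribute x vs x'.
pairs = {("A", "A'"): 1.0, ("B", "B'"): 0.5, ("x", "x'"): 0.5}
# If A matches A', its attribute and referenced class reinforce that match.
neighbours = {("A", "A'"): [("x", "x'"), ("B", "B'")],
              ("x", "x'"): [("A", "A'")],
              ("B", "B'"): [("A", "A'")]}
for pair, value in similarity_flooding(pairs, neighbours).items():
    print(pair, round(value, 3))
```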
APA, Harvard, Vancouver, ISO, and other styles
41

Ataser, Zafer. "Variable Shaped Detector: A Negative Selection Algorithm." PhD thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615629/index.pdf.

Full text of the source
Abstract:
Artificial Immune Systems (AIS) are a class of computational intelligence methods developed based on the principles and processes of the biological immune system. AIS methods are categorized mainly into four types according to the immune-system principles and processes they are inspired by: clonal selection, negative selection, immune network, and danger theory. The negative selection algorithm (NSA) is one of the major AIS models. NSA is a supervised learning algorithm based on imitating the maturation process of T cells in the thymus. In this imitation, detectors are used to mimic the cells, and the process of T-cell maturation is simulated to generate detectors. NSA then classifies the specified data either as normal (self) data or as anomalous (non-self) data. In this classification task, NSA methods can make two kinds of classification errors: self data may be classified as anomalous, and non-self data may be classified as normal. In this thesis, a novel negative selection method, the variable shaped detector (V-shaped detector), is proposed to increase classification accuracy, in other words to decrease classification errors. In the V-shaped detector, new approaches are introduced to define self and to represent detectors. The V-shaped detector uses the combination of the Local Outlier Factor (LOF) and the kth nearest neighbor (k-NN) to determine a different radius for each self sample, making it possible to model the self space using the self samples and their radii. In addition, the cubic B-spline is proposed to generate a variable shaped detector; in detector representation, the application of cubic splines is meaningful when the edge points are used. Hence, an Edge Detection (ED) algorithm is developed to find the edge points of the given self samples. The V-shaped detector was tested using different data sets and compared with the well-known one-class classification method SVM and with the similar, popular negative selection method with variable-sized detectors, termed V-detector. The experiments show that the proposed method generates reasonable and comparable results.
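The per-sample radius idea — combine a k-NN distance with the Local Outlier Factor so that dense self regions get tight radii and sparse ones looser radii — can be sketched with scikit-learn. This simplification uses circular radii, not the thesis's cubic-B-spline detector shapes:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

rng = np.random.default_rng(0)
self_samples = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # "self" data

K = 5
nn = NearestNeighbors(n_neighbors=K + 1).fit(self_samples)
dists, _ = nn.kneighbors(self_samples)
knn_dist = dists[:, -1]                       # distance to the k-th neighbour

lof = LocalOutlierFactor(n_neighbors=K).fit(self_samples)
lof_score = -lof.negative_outlier_factor_     # ~1 in dense areas, >1 for outliers

# Per-sample radius: k-NN distance stretched by local outlierness.
radii = knn_dist * lof_score

def is_self(point):
    """Classify a point as self if it falls inside any sample's radius."""
    return bool(np.any(np.linalg.norm(self_samples - point, axis=1) <= radii))

print(is_self(np.array([0.1, -0.2])))   # near the self cloud -> True
print(is_self(np.array([8.0, 8.0])))    # far away (non-self)  -> False
```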
APA, Harvard, Vancouver, ISO, and other styles
42

Mao, Yida 1972. "A metrics based detection of reusable object-oriented software components using machine learning algorithm /." Thesis, McGill University, 1999. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21601.

Full text source
Abstract:
Since the emergence of object technology, organizations have accumulated a tremendous amount of object-oriented (OO) code. Instead of continuing to recreate components similar to existing artifacts, and considering the rising costs of development, many organizations would like to decrease software development costs and cycle time by reusing existing OO components. The difficulty in finding reusable components is that reuse is a complex and thus less quantifiable measure. In this research, we first proposed three reuse hypotheses about the impact of three internal characteristics (inheritance, coupling, and complexity) of OO software artifacts on reusability. Corresponding metrics suites were then selected and extracted. We used C4.5, a machine learning algorithm, to build predictive models from a learning data set obtained from a medium-sized software system developed in C++. Each predictive model was then verified according to its completeness, correctness and global accuracy. The verification results showed that the proposed hypotheses were correct. The uniqueness of this research is that we combined the state of the art of three different subjects (reuse detection and prediction, OO metrics and their extraction, and applied machine learning) to form a process for finding interesting properties of OO software components that affect reusability.
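A hedged sketch of the metrics-to-reusability prediction step. scikit-learn's decision tree (CART with an entropy criterion) stands in for C4.5 here, and the metric columns and labels are invented for illustration rather than taken from the thesis's data set.

```python
# Sketch: predicting component reusability from OO metrics with a
# decision tree (CART as a stand-in for C4.5; data is illustrative).
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Rows: [depth_of_inheritance, coupling_between_objects, cyclomatic_complexity]
X = [[1, 2, 4], [3, 9, 22], [2, 3, 6], [5, 14, 35], [1, 1, 3], [4, 11, 28]]
y = [1, 0, 1, 0, 1, 0]   # 1 = judged reusable, 0 = not

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
# The entropy criterion mirrors C4.5's information-gain splitting.
model = DecisionTreeClassifier(criterion="entropy").fit(X_tr, y_tr)
print(model.predict(X_te))   # predicted reuse labels for held-out components
```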
APA, Harvard, Vancouver, ISO, and other styles
43

Mao, Yida. "A metrics based detection of reusable object-oriented software components using machine learning algorithm." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0028/MQ50828.pdf.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
44

Kolb, William Edward 1960. "MICROCOMPUTER BASED AUTOMATIC TRUCK DISPATCHING - SYSTEM MODELING AND SIMULATION (MINING, SOFTWARE, ALGORITHM, OPEN-PIT)." Thesis, The University of Arizona, 1986. http://hdl.handle.net/10150/292092.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
45

Nilsson, Gustav, and William Takolander. "Smarta rekommendationer : Rekommendationer på webbsidor framtagna av maskininlärning." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39345.

Full text source
Abstract:
In today's society, machine learning is an increasingly popular method for solving problems faced by companies worldwide. Many companies have mountains of stored data that are not being utilised. This data can be used in numerous ways to make improvements within these companies; one of those ways is machine learning, which is used more and more to generate recommendations. This project's purpose is to provide a proof of concept of a machine learning model capable of giving recommendations based on historical data. The proof of concept will serve as guidelines for Centrala Studiestödsnämnden (CSN) on how to approach machine learning as an alternative to manual recommendations. This is achieved by determining which data to use, understanding the selected data, and picking an algorithm suited to that data. The algorithms are then used to create machine-learned models, which are tested in various ways to see which works best for the task at hand. Two models are created with different algorithms that both fit the purpose, and they are evaluated through practical and theoretical tests. The results show that the two models give similar predicted recommendations, with slight variation.
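The abstract does not name the two algorithms, so the following shows only one plausible shape for a recommender trained on historical data: a nearest-neighbour model over a user-item interaction matrix. The data layout and parameters are assumptions, not CSN's actual setup.

```python
# Hedged sketch: neighbourhood-based recommendations from historical data.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows = users, columns = items; 1 means the user acted on the item.
history = np.array([[1, 0, 1, 0],
                    [1, 1, 0, 0],
                    [0, 0, 1, 1],
                    [1, 0, 1, 1]])

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(history)

def recommend(user_row):
    # Recommend items that similar users acted on but this user has not.
    _, idx = nn.kneighbors(user_row.reshape(1, -1))
    pooled = history[idx[0]].sum(axis=0)
    return np.where((pooled > 0) & (user_row == 0))[0]

print(recommend(history[0]))   # candidate item indices for user 0
```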
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Xing. "High-performance algorithms and software for large-scale molecular simulation." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53487.

Full text source
Abstract:
Molecular simulation is an indispensable tool in many disciplines, including physics, biology, chemical engineering, materials science, and drug design. Performing large-scale molecular simulation is of great interest to biologists and chemists, because many important biological and pharmaceutical phenomena can only be observed in very large molecular systems and after sufficiently long dynamics. On the other hand, molecular simulation methods usually have very steep computational costs, which limits current molecular simulation studies to relatively small systems. The gap between the scale of molecular simulation that existing techniques can handle and the scale of interest has become a major barrier to applying molecular simulation to real-world problems. Studying large-scale molecular systems via simulation requires developing highly parallel simulation algorithms and constantly adapting them to rapidly changing high-performance computing architectures. However, many existing algorithms and codes for molecular simulation date from more than a decade ago, when they were designed for sequential computers or early parallel architectures; they may not scale efficiently and do not fully exploit features of today's hardware. Given the rapid evolution in computer architectures, the time has come to revisit these molecular simulation algorithms and codes. In this thesis, we demonstrate our approach to addressing the computational challenges of large-scale molecular simulation by presenting both high-performance algorithms and software for two important molecular simulation applications, Hartree-Fock (HF) calculations and hydrodynamics simulations, on highly parallel computer architectures. The algorithms and software presented in this thesis have been used by biologists and chemists to study problems that could not be solved using existing codes. The parallel techniques and methods developed in this work can also be applied to other molecular simulation applications.
APA, Harvard, Vancouver, ISO, and other styles
47

Sabih, Ann Faik. "Cognitive smart agents for optimising OpenFlow rules in software defined networks." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15743.

Full text source
Abstract:
This research provides a robust solution based on artificial intelligence (AI) techniques to overcome challenges in Software Defined Networks (SDNs) that can jeopardise the overall performance of the network. The proposed approach, presented in the form of an intelligent agent appended to the SDN, comprises a new hybrid intelligent mechanism that optimises SDN performance using heuristic optimisation methods under an Artificial Neural Network (ANN) paradigm. Evolutionary optimisation techniques, including Particle Swarm Optimisation (PSO) and Genetic Algorithms (GAs), are deployed to find the set of inputs that gives the maximum performance of an SDN-based network. The ANN model is trained and applied as a predictor of SDN behaviour according to effective traffic parameters. The parameters used in this study include round-trip time and throughput, obtained from the flow-table rules of each switch. A POX controller and OpenFlow switches, which characterise the behaviour of an SDN, have been modelled with three different topologies. Generalisation of the prediction model has been tested with new raw data unseen in the training stage. The simulation results show reasonably good network performance, with a Mean Square Error (MSE) of less than 10⁻⁶. After obtaining the predictive ANN model, PSO and GA optimisers were applied to achieve the best performance of the SDN-based network. The PSO approach combined with the predicted SDN model proved comparatively better than the GA approach in terms of performance indices and computational efficiency. Overall, this research demonstrates that building an intelligent agent enhances the overall performance of the SDN. Three different SDN topologies have been implemented to study the impact of the proposed approach, with the findings demonstrating a reduction in the packet drop ratio (PDR) of 28-31%. Moreover, the packets sent to the SDN controller were reduced by 35-36%, depending on the generated traffic. The developed approach minimised round-trip time (RTT) by 23% and enhanced throughput by 10%. Finally, in the event that the SDN controller fails, the optimised intelligent agent can immediately take over control of the entire network.
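A compact sketch of the surrogate-then-optimise pattern described above: train an ANN on sampled traffic parameters, then run PSO over the trained predictor to find the best-performing inputs. The toy objective, network size and PSO constants are assumptions, not the thesis's settings.

```python
# Hedged sketch: ANN surrogate of network behaviour + PSO over it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 2))                      # normalised traffic parameters
y = -((X[:, 0] - 0.3) ** 2 + (X[:, 1] - 0.7) ** 2)   # stand-in performance score

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0).fit(X, y)

# Minimal PSO maximising the surrogate's predicted performance.
n, dim = 20, 2
pos = rng.uniform(0, 1, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), surrogate.predict(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(50):
    r1, r2 = rng.uniform(size=(2, n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = surrogate.predict(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(gbest)   # input setting with the best predicted performance
```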
APA, Harvard, Vancouver, ISO, and other styles
48

Sen, Tayfun. "Parallel Closet+ Algorithm For Finding Frequent Closed Itemsets." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610742/index.pdf.

Full text source
Abstract:
Data mining is proving to be a very important field as the available data increases exponentially, thanks first to computerization and now to the spread of the internet. At the same time, cluster computing systems built from commodity hardware are becoming widespread, along with multicore processor architectures. This high computing power is combined with data mining to process huge amounts of data and extract information and knowledge. Frequent itemset mining is a special subtopic of data mining because it is an integral part of many types of data mining tasks. This task is often a prerequisite for many other data mining algorithms, most notably algorithms in association rule mining; for this reason, it is studied heavily in the literature. In this thesis, a parallel implementation of CLOSET+, a frequent closed itemset mining algorithm, is presented. The CLOSET+ algorithm has been modified to run on multiple processors simultaneously in order to obtain results faster. Open MPI and the Boost libraries were used for communication between processes, and the program was tested on different inputs and parameters. Experimental results show that the algorithm exhibits high speedup and efficiency for dense data when the support value is above a determined threshold. The proposed parallel algorithm could prove useful for application areas where a fast response is needed for a low to medium number of frequent closed itemsets. A particular application area is the Web, where online applications have similar requirements.
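The parallelisation idea, sketched with mpi4py rather than the thesis's C++/Open MPI/Boost stack: partition the candidate prefixes of the search space across ranks, mine each partition locally, and gather the closed itemsets at the root. mine_prefix() is a hypothetical placeholder for the local CLOSET+-style search.

```python
# Hedged sketch of the partitioning scheme only, not of CLOSET+ itself.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

items = ["a", "b", "c", "d", "e", "f"]   # global 1-item prefixes
my_prefixes = items[rank::size]          # round-robin prefix assignment

def mine_prefix(prefix):
    # Placeholder for the local closed-itemset search rooted at
    # `prefix` over this rank's copy of the transaction database.
    return [(prefix, 1)]

local = [fi for p in my_prefixes for fi in mine_prefix(p)]
all_closed = comm.gather(local, root=0)

if rank == 0:
    merged = [fi for part in all_closed for fi in part]
    print(len(merged), "closed itemsets found")
```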
APA, Harvard, Vancouver, ISO, and other styles
49

Wright, Hamish Michael. "A Homogeneous Hierarchical Scripted Vector Classification Network with Optimisation by Genetic Algorithm." Thesis, University of Canterbury. Electrical and Computer Engineering, 2007. http://hdl.handle.net/10092/1191.

Full text source
Abstract:
A simulated learning hierarchical architecture for vector classification is presented. The hierarchy used homogeneous scripted classifiers, which maintained similarity tables, and self-organising maps for the input. The scripted classifiers produced output and guided learning with permutable script instruction tables. A large space of parametrised script instructions was created, from which many different combinations could be implemented. The parameter space for the script instruction tables was tuned using a genetic algorithm, with the goal of optimising the network's ability to predict class labels for bit-pattern inputs. The classification system, known as Dura, was presented with various visual classification problems, such as detecting overlapping lines, locating objects, and counting polygons. The network was trained with a random subset of the input space and then tested over a uniformly sampled subset. The results showed that Dura could successfully classify these and other problems. The optimal scripts and parameters were analysed, allowing inferences about which scripted operations were important and what roles they played in the learning classification system. Further investigations were undertaken to determine Dura's performance in the presence of noise, as well as the robustness of the solutions when faced with highly stochastic training sequences. It was also shown that robustness and noise tolerance in solutions could be improved through certain adjustments to the algorithm. These adjustments led to different solutions, which could be compared to determine which changes were responsible for the increased robustness or noise immunity. The behaviour of the genetic algorithm tuning the network was also analysed, leading to the development of a super-solutions cache, as well as improvements in convergence, the fitness function, and simulation duration. The entire network was simulated using a program written in C++, with the FLTK libraries providing the graphical user interface.
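A minimal sketch of the outer GA loop that tunes a flat parameter vector, as the thesis does for its script instruction tables. The genome length, fitness placeholder and GA constants are illustrative assumptions rather than Dura's actual configuration.

```python
# Hedged sketch: genetic algorithm tuning a parameter vector.
import random

GENOME_LEN, POP, GENS = 12, 30, 40

def fitness(genome):
    # Placeholder: would build the classifier from `genome` and return
    # its label-prediction accuracy on a validation set.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(g, rate=0.1):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1)))
            if random.random() < rate else x
            for x in g]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 5]                 # keep the best fifth
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

pop.sort(key=fitness, reverse=True)
print(fitness(pop[0]))                      # score of the best tuned vector
```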
APA, Harvard, Vancouver, ISO, and other styles
50

Tsuruta, Mauricio. "Um estudo sobre a relação entre qualidade e arquitetura de software." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/12/12139/tde-29032011-201659/.

Full text source
Abstract:
Many sectors of the economy depend heavily on computing systems: telecommunications, finance, infrastructure, industry, and others. The quality of the software in these systems is therefore an important factor in the good performance of these sectors. Software architecture is considered a determining factor for software quality. This work studies the way software architecture determines the quality of the software produced, and the possibilities of obtaining the desired quality attributes by specifying an appropriate software architecture. The research method is based on a literature review, and four approaches to the software architecture design process are considered: classic, object-oriented, attribute-oriented and search-oriented. The search-oriented approach is a relatively recent field of study, and its advances are reported in the knowledge area called Search Based Software Engineering, which uses metaheuristic techniques to find good solutions to problems in software engineering. One of the most frequently used metaheuristic techniques, the genetic algorithm, is used in an application whose design process follows the search-oriented approach.
APA, Harvard, Vancouver, ISO, and other styles