
Doctoral dissertations on the topic "SOFTWARE PREDICTION MODELS"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


Consult the 50 best academic doctoral dissertations on the topic "SOFTWARE PREDICTION MODELS".

Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf file and read its abstract online, whenever the relevant details are available in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

Bowes, David Hutchinson. "Factors affecting the performance of trainable models for software defect prediction". Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/10978.

Full text of the source
Abstract:
Context. Reports suggest that defects in code cost the US in excess of $50 billion per year to put right. Defect prediction is an important part of software engineering. It allows developers to prioritise the code that needs to be inspected when trying to reduce the number of defects in code. A small change in the number of defects found will have a significant impact on the cost of producing software. Aims. The aim of this dissertation is to investigate the factors which affect the performance of defect prediction models. Identifying the causes of variation in the way that variables are computed should help to improve the precision of defect prediction models and hence improve the cost effectiveness of defect prediction. Methods. This dissertation is by published work. The first three papers examine variation in the independent variables (code metrics) and the dependent variable (number/location of defects). The fourth and fifth papers investigate the effect that different learners and datasets have on the predictive performance of defect prediction models. The final paper investigates the reported use of different machine learning approaches in studies published between 2000 and 2010. Results. The first and second papers show that independent variables are sensitive to the measurement protocol used, which suggests that the way data is collected affects the performance of defect prediction. The third paper shows that dependent variable data may be untrustworthy, as there is no reliable method for labelling a unit of code as defective or not. The fourth and fifth papers show that the dataset and learner used when producing defect prediction models have an effect on the performance of the models. The final paper shows that the approaches used by researchers to build defect prediction models are variable, with good practices being ignored in many papers. Conclusions. The measurement protocols for independent and dependent variables used for defect prediction need to be clearly described so that results can be compared like with like. It is possible that the predictive results of one research group have a higher performance value than another research group because of the way that they calculated the metrics rather than the method of building the model used to predict the defect prone modules. The machine learning approaches used by researchers need to be clearly reported in order to be able to improve the quality of defect prediction studies and allow a larger corpus of reliable results to be gathered.
2

Askari, Mina. "Information Theoretic Evaluation of Change Prediction Models for Large-Scale Software". Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1139.

Full text of the source
Abstract:
During software development and maintenance, as a software system evolves, changes are made and bugs are fixed in various files. In large-scale systems, file histories are stored in software repositories, such as CVS, which record modifications. By studying software repositories, we can learn about open source software development processes. Knowing in advance where these changes will happen gives managers and developers the power to concentrate on those files. Due to the unpredictability of the software development process, proposing an accurate change prediction model is hard. It is even harder to compare different models when the actual model of changes is not available.

In this thesis, we first analyze the information generated during the development process, which can be obtained through mining the software repositories. We observe that the change data follows a Zipf distribution and exhibits self-similarity. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. One purpose of creating these models is to rank the files of the software that are most susceptible to having faults.

The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events, i.e. changes or bugs, that occur in each file and normalizes the counts to compute a probability distribution. The second model is Reflexive Exponential Decay (RED), in which we postulate that the predictive rate of modification in a file is incremented by any modification to that file and decays exponentially. The result of a new bug occurring in that file is a new exponential effect added to the first one. The third model is called RED Co-Changes (REDCC). With each modification to a given file, the REDCC model not only increments its predictive rate, but also increments the rate for other files that are related to the given file through previous co-changes.
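To make the counting and decay ideas concrete, here is a minimal sketch in Python. It is not the author's implementation: the change log, file names, time unit and half-life are hypothetical, and only the MLE normalisation and a RED-style exponentially decaying rate are illustrated.

```python
import math
from collections import Counter, defaultdict

def mle_distribution(events):
    """MLE model: count events (changes or bugs) per file and normalise
    the counts into a probability distribution."""
    counts = Counter(name for _, name in events)
    total = sum(counts.values())
    return {name: c / total for name, c in counts.items()}

def red_scores(events, now, half_life=30.0):
    """RED-style model: every past modification adds an exponentially
    decaying contribution to the file's predictive rate (time in days)."""
    decay = math.log(2) / half_life
    rate = defaultdict(float)
    for t, name in events:
        rate[name] += math.exp(-decay * (now - t))
    total = sum(rate.values())
    return {name: r / total for name, r in rate.items()}

log = [(0, "a.c"), (10, "b.c"), (55, "a.c"), (58, "c.c"), (59, "a.c")]
print(mle_distribution(log))
print(red_scores(log, now=60))
```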

We then present an information-theoretic approach to evaluate the performance of different prediction models. In this approach, the closeness of model distribution to the actual unknown probability distribution of the system is measured using cross entropy. We evaluate our prediction models empirically using the proposed information-theoretic approach for six large open source systems. Based on this evaluation, we observe that of our three prediction models, the REDCC model predicts the distribution that is closest to the actual distribution for all the studied systems.
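The information-theoretic comparison can be sketched as follows: compute the cross entropy between the empirical distribution of changes actually observed in a later period and the distribution each model predicted, and prefer the model with the lower value. The file names and probabilities below are made up for illustration.

```python
import math

def cross_entropy(actual, predicted, eps=1e-12):
    """H(p, q) = -sum_f p(f) * log2(q(f)); a lower value means the model's
    distribution q is closer to the actual distribution p."""
    return -sum(p * math.log2(max(predicted.get(f, 0.0), eps))
                for f, p in actual.items() if p > 0)

actual = {"a.c": 0.5, "b.c": 0.3, "c.c": 0.2}        # observed change frequencies
model_a = {"a.c": 0.6, "b.c": 0.2, "c.c": 0.2}       # hypothetical model outputs
model_b = {"a.c": 0.5, "b.c": 0.28, "c.c": 0.22}
print(cross_entropy(actual, model_a), cross_entropy(actual, model_b))
```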
3

Tran, Qui Can Cuong. "Empirical evaluation of defect identification indicators and defect prediction models". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2553.

Full text of the source
Abstract:
Context. Quality assurance plays a vital role in the software engineering development process. It can be considered one of the activities that observe the execution of a software project to validate whether it behaves as expected or not. Quality assurance activities contribute to the success of a software project by reducing the risks to software quality. Accurately planning, launching and controlling quality assurance activities on time can help to improve the performance of software projects. However, quality assurance activities also consume time and cost. One of the reasons is that they may not focus on the potential defect-prone areas. In some of the latest and more accurate findings, researchers have suggested that quality assurance activities should focus on the areas with defect potential, and that defect predictors should be used to support them in order to save time and cost. Many available models recommend that the project's history information be used as a defect indicator to predict the number of defects in the software project. Objectives. In this thesis, new models are defined to predict the number of defects in the classes of single software systems. In addition, the new models are built based on a combination of product metrics as defect predictors. Methods. In the systematic review a number of article sources are used, including IEEE Xplore, ACM Digital Library, and Springer Link, in order to find the existing models related to the topic. Open source projects are used as training sets to extract information about occurred defects and the system evolution. The training data is then used to define the prediction models. Afterwards, the defined models are applied to other systems that provide test data, i.e. information that was not used for training the models, to validate the accuracy and correctness of the models. Results. Two models are built: one to predict the number of defects of a class, and one to predict whether a class contains a bug or not. Conclusions. The proposed models combine product metrics as defect predictors and can be used either to predict the number of defects of a class or to predict whether a class contains bugs. This combination of product metrics as defect predictors can improve the accuracy of defect prediction and of quality assurance activities, by giving hints on potential defect-prone classes before defect search activities are performed. Therefore, it can improve software development and quality assurance in terms of time and cost.
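A minimal sketch of the two kinds of models described here, using scikit-learn: a regression model for the number of defects per class and a classifier for the defective/non-defective decision, both driven by product metrics. The metric names and values are hypothetical, and the sketch is illustrative only, not the models defined in the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical per-class product metrics: [lines_of_code, cyclomatic_complexity, coupling]
X_train = np.array([[120, 8, 4], [300, 20, 9], [45, 2, 1], [210, 15, 6]])
defect_counts = np.array([1, 5, 0, 3])           # defects found per class
defect_labels = (defect_counts > 0).astype(int)  # defective / not defective

count_model = LinearRegression().fit(X_train, defect_counts)
class_model = LogisticRegression().fit(X_train, defect_labels)

X_new = np.array([[180, 12, 5]])
print(count_model.predict(X_new))        # predicted number of defects
print(class_model.predict_proba(X_new))  # probability of containing a bug
```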
4

Liu, Qin. "Optimal utilization of historical data sets for the construction of software cost prediction models". Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/2129/.

Full text of the source
Abstract:
The accurate prediction of software development cost at an early stage of the development life-cycle may have a vital economic impact and provide fundamental information for management decision making. However, it is not well understood in practice how to optimally utilize historical software project data for the construction of cost predictions. This is because the analysis of historical data sets for software cost estimation leads to many practical difficulties. In addition, there has been little research done to prove the benefits. To overcome these limitations, this research proposes a preliminary data analysis framework, which is an extension of Maxwell's study. The proposed framework is based on a set of statistical analysis methods such as correlation analysis, stepwise ANOVA, univariate analysis, etc. and provides a formal basis for the construction of cost prediction models from historical data sets. The proposed framework is empirically evaluated against commonly used prediction methods, namely Ordinary Least-Square Regression (OLS), Robust Regression (RR), Classification and Regression Trees (CART), and K-Nearest Neighbour (KNN), and is also applied to both heterogeneous and homogeneous data sets. Formal statistical significance testing was performed for the comparisons. The results from the comparative evaluation suggest that the proposed preliminary data analysis framework is capable of constructing more accurate prediction models for all selected prediction techniques. The predictor variables processed by the framework are statistically significant at the 95% confidence level for both parametric techniques (OLS and RR) and one non-parametric technique (CART). Both the heterogeneous data set and the homogeneous data set benefit from the application of the proposed framework for improving project effort prediction accuracy. The homogeneous data set is more effective after being processed by the framework. Overall, the evaluation results demonstrate that the proposed framework has excellent applicability. Further research could focus on two main purposes: first, improve the applicability by integrating missing data techniques such as listwise deletion (LD), mean imputation (MI), etc., for handling missing values in historical data sets; second, apply benchmarking to enable comparisons, i.e. allowing companies to compare themselves with respect to their productivity or quality.
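The flavour of such a preliminary data analysis step, screening candidate predictors before fitting and comparing prediction techniques, can be sketched as follows. The project data, the correlation threshold and the in-sample MMRE evaluation are simplifications for illustration, not the framework proposed in the thesis.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical historical project data
data = pd.DataFrame({
    "size_kloc": [10, 25, 7, 40, 18, 33],
    "team_size": [3, 6, 2, 9, 5, 7],
    "duration":  [6, 12, 4, 18, 9, 14],
    "effort":    [14, 60, 8, 130, 40, 95],
})
# Simple screening step: keep predictors strongly correlated with effort
corr = data.corr()["effort"].drop("effort")
predictors = corr[corr.abs() > 0.5].index.tolist()

X, y = data[predictors].values, data["effort"].values
ols = LinearRegression().fit(X, y)
knn = KNeighborsRegressor(n_neighbors=2).fit(X, y)

def mmre(model, X, y):
    pred = model.predict(X)
    return np.mean(np.abs(y - pred) / y)   # mean magnitude of relative error (in-sample, for brevity)

print(mmre(ols, X, y), mmre(knn, X, y))
```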
5

Brosig, Fabian [Verfasser], and S. [Akademischer Betreuer] Kounev. "Architecture-Level Software Performance Models for Online Performance Prediction / Fabian Maria Konrad Brosig. Betreuer: S. Kounev". Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/105980316X/34.

Full text of the source
6

Chun, Zhang Jing. "Trigonometric polynomial high order neural network group models for financial data simulation & prediction /". [Campbelltown, N.S.W.] : The author, 1998. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030721.152829/index.html.

Full text of the source
7

McDonald, Simon Francis. "Better clinical decisions for less effort : building prediction software models to improve anti-coagulation care and prevent thrombosis and strokes". Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539665.

Full text of the source
8

Hall, Otto. "Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214791.

Full text of the source
Abstract:
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large scale data processing systems. It is additionally considered whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we adopt a naïve approach and model latencies as a function of only data throughput in eight small historical intervals. The training and test sets are represented from raw market data, and we resort to pruning operations to shrink the datasets by a factor of approximately 0.0005 in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R2 statistic of 0.8399 over six test sets of approximately the same size as the training set. Testing on non-pruned datasets indicates shortcomings from the generalization procedure, where input vectors corresponding to low-latency target values are associated with less accuracy. We conclude that, depending on the application, the shortcomings may make the model intractable. However, for the purposes of this study it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvements, both with regard to pruning procedures and Gaussian Processes, and open up for promising continued research.
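A minimal sketch of the regression setting described, using scikit-learn's Gaussian Process Regression with an RBF plus white-noise kernel on synthetic throughput features; the data, kernel choice and train/test split are illustrative assumptions, not the four implementations evaluated in the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical features: data throughput in eight small historical intervals
X = rng.uniform(0, 1, size=(200, 8))
# Hypothetical target: buffer queue time grows with recent throughput, plus noise
y = 0.2 + 2.0 * X[:, -3:].mean(axis=1) ** 2 + rng.normal(0, 0.05, 200)

kernel = 1.0 * RBF(length_scale=np.ones(8)) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:150], y[:150])

mean, std = gpr.predict(X[150:], return_std=True)  # predictive mean and uncertainty
print(gpr.score(X[150:], y[150:]))                 # R^2 on the held-out part
```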
9

Vlad, Iulian Teodor. "Mathematical Methods to Predict the Dynamic Shape Evolution of Cancer Growth based on Spatio-Temporal Bayesian and Geometrical Models". Doctoral thesis, Universitat Jaume I, 2016. http://hdl.handle.net/10803/670303.

Full text of the source
Abstract:
The aim of this research is to observe the dynamics of cancer tumors and to develop and implement new methods and algorithms for the prediction of tumor growth. I offer some tools to help physicians better understand this disease and to check whether the prescribed treatment has the desired results. The plan of the thesis is the following. In Chapter 1, I briefly recall some properties and classifications of point processes, with some examples of spatio-temporal point processes. Chapter 2 presents a short overview of the theory of Levy bases and integration with respect to such bases; I recall standard results about spatial Cox processes, and finally I propose different types of growth models and a new algorithm, the Cobweb, which is presented and developed based on the proposed methodology. Chapters 3, 4 and 5 are dedicated to presenting new prediction methods. The implementation in Matlab software comes in Chapter 6. The thesis ends with some conclusions and future research.
10

SARCIA', SALVATORE ALESSANDRO. "An Approach to improving parametric estimation models in the case of violation of assumptions based upon risk analysis". Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1048.

Full text of the source
Abstract:
In this work, we show the mathematical reasons why parametric models fall short of providing correct estimates and define an approach that overcomes the causes of these shortfalls. The approach aims at improving parametric estimation models when any regression model assumption is violated for the data being analyzed. Violations can be that the errors are x-correlated, the model is not linear, the sample is heteroscedastic, or the error probability distribution is not Gaussian. If data violates the regression assumptions and we do not deal with the consequences of these violations, we cannot improve the model and estimates will be incorrect forever. The novelty of this work is that we define and use a feed-forward multi-layer neural network for discrimination problems to calculate prediction intervals (i.e. evaluate uncertainty), make estimates, and detect improvement needs. The primary difference from traditional methodologies is that the proposed approach can deal with scope error, model error, and assumption error at the same time. The approach can be applied for prediction, inference, and model improvement over any situation and context without making specific assumptions. An important benefit of the approach is that it can be completely automated as a stand-alone estimation methodology or used for supporting experts and organizations together with other estimation techniques (e.g., human judgment, parametric models). Unlike other methodologies, the proposed approach focuses on model improvement by integrating the estimation activity into a wider process that we call the Estimation Improvement Process, an instantiation of the Quality Improvement Paradigm. This approach aids mature organizations in learning from their experience and improving their processes over time with respect to managing their estimation activities. To provide an exposition of the approach, we use an old NASA COCOMO data set to (1) build an evolvable neural network model and (2) show how a parametric model, e.g., a regression model, can be improved and evolved with the new project data.
11

Wiese, Igor Scaliante. "Predição de mudanças conjuntas de artefatos de software com base em informações contextuais". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-02122016-140016/.

Full text of the source
Abstract:
Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. The hypothesis of this thesis is that contextual information of software changes collected from issues, developers' communication, and commit metadata describes the circumstances and conditions under which a co-change occurs and is useful to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models, improving the overall accuracy and especially decreasing the amount of false recommendations. We built predictive models specific to each pair of files using contextual information and the Random Forest machine learning algorithm. The approach was evaluated in 129 versions of 10 open source projects from the Apache Software Foundation. We compared our approach to a baseline model based on association rules, which is often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation to build training and test sets, as well as the identification of the most relevant contextual information. The results indicate that models based on contextual information can correctly predict 88% of co-change instances, against 19% achieved by the association rules model. This indicates that models based on contextual information can be 3 times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version. The most important pieces of contextual information to build the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn, number of words in the discussion and description of a task, number of comments, and role of developers in the discussion (measured by the closeness value obtained from the communication social network). We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since they have no knowledge of the architecture and are usually not familiar with the artifacts' history. Thus, our results indicate that prediction models based on contextual information are useful to support developers during maintenance and evolution activities.
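A minimal sketch of a per-file-pair model of this kind: a Random Forest trained on contextual features of commits touching one file of the pair, predicting whether the other file changes in the same task. The feature names and values are hypothetical and the sketch is not the thesis's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical contextual features for commits touching file A of the pair (A, B):
# [lines_added, lines_removed, code_churn, words_in_issue, n_comments, network_role]
X = np.array([
    [10,  2,  12, 120, 4, 0.30],
    [55, 10,  65,  40, 1, 0.05],
    [ 8,  1,   9, 200, 7, 0.45],
    [90, 30, 120,  15, 0, 0.02],
    [12,  3,  15, 150, 5, 0.38],
    [70, 22,  92,  25, 1, 0.04],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = file B changed in the same task (co-change)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[15, 4, 19, 170, 6, 0.40]]))   # co-change expected for a new commit?
print(model.feature_importances_.round(2))          # which contextual features matter most
```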
12

Rodrigues, Genaina Nunes. "A model driven approach for software reliability prediction". Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1446004/.

Full text of the source
Abstract:
Software reliability, one of the major software quality attributes, quantitatively expresses the continuity of correct service delivery. In current practice, reliability models are typically measurement-based models, mostly employed in isolation at the later stages of the software development process, after architectural decisions have been made that cannot easily be reversed; early software reliability prediction models are often insufficiently formal to be analyzable and not usually connected to the target system. We postulate it is possible to overcome these issues by supporting software reliability engineering from requirements to deployment using scenario specifications. We contribute a novel reliability prediction technique that takes into account the component structure exhibited in the scenarios and the concurrent nature of component-based systems by extending scenario specifications to model (1) the probability of component failure, and (2) scenario transition probabilities. Those scenarios are subsequently transformed into enhanced behaviour models to compute the system reliability. Additionally, we enable the integration between reliability and development models through profiles that extend the core Unified Modelling Language (UML). By means of a reliability profile, the architecture of a component-based system can express both method invocations and deployment relationships between the application components in one environment. To facilitate reliability prediction, and determine the impact of concurrency on system reliability, we have extended the Labelled Transition System Analyser (LTSA) tool, implementing a plugin for reliability analysis. Finally, we evaluate our analysis technique with a case study focusing on Condor, a distributed job scheduler and resource management system. The purpose of the case study is to evaluate the efficacy of our analysis technique and to compare it with other reliability techniques.
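The way component failure probabilities and transition probabilities combine into a system reliability figure can be illustrated with a generic Cheung-style absorbing Markov chain sketch; the states, probabilities and reliabilities below are hypothetical, and this is not the scenario/LTSA-based technique contributed by the thesis.

```python
import numpy as np

# Hypothetical scenario model: states 0..2 are components, state 3 is "done".
P = np.array([            # scenario transition probabilities between components
    [0.0, 0.7, 0.3, 0.0],
    [0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
])
R = np.array([0.99, 0.98, 0.995, 1.0])  # per-component probability of failure-free execution

# Cheung-style composition: a transition succeeds only if the source component
# does not fail; system reliability is the probability of reaching "done".
Q = R[:3, None] * P[:3, :3]              # transitions among components, weighted by reliability
N = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix of the absorbing chain
reach_done = (R[:3, None] * P[:3, 3:]).flatten()
system_reliability = (N @ reach_done)[0]  # starting from component 0
print(round(system_reliability, 4))
```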
13

Ghose, Susmita. "Analysis of errors in software reliability prediction systems and application of model uncertainty theory to provide better predictions". College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3781.

Full text of the source
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
14

Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches". Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.

Full text of the source
Abstract:
Many models have been proposed for software reliability prediction, but none of these models could capture the necessary amount of software characteristics. We have proposed a mixed approach using both analytical and data driven models for finding the accuracy in reliability prediction, involving a case study. This report follows a qualitative research strategy. Data is collected from a case study conducted on three different companies. Based on the case study, an analysis is made of the approaches used by the companies, and also using other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two used for the case study are working on reliability prediction and the third is a growing company developing a product with less focus on quality. Data collection was by means of interviewing an employee of the organization who leads a team and has been in a managing position for at least the last 2 years.
15

Abdel-Ghaly, A. A. "Analysis of predictive quality of software reliability models". Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370836.

Full text of the source
16

Dennison, Thomas E. "Fitting and prediction uncertainty for a software reliability model". Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23678.

Full text of the source
17

Bowring, James Frederick. "Modeling and Predicting Software Behaviors". Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19754.

Full text of the source
Abstract:
Software systems will eventually contribute to their own maintenance using implementations of self-awareness. Understanding how to specify, model, and implement software with a sense of self is a daunting problem. This research draws inspiration from the automatic functioning of a gimbal---a self-righting mechanical device that supports an object and maintains the orientation of this object with respect to gravity independently of its immediate operating environment. A software gimbal exhibits a self-righting feature that provisions software with two auxiliary mechanisms: a historical mechanism and a reflective mechanism. The historical mechanism consists of behavior classifiers trained on statistical models of data that are collected from executions of the program that exhibit known behaviors of the program. The reflective mechanism uses the historical mechanism to assess an ongoing or selected execution. This dissertation presents techniques for the identification and modeling of program execution features as statistical models. It further demonstrates how statistical machine-learning techniques can be used to manipulate these models and to construct behavior classifiers that can automatically detect and label known program behaviors and detect new unknown behaviors. The thesis is that statistical summaries of data collected from a software program's executions can model and predict external behaviors of the program. This dissertation presents three control-flow features and one value-flow feature of program executions that can be modeled as stochastic processes exhibiting the Markov property. A technique for building automated behavior classifiers from these models is detailed. Empirical studies demonstrating the efficacy of this approach are presented. The use of these techniques in example software engineering applications in the categories of software testing and failure detection are described.
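One way to picture the kind of statistical summary described is a first-order Markov profile of an execution trace fed to an off-the-shelf classifier; the event alphabet, traces and labels below are invented, and the sketch is not the dissertation's feature set or classifier construction.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EVENTS = ["load", "branch_t", "branch_f", "call", "ret"]
IDX = {e: i for i, e in enumerate(EVENTS)}

def transition_profile(trace):
    """First-order Markov summary of an execution trace: row-normalised
    transition frequencies between consecutive events, flattened to a vector."""
    m = np.zeros((len(EVENTS), len(EVENTS)))
    for a, b in zip(trace, trace[1:]):
        m[IDX[a], IDX[b]] += 1
    m /= np.maximum(m.sum(axis=1, keepdims=True), 1)
    return m.flatten()

# Hypothetical labelled executions: 1 = "pass" behaviour, 0 = "fail" behaviour
traces = [(["load", "call", "branch_t", "ret"] * 5, 1),
          (["load", "call", "branch_f", "ret"] * 5, 0),
          (["load", "branch_t", "call", "ret"] * 5, 1),
          (["load", "branch_f", "branch_f", "ret"] * 5, 0)]
X = np.array([transition_profile(t) for t, _ in traces])
y = np.array([label for _, label in traces])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([transition_profile(["load", "call", "branch_t", "ret"] * 3)]))
```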
18

Cahill, Jaspar. "Machine learning techniques to improve software quality". Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41730/1/Jaspar_Cahill_Thesis.pdf.

Full text of the source
Abstract:
A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with the use of machine learning algorithms which use examples of fault prone and not fault prone modules to develop predictive models of quality. In order to learn the numerical mapping between module and classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources: the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and predictive results are compared to those of previous efforts and found to be superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features. A novel extension of this method is also described based on an observed polarising of points by class when rank sum is applied to training data to convert it into 2D rank sum space. SVM is applied to this transformed data to produce models the parameters of which can be set according to trade-off curves to obtain a particular performance trade-off.
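A minimal sketch of the comparison described, Naive Bayes against a Support Vector Machine on module-level static metrics, using synthetic data; the metric names only echo the kind of features found in the NASA Metrics Data Program and the label generation is artificial.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic module metrics: [loc, cyclomatic_complexity, halstead_effort, branch_count]
X = rng.lognormal(mean=3, sigma=1, size=(300, 4))
# Synthetic label: larger, more complex modules are more likely to be fault prone
y = (X[:, 0] * 0.01 + X[:, 1] * 0.05 + rng.normal(0, 1, 300) > 2.5).astype(int)

for name, clf in [("naive bayes", GaussianNB()),
                  ("svm (rbf)", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
    print(name, scores.mean().round(3))
```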
19

Van Koten, Chikako. "Bayesian statistical models for predicting software effort using small datasets". University of Otago. Department of Information Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20071009.120134.

Full text of the source
Abstract:
The need of today's society for new technology has resulted in the development of a growing number of software systems. Developing a software system is a complex endeavour that requires a large amount of time. This amount of time is referred to as software development effort. Software development effort is the sum of hours spent by all individuals involved. Therefore, it is not equal to the duration of the development. Accurate prediction of the effort at an early stage of development is an important factor in the successful completion of a software system, since it enables the developing organization to allocate and manage their resources effectively. However, for many software systems, accurately predicting the effort is a challenge. Hence, a model that assists in the prediction is of active interest to software practitioners and researchers alike. Software development effort varies depending on many variables that are specific to the system, its developmental environment and the organization in which it is being developed. An accurate model for predicting software development effort can often be built specifically for the target system and its developmental environment. A local dataset of similar systems to the target system, developed in a similar environment, is then used to calibrate the model. However, such a dataset often consists of fewer than 10 software systems, causing a serious problem in the prediction, since the predictive accuracy of existing models deteriorates as the size of the dataset decreases. This research addressed this problem with a new approach using Bayesian statistics. This particular approach was chosen since the predictive accuracy of a Bayesian statistical model is not as dependent on a large dataset as that of other models. As the size of the dataset decreases to fewer than 10 software systems, the accuracy deterioration of the model is expected to be less than that of existing models. The Bayesian statistical model can also provide additional information useful for predicting software development effort, because it is also capable of selecting important variables from multiple candidates. In addition, it is parametric and produces an uncertainty estimate. This research developed new Bayesian statistical models for predicting software development effort. Their predictive accuracy was then evaluated in four case studies using different datasets, and compared with other models applicable to the same small datasets. The results have confirmed that the best new models are not only accurate but also consistently more accurate than their regression counterpart when calibrated with fewer than 10 systems. They can thus replace the regression model when using small datasets. Furthermore, one case study has shown that the best new models are more accurate than a simple model that predicts the effort by calculating the average value of the calibration data. Two case studies have also indicated that the best new models can be more accurate for some software systems than a case-based reasoning model. Since the case studies provided sufficient empirical evidence that, in the case of small datasets, the new models are generally more accurate than the existing models compared, this research has produced a methodology for predicting software development effort using the new models.
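The flavour of a Bayesian treatment on a very small calibration set can be shown with a conjugate Bayesian linear regression, here with a single size predictor and a noise variance assumed known for simplicity; the numbers are hypothetical and this is not one of the thesis's models.

```python
import numpy as np

# Tiny hypothetical calibration set: predictor = size (KLOC), target = effort (person-months)
X = np.array([[1.0, 5], [1.0, 12], [1.0, 7], [1.0, 20], [1.0, 9]])  # column of ones + size
y = np.array([10.0, 28.0, 15.0, 55.0, 21.0])

sigma2 = 25.0                  # assumed known noise variance (simplification)
prior_cov = np.eye(2) * 100.0  # vague Gaussian prior on [intercept, slope]

# Conjugate update: posterior covariance and mean of the coefficients
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + X.T @ X / sigma2)
post_mean = post_cov @ (X.T @ y / sigma2)

x_new = np.array([1.0, 15])                    # a new 15 KLOC system
pred_mean = x_new @ post_mean
pred_var = sigma2 + x_new @ post_cov @ x_new   # predictive uncertainty
print(round(pred_mean, 1), round(float(np.sqrt(pred_var)), 1))
```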
20

Voorhees, David P. "Predicting software Size and Development Effort: Models Based on Stepwise Refinement". NSUWorks, 2005. http://nsuworks.nova.edu/gscis_etd/903.

Full text of the source
Abstract:
This study designed a Software Size Model and an Effort Prediction Model, then performed an empirical analysis of these two models. Each model design began with identifying its objectives, which led to describing the concept to be measured and the meta-model. The numerical assignment rules were then developed, providing a basis for size measurement and effort prediction across software engineering projects. The Software Size Model was designed to test the hypothesis that a software size measure represents the amount of knowledge acquired and stored in software artifacts, and the amount of time it took to acquire and store this knowledge. The Effort Prediction Model is based on the estimation by analogy approach and was designed to test the hypothesis that this model will produce reasonably close predictions when it uses historical data that conforms to the Software Size Model. The empirical study implemented each model, collected and recorded software size data from software engineering project deliverables, simulated effort prediction using the jackknife approach, and computed the absolute relative error and magnitude of relative error (MRE) statistics. This study resulted in 35.3% of the predictions having an MRE value at or below twenty-five percent. This result satisfies the criterion established for the study of having at least 31% of the predictions with an MRE of 25% or less. This study is significant for three reasons. First, no subjective factors were used to estimate effort. The elimination of subjective factors removes a source of error in the predictions and makes the study easier to replicate. Second, both models were described using metrology and measurement theory principles. This allows others to consistently implement the models and to modify these models while maintaining the integrity of the models' objectives. Third, the study's hypotheses were validated even though the software artifacts used to collect the software size data varied significantly in both content and quality. Recommendations for further study include applying the Software Size Model to other data-driven estimation models, collecting and using software size data from industry projects, looking at alternatives for how text-based software knowledge is identified and counted, and studying the impact of project cycles and project roles on predicting effort.
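The jackknife evaluation and the MRE criterion can be sketched directly: hold each project out, predict it by analogy with the nearest remaining project, and report the mean MRE and the share of predictions within 25%. The size measure, effort values and the simple size-scaled analogy rule are illustrative assumptions, not the study's models.

```python
import numpy as np

sizes  = np.array([120, 340, 80, 560, 210, 400, 150, 300])   # hypothetical size measure
effort = np.array([ 30,  85, 20, 150,  55, 100,  40,  75])   # actual effort (hours)

mre = []
for i in range(len(sizes)):
    # jackknife: hold project i out, estimate by analogy with the nearest remaining project
    others = np.delete(np.arange(len(sizes)), i)
    nearest = others[np.argmin(np.abs(sizes[others] - sizes[i]))]
    predicted = effort[nearest] * sizes[i] / sizes[nearest]   # simple size-scaled analogy
    mre.append(abs(effort[i] - predicted) / effort[i])

mre = np.array(mre)
print("mean MRE:", mre.mean().round(2))
print("PRED(25):", (mre <= 0.25).mean().round(2))   # share of predictions within 25%
```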
21

Yun, Seok Jun. "Productivity prediction model based on Bayesian analysis and productivity console". Texas A&M University, 2003. http://hdl.handle.net/1969.1/2305.

Full text of the source
Abstract:
Software project management is one of the most critical activities in modern software development projects. Without realistic and objective management, the software development process cannot be managed in an effective way. There are three general problems in project management: effort estimation is not accurate, actual status is difficult to understand, and projects are often geographically dispersed. Estimating software development effort is one of the most challenging problems in project management. Various attempts have been made to solve the problem; so far, however, it remains a complex problem. The error rate of a renowned effort estimation model can be higher than 30% of the actual productivity. Therefore, inaccurate estimation results in poor planning and defies effective control of time and budgets in project management. In this research, we have built a productivity prediction model which uses productivity data from an ongoing project to reevaluate the initial productivity estimate and provides managers a better productivity estimate for project management. The actual status of the software project is not easy to understand due to problems inherent in software project attributes. The project attributes are dispersed across the various CASE (Computer-Aided Software Engineering) tools and are difficult to measure because they are not hard material like building blocks. In this research, we have created a productivity console which incorporates an expert system to measure project attributes objectively and provides graphical charts to visualize project status. The productivity console uses project attributes gathered in KB (Knowledge Base) of PAMPA II (Project Attributes Monitoring and Prediction Associate) that works with CASE tools and collects project attributes from the databases of the tools. The productivity console and PAMPA II work on a network, so geographically dispersed projects can be managed via the Internet without difficulty.
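The re-evaluation of an initial productivity estimate with data from the ongoing project can be illustrated with a simple normal-normal Bayesian update; the prior, observations and known-variance assumption are illustrative, not the model built in the dissertation.

```python
import numpy as np

# Prior belief about productivity (e.g., LOC per person-hour) from the initial estimate
prior_mean, prior_var = 12.0, 4.0
# Productivity measured in the first weeks of the ongoing project
observed = np.array([9.5, 10.2, 8.8, 9.9])
obs_var = 1.0          # assumed known measurement variance (simplification)

n = len(observed)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)
post_mean = post_var * (prior_mean / prior_var + observed.sum() / obs_var)
print(round(post_mean, 2), round(post_var, 3))   # revised productivity estimate and its variance
```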
22

Ghibellini, Alessandro. "Trend prediction in financial time series: a model and a software framework". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24708/.

Full text of the source
Abstract:
The research aims to build an autonomous support for traders which in the future can be translated into an Active ETF. My thesis work is characterized by a strong focus on problem formulation and an accurate analysis of the impact of the input and the length of the future horizon on the results. I demonstrate that, using financial indicators already used by professional traders every day and considering a correct length of the future horizon, it is possible to reach interesting scores in the forecast of future market states, considering both accuracy, which is around 90% in all the experiments, and the confusion matrices, which confirm the good accuracy scores, without an expensive Deep Learning approach. In particular, I used a 1D CNN. I also emphasize that classification appears to be the best approach to address this type of prediction, in combination with proper management of unbalanced class weights. In fact, a problem of unbalanced class weights is standard in this setting; otherwise the model will react to inconsistent trend movements. Finally, I propose a framework, which can also be used for other fields, that makes it possible to exploit the presence of experts of the sector and to combine this information with ML/DL approaches.
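A minimal sketch of the kind of setup described: a small 1D CNN classifying windows of technical indicators into three market states, with heavier weights on the rarer classes. The architecture, window length, indicator count, class weights and random data are illustrative assumptions, not the thesis's tuned model.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

window, n_indicators = 30, 6            # 30 past days of 6 technical indicators
X = np.random.rand(500, window, n_indicators).astype("float32")
y = np.random.randint(0, 3, size=500)   # 0 = sideways, 1 = up-trend, 2 = down-trend

model = tf.keras.Sequential([
    layers.Input(shape=(window, n_indicators)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Heavier weights on the rarer trend classes so the model does not ignore them
model.fit(X, y, epochs=5, batch_size=32,
          class_weight={0: 1.0, 1: 3.0, 2: 3.0}, verbose=0)
```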
23

Fahmi, Mazen. "Evaluating count models for predicting post-release faults in object-oriented software". Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31228.

Full text of the source
Abstract:
This thesis empirically compares statistical prediction models using fault count data and fault binary data. The types of statistical models that are studied in detail are Logistic Regression for binary data and Negative Binomial Regression for the count data. Different model building approaches are also evaluated: manual variable selection, stepwise variable selection, and hybrid selection (classification and regression trees combined with stepwise selection). The data set comes from a commercial Java application development project. In this project special attention was paid to data collection to ensure data accuracy. The comparison criteria we used were a consistency coefficient and the estimated cost savings from using the prediction model. The results indicate that while different model building approaches result in different object-oriented metrics being selected, there is no marked difference in the quality of the models that are produced. These results suggest that there is no compelling reason to collect highly accurate fault count data when building object-oriented models, and that fault binary data (which are much easier to collect) will do just as well. (Abstract shortened by UMI.)
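The two model families compared can be sketched with statsmodels: a logistic regression for the binary fault indicator and a negative binomial GLM for the fault counts, fitted on the same (here synthetic) object-oriented metrics. The metric names, the data-generating process and the default dispersion settings are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
# Hypothetical object-oriented metrics per class: [WMC, CBO, DIT], plus an intercept column
X = sm.add_constant(rng.poisson(lam=(8, 5, 2), size=(200, 3)).astype(float))
faults = rng.negative_binomial(n=2, p=1 / (1 + 0.03 * X[:, 1]))   # synthetic fault counts
faulty = (faults > 0).astype(int)                                  # binary fault indicator

binary_model = sm.Logit(faulty, X).fit(disp=0)                                  # binary data
count_model = sm.GLM(faults, X, family=sm.families.NegativeBinomial()).fit()    # count data
print(binary_model.params.round(3))
print(count_model.params.round(3))
```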
24

Wang, Yin-Han. "Model and software development for predicting fish growth in trout raceways". Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4751.

Full text of the source
Abstract:
Thesis (M.S.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xii, 105 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 47).
25

Durán, Alcaide Ángel. "Development of high-performance algorithms for a new generation of versatile molecular descriptors. The Pentacle software". Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7201.

Full text of the source
Abstract:
The work of this thesis focused on the development of high-performance algorithms for a new generation of molecular descriptors with many advantages with respect to their predecessors, suitable for diverse applications in the field of drug design, as well as on their implementation in commercial-grade scientific software (Pentacle). As a first step, we developed a new algorithm (AMANDA) for discretizing molecular interaction fields, which allows the most interesting regions to be extracted from them in an efficient way. This algorithm was incorporated into a new generation of alignment-independent molecular descriptors, named GRIND-2. The computing speed and efficiency of the new algorithm allow the application of these descriptors in virtual screening. In addition, we developed a new alignment-independent encoding algorithm (CLACC) producing quantitative structure-activity relationship models which have better predictive ability and are easier to interpret than those obtained with other methods.
26

Adekile, Olusegun. "Object-oriented software development effort prediction using design patterns from object interaction analysis". [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2329.

Full text of the source
27

Mudalige, Gihan Ravideva. "Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models". Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3773/.

Full text of the source
Abstract:
Pipelined wavefront computations are a ubiquitous class of high performance parallel algorithms used for the solution of many scientific and engineering applications. In order to aid the design and optimisation of these applications, and to ensure that during procurement platforms are chosen best suited to these codes, there has been considerable research in analysing and evaluating their operational performance. Wavefront codes exhibit complex computation, communication and synchronisation patterns, and as a result there exist a large variety of such codes and possible optimisations. The problem is compounded by each new generation of high performance computing system, which has often introduced a previously unexplored architectural trait, requiring previous performance models to be rewritten and reevaluated. In this thesis, we address the performance modelling and optimisation of this class of application as a whole. This differs from previous studies in which bespoke models are applied to specific applications. The analytic performance models are generalised and reusable, and we demonstrate their application to the predictive analysis and optimisation of pipelined wavefront computations running on modern high performance computing systems. The performance model is based on the LogGP parameterisation, and uses a small number of input parameters to specify the particular behaviour of most wavefront codes. The new parameters and model equations capture the key structural and behavioural differences among different wavefront application codes, providing a succinct summary of the operations for each application and insights into alternative wavefront application design. The models are applied to three industry-strength wavefront codes and are validated on several systems including a Cray XT3/XT4 and an InfiniBand commodity cluster. Model predictions show high quantitative accuracy (less than 20% error) for all high performance configurations and excellent qualitative accuracy. The thesis presents applications, projections and insights for optimisations using the model, which show the utility of reusable analytic models for performance engineering of high performance computing codes. In particular, we demonstrate the use of the model for: (1) evaluating application configuration and resulting performance; (2) evaluating hardware platform issues including platform sizing and configuration; (3) exploring hardware platform design alternatives and system procurement; and (4) considering possible code and algorithmic optimisations.
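A toy version of this style of analytic reasoning is sketched below: the runtime of a pipelined sweep approximated as pipeline fill plus steady-state steps, with a simplified LogGP-style message cost per tile boundary. The parameter values and the exact cost equation are illustrative only and are not the validated model developed in the thesis.

```python
def loggp_msg_time(k_bytes, L=2e-6, o=1e-6, G=5e-10):
    """LogGP-style cost of one k-byte message: latency, send and receive
    overheads, and a per-byte gap term (a simplified form for illustration)."""
    return L + 2 * o + (k_bytes - 1) * G

def wavefront_time(nx, ny, px, py, w_tile, msg_bytes):
    """Toy pipelined-wavefront estimate on a px-by-py processor grid:
    pipeline fill (px + py - 2 steps) plus one step per tile on the
    longest-working processor; each step costs compute plus two boundary messages."""
    steps_per_proc = (nx // px) * (ny // py)
    step_cost = w_tile + 2 * loggp_msg_time(msg_bytes)
    return (px + py - 2 + steps_per_proc) * step_cost

# Hypothetical 480x480 grid of tiles on a 16x16 processor array,
# 50 microseconds of compute per tile and 4 KB boundary messages
print(wavefront_time(480, 480, 16, 16, w_tile=50e-6, msg_bytes=4096))
```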
28

Reichert, Thomas. "Development of 3D lattice models for predicting nonlinear timber joint behaviour". Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2827.

Full text of the source
Abstract:
This work presents the development of a three-dimensional lattice material model for wood and its application to timber joints, including the potential strengthening benefit of second order effects. A lattice of discrete elements was used to capture the heterogeneity and fracture behaviour, and the model results were compared to tested Sitka spruce (Picea sitchensis) specimens. Despite the general applicability of lattice models to timber, they are computationally demanding, due to the nonlinear solution and large number of degrees of freedom required. Ways to reduce the computational costs are investigated. Timber joints fail due to plastic deformation of the steel fastener(s), embedment, or brittle fracture of the timber. Lattice models, contrary to other modelling approaches such as continuum finite elements, have the advantage of taking into account brittle fracture, crack development and material heterogeneity by assigning certain strength and stiffness properties to individual elements. Furthermore, plastic hardening is considered to simulate timber embedment. The lattice is an arrangement of longitudinal, lateral and diagonal link elements with a tri-linear load-displacement relation. The lattice is used in areas with high stress gradients and normal continuum elements are used elsewhere. Heterogeneity was accounted for by creating an artificial growth ring structure and density profile upon which the mean strength and stiffness properties were adjusted. Solution algorithms, such as Newton-Raphson, encounter problems with discrete elements for which 'snap-back' in the global load-displacement curves would occur. Thus, a specialised solution algorithm, developed by Jirasek and Bazant, was adopted to create a bespoke FE code in MATLAB that can handle the jagged behaviour of the load-displacement response, and extended to account for plastic deformation. The model's input parameters were calibrated by determining the elastic stiffness from literature values and adjusting the strength, post-yield and heterogeneity parameters of lattice elements to match the load-displacement curves from laboratory tests under various loading conditions. Although problems with the modified solution algorithm were encountered, the results of the model show the potential of lattice models to be used as a tool to predict load-displacement curves and fracture patterns of timber specimens.
Style APA, Harvard, Vancouver, ISO itp.
29

Ndenga, Malanga Kennedy. "Predicting post-release software faults in open source software as a means of measuring intrinsic software product quality". Electronic Thesis or Diss., Paris 8, 2017. http://www.theses.fr/2017PA080099.

Pełny tekst źródła
Streszczenie:
Les logiciels défectueux ont des conséquences coûteuses. Les développeurs de logiciels doivent identifier et réparer les composants défectueux dans leurs logiciels avant de les publier. De même, les utilisateurs doivent évaluer la qualité du logiciel avant son adoption. Cependant, la nature abstraite et les multiples dimensions de la qualité des logiciels entravent les organisations de mesurer leur qualités. Les métriques de qualité logicielle peuvent être utilisées comme proxies de la qualité du logiciel. Cependant, il est nécessaire de disposer d'une métrique de processus logiciel spécifique qui peut garantir des performances de prédiction de défaut meilleures et cohérentes, et cela dans de différents contextes. Cette recherche avait pour objectif de déterminer un prédicteur de défauts logiciels qui présente la meilleure performance de prédiction, nécessite moins d'efforts pour la détection et a un coût minimum de mauvaise classification des composants défectueux. En outre, l'étude inclut une analyse de l'effet de la combinaison de prédicteurs sur la performance d'un modèles de prédiction de défauts logiciels. Les données expérimentales proviennent de quatre projets OSS. La régression logistique et la régression linéaire ont été utilisées pour prédire les défauts. Les métriques Change Burst ont enregistré les valeurs les plus élevées pour les mesures de performance numérique, avaient les probabilités de détection de défaut les plus élevées et le plus faible coût de mauvaise classification des composants
Faulty software has expensive consequences. To mitigate these consequences, software developers have to identify and fix faulty software components before releasing their products. Similarly, users have to gauge the delivered quality of software before adopting it. However, the abstract nature and multiple dimensions of software quality impede organizations from measuring it. Software quality metrics can be used as proxies of software quality. There is a need for a software process metric that can guarantee consistent, superior fault prediction performance across different contexts. This research sought to determine a predictor for software faults that exhibits the best prediction performance, requires the least effort to detect software faults, and has a minimum cost of misclassifying components. It also investigated the effect of combining predictors on the performance of software fault prediction models. Experimental data were derived from four OSS projects. Logistic Regression was used to predict bug status while Linear Regression was used to predict the number of bugs per file. Models built with Change Burst metrics registered overall better performance than those built with Change, Code Churn, Developer Networks and Source Code software metrics. Change Burst metrics recorded the highest values for numerical performance measures, exhibited the highest fault detection probabilities and had the least cost of misclassification of components. The study found that Change Burst metrics could effectively predict software faults.
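As a minimal sketch of the modelling setup described above (logistic regression for bug status, linear regression for bug counts), the following scikit-learn example may help; the CSV file name and the change-burst-style feature names are assumptions, not the study's actual data.

```python
# Minimal sketch in the spirit of the study above: logistic regression for
# bug status and linear regression for bug counts. Column names and the
# file "metrics.csv" are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("metrics.csv")            # one row per file/component
features = ["burst_count", "max_burst_size", "churn", "developers"]
X = df[features]

# Classification: is the file buggy after release?
y_status = df["is_buggy"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y_status, test_size=0.3,
                                          random_state=0, stratify=y_status)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Regression: how many post-release bugs per file?
reg = LinearRegression().fit(X, df["bug_count"])
print("R^2:", reg.score(X, df["bug_count"]))
```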
Style APA, Harvard, Vancouver, ISO itp.
30

Bürger, Adrian [Verfasser], i Moritz [Akademischer Betreuer] Diehl. "Nonlinear mixed-integer model predictive control of renewable energy systems : methods, software, and experiments". Freiburg : Universität, 2020. http://d-nb.info/1225682150/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Peker, Serhat. "A Novel User Activity Prediction Model For Context Aware Computing Systems". Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613662/index.pdf.

Pełny tekst źródła
Streszczenie:
In the last decade, with the extensive use of mobile electronic and wireless communication devices, there is a growing need for context aware applications, and many pervasive computing applications have become integral parts of our daily lives. Context aware recommender systems are one of the popular ones in this area. Such systems surround the users and integrate with the environment; hence, they are aware of the users' context and use that information to deliver personalized recommendations about everyday tasks. In this manner, predicting a user's next activity preferences with high accuracy improves the personalized service quality of context aware recommender systems and naturally provides user satisfaction. Predicting people's activities is useful, yet the studies on this issue in ubiquitous environments are considerably insufficient. Thus, this thesis proposes an activity prediction model to forecast a user's next activity preference using the past preferences of the user in certain contexts and the current contexts of the user in a ubiquitous environment. The proposed model presents a new approach for activity prediction by taking advantage of ontology. A prototype application is implemented to demonstrate the applicability of the proposed model, and the obtained outputs of a sample case on this application revealed that the proposed model can reasonably predict the next activities of the users.
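A drastically reduced stand-in for this idea is a frequency-based predictor that maps an observed context to the user's historically most common activity. The sketch below uses invented context and activity labels and deliberately omits the ontology component of the proposed model.

```python
# Greatly simplified stand-in for the proposed model: predict the next
# activity as the one the user most often chose in the same context.
# The ontology reasoning of the actual model is not represented here;
# context and activity labels are invented.
from collections import Counter, defaultdict

history = [  # (context, activity) pairs -- invented example data
    (("morning", "office"), "check_email"),
    (("morning", "office"), "check_email"),
    (("morning", "office"), "meeting"),
    (("evening", "home"),   "watch_tv"),
    (("evening", "home"),   "cook"),
    (("evening", "home"),   "watch_tv"),
]

model = defaultdict(Counter)
for context, activity in history:
    model[context][activity] += 1

def predict_next_activity(context):
    counts = model.get(context)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next_activity(("evening", "home")))   # -> "watch_tv"
```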
Style APA, Harvard, Vancouver, ISO itp.
32

Puerto, Valencia J. (Jose). "Predictive model creation approach using layered subsystems quantified data collection from LTE L2 software system". Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201907192705.

Pełny tekst źródła
Streszczenie:
Abstract. The road-map to a continuous and efficient improvement process for a complex software system has multiple stages and many interrelated on-going transformations, these being direct responses to its always evolving environment. The system's scalability under these on-going transformations depends, to a great extent, on the prediction of resource consumption and systematic emergent properties; this implies that, as systems grow bigger in size and complexity, their predictability decreases in accuracy. A predictive model is used to address the inherent complexity growth and to increase the predictability of a complex system's performance. The model creation process is driven by the collection of quantified data from different layers of the Long-Term Evolution (LTE) Data-layer (L2) software system. The creation of such a model is possible thanks to the multiple system analysis tools Nokia has already implemented, allowing a multiple-layer data gathering flow. The process consists of, first, stating the differences between the system layers; second, using a layered benchmark approach for data collection at different levels; third, designing a process flow that organizes the data transformations from collection, filtering and pre-processing to visualization; and fourth, as a proof of concept, comparing different Performance Measurement (PM) predictive models trained with the collected pre-processed data. The thesis contains, in parallel to the model creation process, the exploration and comparison of various data visualization techniques that address the non-trivial graphical representation of the relations between the subsystems' data. Finally, the current results of the model creation process are presented and discussed. The models were able to explain 54% and 67% of the variance in the two test configurations used in the instantiation of the model creation process proposed in this thesis.
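The quoted figures (54% and 67% of variance explained) are the kind of result produced by fitting a regression model to pre-processed benchmark data and reporting R². The sketch below shows one way such a number is obtained; the file name, feature columns and model choice are assumptions for illustration only.

```python
# Sketch of the final modelling step: fit a predictive model on
# pre-processed, layered benchmark data and report explained variance.
# File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

data = pd.read_csv("l2_benchmark_samples.csv")
X = data[["cpu_load", "users_per_tti", "scheduler_queue_len", "harq_retx"]]
y = data["pm_counter"]                     # performance measurement target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)

# Share of variance in the PM counter explained by the model (cf. 54% / 67%).
print(f"Explained variance on held-out data: {r2_score(y_te, model.predict(X_te)):.2f}")
```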
Style APA, Harvard, Vancouver, ISO itp.
33

Dareini, Ali. "Prediction and analysis of model’s parameters of Li-ion battery cells". Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11799.

Pełny tekst źródła
Streszczenie:
Lithium-ion batteries are complex systems and making a simulation model of them is always challenging. A method for producing an accurate model, with high capability for predicting the behaviour of the battery in a time- and cost-efficient way, is desired in this field of work. The aim of this thesis has been to develop a method as close to the desired method as possible, especially in two important aspects, time and cost. The method which is the goal of this thesis should fulfil the five requirements below: 1. Able to produce a generic battery model for different types of lithium-ion batteries; 2. No or low cost for the development of the model; 3. A time span of around one week for obtaining the model; 4. Able to predict most aspects of the battery's behaviour, such as the voltage, SOC and temperature, and, preferably, to simulate degradation effects, safety and thermal aspects; 5. Accuracy with less than 15% error. The starting point of this thesis was the study of current methods for cell modelling. Based on their approach, they are divided into three categories: abstract, black box and white box methods. Each of these methods has its own advantages and disadvantages, but none of them is able to fulfil the above requirements. This thesis presents a method, called the "gray box", which is, partially, a mix of the black and white box concepts. The gray box method uses values for the model's parameters from different sources: firstly, some chemical/physical measurements as in the white box method; secondly, some of the physical tests/experiments used in the black box method; and thirdly, information provided by cell datasheets, books, papers, journals and scientific databases. As the practical part of this thesis, a prismatic cell, the EIG C20 with 20 Ah capacity, was selected as the sample cell and its electrochemical model was produced with the proposed method. Some of the model's parameters are measured and some others are estimated. Also, the capabilities of AutoLion, a specialized software package for lithium-ion battery modelling, were used to accelerate the modelling process. Finally, the physical tests were used as part of the references for calculating the accuracy of the produced model. The results show that the gray box method can produce a model with nearly no cost, in less than one week and with an error of around 30% for the HPPC tests and less than this for the OCV and voltage tests. The proposed method could largely fulfil the five mentioned requirements. These results were achieved even without using any physical test/experimental data for tuning the parameters, which is expected to reduce the error considerably. These are promising results for the idea of the gray box, which is in its nascent stages and needs time to develop and become useful for commercial purposes.
Style APA, Harvard, Vancouver, ISO itp.
34

Febbo, Marco. "Advanced 4DT flight guidance and control software system". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11239/.

Pełny tekst źródła
Streszczenie:
The work presented in this thesis has been part of a Cranfield University research project. The thesis aims to design a flight control law for large cargo aircraft using predictive control, which can ensure flight motion along the flight path exactly and on time. In particular, this work involves the development of a Boeing C-17 Globemaster III 6DOF model (used as a study case), using DATCOM and Matlab Simulink software. A predictive control algorithm has then been developed. The majority of the work is done in a Matlab/Simulink environment. Finally, the predictive control algorithm has been applied to the aircraft model and its performance, in tracking a given trajectory optimized through a 4DT Research Software, has been evaluated.
Style APA, Harvard, Vancouver, ISO itp.
35

Vera, Barrera Rodrigo Felipe. "Un modelo predictivo para la localización de usuarios móviles en escenarios bajo techo". Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/113512.

Pełny tekst źródła
Streszczenie:
Master of Science, specialisation in Computer Science (Magíster en Ciencias, Mención Computación)
Since the emergence of mobile computing, the need to know the location of resources and/or people has been a driving force in the development of new technologies and of solutions that employ this computing paradigm. In particular, real-time location systems are becoming increasingly important. Typically, such systems pursue goals oriented towards safety, optimisation and management of resource usage. A growing number of application areas take advantage of these technologies and incorporate them into their business plans, ranging from asset tracking inside closed premises to fleet control in transport companies. This work developed a predictive model for estimating the position of resources in indoor scenarios. The model was then implemented in a software application that runs on mobile devices. The application estimates the position of the local user as well as of the other users around them. Although the estimation error is still considerable (on the order of 4-5 metres), the predictive model meets the objective for which it was designed: that two or more users of the application can find each other face-to-face based on the information the application provides. The information needed to estimate the position of a resource is obtained by contrasting a model of the physical space, pre-loaded in the device's memory, against the wireless signals observed in real time. The environment in which the solution is to be deployed must have several WiFi access points that can be used as references. The application allows the WiFi signal decay model for the whole target area to be built quickly and with minimal information. The position estimate is computed jointly from the scanned WiFi networks and the information provided by each device's motion sensors. Information exchange with the other users is carried out through ad-hoc protocols implemented over a MANET formed by the users present on the premises. The implemented solution adapts easily to changes in the reference points of the premises and allows the same model to work on different devices with a slight configuration change. The quality of the estimation is proportional to the density of WiFi signals in the environment. In an environment of moderate density, the current version of the system achieves error margins acceptable for a human to find another person by visual inspection.
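The matching of observed WiFi signals against a pre-loaded signal model can be illustrated with a minimal fingerprinting sketch: the currently observed RSSI vector is compared with stored reference points and the nearest one in signal space is returned. The reference map, access point names and readings below are invented; the actual system also fuses motion-sensor data and a signal-decay model.

```python
# Minimal WiFi fingerprinting sketch: match the observed RSSI vector against
# pre-recorded reference points (nearest neighbour in signal space).
# Reference map, AP names and readings are invented example data.
import math

reference_map = {                      # position (x, y) -> RSSI per AP (dBm)
    (0.0, 0.0): {"AP1": -40, "AP2": -70, "AP3": -80},
    (5.0, 0.0): {"AP1": -55, "AP2": -60, "AP3": -75},
    (5.0, 5.0): {"AP1": -70, "AP2": -50, "AP3": -65},
    (0.0, 5.0): {"AP1": -60, "AP2": -65, "AP3": -55},
}

def estimate_position(observed):
    """Return the reference position whose fingerprint is closest to 'observed'."""
    def distance(fingerprint):
        common = set(fingerprint) & set(observed)
        return math.sqrt(sum((fingerprint[ap] - observed[ap]) ** 2 for ap in common))
    return min(reference_map, key=lambda pos: distance(reference_map[pos]))

print(estimate_position({"AP1": -58, "AP2": -62, "AP3": -72}))  # -> (5.0, 0.0)
```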
Style APA, Harvard, Vancouver, ISO itp.
36

Khan, Khalid. "The Evaluation of Well-known Effort Estimation Models based on Predictive Accuracy Indicators". Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4778.

Pełny tekst źródła
Streszczenie:
Accurate and reliable effort estimation is still one of the most challenging processes in software engineering. There have been a number of attempts to develop cost estimation models. However, the evaluation of the accuracy and reliability of those models has gained interest in the last decade. A model can be finely tuned to specific data, but the issue that remains is the selection of the most appropriate model. A model's predictive accuracy is determined from the differences between various accuracy measures; the model with the minimum relative error is considered the best fit. The difference in predictive accuracy needs to be statistically significant before a model can be declared the best fit. This practice evolved into model evaluation: a model's predictive accuracy indicators need to be statistically tested before deciding to use the model for estimation. The aim of this thesis is to statistically evaluate well-known effort estimation models according to their predictive accuracy indicators using two new approaches: bootstrap confidence intervals and permutation tests. In this thesis, the significance of the differences between various accuracy indicators was empirically tested on projects obtained from the International Software Benchmarking Standards Group (ISBSG) data set. We selected projects with Un-Adjusted Function Points (UFP) of quality A. Then, Analysis of Variance (ANOVA) and regression were used to form the Least Squares (LS) set, and Estimation by Analogy (EbA) was used to form the EbA set. Stepwise ANOVA was used to form the parametric model. The k-NN algorithm was employed to obtain analogous projects for effort estimation in EbA. It was found that the estimation reliability increased with statistical pre-processing of the data; moreover, the significance of the accuracy indicators was tested not only with basic statistics but also with the help of more complex inferential statistical methods. The decision to select the non-parametric methodology (EbA) for generating project estimates is thus not arbitrary but statistically supported.
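To make the bootstrap idea concrete, the sketch below builds a bootstrap confidence interval for the difference in mean magnitude of relative error (MMRE) between two hypothetical estimation models. The effort values are invented, not ISBSG data, and the permutation test used in the thesis is not reproduced here.

```python
# Sketch: bootstrap confidence interval for the difference in MMRE between
# two estimation models. Actual and predicted efforts below are invented.
import numpy as np

rng = np.random.default_rng(0)
actual   = np.array([120, 340, 90, 560, 210, 430, 150, 600, 275, 320])
pred_lsq = np.array([140, 300, 110, 500, 260, 400, 130, 650, 250, 360])   # LS model
pred_eba = np.array([115, 360, 95, 580, 230, 470, 160, 570, 290, 300])    # EbA model

def mmre(actual, predicted):
    return np.mean(np.abs(actual - predicted) / actual)

diffs = []
for _ in range(5000):                              # bootstrap resamples of projects
    idx = rng.integers(0, len(actual), len(actual))
    diffs.append(mmre(actual[idx], pred_lsq[idx]) - mmre(actual[idx], pred_eba[idx]))

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for MMRE(LS) - MMRE(EbA): [{lo:.3f}, {hi:.3f}]")
# If the interval excludes zero, the accuracy difference is judged significant.
```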
Style APA, Harvard, Vancouver, ISO itp.
37

Loconsole, Annabella. "Definition and validation of requirements management measures". Doctoral thesis, Umeå : Department of Computing Science, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1467.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Aragón, Cabrera Gustavo Alejandro Verfasser], Matthias [Akademischer Betreuer] Jarke i Antonello [Akademischer Betreuer] [Monti. "Extended model predictive control software framework for real-time local management of complex energy systems / Gustavo Alejandro Aragón Cabrera ; Matthias Jarke, Antonello Monti". Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1231542179/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

Kafka, Jan. "Analýza trhu operačních systémů". Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-198871.

Pełny tekst źródła
Streszczenie:
The subject of this work is operating systems: their significance, history, an analysis of the current situation and an attempt to predict the future. The first part introduces the basic concepts, the definition of an operating system and a brief history. The second part deals with the current situation on the market for operating systems, the main drivers of this sector and the business models used. The last part deals with the prediction of the situation on the market for operating systems, their future evolution, the probable evolution of their business models and an estimate of the near future from a business point of view. A study of newly available scientific publications about operating systems was also carried out. The contribution of this thesis is an evaluation of the role and future of operating systems and a prediction of the business perspective in this industry branch.
Style APA, Harvard, Vancouver, ISO itp.
40

Satin, Ricardo Francisco de Pierre. "Um estudo exploratório sobre o uso de diferentes algoritmos de classificação, de seleção de métricas, e de agrupamento na construção de modelos de predição cruzada de defeitos entre projetos". Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/2552.

Pełny tekst źródła
Streszczenie:
Predizer defeitos em projetos de software é uma tarefa complexa, especialmente para aqueles projetos que estão em fases iniciais do desenvolvimento por, frequentemente, disponibilizarem de poucos dados para que modelos de predição sejam criados. A utilização da predição cruzada de defeitos entre projetos é indicada em tal situação, pois permite reaproveitar dados de projetos similares. Este trabalho propõe um estudo exploratório sobre o uso de diferentes algoritmos de classificação, seleção de métricas, e de agrupamento na construção de um modelo de predição cruzada de defeitos entre projetos. Esse modelo foi construído com o uso de uma medida de desempenho, obtida com a aplicação de algoritmos de classificação, como forma de encontrar e agrupar projetos semelhantes. Para tanto, foi estudada a aplicação conjunta de 8 algoritmos de classificação, 6 de seleção de atributos, e um de agrupamento em um conjunto de dados com 1283 projetos, resultando na construção de 61584 diferentes modelos de predição. Os algoritmos de classificação e de seleção de atributos tiveram seus desempenhos avaliados por meio de diferentes testes estatísticos que mostraram que: o Naive Bayes foi o classificador de melhor desempenho, em comparação com os outros 7 algoritmos; o par de algoritmos de seleção de atributos que apresentou melhor desempenho foi o formado pelo avaliador de atributos CFS e método de busca Genetic Search, em comparação com outros 6 pares. Considerando o algoritmo de agrupamento, a presente proposta parece ser promissora, uma vez que os resultados obtidos mostram evidências de que as predições usando agrupamento foram melhores que as predições realizadas sem qualquer agrupamento por similaridade, além de mostrar a diminuição do custo de treino e teste durante o processo de predição.
Predicting defects in software projects is a complex task, especially for projects that are in the early stages of development, since they often provide little data for building prediction models. The use of cross-project defect prediction is indicated in such a situation, because it allows data from similar projects to be reused. This work proposes an exploratory study on the use of different classification, feature selection, and clustering algorithms to build cross-project defect prediction models. The model was built using a performance measure, obtained by applying classification algorithms, as a way of finding and grouping similar projects. To this end, the joint application of 8 classification algorithms, 6 feature selection methods, and one clustering algorithm was studied on a data set of 1283 projects, resulting in the construction of 61584 different prediction models. The performance of the classification and feature selection algorithms was evaluated through different statistical tests, which showed that Naive Bayes was the best-performing classifier, compared with the other 7 algorithms, and that the best-performing pair of feature selection algorithms was the CFS attribute evaluator combined with the Genetic Search method, compared with the other 6 pairs. Considering the clustering algorithm, the present proposal seems promising, since the results show evidence that predictions using clustering were better than predictions performed without any similarity grouping, besides showing a reduction in training and testing cost during the prediction process.
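A reduced sketch of the cluster-then-predict idea is given below: projects are grouped by their aggregated metric profiles, and a Naive Bayes model for a target project is trained only on the other projects of its cluster. The CFS + Genetic Search feature selection used in the study (WEKA components) is replaced here by a simple univariate filter, and the file and column names are assumptions.

```python
# Reduced sketch of cross-project defect prediction with project grouping.
# CFS + Genetic Search (WEKA) are replaced by a simple k-best filter here;
# file and column names are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

data = pd.read_csv("modules.csv")               # one row per module, "project" column
metric_cols = [c for c in data.columns if c not in ("project", "defective")]

# 1. Cluster projects on their mean metric profiles.
profiles = data.groupby("project")[metric_cols].mean()
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profiles)
cluster_of = dict(zip(profiles.index, clusters))

# 2. Train on the other projects of the target project's cluster, test on the target.
def predict_for(target_project):
    peers = [p for p, c in cluster_of.items()
             if c == cluster_of[target_project] and p != target_project]
    train = data[data["project"].isin(peers)]
    test = data[data["project"] == target_project]
    model = make_pipeline(SelectKBest(f_classif, k=8), GaussianNB())
    model.fit(train[metric_cols], train["defective"])
    return f1_score(test["defective"], model.predict(test[metric_cols]))

print(predict_for("project_42"))
```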
Style APA, Harvard, Vancouver, ISO itp.
41

Hamza, Salma. "Une approche pragmatique pour mesurer la qualité des applications à base de composants logiciels". Thesis, Lorient, 2014. http://www.theses.fr/2014LORIS356/document.

Pełny tekst źródła
Streszczenie:
Ces dernières années, de nombreuses entreprises ont introduit la technologie orientée composant dans leurs développements logiciels. Le paradigme composant, qui prône l’assemblage de briques logiciels autonomes et réutilisables, est en effet une proposition intéressante pour diminuer les coûts de développement et de maintenance tout en augmentant la qualité des applications. Dans ce paradigme, comme dans tous les autres, les architectes et les développeurs doivent pouvoir évaluer au plus tôt la qualité de ce qu’ils produisent, en particulier tout au long du processus de conception et de codage. Les métriques sur le code sont des outils indispensables pour ce faire. Elles permettent, dans une certaine mesure, de prédire la qualité « externe » d’un composant ou d’une architecture en cours de codage. Diverses propositions de métriques ont été faites dans la littérature spécifiquement pour le monde composant. Malheureusement, aucune des métriques proposées n’a fait l’objet d’une étude sérieuse quant à leur complétude, leur cohésion et surtout quant à leur aptitude à prédire la qualité externe des artefacts développés. Pire encore, l’absence de prise en charge de ces métriques par les outils d’analyse de code du marché rend impossible leur usage industriel. En l’état, la prédiction de manière quantitative et « a priori » de la qualité de leurs développements est impossible. Le risque est donc important d’une augmentation des coûts consécutive à la découverte tardive de défauts. Dans le cadre de cette thèse, je propose une réponse pragmatique à ce problème. Partant du constat qu’une grande partie des frameworks industriels reposent sur la technologie orientée objet, j’ai étudié la possibilité d’utiliser certaines des métriques de codes "classiques", non propres au monde composant, pour évaluer les applications à base de composants. Parmi les métriques existantes, j’ai identifié un sous-ensemble d’entre elles qui, en s’interprétant et en s’appliquant à certains niveaux de granularité, peuvent potentiellement donner des indications sur le respect par les développeurs et les architectes des grands principes de l’ingénierie logicielle, en particulier sur le couplage et la cohésion. Ces deux principes sont en effet à l’origine même du paradigme composant. Ce sous-ensemble devait être également susceptible de représenter toutes les facettes d’une application orientée composant : vue interne d’un composant, son interface et vue compositionnelle au travers l’architecture. Cette suite de métrique, identifiée à la main, a été ensuite appliquée sur 10 applications OSGi open- source afin de s’assurer, par une étude de leur distribution, qu’elle véhiculait effectivement pour le monde composant une information pertinente. J’ai ensuite construit des modèles prédictifs de propriétés qualité externes partant de ces métriques internes : réutilisation, défaillance, etc. J’ai décidé de construire des modèles qui permettent de prédire l’existence et la fréquence des défauts et les bugs. Pour ce faire, je me suis basée sur des données externes provenant de l’historique des modifications et des bugs d’un panel de 6 gros projets OSGi matures (avec une période de maintenance de plusieurs années). Plusieurs outils statistiques ont été mis en œuvre pour la construction des modèles, notamment l’analyse en composantes principales et la régression logistique multivariée. 
Cette étude a montré qu’il est possible de prévoir avec ces modèles 80% à 92% de composants fréquemment buggés avec des rappels allant de 89% à 98%, selon le projet évalué. Les modèles destinés à prévoir l’existence d’un défaut sont moins fiables que le premier type de modèle. Ce travail de thèse confirme ainsi l’intérêt « pratique » d’user de métriques communes et bien outillées pour mesurer au plus tôt la qualité des applications dans le monde composant
Over the past decade, many companies proceeded with the introduction of component-oriented software technology in their development environments. The component paradigm, which promotes the assembly of autonomous and reusable software bricks, is indeed an interesting proposal to reduce development and maintenance costs while improving application quality. In this paradigm, as in all others, architects and developers need to evaluate as early as possible the quality of what they produce, especially along the process of designing and coding. Code metrics are indispensable tools to do this. They provide, to a certain extent, a prediction of the "external" quality of a component or architecture being coded. Several metrics have been proposed in the literature specifically for the component world. Unfortunately, none of the proposed metrics has been the subject of a serious study regarding their completeness, their cohesion and, above all, their ability to predict the external quality of the developed artifacts. Even worse, the lack of support for these metrics in the code analysis tools on the market makes their industrial use impossible. As things stand, quantitative, a priori prediction of the quality of such developments is impossible, and the risk of increased costs due to the late discovery of defects is therefore high. In the context of this thesis, I propose a pragmatic solution to this problem. Based on the premise that many industrial frameworks rely on object-oriented technology, I studied the possibility of using some "conventional" code metrics, not specific to the component world, to evaluate component-based applications. Indeed, these metrics have the advantage of being well defined, well known, well tooled and, above all, of having been the subject of numerous empirical validations analysing their predictive power for imperative or object-oriented code. Among the existing metrics, I identified a subset which, when interpreted and applied at specific levels of granularity, can potentially provide guidance on developers' and architects' compliance with major software engineering principles, in particular coupling and cohesion. These two principles are in fact the very source of the component paradigm. This subset also has the ability to represent all facets of a component-oriented application: the internal view of a component, its interface, and the compositional view through the architecture. This suite of metrics, identified by hand, was then applied to 10 open-source OSGi applications in order to ensure, by studying their distribution, that it effectively conveys relevant information for the component world. I then built predictive models of external quality properties (reusability, failure, etc.) based on these internal metrics. The development of such models and the analysis of their power make it possible to empirically validate the relevance of the proposed metrics, and also to compare the "power" of these models with other models from the literature specific to the imperative and/or object-oriented world. I decided to build models that predict the existence and frequency of defects and bugs. To do this, I relied on external data from the change and bug-fix history of a panel of 6 large, mature OSGi projects (with a maintenance period of several years). Several statistical tools were used to build the models, notably principal component analysis and multivariate logistic regression.
This study showed that these models can predict 80% to 92% of frequently buggy components, with recall ranging from 89% to 98%, depending on the evaluated project. Models for predicting the existence of a defect are less reliable than the first type of model. This thesis thus confirms the practical value of using common, well-tooled metrics to measure application quality as early as possible in the component world.
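A compact sketch of the modelling pipeline described above (principal component analysis followed by multivariate logistic regression to flag frequently buggy components) is shown below; the metrics file and column names are assumptions, not the thesis data.

```python
# Sketch of the modelling step: PCA on component-level metrics followed by
# multivariate logistic regression to flag frequently buggy components.
# File and column names are hypothetical.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("osgi_components.csv")        # one row per OSGi component
X = df[["cbo", "lcom", "wmc", "n_services", "n_packages_exported"]]
y = df["frequently_buggy"]                     # 1 if many post-release fixes

model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),  # keep 95% of the variance
                      LogisticRegression(max_iter=1000))

pred = cross_val_predict(model, X, y, cv=10)
print("precision:", precision_score(y, pred))  # cf. the 80-92% reported above
print("recall   :", recall_score(y, pred))     # cf. the 89-98% reported above
```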
Style APA, Harvard, Vancouver, ISO itp.
42

Alomari, Mohammad H. "Engineering System Design for Automated Space Weather Forecast. Designing Automatic Software Systems for the Large-Scale Analysis of Solar Data, Knowledge Extraction and the Prediction of Solar Activities Using Machine Learning Techniques". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.

Pełny tekst źródła
Streszczenie:
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect the space weather or the near-Earth environment by the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs is helpful in understanding the possible cause and effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design in this work consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs; (2) applying machine learning algorithms to the associations' datasets; and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms are used to provide computerised learning rules and models that enable the system to provide automated prediction of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of machine learning algorithms is based on data sets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
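As a much simpler stand-in for the HMM analysis of McIntosh class evolution, the sketch below estimates a plain first-order Markov transition matrix from an observed class sequence and returns the distribution over the next class. The class sequence is invented, and a full HMM would additionally model hidden states.

```python
# Simplified stand-in for the HMM analysis above: a first-order Markov
# transition estimate between McIntosh classes. The sequence is invented;
# the thesis uses a Hidden Markov Model, which this sketch does not reproduce.
from collections import Counter, defaultdict

sequence = ["Bxo", "Bxo", "Cro", "Dro", "Dso", "Dso", "Eso", "Dso", "Cro", "Bxo"]

transitions = defaultdict(Counter)
for current, nxt in zip(sequence, sequence[1:]):
    transitions[current][nxt] += 1

def next_class_distribution(current):
    counts = transitions[current]
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

print(next_class_distribution("Dso"))   # e.g. {'Dso': 0.33, 'Eso': 0.33, 'Cro': 0.33}
```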
Style APA, Harvard, Vancouver, ISO itp.
43

Alomari, Mohammad Hani. "Engineering system design for automated space weather forecast : designing automatic software systems for the large-scale analysis of solar data, knowledge extraction and the prediction of solar activities using machine learning techniques". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.

Pełny tekst źródła
Streszczenie:
Coronal Mass Ejections (CMEs) and solar flares are energetic events taking place at the Sun that can affect the space weather or the near-Earth environment by the release of vast quantities of electromagnetic radiation and charged particles. Solar active regions are the areas where most flares and CMEs originate. Studying the associations among sunspot groups, flares, filaments, and CMEs is helpful in understanding the possible cause and effect relationships between these events and features. Forecasting space weather in a timely manner is important for protecting technological systems and human life on earth and in space. The research presented in this thesis introduces novel, fully computerised, machine learning-based decision rules and models that can be used within a system design for automated space weather forecasting. The system design in this work consists of three stages: (1) designing computer tools to find the associations among sunspot groups, flares, filaments, and CMEs (2) applying machine learning algorithms to the associations' datasets and (3) studying the evolution patterns of sunspot groups using time-series methods. Machine learning algorithms are used to provide computerised learning rules and models that enable the system to provide automated prediction of CMEs, flares, and evolution patterns of sunspot groups. These numerical rules are extracted from the characteristics, associations, and time-series analysis of the available historical solar data. The training of machine learning algorithms is based on data sets created by investigating the associations among sunspots, filaments, flares, and CMEs. Evolution patterns of sunspot areas and McIntosh classifications are analysed using a statistical machine learning method, namely the Hidden Markov Model (HMM).
Style APA, Harvard, Vancouver, ISO itp.
44

Wilkerson, Jaxon. "Handoff of Advanced Driver Assistance Systems (ADAS) using a Driver-in-the-Loop Simulator and Model Predictive Control (MPC)". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595262540712316.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Cedro, Carlos Costa. "USAR: um modelo preditivo para avaliação da acessibilidade em tecnologias assistivas baseadas em realidade aumentada". Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1372.

Pełny tekst źródła
Streszczenie:
Atualmente cerca de 15% da população mundial possui algum tipo de deficiência. Para estas pessoas o uso das tecnologias assistivas é essencial. A realidade aumentada surge como uma importante alternativa, para a criação de novas tecnologias assistivas, devido às suas inúmeras formas de rastreamento do participante, que, se combinadas, conseguem proporcionar novas possibilidades de interação. Entretanto, o uso da realidade aumentada, no contexto das tecnologias assistivas, não é uma panaceia, pois cada deficiência é única, assim como cada pessoa possui suas próprias particularidades. Portanto, é importante analisar a acessibilidade destas aplicações, para que o benefício do seu uso possa ser realmente assegurado. Esta dissertação propõe um modelo preditivo de avaliação, baseado no Design Universal e na norma ISO 9241-171. Este modelo é capaz de avaliar a acessibilidade de aplicações de realidade aumentada, quando usadas como tecnologias assistivas. O processo de avaliação consiste no preenchimento de questionários, compostos por questões claras e inteligíveis, para que possam atender à um público multidisciplinar, sem a exigência de qualquer conhecimento prévio em avaliação de acessibilidade. O produto da avaliação feita pelos questionários é um indicador de acessibilidade, que representa o grau de conformidade com os requisitos de acessibilidade. A aplicação dos questionários é sensível ao contexto, cada questionário inclui guias de utilização, que contém os requisitos mínimos do participante, para cada critério de avaliação, desta forma, é possível obter uma visão holística da avaliação, que pode ser customizada para cada participante ou generalizada para uma deficiência específica. O principal objetivo desta dissertação é propor um modelo preditivo, para a avaliação da acessibilidade, servindo aos educadores especiais, desenvolvedores e consumidores de tecnologias assistivas, como critério para a utilização ou não das aplicações de realidade aumentada.
Currently, about 15% of the world's population has some type of disability. For these people the use of assistive technologies is essential. Augmented reality emerges as an important alternative for the creation of new assistive technologies, due to its numerous forms of participant tracking which, if combined, can provide new possibilities for interaction. However, the use of augmented reality in the context of assistive technology is not a panacea, as each disability is unique, just as each person has their own peculiarities. Therefore, it is important to analyze the accessibility of these applications, to ensure that the benefit of their use is actually delivered. This work proposes a predictive evaluation model, based on Universal Design and the ISO 9241-171 standard. The model is able to evaluate the accessibility of augmented reality applications when they are used as assistive technologies. The evaluation process consists of completing questionnaires composed of clear and intelligible questions, so that they can serve a multidisciplinary audience without requiring any prior knowledge of accessibility evaluation. The product of the evaluation made through the questionnaires is a numerical accessibility indicator, which represents the degree of compliance with the accessibility requirements. The questionnaires are sensitive to context: each questionnaire includes guidelines containing the participant's minimum requirements for each evaluation criterion, so it is possible to obtain a holistic view of the evaluation, which can be customized for each participant or generalized to a specific disability. The main objective of this work is to propose a predictive model for the evaluation of accessibility, serving special educators, developers and assistive technology consumers as a criterion for deciding whether or not to use augmented reality applications.
Style APA, Harvard, Vancouver, ISO itp.
46

Murali, madhavan rathai Karthik. "Synthesis and real-time implementation of parameterized NMPC schemes for automotive semi-active suspension systems". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT052.

Pełny tekst źródła
Streszczenie:
Cette thèse traite de la synthèse et de la mise en œuvre en temps réel (RT) de schémas de contrôle prédictif non linéaire paramétré (pNMPC) pour les systèmes de suspension semi-active des automobiles. Le schéma pNMPC est basé sur une technique d'optimisation par simulation en boîte noire. Le point essentiel de la méthode est de paramétrer finement le profil d'entrée et de simuler le système pour chaque entrée paramétrée et d'obtenir la valeur approximative de l'objectif et de la violation des contraintes pour le problème pNMPC. Avec les résultats obtenus de la simulation, l'entrée admissible (si elle existe) ayant la valeur objective minimale ou, à défaut, la valeur de violation de contrainte la plus faible est sélectionnée et injectée dans le système et ceci est répété indéfiniment à chaque période de décision. La méthode a été validée expérimentalement sur dSPACE MicroAutoBoX II (MABXII) et les résultats montrent de bonnes performances de l'approche proposée. La méthode pNMPC a également été étendue à une méthode pNMPC parallélisée et la méthode proposée a été mise en œuvre pour le contrôle du système de suspension semi-active d'un demi-véhicule. Cette méthode a été mise en œuvre grâce à des unités de traitement graphique (GPU) qui servent de plate-forme modèle pour la mise en œuvre d'algorithmes parallèles par le biais de ses processeurs multi-cœurs. De plus, une version stochastique de la méthode pNMPC parallélisée est proposée sous le nom de schéma pNMPC à Scénario-Stochastique (SS-pNMPC). Cette méthode a été mise en œuvre et testée sur plusieurs cartes NVIDIA embarquées pour valider la faisabilité de la méthode proposée pour le contrôle du système de suspension semi-active d'un demi-véhicule. En général, les schémas pNMPC parallélisés offrent de bonnes performances et se prêtent bien à un large espace de paramétrage en entrée. Enfin, la thèse propose un outil logiciel appelé "pNMPC - A code generation software tool for implementation of derivative free pNMPC scheme for embedded control systems". L'outil logiciel de génération de code (S/W) a été programmé en C/C++ et propose également une interface avec MATLAB/Simulink. Le logiciel de génération de code a été testé pour divers exemples, tant en simulation que sur du matériel embarqué en temps réel (MABXII), et les résultats semblent prometteurs et viables pour la mise en œuvre de la RT pour des applications réelles. L'outil de génération de code S/W comprend également une fonction de génération de code GPU pour une mise en œuvre parallèle. Pour conclure, la thèse a été menée dans le cadre du projet EMPHYSIS et les objectifs du projet s'alignent sur cette thèse et les méthodes pNMPC proposées sont compatibles avec la norme eFMI
This thesis discusses the synthesis and real-time (RT) implementation of parameterized Nonlinear Model Predictive Control (pNMPC) schemes for automotive semi-active suspension systems. The pNMPC scheme uses a black-box simulation-based optimization method. The crux of the method is to finitely parameterize the input profile, simulate the system for each parameterized input, and obtain the approximate objective and constraint violation value for the pNMPC problem. With the results obtained from the simulation, the input with the minimum objective value or the least constraint violation value is selected and injected into the system, and this is repeated in a receding-horizon fashion. The method was experimentally validated on dSPACE MicroAutoBoX II (MABXII) and the results display good performance of the proposed approach. The pNMPC method was also extended to a parallelized pNMPC scheme, and the proposed method was implemented for control of the semi-active suspension system of a half-car vehicle. This method was implemented by virtue of Graphics Processing Units (GPUs), which serve as an ideal platform for the implementation of parallel algorithms through their multi-core processors. Also, a stochastic version of the parallelized pNMPC method is proposed, termed the Scenario-Stochastic pNMPC (SS-pNMPC) scheme, and it was implemented and tested on several NVIDIA embedded boards to verify and validate the RT feasibility of the proposed method for control of the semi-active suspension system of a half-car vehicle. In general, the parallelized pNMPC schemes provide good performance and also fare well for large input parameterization spaces. Finally, the thesis proposes a software tool termed "pNMPC – A code generation software tool for implementation of derivative free pNMPC scheme for embedded control systems". The code generation software (S/W) tool was programmed in C/C++ and also provides an interface to MATLAB/Simulink. The S/W tool was tested on a variety of examples, both in simulation and on RT embedded hardware (MABXII), and the results look promising and viable for RT implementation in real-world applications. The code generation S/W tool also includes a GPU code generation feature for parallel implementation. To conclude, the thesis was conducted under the purview of the EMPHYSIS project; the goals of the project align with this thesis, and the proposed pNMPC methods are compatible with the eFMI standard.
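The simulate-each-candidate-and-pick-the-best selection step can be illustrated with a toy example. The two-state dynamics, stage cost and travel constraint below are invented stand-ins, not the thesis's semi-active suspension model; only the receding-horizon selection logic follows the description above.

```python
# Toy illustration of the simulation-based pNMPC selection step: simulate
# each finitely parameterized input candidate over the horizon, discard
# candidates that violate the constraint, and apply the best admissible one.
# The 2-state dynamics, cost and constraint are invented stand-ins.
import numpy as np

def simulate(x0, u, horizon=20, dt=0.01):
    """Roll out the toy dynamics for a constant candidate input u."""
    x = np.array(x0, dtype=float)          # x = [position, velocity]
    cost, worst_violation = 0.0, 0.0
    for _ in range(horizon):
        acc = -50.0 * x[0] - u * x[1]      # spring + controllable damping u
        x = x + dt * np.array([x[1], acc])
        cost += x[0] ** 2 + 0.1 * acc ** 2 # comfort-style stage cost
        worst_violation = max(worst_violation, abs(x[0]) - 0.05)  # travel limit
    return cost, worst_violation

def pnmpc_step(x0, candidates=np.linspace(0.0, 40.0, 9)):
    results = [(u, *simulate(x0, u)) for u in candidates]
    feasible = [(u, c) for u, c, v in results if v <= 0.0]
    if feasible:                                 # best admissible candidate
        return min(feasible, key=lambda t: t[1])[0]
    return min(results, key=lambda t: t[2])[0]   # otherwise the least-violating one

print(pnmpc_step([0.03, 0.0]))                   # damping level applied this period
```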
Style APA, Harvard, Vancouver, ISO itp.
47

Zhang, Hong. "Software stability assessment using multiple prediction models". Thèse, 2003. http://hdl.handle.net/1866/14513.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
48

Luo, Yan. "Statistical defect prediction models for software quality assurance". Thesis, 2007. http://spectrum.library.concordia.ca/975638/1/MR34446.pdf.

Pełny tekst źródła
Streszczenie:
Software defects entail a highly significant cost penalty in lost productivity and post-release maintenance. Early defect prevention and removal techniques can substantially enhance the profit realized on software products. The motivation for software quality improvement is most often expressed in terms of increased customer satisfaction with higher product quality or, more generally, as a need to position SAP Inc. as a leader in quality software development. Thus, knowledge about how many defects to expect in a software product at any given stage during its development process is a very valuable asset. The great challenge, however, is to devise efficient and reliable prediction models for software defects. The first problem addressed in this thesis is software reliability growth modeling. We introduce an anisotropic Laplace test statistic that takes into account not only the activity in the system but also the proportion of reliability growth within the model. The major part of this thesis is devoted to statistical models that we have developed to predict software defects. We present a software defect prediction model using operating characteristic curves. The main idea behind our proposed technique is to use geometric insight to help construct an efficient prediction method that reliably predicts the number of failures at any given stage during the software development process. Our predictive approach uses the number of detected faults in the testing phase. Data from actual SAP projects is used to illustrate the much improved performance of the proposed method in comparison with existing prediction approaches.
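For reference, the classical (isotropic) Laplace trend test on which the thesis builds can be computed as in the sketch below; a clearly negative value suggests inter-failure times are lengthening, i.e. reliability growth. The anisotropic variant introduced in the thesis is not reproduced, and the failure times are invented.

```python
# Classical Laplace trend test for reliability growth. The anisotropic
# variant from the thesis is not reproduced here; failure times are invented.
import math

def laplace_factor(failure_times, T):
    """failure_times: cumulative times of observed failures in (0, T]."""
    n = len(failure_times)
    mean_time = sum(failure_times) / n
    return (mean_time - T / 2.0) / (T / math.sqrt(12.0 * n))

# Failures bunch up early and thin out later -> reliability growth expected.
times = [5, 9, 14, 20, 28, 39, 55, 78, 110, 160]
u = laplace_factor(times, T=200.0)
print(f"Laplace factor: {u:.2f}")   # below about -1.96 => significant growth
```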
Style APA, Harvard, Vancouver, ISO itp.
49

Coley, Terry Ronald. "Prediction of scanning tunneling microscope images by computational quantum chemistry: chemical models and software design". Thesis, 1993. https://thesis.library.caltech.edu/5310/1/Coley_tr_1993.pdf.

Pełny tekst źródła
Streszczenie:
We have created chemical models for predicting and interpreting STM images of several specific systems. Detailed studies are made of transition metal dichalcogenides (MoS_2 and MoTe_2), Xe on Ni (110), C_3H_4 on Ni (110) and n-butyl benzene on a graphite model (C_(42)O_6H_(12)). In the case of MoS_2 we study the ambiguity in the STM images regarding the assignment of peaks to the subsurface metal or the surface chalcogenide. In the Ni models we study STM imaging mechanisms for cases where the adsorbate states lie far above and below the metal Fermi level. The large n-butyl benzene on graphite system models a case where adsorbate states can play a direct role in the imaging. Results from the cluster studies are related to various STM imaging modes, including constant current mode, constant height mode, and barrier height imaging. Two new procedures are developed to aid in the computational prediction of STM images. First, we implement an algorithm for computing Bardeen-type tunneling matrix elements from ab initio wave functions in Gaussian basis sets. Second, we show how to obtain state densities as a function of energy for bulk substrate/adsorbate systems using only Fock matrix elements from cluster calculations. Initial results are presented for a linear chain of Ni atoms with a perturbing Xe atom. A software environment for computational chemistry developed in the course of performing these calculations is presented. Tools for creating computational servers to perform chemistry calculations are described. Embedded in each chemistry server is a public domain control language created by J. Ousterhout at the University of California, Berkeley. This allows the development of a variety of clients for controlling the servers using a common language. Clients can be simple text "scripts" that organize a calculation, graphical interfaces, or control streams from other programs. All software entities are designed in an object oriented fashion discussed in the text.
Style APA, Harvard, Vancouver, ISO itp.
50

Huang, Hsiu-Min Chang 1958. "Training experience satisfaction prediction based on trainees' general information". Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-08-1656.

Pełny tekst źródła
Streszczenie:
Training is a powerful and necessary method to equip human resources with tools to keep their organizations competitive in the markets. Typically, at the end of a class, trainees are asked to give their feelings about, or satisfaction with, the training. Although there are various reasons for conducting training evaluations, the common theme is the need to continuously improve a training program in the future. Among training evaluation methods, post-training surveys or questionnaires are the most commonly used way to get trainees' reactions to the training program, and "the forms will tell you to what extent you've been successful" (Kirkpatrick 2006). A higher satisfaction score means more trainees were satisfied with the training. A total of 40 prediction models, grouped into 10-GIQs prediction models and 6-GIQs prediction models, were built in this work to predict the total training satisfaction based on trainees' general information, which included a trainee's desire to take training, a trainee's attitude in training class and other information related to the trainee's work environment and other characteristics. The best models selected from the 10-GIQs and 6-GIQs prediction models performed the prediction work with a prediction quality of PRED(0.15) >= 99% and PRED(0.15) >= 98%, respectively. An interesting observation discovered in this work is that training satisfaction could be predicted based on trainees' information that was not related to any training experience at all. The dominant factors in training satisfaction were the trainee's attitude in training class and the trainee's desire to take the training, which were found in the 10-GIQs prediction models and 6-GIQs prediction models, respectively.
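The PRED(0.15) figure used above is the share of predictions whose magnitude of relative error is at most 15%. A small sketch of that computation is given below; the satisfaction scores are invented example data.

```python
# PRED(l): fraction of predictions whose magnitude of relative error is
# no greater than l. The scores below are invented example data.
def pred(actual, predicted, level=0.15):
    hits = sum(abs(a - p) / a <= level for a, p in zip(actual, predicted))
    return hits / len(actual)

actual    = [4.2, 3.8, 4.6, 3.1, 4.9, 2.8, 4.0, 3.5]   # observed satisfaction
predicted = [4.0, 3.9, 4.4, 3.4, 4.7, 2.9, 4.1, 3.6]   # model output

print(f"PRED(0.15) = {pred(actual, predicted):.0%}")
```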
Style APA, Harvard, Vancouver, ISO itp.