Dissertations / Theses on the topic 'SOFTWARE PREDICTION MODELS'
Bowes, David Hutchinson. "Factors affecting the performance of trainable models for software defect prediction." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/10978.
Askari, Mina. "Information Theoretic Evaluation of Change Prediction Models for Large-Scale Software." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1139.
In this thesis, we first analyze the information generated during the development process, which can be obtained through mining the software repositories. We observe that the change data follows a Zipf distribution and exhibits self-similarity. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. One purpose of creating these models is to rank the files of the software that are most susceptible to having faults.
The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events (i.e., changes or bugs) that occur in each file and normalizes the counts to compute a probability distribution. The second model is Reflexive Exponential Decay (RED), in which we postulate that the predictive rate of modification of a file is incremented by any modification to that file and decays exponentially; a new bug occurring in that file adds a new exponential effect on top of the first. The third model is called RED Co-Changes (REDCC). With each modification to a given file, the REDCC model not only increments that file's predictive rate, but also increments the rates of other files related to the given file through previous co-changes.
We then present an information-theoretic approach to evaluate the performance of different prediction models. In this approach, the closeness of model distribution to the actual unknown probability distribution of the system is measured using cross entropy. We evaluate our prediction models empirically using the proposed information-theoretic approach for six large open source systems. Based on this evaluation, we observe that of our three prediction models, the REDCC model predicts the distribution that is closest to the actual distribution for all the studied systems.
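The RED update and the cross-entropy comparison summarized in this abstract can be illustrated as follows (a minimal sketch, not the thesis's code; the file names, half-life, and event times are hypothetical):

```python
import math

def red_rates(events, now, half_life=30.0):
    """Reflexive Exponential Decay: each past change to a file adds an
    exponentially decaying contribution to that file's predictive rate."""
    decay = math.log(2) / half_life
    rates = {}
    for fname, t in events:  # (file, time of change)
        rates[fname] = rates.get(fname, 0.0) + math.exp(-decay * (now - t))
    total = sum(rates.values())
    return {f: r / total for f, r in rates.items()}  # normalize to a distribution

def cross_entropy(model, actual):
    """Lower cross-entropy means the model distribution is closer to the
    empirical distribution of observed changes."""
    return -sum(p * math.log(model[f]) for f, p in actual.items() if p > 0)

events = [("a.c", 0), ("a.c", 90), ("b.c", 50), ("b.c", 95), ("b.c", 99)]
model = red_rates(events, now=100)
actual = {"a.c": 0.4, "b.c": 0.6}
print(round(cross_entropy(model, actual), 3))
```

Ranking files by the resulting probabilities gives the fault-susceptibility ordering the abstract mentions.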
Tran, Qui Can Cuong. "Empirical evaluation of defect identification indicators and defect prediction models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2553.
Liu, Qin. "Optimal utilization of historical data sets for the construction of software cost prediction models." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/2129/.
Brosig, Fabian [Verfasser], and S. [Akademischer Betreuer] Kounev. "Architecture-Level Software Performance Models for Online Performance Prediction / Fabian Maria Konrad Brosig. Betreuer: S. Kounev." Karlsruhe : KIT-Bibliothek, 2014. http://d-nb.info/105980316X/34.
Chun, Zhang Jing. "Trigonometric polynomial high order neural network group models for financial data simulation & prediction /." [Campbelltown, N.S.W.] : The author, 1998. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030721.152829/index.html.
McDonald, Simon Francis. "Better clinical decisions for less effort : building prediction software models to improve anti-coagulation care and prevent thrombosis and strokes." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539665.
Hall, Otto. "Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214791.
This study investigates whether Gaussian Process Regression can be applied to estimate buffer queue times in large-scale data processing systems. It further explores whether data stream rates can be generalized to a small subset of the outcome space. With the goal of providing a basis for dynamic software optimization, a promising starting point for continued research is introduced. The study targets Direct Market Access systems for trading on financial markets, which process enormous amounts of market data daily. Owing to certain constraints, a naive approach is adopted and waiting times are modeled as a function of data throughput alone over eight small historical time intervals. Training and test datasets are derived from raw market data, and pruning techniques are used to shrink the datasets by an approximate factor of 0.0005 in order to achieve computational feasibility. Four different implementations of Gaussian Process Regression are then considered. The resulting algorithms perform well on the pruned datasets, with a mean R² statistic of 0.8399 across six test datasets, each of roughly the same size as the training dataset. Tests on unpruned datasets indicate some shortcomings of the pruning, where input vectors corresponding to low latencies are associated with lower accuracy. It is concluded that, depending on the application, these shortcomings may render the model unusable. For the purposes of this study, however, it is found that latencies can indeed be modeled by regression algorithms. Finally, methods for improvement are discussed with respect to both pruning and Gaussian Process Regression, opening up promising avenues for further research.
Vlad, Iulian Teodor. "Mathematical Methods to Predict the Dynamic Shape Evolution of Cancer Growth based on Spatio-Temporal Bayesian and Geometrical Models." Doctoral thesis, Universitat Jaume I, 2016. http://hdl.handle.net/10803/670303.
The aim of this research is to observe tumor dynamics and to develop and implement new methods and algorithms for predicting tumor growth. We want to offer some tools to help physicians understand and treat this disease. By using a prediction method and comparing it with the actual evolution of a tumor, a physician can verify whether the prescribed treatment has the desired effect and, accordingly, decide whether surgical intervention is necessary. The plan of the thesis is as follows. In the first chapter we briefly recall some properties and classifications of point processes, with some spatio-temporal examples. Chapter 2 gives a brief overview of the theory of Lévy bases and integration with respect to such bases; we recall standard results on spatial Cox processes, and finally we propose different types of growth models and a new algorithm, the Cobweb, which is presented and developed on the basis of the proposed methodology. Chapters 3, 4 and 5 are devoted to presenting new prediction methods.
SARCIA', SALVATORE ALESSANDRO. "An Approach to improving parametric estimation models in the case of violation of assumptions based upon risk analysis." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1048.
Wiese, Igor Scaliante. "Predição de mudanças conjuntas de artefatos de software com base em informações contextuais." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-02122016-140016/.
Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models. More recently, hybrid approaches relying on historical information and textual analysis have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. The hypothesis of this thesis is that contextual information of software changes collected from issues, developers' communication, and commit metadata describe the circumstances and conditions under which a co-change occurs, and that this is useful to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models improving the overall accuracy, especially decreasing the amount of false recommendations. We built predictive models specific for each pair of files using contextual information and the Random Forest machine learning algorithm. The approach was evaluated in 129 versions of 10 open source projects from the Apache Software Foundation. We compared our approach to a baseline model based on association rules, which is often used in the literature. We evaluated the performance of the prediction models, investigating the influence of data aggregation to build training and test sets, as well as the identification of the most relevant contextual information. The results indicate that models based on contextual information can correctly predict 88% of co-change instances, against 19% achieved by the association rules model. This indicates that models based on contextual information can be 3 times more accurate. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary amount of contextual information collected from more than one version.
The most important pieces of contextual information to build the prediction models were: number of lines of code added or modified, number of lines of code removed, code churn, number of words in the discussion and description of a task, number of comments, and role of developers in the discussion (measured by the closeness value obtained from the communication social network). We asked project developers about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results can help developers who are new to the project, since these developers have no knowledge of the architecture and are usually not familiar with the artifacts' history. Thus, our results indicate that prediction models based on contextual information are useful for supporting developers during maintenance and evolution activities.
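The kind of contextual feature vector listed in this abstract can be sketched as below (a hypothetical illustration; the field names and values are invented, and in the thesis such features feed a Random Forest classifier built per file pair):

```python
# Sketch of assembling per-change contextual features for co-change prediction
# (feature names follow the abstract; the commit record is hypothetical).
def pair_features(commit):
    """Contextual information describing the circumstances of a change."""
    return {
        "loc_added": commit["added"],
        "loc_removed": commit["removed"],
        "code_churn": commit["added"] + commit["removed"],
        "discussion_words": len(commit["discussion"].split()),
        "n_comments": commit["n_comments"],
    }

commit = {"added": 12, "removed": 3, "discussion": "fix race in queue flush",
          "n_comments": 4}
x = pair_features(commit)
print(x["code_churn"])  # churn = added + removed
```

One such vector per observed change, labeled by whether the paired file actually co-changed, would form the training set for the per-pair classifier.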
Rodrigues, Genaina Nunes. "A model driven approach for software reliability prediction." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1446004/.
Ghose, Susmita. "Analysis of errors in software reliability prediction systems and application of model uncertainty theory to provide better predictions." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3781.
Full textThesis research directed by: Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Vasudev, R. Sashin, and Ashok Reddy Vanga. "Accuracy of Software Reliability Prediction from Different Approaches." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1298.
Full textsvra06@student.bth.se
Abdel-Ghaly, A. A. "Analysis of predictive quality of software reliability models." Thesis, City University London, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370836.
Full textDennison, Thomas E. "Fitting and prediction uncertainty for a software reliability model." Thesis, Monterey, California. Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23678.
Full textBowring, James Frederick. "Modeling and Predicting Software Behaviors." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/19754.
Full textCahill, Jaspar. "Machine learning techniques to improve software quality." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41730/1/Jaspar_Cahill_Thesis.pdf.
Full textVan, Koten Chikako, and n/a. "Bayesian statistical models for predicting software effort using small datasets." University of Otago. Department of Information Science, 2007. http://adt.otago.ac.nz./public/adt-NZDU20071009.120134.
Full textVoorhees, David P. "Predicting software Size and Development Effort: Models Based on Stepwise Refinement." NSUWorks, 2005. http://nsuworks.nova.edu/gscis_etd/903.
Full textYun, Seok Jun. "Productivity prediction model based on Bayesian analysis and productivity console." Texas A&M University, 2003. http://hdl.handle.net/1969.1/2305.
Full textGhibellini, Alessandro. "Trend prediction in financial time series: a model and a software framework." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24708/.
Full textFahmi, Mazen. "Evaluating count models for predicting post-release faults in object-oriented software." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=31228.
Full textWang, Yin-Han. "Model and software development for predicting fish growth in trout raceways." Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4751.
Full textTitle from document title page. Document formatted into pages; contains xii, 105 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 47).
Durán, Alcaide Ángel. "Development of high-performance algorithms for a new generation of versatile molecular descriptors. The Pentacle software." Doctoral thesis, Universitat Pompeu Fabra, 2010. http://hdl.handle.net/10803/7201.
The work presented in this thesis has focused on the development of high-performance algorithms for obtaining a new generation of molecular descriptors with numerous advantages over their predecessors, suitable for various applications in the area of drug design, and on their implementation in a commercial-quality scientific program (Pentacle). Initially, a new algorithm for the discretization of molecular interaction fields (AMANDA) was developed, which efficiently extracts the regions of greatest interest. This algorithm was incorporated into a new generation of alignment-independent molecular descriptors, called GRIND-2. The speed and efficiency of the new algorithm made it possible to apply these descriptors in virtual screening. Finally, a new alignment-independent encoding algorithm (CLACC) was developed, which produces quantitative structure-activity relationship models with better predictive ability that are much easier to interpret than those obtained with other methods.
Adekile, Olusegun. "Object-oriented software development effort prediction using design patterns from object interaction analysis." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2329.
Full textMudalige, Gihan Ravideva. "Predictive analysis and optimisation of pipelined wavefront applications using reusable analytic models." Thesis, University of Warwick, 2009. http://wrap.warwick.ac.uk/3773/.
Full textReichert, Thomas. "Development of 3D lattice models for predicting nonlinear timber joint behaviour." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2827.
Full textNdenga, Malanga Kennedy. "Predicting post-release software faults in open source software as a means of measuring intrinsic software product quality." Electronic Thesis or Diss., Paris 8, 2017. http://www.theses.fr/2017PA080099.
Full textFaulty software have expensive consequences. To mitigate these consequences, software developers have to identify and fix faulty software components before releasing their products. Similarly, users have to gauge the delivered quality of software before adopting it. However, the abstract nature and multiple dimensions of software quality impede organizations from measuring software quality. Software quality metrics can be used as proxies of software quality. There is need for a software process metric that can guarantee consistent superior fault prediction performances across different contexts. This research sought to determine a predictor for software faults that exhibits the best prediction performance, requires least effort to detect software faults, and has a minimum cost of misclassifying components. It also investigated the effect of combining predictors on performance of software fault prediction models. Experimental data was derived from four OSS projects. Logistic Regression was used to predict bug status while Linear Regression was used to predict number of bugs per file. Models built with Change Burst metrics registered overall better performance relative to those built with Change, Code Churn, Developer Networks and Source Code software metrics. Change Burst metrics recorded the highest values for numerical performance measures, exhibited the highest fault detection probabilities and had the least cost of mis-classification of components. The study found out that Change Burst metrics could effectively predict software faults
Bürger, Adrian [Verfasser], and Moritz [Akademischer Betreuer] Diehl. "Nonlinear mixed-integer model predictive control of renewable energy systems : methods, software, and experiments." Freiburg : Universität, 2020. http://d-nb.info/1225682150/34.
Full textPeker, Serhat. "A Novel User Activity Prediction Model For Context Aware Computing Systems." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613662/index.pdf.
… hence, they are aware of the users' context and use that information to deliver personalized recommendations about everyday tasks. In this manner, predicting a user's next activity preferences with high accuracy improves the personalized service quality of context-aware recommender systems and naturally provides user satisfaction. Predicting people's activities is useful, yet studies on this issue in ubiquitous environments are considerably insufficient. Thus, this thesis proposes an activity prediction model to forecast a user's next activity preference using the user's past preferences in certain contexts and the user's current context in a ubiquitous environment. The proposed model presents a new approach to activity prediction by taking advantage of ontology. A prototype application was implemented to demonstrate the applicability of the proposed model, and the outputs obtained from a sample case on this application revealed that the proposed model can reasonably predict the next activities of the users.
Puerto, Valencia J. (Jose). "Predictive model creation approach using layered subsystems quantified data collection from LTE L2 software system." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201907192705.
Full textDareini, Ali. "Prediction and analysis of model’s parameters of Li-ion battery cells." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-11799.
Full textFebbo, Marco. "Advanced 4DT flight guidance and control software system." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11239/.
Full textVera, Barrera Rodrigo Felipe. "Un modelo predictivo para la localización de usuarios móviles en escenarios bajo techo." Tesis, Universidad de Chile, 2012. http://www.repositorio.uchile.cl/handle/2250/113512.
Full textA partir del surgimiento de la computación móvil, la necesidad de conocer la ubicación de recursos y/o personas ha sido imperante en el desarrollo de nuevas tecnologías y de soluciones que emplean este paradigma de computación. En particular, los sistemas de localización en tiempo real cobran cada día más importancia. Típicamente, este tipo de sistemas persiguen objetivos que están orientados a la seguridad, optimización y administración del uso de los recursos. Una gran cantidad de áreas de aplicación aprovechan cada vez más las ventajas de estas tecnologías y las incorporan en su plan de negocios. Estas aplicaciones van desde el seguimiento de activos dentro de un recinto cerrado, hasta el control de flota en empresas de transporte. El presente trabajo desarrolló un modelo predictivo para la estimación de la posición de los recursos en escenarios cerrados (indoor). Este modelo fue luego implementado a través en una aplicación de software que funciona en dispositivos móviles. La aplicación permite estimar la posición tanto del usuario local como de otros usuarios que están alrededor de él. Aunque el margen de error de la estimación es aún importante (del orden de 4-5 metros), el modelo predictivo cumple con el objetivo para el cual fue diseñado. Ese objetivo es que dos o más usuarios de la aplicación puedan encontrarse entre sí cara-a-cara, en base a la información entregada por la aplicación. La información necesaria para realizar la estimación de la posición de un recurso se obtiene de contrastar un modelo del espacio físico pre-cargado en la memoria del dispositivo, contra las señales inalámbricas observadas en tiempo-real. Se requiere que el entorno en el cual se desea implantar esta solución cuente con distintos puntos de accesos WiFi, los cuales puedan ser usados como referencia. La aplicación desarrollada permite construir de manera expedita y con la mínima información el modelo del decaimiento de las señales WiFi para toda la zona objetivo. 
Position estimation is performed jointly using the scanned WiFi networks and the information provided by each device's motion sensors. Information exchange with the other users takes place through ad-hoc protocols implemented over a MANET formed by the users present on the premises. The implemented solution adapts easily to changes in the premises' reference points and allows the same model to work on different devices with a slight configuration change. The quality of the estimation is proportional to the density of WiFi signals in the environment. In an environment of moderate density, the current version of the system achieves error margins acceptable for a human to find another person by visual inspection.
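The fingerprinting idea in this abstract (match live WiFi signal strengths against a preloaded model of the space) can be sketched as follows (a toy illustration; the access-point names, calibration positions, and dBm values are hypothetical):

```python
# Minimal fingerprinting sketch: choose the calibration point whose recorded
# WiFi signal strengths (dBm per access point) best match the live scan.
FINGERPRINTS = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70},
    (5.0, 0.0): {"ap1": -55, "ap2": -60},
    (5.0, 5.0): {"ap1": -75, "ap2": -45},
}

def estimate_position(scan):
    """Nearest fingerprint in signal space (mean squared dBm difference)."""
    def dist(ref):
        common = set(scan) & set(ref)
        return sum((scan[ap] - ref[ap]) ** 2 for ap in common) / max(len(common), 1)
    return min(FINGERPRINTS, key=lambda pos: dist(FINGERPRINTS[pos]))

pos = estimate_position({"ap1": -57, "ap2": -58})
print(pos)
```

The thesis additionally fuses motion-sensor data and shares estimates over a MANET; this sketch covers only the signal-matching step.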
Khan, Khalid. "The Evaluation of Well-known Effort Estimation Models based on Predictive Accuracy Indicators." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4778.
Full textLoconsole, Annabella. "Definition and validation of requirements management measures." Doctoral thesis, Umeå : Department of Computing Science, Umeå Univ, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1467.
Full textAragón, Cabrera Gustavo Alejandro Verfasser], Matthias [Akademischer Betreuer] Jarke, and Antonello [Akademischer Betreuer] [Monti. "Extended model predictive control software framework for real-time local management of complex energy systems / Gustavo Alejandro Aragón Cabrera ; Matthias Jarke, Antonello Monti." Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1231542179/34.
Full textKafka, Jan. "Analýza trhu operačních systémů." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-198871.
Full textSatin, Ricardo Francisco de Pierre. "Um estudo exploratório sobre o uso de diferentes algoritmos de classificação, de seleção de métricas, e de agrupamento na construção de modelos de predição cruzada de defeitos entre projetos." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/2552.
Full textTo predict defects in software projects is a complex task, especially for those projects that are in early stages of development by, often, providing few data for prediction models. The use of cross-project defect prediction is indicated in such a situation because it allows reuse data of similar projects. This work proposes an exploratory study on the use of different classification algorithms, of selection metrics, and grouping to build cross-project defect predictions models. This model was built using a performance measure, obtained by applying classification algorithms aim to find and group similar projects. Therefore, it was studied the application of 8 classification algorithms, 6 feature selection, and a cluster in a data set with 1283 projects, resulting in the construction of 61584 different prediction models. The classification algorithms and feature selection had their performance evaluated through different statistical tests showed that: the Naive Bayes was the best performance classifier, as compared with other 7 algorithms; the pair of feature selection algorithms that performed better was formed by CFS attribute evaluator and search method Genetic Search, compared with 6 other pairs. Considering the clustering algorithm, this proposal seems to be promising, since the results shows evidence that the predictions were best grouping using the predictions performed without any similarity clustering, and shows the decrease in training cost and testing during the prediction process.
Hamza, Salma. "Une approche pragmatique pour mesurer la qualité des applications à base de composants logiciels." Thesis, Lorient, 2014. http://www.theses.fr/2014LORIS356/document.
Full textOver the past decade, many companies proceeded with the introduction of component-oriented software technology in their development environments. The component paradigm that promotes the assembly of autonomous and reusable software bricks is indeed an interesting proposal to reduce development costs and maintenance while improving application quality. In this paradigm, as in all others, architects and developers need to evaluate as soon as possible the quality of what they produce, especially along the process of designing and coding. The code metrics are indispensable tools to do this. They provide, to a certain extent, the prediction of the quality of « external » component or architecture being encoded. Several proposals for metrics have been made in the literature especially for the component world. Unfortunately, none of the proposed metrics have been a serious study regarding their completeness, cohesion and especially for their ability to predict the external quality of developed artifacts. Even worse, the lack of support for these metrics with the code analysis tools in the market makes it impossible to be used in the industry. In this state, the prediction in a quantitative way and « a priori » the quality of their developments is impossible. The risk is therefore high for obtaining higher costs as a consequence of the late discovery of defects. In the context of this thesis, I propose a pragmatic solution to the problem. Based on the premise that much of the industrial frameworks are based on object-oriented technology, I have studied the possibility of using some « conventional » code metrics unpopular to component world, to evaluate component-based applications. Indeed, these metrics have the advantage of being well defined, known, equipped and especially to have been the subject of numerous empirical validations analyzing the predictive power for imperatives or objects codes. 
Among the existing metrics, I identified a subset which, when interpreted and applied at specific levels of granularity, can potentially provide guidance on how well developers and architects comply with major principles of software engineering, particularly coupling and cohesion. These two principles are in fact the very source of the component paradigm. This subset can represent all aspects of a component-oriented application: the internal view of a component, its interface, and the compositional view through the architecture. This suite of metrics, identified by hand, was then applied to 10 open-source OSGi applications in order to ensure, by studying their distributions, that they effectively convey relevant information in the component world. I then built predictive models of external quality properties (reusability, failure proneness, etc.) based on these internal metrics. Developing such models and analyzing their power is the only way to empirically validate the interest of the proposed metrics; it also makes it possible to compare the power of these models with models from the literature specific to the imperative and/or object-oriented world. I decided to build models that predict the existence and frequency of defects and bugs. To do this, I relied on external data from the change and fix history of a panel of 6 large, mature OSGi projects (with maintenance periods of several years). Several statistical tools were used to build the models, including principal component analysis and multivariate logistic regression. This study showed that these models can correctly predict 80% to 92% of the frequently buggy components, with recalls ranging from 89% to 98% depending on the project evaluated. Models predicting the mere existence of a defect are less reliable than the first type of model.
This thesis thus confirms the practical interest of using common, well-supported metrics to measure application quality as early as possible in the component world.
Alomari, Mohammad H. "Engineering System Design for Automated Space Weather Forecast. Designing Automatic Software Systems for the Large-Scale Analysis of Solar Data, Knowledge Extraction and the Prediction of Solar Activities Using Machine Learning Techniques." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.
Full textAlomari, Mohammad Hani. "Engineering system design for automated space weather forecast : designing automatic software systems for the large-scale analysis of solar data, knowledge extraction and the prediction of solar activities using machine learning techniques." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4248.
Full textWilkerson, Jaxon. "Handoff of Advanced Driver Assistance Systems (ADAS) using a Driver-in-the-Loop Simulator and Model Predictive Control (MPC)." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1595262540712316.
Full textCedro, Carlos Costa. "USAR: um modelo preditivo para avaliação da acessibilidade em tecnologias assistivas baseadas em realidade aumentada." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1372.
Full textCurrently about 15% of the world population has some type of disability. For these people the use of assistive technologies is essential. Augmented reality emerges as an important alternative to the creation of new assistive technologies, due to its numerous forms of participant tracking, which, if combined, can provide new possibilities for interaction. However, the use of augmented reality in the context of assistive technology is not a panacea, as each disability is unique, as well as each person has its own peculiarities. Therefore, it is important to analyze the accessibility of these applications, to ensure the benefit of its use effectively. This work proposes a predictive evaluation model, based on universal design and ISO 9241-171 standard. This model is able to evaluate the accessibility of augmented reality applications, when used as assistive technologies. The evaluation process consists of completing questionnaires, composed of clear and intelligible issues, so that they can meet the multidisciplinary public, without requiring any prior knowledge in evaluating accessibility. The product of evaluation made by questionnaires is a numerical indicator of accessibility, which represents the degree of compliance with accessibility requirements. The questions are sensitives to context, each questionnaire includes guidelines, which contains the minimum requirements of the participant, for each criteria, it is thus possible to obtain a holistic view of the evaluation, which can be customized for each participant or generalized to a specific disability. The main objective of this work is to propose a predictive model for the evaluation of accessibility, serving special educators, developers and assistive technology consumers, as a criteria for the decision of use of augmented reality applications.
Murali Madhavan Rathai, Karthik. "Synthesis and real-time implementation of parameterized NMPC schemes for automotive semi-active suspension systems." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT052.
Full text
This thesis discusses the synthesis and real-time (RT) implementation of parameterized Nonlinear Model Predictive Control (pNMPC) schemes for automotive semi-active suspension systems. The pNMPC scheme uses a black-box, simulation-based optimization method. The crux of the method is to finitely parameterize the input profile, simulate the system for each parameterized input, and obtain the approximate objective and constraint violation values for the pNMPC problem. From the simulation results, the input with the minimum objective value, or with the least constraint violation, is selected and injected into the system, and this is repeated in a receding-horizon fashion. The method was experimentally validated on a dSPACE MicroAutoBox II (MABXII), and the results show good performance of the proposed approach. The pNMPC method was also extended to a parallelized pNMPC scheme, which was implemented to control the semi-active suspension system of a half-car vehicle. This implementation uses Graphics Processing Units (GPUs), whose multi-core architecture makes them a natural platform for parallel algorithms. A stochastic version of the parallelized pNMPC method, termed the Scenario-Stochastic pNMPC (SS-pNMPC) scheme, is also proposed; it was implemented and tested on several NVIDIA embedded boards to verify and validate the RT feasibility of the proposed method for controlling the semi-active suspension system of a half-car vehicle. In general, the parallelized pNMPC schemes provide good performance and also scale well to large input-parameterization spaces. Finally, the thesis proposes a software tool termed "pNMPC – a code generation software tool for implementation of derivative-free pNMPC schemes for embedded control systems". The code generation software (S/W) tool is programmed in C/C++ and also provides an interface to MATLAB/Simulink.
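The simulation-based selection loop this abstract describes (simulate each parameterized input, then prefer feasible candidates with the lowest objective, falling back to the least constraint violation) can be sketched as below. This is a minimal illustration with a generic discrete-time model and a constant input over the horizon; the thesis's actual input parameterization and suspension model are not reproduced here:

```python
# Minimal sketch of one simulation-based pNMPC step. All function and
# variable names are illustrative, not the thesis's API.

def pnmpc_step(x0, simulate, stage_cost, constraint, candidates, horizon):
    """One receding-horizon step: roll out each candidate input parameter
    through the black-box model, then select lexicographically by
    (accumulated constraint violation, accumulated cost)."""
    def rollout(u):
        x, cost, viol = x0, 0.0, 0.0
        for _ in range(horizon):
            x = simulate(x, u)                   # black-box model step
            cost += stage_cost(x, u)
            viol += max(0.0, constraint(x, u))   # > 0 counts as violation
        return (viol, cost)  # feasibility first, objective second
    return min(candidates, key=rollout)

# Example: scalar system x+ = x + u, quadratic state cost, bound |u| <= 1.
u_best = pnmpc_step(
    x0=1.0,
    simulate=lambda x, u: x + u,
    stage_cost=lambda x, u: x * x,
    constraint=lambda x, u: abs(u) - 1.0,  # positive when bound violated
    candidates=[-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5],
    horizon=3,
)
# u_best == -0.5: it steers the state toward zero without overshooting,
# while u = ±1.5 are rejected for violating the input bound.
```

In the receding-horizon scheme the abstract describes, only this first selected input is applied to the plant, and the whole selection is re-run at the next sampling instant; the per-candidate rollouts are independent, which is what makes the GPU-parallelized variant natural.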
The S/W tool was tested on a variety of examples, both in simulation and on RT embedded hardware (MABXII), and the results look promising and viable for RT implementation in real-world applications. The code generation S/W tool also includes a GPU code generation feature for parallel implementation. To conclude, the thesis was conducted under the purview of the EMPHYSIS project; the goals of the project align with this thesis, and the proposed pNMPC methods are compatible with the eFMI standard.
Zhang, Hong. "Software stability assessment using multiple prediction models." Thesis, 2003. http://hdl.handle.net/1866/14513.
Full text
Luo, Yan. "Statistical defect prediction models for software quality assurance." Thesis, 2007. http://spectrum.library.concordia.ca/975638/1/MR34446.pdf.
Full text
Coley, Terry Ronald. "Prediction of scanning tunneling microscope images by computational quantum chemistry: chemical models and software design." Thesis, 1993. https://thesis.library.caltech.edu/5310/1/Coley_tr_1993.pdf.
Full text
Huang, Hsiu-Min Chang, 1958. "Training experience satisfaction prediction based on trainees' general information." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-08-1656.
Full text