Academic literature on the topic 'SOFTWARE PREDICTION MODELS'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'SOFTWARE PREDICTION MODELS.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "SOFTWARE PREDICTION MODELS"

1

Balogun, A. O., A. O. Bajeh, H. A. Mojeed, and A. G. Akintola. "Software defect prediction: A multi-criteria decision-making approach." Nigerian Journal of Technological Research 15, no. 1 (April 30, 2020): 35–42. http://dx.doi.org/10.4314/njtr.v15i1.7.

Full text
Abstract:
Failures of software systems resulting from inadequate software testing are rampant, as modern software systems are large and complex. Software testing, an integral part of the software development life cycle (SDLC), consumes both human and capital resources. As such, software defect prediction (SDP) mechanisms are deployed to strengthen the testing phase of the SDLC by predicting defect-prone modules or components in software systems. Machine learning models are used for developing SDP models, with great success. Moreover, some studies have highlighted that a combination of machine learning models in the form of an ensemble performs better than single SDP models in terms of prediction accuracy. However, the measured performance of machine learning models can change with different predictive evaluation metrics. Thus, more studies are needed to establish the effectiveness of ensemble SDP models over single SDP models. This study proposes the deployment of Multi-Criteria Decision Method (MCDM) techniques to rank machine learning models. Analytic Network Process (ANP) and Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE), two MCDM techniques, are applied to 9 machine learning models with 11 performance evaluation metrics and 11 software defect datasets. The experimental results showed that ensemble SDP models are the most appropriate SDP models, as Boosted SMO and Boosted PART ranked highest for each of the MCDM techniques. The experimental results also supported the position that accuracy should not be the only performance evaluation metric for SDP models. In conclusion, performance metrics other than predictive accuracy should also be considered when ranking and evaluating machine learning models. Keywords: Ensemble; Multi-Criteria Decision Method; Software Defect Prediction
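To make the ranking step concrete, here is a minimal PROMETHEE II sketch in Python (PROMETHEE is one of the two MCDM techniques the abstract names). It ranks a few classifiers over several performance criteria by net outranking flow with the usual preference function. The model names, score matrix, and equal weights are illustrative assumptions, not the paper's data.

```python
# Minimal PROMETHEE II ranking sketch (illustrative data, not the paper's).
import numpy as np

def promethee_ii(scores, weights):
    """Rank alternatives (rows) over criteria (columns, higher is better)."""
    n = scores.shape[0]
    net_flow = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            # Usual preference function: any positive difference counts fully.
            pref_ab = np.dot(weights, (d > 0).astype(float))
            pref_ba = np.dot(weights, (d < 0).astype(float))
            net_flow[a] += pref_ab - pref_ba
    return net_flow / (n - 1)

models = ["BoostedSMO", "BoostedPART", "NaiveBayes"]
# Rows: models; columns: e.g. accuracy, AUC, F-measure (made-up numbers).
scores = np.array([[0.86, 0.91, 0.84],
                   [0.85, 0.92, 0.86],
                   [0.79, 0.83, 0.77]])
weights = np.array([1/3, 1/3, 1/3])          # equal criterion weights

for name, flow in sorted(zip(models, promethee_ii(scores, weights)),
                         key=lambda t: -t[1]):
    print(f"{name}: net flow {flow:+.3f}")
```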
APA, Harvard, Vancouver, ISO, and other styles
2

Malhotra, Ruchika, and Juhi Jain. "Predicting Software Defects for Object-Oriented Software Using Search-based Techniques." International Journal of Software Engineering and Knowledge Engineering 31, no. 02 (February 2021): 193–215. http://dx.doi.org/10.1142/s0218194021500054.

Full text
Abstract:
Defect-free development is unrealistic. Timely detection of software defects favors proper resource utilization, saving time, effort and money. With the increasing size and complexity of software, demand for accurate and efficient prediction models is increasing. Recently, search-based techniques (SBTs) have attracted many researchers to Software Defect Prediction (SDP). The goal of this study is to conduct an empirical evaluation to assess the applicability of SBTs for predicting software defects in object-oriented (OO) software. In this study, 16 SBTs are exploited to build defect prediction models for 13 OO software projects. Stable performance measures — GMean, Balance and Receiver Operating Characteristic-Area Under Curve (ROC-AUC) — are employed to probe the predictive capability of the developed models, taking into consideration the imbalanced nature of software datasets. Proper measures are taken to handle the stochastic behavior of SBTs. The significance of results is statistically validated using the Friedman test coupled with Wilcoxon post hoc analysis. The results confirm that software defects can be detected in the early phases of software development with the help of SBTs. This paper identifies the effective subset of SBTs that will aid software practitioners in detecting probable software defects in time, thereby saving resources and producing good-quality software. Eight SBTs — sUpervised Classification System (UCS), Bioinformatics-oriented hierarchical evolutionary learning (BIOHEL), CHC, Genetic Algorithm-based Classifier System with Adaptive Discretization Intervals (GA_ADI), Genetic Algorithm-based Classifier System with Intervalar Rule (GA_INT), Memetic Pittsburgh Learning Classifier System (MPLCS), Population-Based Incremental Learning (PBIL) and Steady-State Genetic Algorithm for Instance Selection (SGA) — are found to be statistically good defect predictors.
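The imbalance-aware measures named here are easy to state from a confusion matrix: G-Mean is the geometric mean of the true-positive and true-negative rates, and Balance is one minus the normalized distance from the ideal ROC point (pf=0, pd=1). A small Python sketch follows; the counts are illustrative, not the study's results.

```python
# G-Mean and Balance from a confusion matrix (illustrative counts).
import math

def gmean_and_balance(tp, fn, fp, tn):
    pd = tp / (tp + fn)            # probability of detection (recall)
    pf = fp / (fp + tn)            # probability of false alarm
    tnr = 1.0 - pf                 # true negative rate
    gmean = math.sqrt(pd * tnr)
    # Balance: normalized distance from the ideal ROC point (pf=0, pd=1).
    balance = 1.0 - math.sqrt((0 - pf) ** 2 + (1 - pd) ** 2) / math.sqrt(2)
    return gmean, balance

g, b = gmean_and_balance(tp=40, fn=10, fp=25, tn=125)
print(f"G-Mean = {g:.3f}, Balance = {b:.3f}")
```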
APA, Harvard, Vancouver, ISO, and other styles
3

Vandecruys, Olivier, David Martens, Bart Baesens, Christophe Mues, Manu De Backer, and Raf Haesen. "Mining software repositories for comprehensible software fault prediction models." Journal of Systems and Software 81, no. 5 (May 2008): 823–39. http://dx.doi.org/10.1016/j.jss.2007.07.034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zaim, Amirul, Johanna Ahmad, Noor Hidayah Zakaria, Goh Eg Su, and Hidra Amnur. "Software Defect Prediction Framework Using Hybrid Software Metric." JOIV : International Journal on Informatics Visualization 6, no. 4 (December 31, 2022): 921. http://dx.doi.org/10.30630/joiv.6.4.1258.

Full text
Abstract:
Software fault prediction is widely used in the software development industry, and software development has accelerated significantly during the pandemic. However, the main problem is that most fault prediction models disregard object-oriented metrics, even as academic researchers concentrate on predicting software problems early in the development process. This research presents a procedure that includes object-oriented metrics to predict software faults at the class level, together with feature selection techniques to assess the effectiveness of the machine learning algorithms used for the prediction. The aim of this research is to assess the effectiveness of software fault prediction using feature selection techniques. In the present work, software metrics have been used in defect prediction, and feature selection techniques were applied to select the best features from the dataset. The results show that process metrics had slightly better accuracy than code metrics.
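The pipeline the abstract outlines (select the strongest features, then evaluate a learner) can be sketched briefly. The dataset, the scoring function, and the choice of k below are illustrative assumptions, not the paper's set-up.

```python
# Feature selection + classification sketch (synthetic data, assumed k).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 15))                   # stand-ins for code/process metrics
# Synthetic fault labels driven by a couple of the metrics plus noise.
y = (X[:, 2] - 0.7 * X[:, 5] + rng.normal(0, 0.5, 300) > 0).astype(int)

pipe = make_pipeline(SelectKBest(f_classif, k=5),
                     LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```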
APA, Harvard, Vancouver, ISO, and other styles
5

Kalouptsoglou, Ilias, Miltiadis Siavvas, Dionysios Kehagias, Alexandros Chatzigeorgiou, and Apostolos Ampatzoglou. "Examining the Capacity of Text Mining and Software Metrics in Vulnerability Prediction." Entropy 24, no. 5 (May 5, 2022): 651. http://dx.doi.org/10.3390/e24050651.

Full text
Abstract:
Software security is a very important aspect for software development organizations that wish to provide high-quality and dependable software to their consumers. A crucial part of software security is the early detection of software vulnerabilities. Vulnerability prediction is a mechanism that facilitates the identification (and, in turn, the mitigation) of vulnerabilities early enough during the software development cycle. The scientific community has recently focused a lot of attention on developing Deep Learning models using text mining techniques for predicting the existence of vulnerabilities in software components. However, there are also studies that examine whether the utilization of statically extracted software metrics can lead to adequate Vulnerability Prediction Models. In this paper, both software metrics-based and text mining-based Vulnerability Prediction Models are constructed and compared. A combination of software metrics and text tokens using deep-learning models is examined as well, in order to investigate whether a combined model can lead to more accurate vulnerability prediction. For the purposes of the present study, a vulnerability dataset containing vulnerabilities from real-world software products is utilized and extended. The results of our analysis indicate that text mining-based models outperform software metrics-based models with respect to their F2-score, whereas enriching the text mining-based models with software metrics was not found to provide any added value to their predictive performance.
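The paper's comparison criterion, the F2-score, weights recall more heavily than precision, which suits vulnerability detection where misses are costly. A quick sketch of scoring two models by F2 with scikit-learn, using mock predictions rather than the paper's models:

```python
# F2-score comparison sketch (mock labels and predictions).
from sklearn.metrics import fbeta_score

y_true       = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
pred_text    = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]   # "text mining" model (mock)
pred_metrics = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # "software metrics" model (mock)

for name, pred in [("text mining", pred_text),
                   ("software metrics", pred_metrics)]:
    print(name, "F2 =", round(fbeta_score(y_true, pred, beta=2), 3))
```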
APA, Harvard, Vancouver, ISO, and other styles
6

Shatnawi, Raed. "Software fault prediction using machine learning techniques with metric thresholds." International Journal of Knowledge-based and Intelligent Engineering Systems 25, no. 2 (July 26, 2021): 159–72. http://dx.doi.org/10.3233/kes-210061.

Full text
Abstract:
BACKGROUND: Fault data is vital to predicting fault-proneness in large systems. Predicting faulty classes helps in allocating the appropriate testing resources for future releases. However, current fault data face challenges such as unlabeled instances and data imbalance, which degrade the performance of prediction models. Data imbalance happens because the majority of classes are labeled as not faulty whereas the minority of classes are labeled as faulty. AIM: The research proposes to improve fault prediction using software metrics in combination with threshold values. Statistical techniques are proposed to improve the quality of the datasets and therefore the quality of the fault prediction. METHOD: Threshold values of object-oriented metrics are used to label classes as faulty in order to improve the fault prediction models. The resulting datasets are used to build prediction models using five machine learning techniques. The use of threshold values is validated on ten large object-oriented systems. RESULTS: The models are built for the datasets with and without the use of thresholds. The combination of thresholds with machine learning improved the fault prediction models significantly for the five classifiers. CONCLUSION: Threshold values can be used to label software classes as fault-prone and can improve machine learners in predicting the fault-prone classes.
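As a hedged illustration of the labeling step, the sketch below marks classes fault-prone when any object-oriented metric exceeds its threshold and then fits a learner. The metric names, threshold values, and data are invented for the example, not taken from the paper.

```python
# Threshold-based labeling sketch (invented metrics, thresholds, and data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Columns stand in for OO metrics, e.g. CBO, WMC, RFC (values invented).
X = rng.integers(0, 60, size=(200, 3)).astype(float)

thresholds = np.array([45, 50, 55])      # illustrative threshold values
# Label a class fault-prone if any metric exceeds its threshold.
y = (X > thresholds).any(axis=1).astype(int)
# Flip a tenth of the labels so the learning task is not trivial.
flip = rng.random(200) < 0.1
y = np.where(flip, 1 - y, y)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```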
APA, Harvard, Vancouver, ISO, and other styles
7

Eldho, K. J. "Impact of Unbalanced Classification on the Performance of Software Defect Prediction Models." Indian Journal of Science and Technology 15, no. 6 (February 15, 2022): 237–42. http://dx.doi.org/10.17485/ijst/v15i6.2193.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Karunanithi, N., D. Whitley, and Y. K. Malaiya. "Prediction of software reliability using connectionist models." IEEE Transactions on Software Engineering 18, no. 7 (July 1992): 563–74. http://dx.doi.org/10.1109/32.148475.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Fenton, N. E., and M. Neil. "A critique of software defect prediction models." IEEE Transactions on Software Engineering 25, no. 5 (1999): 675–89. http://dx.doi.org/10.1109/32.815326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lawson, John S., Craig W. Wesselman, and Del T. Scott. "Simple Plots Improve Software Reliability Prediction Models." Quality Engineering 15, no. 3 (April 2003): 411–17. http://dx.doi.org/10.1081/qen-120018040.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "SOFTWARE PREDICTION MODELS"

1

Bowes, David Hutchinson. "Factors affecting the performance of trainable models for software defect prediction." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/10978.

Full text
Abstract:
Context. Reports suggest that defects in code cost the US in excess of $50 billion per year to put right. Defect Prediction is an important part of Software Engineering. It allows developers to prioritise the code that needs to be inspected when trying to reduce the number of defects in code. A small change in the number of defects found will have a significant impact on the cost of producing software. Aims. The aim of this dissertation is to investigate the factors which affect the performance of defect prediction models. Identifying the causes of variation in the way that variables are computed should help to improve the precision of defect prediction models and hence improve the cost effectiveness of defect prediction. Methods. This dissertation is by published work. The first three papers examine variation in the independent variables (code metrics) and the dependent variable (number/location of defects). The fourth and fifth papers investigate the effect that different learners and datasets have on the predictive performance of defect prediction models. The final paper investigates the reported use of different machine learning approaches in studies published between 2000 and 2010. Results. The first and second papers show that independent variables are sensitive to the measurement protocol used; this suggests that the way data is collected affects the performance of defect prediction. The third paper shows that dependent variable data may be untrustworthy, as there is no reliable method for labelling a unit of code as defective or not. The fourth and fifth papers show that the dataset and learner used when producing defect prediction models have an effect on the performance of the models. The final paper shows that the approaches used by researchers to build defect prediction models are variable, with good practices being ignored in many papers. Conclusions. The measurement protocols for independent and dependent variables used for defect prediction need to be clearly described so that results can be compared like with like. It is possible that the predictive results of one research group have a higher performance value than another research group because of the way that they calculated the metrics rather than the method of building the model used to predict the defect-prone modules. The machine learning approaches used by researchers need to be clearly reported in order to improve the quality of defect prediction studies and allow a larger corpus of reliable results to be gathered.
APA, Harvard, Vancouver, ISO, and other styles
2

Askari, Mina. "Information Theoretic Evaluation of Change Prediction Models for Large-Scale Software." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1139.

Full text
Abstract:
During software development and maintenance, as a software system evolves, changes are made and bugs are fixed in various files. In large-scale systems, file histories are stored in software repositories, such as CVS, which record modifications. By studying software repositories, we can learn about open source software development processes. Knowing in advance where these changes will happen gives managers and developers the power to concentrate on those files. Due to the unpredictability of the software development process, proposing an accurate change prediction model is hard. It is even harder to compare different models with the actual model of changes, which is not available.

In this thesis, we first analyze the information generated during the development process, which can be obtained through mining the software repositories. We observe that the change data follows a Zipf distribution and exhibits self-similarity. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. One purpose of creating these models is to rank the files of the software that are most susceptible to having faults.

The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events, i.e., changes or bugs, that occur in each file, and normalizes the counts to compute a probability distribution. The second model is Reflexive Exponential Decay (RED), in which we postulate that the predictive rate of modification in a file is incremented by any modification to that file and decays exponentially. A new bug occurring in that file adds a new exponential effect to the first one. The third model is called RED Co-Changes (REDCC). With each modification to a given file, the REDCC model not only increments its predictive rate, but also increments the rate for other files that are related to the given file through previous co-changes.

We then present an information-theoretic approach to evaluate the performance of different prediction models. In this approach, the closeness of model distribution to the actual unknown probability distribution of the system is measured using cross entropy. We evaluate our prediction models empirically using the proposed information-theoretic approach for six large open source systems. Based on this evaluation, we observe that of our three prediction models, the REDCC model predicts the distribution that is closest to the actual distribution for all the studied systems.
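The RED idea above lends itself to a compact illustration. The following Python sketch is a loose reconstruction under stated assumptions: the decay constant, file set, event times, and the "actual" distribution are all made up. It normalizes the decayed rates into a distribution and scores it against the actual one with cross entropy, in the spirit of the evaluation approach described.

```python
# Reflexive-exponential-decay rates and cross-entropy scoring (illustrative).
import math

def red_rates(events_per_file, now, decay=0.01):
    """events_per_file maps file -> list of change timestamps."""
    rates = {f: sum(math.exp(-decay * (now - t)) for t in times)
             for f, times in events_per_file.items()}
    total = sum(rates.values())
    return {f: r / total for f, r in rates.items()}   # probability distribution

def cross_entropy(actual, predicted, eps=1e-12):
    """H(p, q) = -sum p(f) log q(f); lower means q is closer to p."""
    return -sum(p * math.log(predicted.get(f, eps) + eps)
                for f, p in actual.items())

events = {"a.c": [10, 90, 95], "b.c": [50], "c.c": [5, 20]}
q = red_rates(events, now=100)
p = {"a.c": 0.6, "b.c": 0.2, "c.c": 0.2}              # hypothetical actual
print("RED distribution:", {f: round(v, 3) for f, v in q.items()})
print("cross entropy:", round(cross_entropy(p, q), 3))
```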
APA, Harvard, Vancouver, ISO, and other styles
3

Tran, Qui Can Cuong. "Empirical evaluation of defect identification indicators and defect prediction models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2553.

Full text
Abstract:
Context. Quality assurance plays a vital role in the software engineering development process. It can be considered one of the activities that observe the execution of a software project to validate whether it behaves as expected. Quality assurance activities contribute to the success of software projects by reducing the risks to software quality. Accurate planning, launching, and controlling of quality assurance activities on time can help to improve the performance of software projects. However, quality assurance activities also consume time and cost. One of the reasons is that they may not focus on the potential defect-prone areas. In some of the latest and more accurate findings, researchers suggested that quality assurance activities should focus on the scope that has the potential for defects, and that defect predictors should be used to support them in order to save time and cost. Many available models recommend that the project's history information be used as a defect indicator to predict the number of defects in the software project. Objectives. In this thesis, new models are defined to predict the number of defects in the classes of single software systems. In addition, the new models are built based on a combination of product metrics as defect predictors. Methods. In the systematic review a number of article sources are used, including IEEE Xplore, ACM Digital Library, and Springer Link, in order to find the existing models related to the topic. In this context, open source projects are used as training sets to extract information about occurred defects and the system evolution. The training data is then used for the definition of the prediction models. Afterwards, the defined models are applied to other systems that provide test data, i.e., information that was not used for the training of the models, to validate the accuracy and correctness of the models. Results. Two models are built: one to predict the number of defects in a class, and one to predict whether a class contains a bug or not. Conclusions. The proposed models combine product metrics as defect predictors and can be used either to predict the number of defects in a class or to predict whether a class contains bugs. This combination of product metrics as defect predictors can improve the accuracy of defect prediction and quality assurance activities by giving hints on potentially defect-prone classes before defect search activities are performed. Therefore, it can improve software development and quality assurance in terms of time and cost.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Qin. "Optimal utilization of historical data sets for the construction of software cost prediction models." Thesis, Northumbria University, 2006. http://nrl.northumbria.ac.uk/2129/.

Full text
Abstract:
The accurate prediction of software development cost at an early stage of the development life-cycle may have a vital economic impact and provide fundamental information for management decision making. However, it is not well understood in practice how to optimally utilize historical software project data for the construction of cost predictions. This is because the analysis of historical data sets for software cost estimation leads to many practical difficulties, and there has been little research done to prove the benefits. To overcome these limitations, this research proposes a preliminary data analysis framework, which is an extension of Maxwell's study. The proposed framework is based on a set of statistical analysis methods such as correlation analysis, stepwise ANOVA, and univariate analysis, and provides a formal basis for the construction of cost prediction models from historical data sets. The proposed framework is empirically evaluated against commonly used prediction methods, namely Ordinary Least-Squares Regression (OLS), Robust Regression (RR), Classification and Regression Trees (CART), and K-Nearest Neighbour (KNN), and is also applied to both heterogeneous and homogeneous data sets. Formal statistical significance testing was performed for the comparisons. The results from the comparative evaluation suggest that the proposed preliminary data analysis framework is capable of constructing more accurate prediction models for all selected prediction techniques. The framework-processed predictor variables are statistically significant at the 95% confidence level for both parametric techniques (OLS and RR) and one non-parametric technique (CART). Both the heterogeneous and the homogeneous data sets benefit from the application of the proposed framework for improving project effort prediction accuracy, with the homogeneous data set benefiting more. Overall, the evaluation results demonstrate that the proposed framework has excellent applicability. Further research could focus on two main purposes: first, improving applicability by integrating missing data techniques such as listwise deletion (LD) and mean imputation (MI) for handling missing values in historical data sets; second, applying benchmarking to enable comparisons, i.e. allowing companies to compare themselves with respect to their productivity or quality.
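A minimal sketch of the comparative set-up described above: fit several of the named predictors (here OLS, CART, and KNN from scikit-learn) on synthetic effort data, which stands in for the thesis's historical project data, and compare them by MMRE (mean magnitude of relative error), a common effort-estimation accuracy measure.

```python
# Effort-model comparison sketch by MMRE (synthetic data, assumed settings).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
X = rng.uniform(1, 100, size=(120, 2))          # e.g. size and team factors
effort = 3.0 * X[:, 0] + 0.5 * X[:, 0] * X[:, 1] / 50 + rng.normal(0, 10, 120)

train, test = slice(0, 90), slice(90, None)
models = {"OLS": LinearRegression(),
          "CART": DecisionTreeRegressor(max_depth=4, random_state=0),
          "KNN": KNeighborsRegressor(n_neighbors=5)}

for name, model in models.items():
    model.fit(X[train], effort[train])
    pred = model.predict(X[test])
    mmre = np.mean(np.abs(effort[test] - pred) / np.abs(effort[test]))
    print(f"{name}: MMRE = {mmre:.3f}")
```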
APA, Harvard, Vancouver, ISO, and other styles
5

Brosig, Fabian [author], and S. [academic supervisor] Kounev. "Architecture-Level Software Performance Models for Online Performance Prediction / Fabian Maria Konrad Brosig. Supervisor: S. Kounev." Karlsruhe: KIT-Bibliothek, 2014. http://d-nb.info/105980316X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chun, Zhang Jing. "Trigonometric polynomial high order neural network group models for financial data simulation & prediction." [Campbelltown, N.S.W.]: The author, 1998. http://library.uws.edu.au/adt-NUWS/public/adt-NUWS20030721.152829/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

McDonald, Simon Francis. "Better clinical decisions for less effort : building prediction software models to improve anti-coagulation care and prevent thrombosis and strokes." Thesis, Lancaster University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539665.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hall, Otto. "Inference of buffer queue times in data processing systems using Gaussian Processes : An introduction to latency prediction for dynamic software optimization in high-end trading systems." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214791.

Full text
Abstract:
This study investigates whether Gaussian Process Regression can be applied to evaluate buffer queue times in large scale data processing systems. It additionally considers whether high-frequency data stream rates can be generalized into a small subset of the sample space. With the aim of providing a basis for dynamic software optimization, a promising foundation for continued research is introduced. The study is intended to contribute to Direct Market Access financial trading systems, which process immense amounts of market data daily. Due to certain limitations, we shoulder a naïve approach and model latencies as a function of only data throughput in eight small historical intervals. The training and test sets are constructed from raw market data, and we resort to pruning operations to shrink the datasets by a factor of approximately 0.0005 in order to achieve computational feasibility. We further consider four different implementations of Gaussian Process Regression. The resulting algorithms perform well on pruned datasets, with an average R² statistic of 0.8399 over six test sets of approximately equal size to the training set. Testing on non-pruned datasets indicates shortcomings in the generalization procedure, where input vectors corresponding to low-latency target values are associated with less accuracy. We conclude that, depending on the application, these shortcomings may make the model intractable. However, for the purposes of this study it is found that buffer queue times can indeed be modelled by regression algorithms. We discuss several methods for improvement, with regard to both pruning procedures and Gaussian Processes, and open up promising directions for continued research.
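A minimal sketch of the core set-up, under stated assumptions: synthetic data stands in for the pruned market-data streams, with eight throughput intervals per sample as in the abstract, and scikit-learn's GaussianProcessRegressor reports the held-out R² score.

```python
# Gaussian Process Regression latency sketch (synthetic stand-in data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Eight historical throughput intervals per sample, as in the abstract.
X = rng.uniform(0, 1, size=(300, 8))
latency = 0.2 + 1.5 * X.mean(axis=1) ** 2 + rng.normal(0, 0.05, 300)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X[:200], latency[:200])
print("R^2 on held-out data:", round(gpr.score(X[200:], latency[200:]), 4))
```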
APA, Harvard, Vancouver, ISO, and other styles
9

Vlad, Iulian Teodor. "Mathematical Methods to Predict the Dynamic Shape Evolution of Cancer Growth based on Spatio-Temporal Bayesian and Geometrical Models." Doctoral thesis, Universitat Jaume I, 2016. http://hdl.handle.net/10803/670303.

Full text
Abstract:
The aim of this research is to observe the dynamics of cancer tumors and to develop and implement new methods and algorithms for the prediction of tumor growth. I offer some tools to help physicians better understand this disease and to check whether the prescribed treatment has the desired results; by comparing a prediction with the actual evolution of a tumor, a physician can verify the effect of the treatment and, if necessary, decide on surgical intervention. The plan of the thesis is as follows. In Chapter 1, I briefly recall some properties and classifications of point processes, with some examples of spatio-temporal point processes. Chapter 2 presents a short overview of the theory of Levy bases and integration with respect to such bases; I recall standard results about spatial Cox processes, and finally I propose different types of growth models and a new algorithm, the Cobweb, which is presented and developed based on the proposed methodology. Chapters 3, 4 and 5 are dedicated to presenting new prediction methods. The implementation in Matlab software comes in Chapter 6. The thesis ends with some conclusions and future research.
APA, Harvard, Vancouver, ISO, and other styles
10

SARCIA', SALVATORE ALESSANDRO. "An Approach to improving parametric estimation models in the case of violation of assumptions based upon risk analysis." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2009. http://hdl.handle.net/2108/1048.

Full text
Abstract:
In this work, we show the mathematical reasons why parametric models fall short of providing correct estimates and define an approach that overcomes the causes of these shortfalls. The approach aims at improving parametric estimation models when any regression model assumption is violated for the data being analyzed. Violations can be that the errors are x-correlated, the model is not linear, the sample is heteroscedastic, or the error probability distribution is not Gaussian. If data violates the regression assumptions and we do not deal with the consequences of these violations, we cannot improve the model and estimates will remain incorrect. The novelty of this work is that we define and use a feed-forward multi-layer neural network for discrimination problems to calculate prediction intervals (i.e., evaluate uncertainty), make estimates, and detect improvement needs. The primary difference from traditional methodologies is that the proposed approach can deal with scope error, model error, and assumption error at the same time. The approach can be applied for prediction, inference, and model improvement over any situation and context without making specific assumptions. An important benefit of the approach is that it can be completely automated as a stand-alone estimation methodology or used for supporting experts and organizations together with other estimation techniques (e.g., human judgment, parametric models). Unlike other methodologies, the proposed approach focuses on model improvement by integrating the estimation activity into a wider process that we call the Estimation Improvement Process, an instantiation of the Quality Improvement Paradigm. This approach aids mature organizations in learning from their experience and improving their processes over time with respect to managing their estimation activities. To provide an exposition of the approach, we use an old NASA COCOMO data set to (1) build an evolvable neural network model and (2) show how a parametric model, e.g., a regression model, can be improved and evolved with new project data.
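The dissertation's own interval method is network-based and more involved; as a rough, clearly-labeled stand-in, the sketch below bootstraps an ensemble of small scikit-learn MLPs on synthetic effort data and reads an empirical 90% prediction interval from the ensemble spread. The data, architecture, and interval procedure are all assumptions, not the author's exact approach.

```python
# Bootstrap-ensemble prediction-interval sketch (assumed data and method).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.uniform(10, 400, size=(150, 1))            # e.g. project size (KLOC)
effort = 2.5 * X[:, 0] ** 0.9 + rng.normal(0, 20, 150)

preds = []
for seed in range(20):                             # bootstrap ensemble
    idx = rng.integers(0, len(X), len(X))
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16,),
                                       max_iter=3000, random_state=seed))
    model.fit(X[idx], effort[idx])
    preds.append(model.predict([[250.0]])[0])      # new project of 250 KLOC

lo, hi = np.percentile(preds, [5, 95])
print(f"estimate ~{np.mean(preds):.0f}, 90% interval [{lo:.0f}, {hi:.0f}]")
```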
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "SOFTWARE PREDICTION MODELS"

1

Rauscher, Harold M. The microcomputer scientific software series 4: Testing prediction accuracy. St. Paul, Minn: U.S. Dept. of Agriculture, Forest Service, North Central Forest Experiment Station, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramamurthy, Karthikeyan N. MATLAB software for the code excited linear prediction algorithm: The Federal Standard, 1016. San Rafael, Calif.: Morgan & Claypool Publishers, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Metrics for process models: Empirical foundations of verification, error prediction, and guidelines for correctness. Berlin: Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

O'Keefe, Matthew, Christopher Kerr, United States Dept. of Energy Office of Biological and Environmental Research, and Goddard Space Flight Center, eds. Second International Workshop on Software Engineering and Code Design in Parallel Meteorological and Oceanographic Applications: Proceedings of a workshop sponsored by the U.S. Department of Energy, Office of Biological and Environmental Research; the Department of Defense, High Performance Computing and Modernization Office; and the NASA Goddard Space Flight Center, Seasonal-to-Interannual Prediction Project, and held at the Camelback Inn, Scottsdale, Arizona, June 15-18, 1998. Greenbelt, Md.: National Aeronautics and Space Administration, Goddard Space Flight Center, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Dennison, Thomas E. Fitting and prediction uncertainty for a software reliability model. Monterey, Calif: Naval Postgraduate School, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fernandez-Camacho, Eduardo. Model Predictive Control in the Process Industry. London: Springer London, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ahmad, Anees. Software to model AXAF-I image quality: Final report. [Washington, DC: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Zhen, Feng, and United States. National Aeronautics and Space Administration., eds. Software to model AXAF-I image quality: Final report. [Washington, DC: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Feng, and United States. National Aeronautics and Space Administration., eds. Software to model AXAF-I image quality: Final report. [Washington, DC: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Grigor'ev, Anatoliy, and Evgeniy Isaev. Methods and algorithms of data processing. Russia: INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1032305.

Full text
Abstract:
The tutorial deals with selected methods and algorithms of data processing and with the sequence of steps for solving data processing and analysis problems in order to create models of an object's behavior, taking into account all the components of its mathematical model. It describes the types of technological methods and the use of software and hardware for solving problems in this area. Algorithms are given for distributions, regressions, and time series, and for transforming them with the aim of obtaining mathematical models and predicting the behavior of information and economic systems (objects). The second edition is supplemented with materials that are in demand by researchers regarding the correct use of clustering algorithms. Elements of classification algorithms are presented to identify their capabilities, strengths, and weaknesses. Procedures for justifying and verifying the adequacy of the results of cluster analysis are given; different clustering techniques are compared and evaluated; and information about the visualization of multidimensional data and examples of the practical application of clustering algorithms are provided. Meets the requirements of the Federal State Educational Standards of Higher Education of the latest generation. For students of economic specialties, specialists, and graduate students.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "SOFTWARE PREDICTION MODELS"

1

Okumoto, Kazu. "Customer-Perceived Software Reliability Predictions: Beyond Defect Prediction Models." In Springer Series in Reliability Engineering, 219–49. London: Springer London, 2013. http://dx.doi.org/10.1007/978-1-4471-4971-2_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Santos, Geanderson, Amanda Santana, Gustavo Vale, and Eduardo Figueiredo. "Yet Another Model! A Study on Model’s Similarities for Defect and Code Smells." In Fundamental Approaches to Software Engineering, 282–305. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-30826-0_16.

Full text
Abstract:
Software defect and code smell prediction help developers identify problems in the code and fix them before they degrade quality or the user experience. The prediction of software defects and code smells is challenging, since it involves many factors inherent to the development process. Many studies propose machine learning models for defects and code smells. However, we have not found studies that explore and compare these machine learning models, nor that focus on the explainability of the models. This analysis allows us to verify which features and quality attributes influence software defects and code smells. Hence, developers can use this information to predict whether a class may be faulty or smelly through the evaluation of a few features and quality attributes. In this study, we fill this gap by comparing machine learning models for predicting defects and seven code smells. We trained on a dataset composed of 19,024 classes and 70 software features covering different quality attributes, extracted from 14 Java open-source projects. We then ensembled five machine learning models and employed explainability concepts to explore redundancies in the models, using the top-10 software features and quality attributes that are known to contribute to defect and code smell predictions. Furthermore, we conclude that although the quality attributes vary among the models, complexity, documentation, and size are the most relevant. More specifically, Nesting Level Else-If is the only software feature relevant to all models.
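To make the "top features" step concrete, here is a small sketch, assuming synthetic data and invented metric names (the chapter used 70 real features from 14 Java projects): train a model and rank its ten most influential features by impurity-based importance, one simple explainability proxy.

```python
# Top-10 feature importance sketch (synthetic data, invented feature names).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
feature_names = [f"metric_{i}" for i in range(20)]   # hypothetical names
X = rng.normal(size=(500, 20))
# Synthetic target loosely driven by a few "complexity/size" features.
y = ((X[:, 0] + 0.8 * X[:, 3] + 0.5 * X[:, 7]) > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
top10 = np.argsort(clf.feature_importances_)[::-1][:10]
for i in top10:
    print(f"{feature_names[i]}: importance {clf.feature_importances_[i]:.3f}")
```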
APA, Harvard, Vancouver, ISO, and other styles
3

Tanwar, Harshita, and Misha Kakkar. "A Review of Software Defect Prediction Models." In Data Management, Analytics and Innovation, 89–97. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1402-5_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bal, Pravas Ranjan, Nachiketa Jena, and Durga Prasad Mohapatra. "Software Reliability Prediction Based on Ensemble Models." In Proceeding of International Conference on Intelligent Communication, Control and Devices, 895–902. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-1708-7_105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Awoke, Temesgen, Minakhi Rout, Lipika Mohanty, and Suresh Chandra Satapathy. "Bitcoin Price Prediction and Analysis Using Deep Learning Models." In Communication Software and Networks, 631–40. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-5397-4_63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

De Lucia, Andrea, Eugenio Pompella, and Silvio Stefanucci. "Assessing Effort Prediction Models for Corrective Software Maintenance." In Enterprise Information Systems VI, 55–62. Dordrecht: Springer Netherlands, 2006. http://dx.doi.org/10.1007/1-4020-3675-2_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kuperberg, Michael, Klaus Krogmann, and Ralf Reussner. "Performance Prediction for Black-Box Components Using Reengineered Parametric Behaviour Models." In Component-Based Software Engineering, 48–63. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87891-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Czyczyn-Egird, Daniel, and Adam Slowik. "Defect Prediction in Software Using Predictive Models Based on Historical Data." In Advances in Intelligent Systems and Computing, 96–103. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-99608-0_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gallotti, Stefano, Carlo Ghezzi, Raffaela Mirandola, and Giordano Tamburrelli. "Quality Prediction of Service Compositions through Probabilistic Model Checking." In Quality of Software Architectures. Models and Architectures, 119–34. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87879-7_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pohlkötter, Fabian J., Dominik Straubinger, Alexander M. Kuhn, Christian Imgrund, and William Tekouo. "Unlocking the Potential of Digital Twins." In Advances in Automotive Production Technology – Towards Software-Defined Manufacturing and Resilient Supply Chains, 190–99. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-27933-1_18.

Full text
Abstract:
Increasing competitive pressure is confronting the automotive industry with major challenges. As a result, conventional reactive maintenance is being transformed into predictive maintenance. In this context, wear and aging effects no longer lead to plant failure, since they are predicted at an earlier stage based on comprehensive data analysis. Furthermore, the evolution towards the Smart Factory has given rise to virtual commissioning in the planning phase of production plants. In this process, a Hardware-in-the-Loop (HiL) system combines the real controls (e.g., PLC) and a virtual model of the plant. These HiL systems are used to simulate commissioning activities in advance, thus saving time and money during actual commissioning. The resulting complex virtual models are not used further in series production. This paper builds upon virtual commissioning models to develop a Digital Twin, which provides inputs for predictive maintenance. The resulting approach is a methodology for building a hybrid predictive maintenance system. A hybrid prediction model combines the advantages of data-driven and physical models: data-driven models analyse and predict wear patterns based on real machine data, while physical models are used to reproduce the behaviour of a system. From the simulation of the hybrid model, additional insights for the predictions can be derived. The conceptual methodology for a hybrid predictive maintenance system is validated by its successful implementation in a bottleneck process of electric engine production for an automotive manufacturer. Ultimately, an outlook on further possible applications of the hybrid model is presented.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "SOFTWARE PREDICTION MODELS"

1

Ketata, Aymen, Carlos Moreno, Sebastian Fischmeister, Jia Liang, and Krzysztof Czarnecki. "Performance prediction upon toolchain migration in model-based software." In 2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS). IEEE, 2015. http://dx.doi.org/10.1109/models.2015.7338261.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lincke, Rüdiger, Tobias Gutzmann, and Welf Löwe. "Software Quality Prediction Models Compared." In 2010 10th International Conference on Quality Software (QSIC). IEEE, 2010. http://dx.doi.org/10.1109/qsic.2010.9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mockus, Audris. "Defect prediction and software risk." In PROMISE '14: The 10th International Conference on Predictive Models in Software Engineering. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2639490.2639511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bluvband, Zigmund, Sergey Porotsky, and Michael Talmor. "Advanced models for software reliability prediction." In Integrity (RAMS). IEEE, 2011. http://dx.doi.org/10.1109/rams.2011.5754487.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shafiabady, Aida, Mohd Naz'ri Mahrin, and Masoud Samadi. "Investigation of software maintainability prediction models." In 2016 18th International Conference on Advanced Communication Technology (ICACT). IEEE, 2016. http://dx.doi.org/10.1109/icact.2016.7423557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Shafiabady, Aida, Mohd Naz'ri Mahrin, and Masoud Samadi. "Investigation of software maintainability prediction models." In 2016 18th International Conference on Advanced Communication Technology (ICACT). IEEE, 2016. http://dx.doi.org/10.1109/icact.2016.7423558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wiese, Igor Scaliante, Filipe Roseiro Côgo, Reginaldo Ré, Igor Steinmacher, and Marco Aurélio Gerosa. "Social metrics included in prediction models on software engineering." In PROMISE '14: The 10th International Conference on Predictive Models in Software Engineering. New York, NY, USA: ACM, 2014. http://dx.doi.org/10.1145/2639490.2639505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fenton, N., M. Neil, W. Marsh, P. Hearty, L. Radlinski, and P. Krause. "Project Data Incorporating Qualitative Factors for Improved Software Defect Prediction." In 2007 3rd International Workshop on Predictor Models in Software Engineering. IEEE, 2007. http://dx.doi.org/10.1109/promise.2007.11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hu, Q. p., M. Xie, and S. h. Ng. "Early Software Reliability Prediction with ANN Models." In 2006 12th Pacific Rim International Symposium on Dependable Computing (PRDC'06). IEEE, 2006. http://dx.doi.org/10.1109/prdc.2006.30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zeshan, Furkh, and Radziah Mohamad. "Software architecture reliability prediction models: An overview." In 2011 5th Malaysian Conference in Software Engineering (MySEC). IEEE, 2011. http://dx.doi.org/10.1109/mysec.2011.6140654.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "SOFTWARE PREDICTION MODELS"

1

Johnson, G., D. Lawrence, and H. Yu. Conceptual Software Reliability Prediction Models for Nuclear Power Plant Safety Systems. Office of Scientific and Technical Information (OSTI), April 2000. http://dx.doi.org/10.2172/791856.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng and Wang. L52025 Calibration of the PRCI Thermal Analysis Model for Hot Tap Welding. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2004. http://dx.doi.org/10.55274/r0010298.

Full text
Abstract:
In-service welding is a common industrial practice for both maintenance and repair purposes. Its applications include, but are not limited to, the repair of pipeline damage caused by construction or corrosion, and hot tap welding used to add branch connections to existing pipelines. In-service welding makes it possible to maintain and repair pipelines without removing them from service. Such welding operations generate significant economic and environmental benefits; for example, there is no interruption of pipeline operations and no venting of pipeline contents. One of the common problems associated with in-service welding is hydrogen cracking. Pipeline operating conditions combined with poorly chosen welding procedures can lead to high heat-affected zone (HAZ) hardness values, and this, in turn, can cause hydrogen cracking. The risk of hydrogen cracking is particularly high for older pipeline materials with high carbon equivalent levels. The objective of the project was to produce a significantly improved HAZ hardness prediction procedure over the procedure in the current PRCI thermal analysis software by utilizing state-of-the-art phase transformation models for steels. Systematic validation of the prediction algorithms was conducted using extensive experimental data from actual welds. The hardness prediction model is expected to become the basis on which the hardness prediction module of the PRCI thermal analysis software will be upgraded and improved.
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Yingxuan, Cheng Yan, and Liqin Zhao. The value of radiomics-based machine learning for hepatocellular carcinoma after TACE: a systematic evaluation and Meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, June 2022. http://dx.doi.org/10.37766/inplasy2022.6.0100.

Full text
Abstract:
Review question / Objective: A meta-analysis was performed to predict the efficacy and survival status of patients with hepatocellular carcinoma after the application of TACE, applying clinical models, radiomics models, and combined models for non-invasive assessment. Condition being studied: Patients were scanned using CT or MR machines, and some patients had multiple follow-up records; imaging feature extraction software was applied to extract regions of interest and build multiple prediction models. Literature screening was conducted independently by two reviewers, each with more than 3 years' experience in imaging diagnosis, and was cross-checked. Disagreements were settled by a third reviewer.
APA, Harvard, Vancouver, ISO, and other styles
4

Abdolmaleki, Kourosh. PR453-205101-R01 Prediction of On-bottom Wave Kinematics in Shallow Water. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 2022. http://dx.doi.org/10.55274/r0012225.

Full text
Abstract:
This report examines a novel methodology for the approximate prediction of on-bottom kinematics in shallow waters and shore approach regions. The method involves simulation of generic shallow water scenarios in the Danish Hydraulic Institute MIKE software, assuming a range of seabed slopes and sea states. The simulation results are compiled into a database, and a machine learning model is fitted for fast extraction of the desired surface or bottom data. The outcome of this scope of work is very useful when a pipeline stability assessment is required in shallow water areas where no site-specific met-ocean engineering data is available. In the future, this database could be expanded to cover wider ranges of input data and be implemented in the PRCI On-Bottom Stability software.
APA, Harvard, Vancouver, ISO, and other styles
5

Leis, B. N., and N. D. Ghadiali. L51720 Pipe Axial Flaw Failure Criteria - PAFFC Version 1.0 Users Manual and Software. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), May 1994. http://dx.doi.org/10.55274/r0011357.

Full text
Abstract:
In the early 1970s, the Pipeline Research Council International, Inc. (PRCI) developed a failure criterion for pipes that had a predominantly empirical basis. This criterion was based on flaw sizes that existed prior to pressurization and did not address possible growth due to the pressure in service, in a hydrostatic test, or during the hold time at pressure in a hydrotest. So long as that criterion was used within the scope of the underlying database and empirical calibration, the results of its predictions were reasonably accurate. However, with the advent of newer steels and the related increased toughness that supported significant stable flaw growth, it became evident that this criterion should be updated. This updating led to the PRCI ductile flaw growth model (DFGM), which specifically accounted for the stable growth observed at flaws controlled by the steel's toughness and a limit-states analysis that addressed plastic collapse at the flaw. This capability provided an accurate basis to assess flaw criticality in pipelines and also the means to develop hydrotest plans on a pipeline-specific basis. Unfortunately, this enhanced capability came at the expense of increased complexity that made it difficult to use on a day-to-day basis. To counter this complexity, this capability has been recast in the form of a PC computer program. Benefit: This topical report contains the computer program and technical manual for a failure criterion that will predict the behavior of an axially oriented, partially through-wall flaw in a pipeline. The model has been given the acronym PAFFC, which stands for Pipe Axial Flaw Failure Criteria. PAFFC is an extension of a previously developed ductile flaw growth model, L51543, and can account for both a flaw's time-dependent growth under pressure and its unstable growth leading to failure. As part of the output, the user is presented with a graphical depiction of the flaw sizes, in terms of combinations of flaw length and depth, that will fail (or survive) a given operating or test pressure. Compared to existing criteria, this model provides a more accurate prediction of flaw behavior for a broad range of pipeline conditions.
APA, Harvard, Vancouver, ISO, and other styles
6

Howard, Isaac, Thomas Allard, Ashley Carey, Matthew Priddy, Alta Knizley, and Jameson Shannon. Development of CORPS-STIF 1.0 with application to ultra-high performance concrete (UHPC). Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40440.

Full text
Abstract:
This report introduces the first release of CORPS-STIF (Concrete Observations Repository and Predictive Software – Structural and Thermodynamical Integrated Framework). CORPS-STIF is envisioned to be used as a tool to optimize material constituents and geometries of mass concrete placements specifically for ultra-high performance concretes (UHPCs). An observations repository (OR) containing results of 649 mechanical property tests and 10 thermodynamical tests were recorded to be used as inputs for current and future releases. A thermodynamical integrated framework (TIF) was developed where the heat transfer coefficient was a function of temperature and determined at each time step. A structural integrated framework (SIF) modeled strength development in cylinders that underwent isothermal curing. CORPS-STIF represents a step toward understanding and predicting strength gain of UHPC for full-scale structures and specifically in mass concrete.
APA, Harvard, Vancouver, ISO, and other styles
7

Kirk. L51768 Pipeline Free Span Design-Volume 1 Design Guideline. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), April 1997. http://dx.doi.org/10.55274/r0011298.

Full text
Abstract:
Vol. 1, Design Guideline. The first phase of the project was dedicated to the testing and calibration of a numerical model capable of predicting the dynamic cross-flow response of a pipeline span caused by vortex shedding. The numerical model was originally developed by Exxon Production Research Co. (EPRCo), Lambrakos (1991), and has been made available to the project. This project minimizes intervention work in submarine pipeline design without jeopardizing pipeline safety. The main objective of the Guideline is to present procedures and methodologies for evaluating free spans in submarine pipeline systems. The Guideline specifically addresses vortex-induced vibrations caused by wave and current action. A force model describing the cyclic lift force generated by vortex shedding has been tested and calibrated as part of the Guideline preparation work. This model is adequate for calculating the hydrodynamic response of a free-spanning pipeline. Vol. 2, Software and User Guide. The second phase of the project concentrated on the development of the Guideline document itself and the associated software, FREESPAN, used for assessing free-spanning submarine pipelines, including how to operate the software.
APA, Harvard, Vancouver, ISO, and other styles
8

Cacuci, Dan G., Ruixian Fang, and Madalina C. Badea. MULTI-PRED: A Software Module for Predictive Modeling of Coupled Multi-Physics Systems: User's Manual. Office of Scientific and Technical Information (OSTI), February 2018. http://dx.doi.org/10.2172/1503664.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Leis, Brian. L51794A Failure Criterion for Residual Strength of Corrosion Defects in Moderate to High Toughness Pipe. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2000. http://dx.doi.org/10.55274/r0011253.

Full text
Abstract:
This project extends the investigation of the remaining strength of blunt and sharp flaws in pipe to develop a new, simple equation, known as PCORRC, for predicting the remaining strength of corrosion defects in moderate- to high-toughness steels that fail by the mechanism of plastic collapse. This report summarizes the development of this criterion, which began with the enhancement of a special-purpose, analytical, finite-element-based software model (PCORR) for analyzing complex loadings on corrosion and other blunt defects. The analytical tool was then used to compare the influence of different variables on the behavior of blunt corrosion defects and to develop an equation to reliably and conservatively predict failure of corrosion defects in moderate- to high-toughness steels. The PCORR software and the PCORRC equation have been compared against the experimental database and have been shown to reduce excess conservatism in predicting failure of actual corrosion defects that were likely to have been controlled by the plastic collapse mechanism. Because of the general nature and theoretical foundation of these developments, both the software tool and the equation can be extended in future work to develop similar criteria for combinations of defects and loadings not addressed by this version of the PCORRC equation such as interaction of separated adjacent defects and axial loads on defects.
APA, Harvard, Vancouver, ISO, and other styles
10

Bruce and Yushanov. L52056 Enhancement of PRCI Thermal Analysis Model for Assessment of Attachments. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), August 2004. http://dx.doi.org/10.55274/r0010436.

Full text
Abstract:
Welds made onto in-service pipelines tend to cool at an accelerated rate as a result of the flowing content's ability to remove heat from the pipe wall. These welds are therefore likely to have high heat-affected zone (HAZ) hardness values and to be susceptible to hydrogen cracking. The use of thermal analysis modeling allows welding parameters (i.e., required heat input levels) to be selected based on anticipated weld cooling rates. Both the Battelle model and the recently developed PRCI Thermal Analysis Model for Hot Tap Welding assume that the pipe material is the most susceptible material being welded. Some attachments (e.g., hot formed fittings) have a significantly less favorable chemical composition (i.e., a higher carbon equivalent level) than the pipe material. As a result, for some in-service welding applications, the attachment material may be more susceptible to cracking than the pipe material. Modifications were made to the finite-element solver of the PRCI model to enable hardness prediction in both the pipe and attachment material. The source code for the modified finite-element solver was provided to Technical Toolboxes, PRCI's commercial partner for software marketing and distribution. The required modifications to the user interface were also developed, as were user interface modifications required to rectify a number of identified faults and to improve the interface. The incorporation of these enhancements and improvements, which are described herein, will require modification by Technical Toolboxes of the Visual Basic-based version of the software that is currently being marketed (V4.2.1). Following the incorporation of these enhancements and improvements, validation trials should be carried out.
APA, Harvard, Vancouver, ISO, and other styles