
Journal articles on the topic 'Prediction models'



Consult the top 50 journal articles for your research on the topic 'Prediction models.'




1

Geweke, John, and Gianni Amisano. "Prediction with Misspecified Models." American Economic Review 102, no. 3 (May 1, 2012): 482–86. http://dx.doi.org/10.1257/aer.102.3.482.

Abstract:
The assumption that one of a set of prediction models is a literal description of reality underlies many formal econometric methods, including Bayesian model averaging and most approaches to model selection. Prediction pooling does not invoke this assumption and leads to predictions that improve on those based on Bayesian model averaging, as assessed by the log predictive score. The paper shows that the improvement is substantial using a pool consisting of a dynamic stochastic general equilibrium model, a vector autoregression, and a dynamic factor model, in conjunction with standard US postwar quarterly macroeconomic time series.
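The log-score comparison of prediction pools described above can be illustrated with a toy calculation. This is a generic sketch, not the paper's DSGE/VAR/factor-model pool: the two-model predictive densities below are invented numbers, and the pool weight is found by a simple grid search rather than the paper's estimation procedure.

```python
import math

def log_predictive_score(weights, densities):
    """Sum of log pooled predictive densities over realized outcomes.

    densities[t][k] is model k's predictive density evaluated at the
    outcome realized at time t; weights are nonnegative and sum to one.
    """
    return sum(math.log(sum(w * p for w, p in zip(weights, d)))
               for d in densities)

def best_two_model_pool(densities, grid=101):
    """Grid-search the weight on model 0 that maximizes the log score."""
    best_w, best_s = 0.0, float("-inf")
    for i in range(grid):
        w = i / (grid - 1)
        s = log_predictive_score((w, 1.0 - w), densities)
        if s > best_s:
            best_w, best_s = w, s
    return best_w, best_s

# Invented data: model 0 assigns higher density to most realized outcomes,
# but model 1 does better on the last one.
dens = [(0.40, 0.10), (0.35, 0.20), (0.30, 0.25), (0.10, 0.30)]
w, s = best_two_model_pool(dens)
```

Because the log score is concave in the weights, an interior pool can beat both of the component models, which is the effect the abstract reports relative to Bayesian model averaging.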
2

Archer, Graeme, Michael Balls, Leon H. Bruner, Rodger D. Curren, Julia H. Fentem, Hermann-Georg Holzhütter, Manfred Liebsch, David P. Lovell, and Jacqueline A. Southee. "The Validation of Toxicological Prediction Models." Alternatives to Laboratory Animals 25, no. 5 (September 1997): 505–16. http://dx.doi.org/10.1177/026119299702500507.

Abstract:
An alternative method is shown to consist of two parts: the test system itself, and a prediction model for converting in vitro endpoints into predictions of in vivo toxicity. For the alternative method to be relevant and reliable, it is important that its prediction model component has high predictive power and is sufficiently robust against sources of data variability. In other words, the prediction model must be subjected to criticism, and models that withstand it attain the status of confirmed models. It is shown that there are certain circumstances in which a new prediction model may be introduced without the need to generate new test system data.
3

Stenhaug, Benjamin A., and Benjamin W. Domingue. "Predictive Fit Metrics for Item Response Models." Applied Psychological Measurement 46, no. 2 (February 13, 2022): 136–55. http://dx.doi.org/10.1177/01466216211066603.

Abstract:
The fit of an item response model is typically conceptualized as whether a given model could have generated the data. This study advocates an alternative view of fit, "predictive fit," based on the model's ability to predict new data. The authors define two prediction tasks: "missing responses prediction," where the goal is to predict an in-sample person's response to an in-sample item, and "missing persons prediction," where the goal is to predict an out-of-sample person's string of responses. Based on these prediction tasks, two predictive fit metrics are derived for item response models that assess how well an estimated item response model fits the data-generating model. These metrics are based on long-run out-of-sample predictive performance (i.e., if the data-generating model produced infinite amounts of data, what is the quality of the model's predictions on average?). Simulation studies are conducted to identify the prediction-maximizing model across a variety of conditions. For example, when prediction is defined in terms of missing responses, greater average person ability and greater item discrimination are both associated with the 3PL model producing relatively worse predictions, and thus lead to greater minimum sample sizes for the 3PL model. In each simulation, the prediction-maximizing model is compared to the models selected by Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and likelihood ratio tests. It is found that the performance of these methods depends on the prediction task of interest. In general, likelihood ratio tests often select overly flexible models, while BIC selects overly parsimonious models. The authors use Programme for International Student Assessment data to demonstrate how cross-validation can be used to estimate the predictive fit metrics directly in practice. The implications for item response model selection in operational settings are discussed.
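The tension between information criteria and held-out predictive fit can be illustrated outside the IRT setting. Below is a minimal sketch with invented Bernoulli count data, not the authors' item response models: a pooled one-parameter model is compared with a more flexible per-item model using AIC, BIC, and held-out log-likelihood, the same three-way comparison the abstract describes.

```python
import math

def bernoulli_loglik(p, successes, trials):
    """Log-likelihood of `successes` out of `trials` under probability p
    (binomial coefficient omitted; it cancels in model comparisons)."""
    p = min(max(p, 1e-9), 1 - 1e-9)
    return successes * math.log(p) + (trials - successes) * math.log(1 - p)

def fit_and_score(train, holdout):
    """Compare a pooled one-parameter model with a per-item model.

    train/holdout are lists of (successes, trials) per item. Returns
    (AIC, BIC, held-out log-likelihood), each as a (model1, model2) pair.
    """
    n_tr = sum(t for _, t in train)
    # Model 1: one pooled success probability (k = 1 parameter).
    p_pool = sum(s for s, _ in train) / n_tr
    ll1 = sum(bernoulli_loglik(p_pool, s, t) for s, t in train)
    # Model 2: one success probability per item (k = number of items).
    ps = [s / t for s, t in train]
    ll2 = sum(bernoulli_loglik(p, s, t) for p, (s, t) in zip(ps, train))
    k1, k2 = 1, len(train)
    aic = (2 * k1 - 2 * ll1, 2 * k2 - 2 * ll2)
    bic = (k1 * math.log(n_tr) - 2 * ll1, k2 * math.log(n_tr) - 2 * ll2)
    held = (sum(bernoulli_loglik(p_pool, s, t) for s, t in holdout),
            sum(bernoulli_loglik(p, s, t) for p, (s, t) in zip(ps, holdout)))
    return aic, bic, held

# Three items with genuinely different difficulties (invented counts).
train_counts = [(8, 10), (5, 10), (2, 10)]
holdout_counts = [(7, 10), (6, 10), (3, 10)]
aic, bic, held = fit_and_score(train_counts, holdout_counts)
```

When the items really differ, the flexible model wins on held-out log-likelihood; with nearly identical items, BIC's parsimony penalty would favor the pooled model, mirroring the over/under-selection behavior the study reports.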
4

Ansah, Kwabena, Ismail Wafaa Denwar, and Justice Kwame Appati. "Intelligent Models for Stock Price Prediction." Journal of Information Technology Research 15, no. 1 (January 2022): 1–17. http://dx.doi.org/10.4018/jitr.298616.

Abstract:
Prediction of stock prices is a crucial task, as accurate predictions may lead to profits. Stock price prediction is challenging owing to non-stationary and chaotic data, which makes it difficult for investors and shareholders to decide where to invest. This paper is a review of stock price prediction, focusing on metrics, models, and datasets. It presents a detailed review of 30 research papers covering methodologies such as Support Vector Machine (SVM), Random Forest (RF), Linear Regression, Recurrent Neural Network, and Long Short-Term Memory (LSTM) applied to stock price prediction. Aside from the predictions themselves, the limitations and future work discussed in the reviewed papers are summarized. The techniques most commonly used to achieve effective stock price prediction are RF, LSTM, and SVM. Despite these research efforts, current stock price prediction techniques still have many limitations. From this survey, it is observed that stock market prediction is a complicated task, and that additional factors should be considered to predict the future accurately and efficiently.
5

Karpac, Dusan, and Viera Bartosova. "The verification of prediction and classification ability of selected Slovak prediction models and their emplacement in forecasts of financial health of a company in aspect of globalization." SHS Web of Conferences 74 (2020): 06010. http://dx.doi.org/10.1051/shsconf/20207406010.

Abstract:
In a globalized world, predicting the financial health of a company is necessary for every business entity, especially international ones, for which knowing their financial stability is essential. Business failure forecasting is a well-known concept worldwide, and many prediction models have been constructed to compute the financial health of a company and thereby indicate whether it tends toward prosperity or bankruptcy. Globalized prediction models compute the financial health of companies, but the vast majority of models predicting business failure are constructed solely for the conditions of a particular country, or even for a specific sector of a national economy. From an international perspective, the field of financial prediction rests on foundational models such as Altman's Z-score or Beerman's index, which are globally known and serve as the basis of many later modifications. This article examines selected Slovak prediction models designed for Slovak conditions, discusses how they stand in a globalized world and how they relate to models used in other economies, and verifies their predictive and classification ability in a specific sector. The predictive ability of the models is assessed by ROC analysis, and on the basis of the results the paper identifies the most suitable prediction models for the selected sector.
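The ROC analysis used for verification reduces, in its simplest form, to the rank-based (Mann-Whitney) computation of the area under the ROC curve. The sketch below uses invented scores, not outputs of any actual Slovak prediction model: higher scores are assumed to indicate greater predicted distress.

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive case (e.g. a firm
    that later failed) scores above a randomly chosen negative case.
    Ties count as half a win."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Invented model scores: failed firms vs. healthy firms.
failed = [0.9, 0.8, 0.6]
healthy = [0.7, 0.4, 0.3, 0.2]
auc = roc_auc(failed, healthy)   # one misranked pair out of twelve
```

An AUC of 0.5 means the model ranks firms no better than chance, while 1.0 means perfect separation; comparing AUCs across candidate models is exactly how the paper selects the most suitable ones for the sector.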
6

Martínez-Fernández, Pelayo, Zulima Fernández-Muñiz, Ana Cernea, Juan Luis Fernández-Martínez, and Andrzej Kloczkowski. "Three Mathematical Models for COVID-19 Prediction." Mathematics 11, no. 3 (January 17, 2023): 506. http://dx.doi.org/10.3390/math11030506.

Abstract:
The COVID-19 outbreak was a major event that greatly impacted the economy and health systems around the world. Understanding the behavior of the virus and performing long-term and short-term predictions of daily new cases is an active area of work for machine learning methods and mathematical models. This paper compares Verhulst's, Gompertz's, and SIR models from the point of view of their efficiency in describing the behavior of COVID-19 in Spain. These mathematical models are used to predict the future of the pandemic by first solving the corresponding inverse problems to identify the model parameters in each wave separately, using the daily cases observed in the past as data. The posterior distributions of the model parameters are then inferred via the Metropolis-Hastings algorithm, comparing the robustness of each prediction model and producing different representations to visualize the posterior distributions of the model parameters and their predictions. The knowledge acquired is used to predict the evolution of both the daily number of infected cases and the total number of cases during each wave. The main conclusion is that predictive models are incomplete without an uncertainty analysis of the corresponding inverse problem. The invariance of the output (posterior prediction) with respect to the forward predictive model that is used shows that the methodology presented in this paper can be used to support decisions in real practice (public health).
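The inference step described above — a random-walk Metropolis-Hastings sampler over the parameters of a Verhulst (logistic) curve — can be sketched in a few lines. This is a toy version under stated assumptions: synthetic noiseless data, a Gaussian likelihood with an assumed error scale, flat priors on plausible ranges, and hand-picked proposal scales; it is not the authors' implementation.

```python
import math
import random

def verhulst(t, K, r, t0):
    """Logistic (Verhulst) cumulative-case curve."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def log_post(theta, data, sigma=50.0):
    """Gaussian log-likelihood with flat priors on plausible ranges."""
    K, r, t0 = theta
    if not (0 < K < 1e6 and 0 < r < 5 and 0 < t0 < 100):
        return float("-inf")
    sse = sum((c - verhulst(t, K, r, t0)) ** 2 for t, c in data)
    return -sse / (2 * sigma ** 2)

def metropolis_hastings(data, start, steps=4000, scales=(10.0, 0.04, 0.15)):
    """Random-walk Metropolis sampler over theta = (K, r, t0)."""
    random.seed(0)                       # fixed seed for reproducibility
    theta, lp = list(start), log_post(start, data)
    samples = []
    for _ in range(steps):
        prop = [x + random.gauss(0, s) for x, s in zip(theta, scales)]
        lp_prop = log_post(prop, data)
        if math.log(random.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        samples.append(tuple(theta))
    return samples

# Synthetic "wave": carrying capacity K=1000, growth rate r=0.5, midpoint t0=10.
truth = (1000.0, 0.5, 10.0)
data = [(t, verhulst(t, *truth)) for t in range(25)]
samples = metropolis_hastings(data, start=(900.0, 0.45, 9.0))
K_mean = sum(s[0] for s in samples[2000:]) / len(samples[2000:])
```

The retained samples approximate the posterior; the spread of `K` across them is precisely the uncertainty statement the abstract argues a prediction is incomplete without.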
7

Pace, Michael L. "Prediction and the aquatic sciences." Canadian Journal of Fisheries and Aquatic Sciences 58, no. 1 (January 1, 2001): 63–72. http://dx.doi.org/10.1139/f00-151.

Abstract:
The need for prediction is now widely recognized and frequently articulated as an objective of research programs in aquatic science. This recognition is partly the legacy of earlier advocacy by the school of empirical limnologists. This school, however, presented prediction narrowly and failed to account for the diversity of predictive approaches, as well as to set prediction within the proper scientific context. Examples from time series analysis and probabilistic models oriented toward management provide an expanded view of approaches and prospects for prediction. The context and rationale for prediction is enhanced understanding. Thus, prediction is correctly viewed as an aid to building scientific knowledge, with better understanding leading to improved predictions. Experience, however, suggests that the most effective predictive models are condensed models of key features in aquatic systems. Prediction remains important for the future of the aquatic sciences. Predictions are required in the assessment of environmental concerns and for testing scientific fundamentals. Technology is driving enormous advances in the ability to study aquatic systems. If these advances are not accompanied by improvements in predictive capability, aquatic research will have failed to deliver on promised objectives. This situation should spark discomfort in aquatic scientists and foster creative approaches toward prediction.
8

Ben-Haim, Yakov, and François M. Hemez. "Robustness, fidelity and prediction-looseness of models." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 468, no. 2137 (September 14, 2011): 227–44. http://dx.doi.org/10.1098/rspa.2011.0050.

Abstract:
Assessment of the credibility of a mathematical or numerical model of a complex system must combine three components: (i) the fidelity of the model to test data, e.g. as quantified by a mean-squared error; (ii) the robustness of model fidelity to lack of understanding of the underlying processes; and (iii) the prediction-looseness of the model. ‘Prediction-looseness’ is the range of predictions of models that are equivalent in terms of fidelity. The main result of this paper asserts that fidelity, robustness and prediction-looseness are mutually antagonistic. A change in the model that enhances one of these attributes will cause deterioration of another. In particular, increasing the fidelity to test data will decrease the robustness to imperfect understanding of the process. Likewise, increasing the robustness will increase the prediction-looseness. The conclusion is that focusing only on fidelity to data is not a sound decision-making strategy for model building and validation. A better strategy is to explore the trade-offs between robustness to uncertainty, fidelity to data and tightness of predictions. Our analysis is based on info-gap models of uncertainty, which can be applied to cases of severe uncertainty and lack of knowledge.
9

Genç, Onur, Bilal Gonen, and Mehmet Ardıçlıoğlu. "A comparative evaluation of shear stress modeling based on machine learning methods in small streams." Journal of Hydroinformatics 17, no. 5 (April 28, 2015): 805–16. http://dx.doi.org/10.2166/hydro.2015.142.

Abstract:
Predicting the shear stress distribution has proved to be a critical problem. Hence, the basic objective of this paper is to develop predictions of shear stress distribution using machine learning algorithms, including artificial neural networks, classification and regression trees, and generalized linear models. The data set, which is large and feature-rich, is utilized to improve machine learning-based predictive models and to extract the most important predictive factors. The 10-fold cross-validation approach was used to determine the performance of the prediction methods. The predictive performances of the proposed models were found to be very close to each other. However, the results indicated that the artificial neural network, with an R value of 0.92 ± 0.03, achieved the best overall performance on the 10-fold holdout sample. The predictions of all machine learning models were well correlated with the measurement data.
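The 10-fold cross-validation protocol used to score the models can be sketched generically. In the sketch below, a simple least-squares line stands in for the paper's learners, the data are synthetic, and the reported R value is taken to be the Pearson correlation between pooled out-of-fold predictions and observations — an assumption, since the paper does not spell out its exact aggregation.

```python
import math

def k_fold_indices(n, k=10):
    """Split range(n) into k strided folds."""
    return [list(range(i, n, k)) for i in range(k)]

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def pearson_r(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / math.sqrt(sum((x - ma) ** 2 for x in a) *
                           sum((y - mb) ** 2 for y in b))

def cross_validated_r(xs, ys, k=10):
    """Fit on k-1 folds, predict the held-out fold, pool, and correlate."""
    preds, obs = [], []
    for fold in k_fold_indices(len(xs), k):
        train = [i for i in range(len(xs)) if i not in fold]
        a, b = fit_line([xs[i] for i in train], [ys[i] for i in train])
        preds += [a + b * xs[i] for i in fold]
        obs += [ys[i] for i in fold]
    return pearson_r(obs, preds)

# Synthetic near-linear data with a small deterministic wiggle.
xs = list(range(30))
ys = [2 * x + 1 + ((-1) ** x) * 0.5 for x in xs]
r = cross_validated_r(xs, ys)
```

Every observation is predicted exactly once by a model that never saw it, which is what makes the resulting R an honest out-of-sample figure rather than a goodness-of-fit statistic.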
10

Kappen, Teus H., and Linda M. Peelen. "Prediction models." Current Opinion in Anaesthesiology 29, no. 6 (December 2016): 717–26. http://dx.doi.org/10.1097/aco.0000000000000386.

11

Afshartous, David, and Jan de Leeuw. "Prediction in Multilevel Models." Journal of Educational and Behavioral Statistics 30, no. 2 (June 2005): 109–39. http://dx.doi.org/10.3102/10769986030002109.

Abstract:
Multilevel modeling is an increasingly popular technique for analyzing hierarchical data. This article addresses the problem of predicting a future observable y*j in the jth group of a hierarchical data set. Three prediction rules are considered and several analytical results on the relative performance of these prediction rules are demonstrated. In addition, the prediction rules are assessed by means of a Monte Carlo study that extensively covers both the sample size and parameter space. Specifically, the sample size space concerns the various combinations of Level 1 (individual) and Level 2 (group) sample sizes, while the parameter space concerns different intraclass correlation values. The three prediction rules employ OLS, prior, and multilevel estimators for the Level 1 coefficients βj. The multilevel prediction rule performs the best across all design conditions, and the prior prediction rule degrades as the number of groups, J, increases. Finally, this article investigates the robustness of the multilevel prediction rule to misspecifications of the Level 2 model.
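The intuition behind the multilevel prediction rule is shrinkage: the group mean is pulled toward the grand mean by an amount governed by group size and the intraclass correlation. The sketch below is a standard BLUP-style formula for a random-intercept model with invented numbers, not the authors' exact estimator for the Level 1 coefficients.

```python
def multilevel_prediction(group_values, grand_mean, icc):
    """BLUP-style prediction of a new observation in a group.

    Shrinks the observed group mean toward the grand mean. With
    rho = tau^2 / (tau^2 + sigma^2) (the intraclass correlation),
    the reliability of a group mean of size n is
    lambda = n*rho / (n*rho + (1 - rho)).
    """
    n = len(group_values)
    group_mean = sum(group_values) / n
    lam = n * icc / (n * icc + (1 - icc))   # reliability of the group mean
    return lam * group_mean + (1 - lam) * grand_mean

# Invented example: a small group scoring above the grand mean of 50.
pred = multilevel_prediction([60, 62, 58], grand_mean=50.0, icc=0.5)
```

Two limiting cases explain the Monte Carlo pattern in the abstract: as the intraclass correlation or group size grows, the prediction approaches the OLS-style group mean; as either shrinks, it approaches the prior-style grand mean, with the multilevel rule interpolating between them.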
12

Siemens, Angela, Spencer J. Anderson, S. Rod Rassekh, Colin J. D. Ross, and Bruce C. Carleton. "A Systematic Review of Polygenic Models for Predicting Drug Outcomes." Journal of Personalized Medicine 12, no. 9 (August 27, 2022): 1394. http://dx.doi.org/10.3390/jpm12091394.

Abstract:
Polygenic models have emerged as promising tools for the prediction of complex traits. Currently, the majority of polygenic models are developed in the context of predicting disease risk, but polygenic models may also prove useful in predicting drug outcomes. This study sought to understand how polygenic models incorporating pharmacogenetic variants are being used in the prediction of drug outcomes. A systematic review was conducted with the aim of gaining insights into the methods used to construct polygenic models, as well as their performance in drug outcome prediction. The search uncovered 89 papers that incorporated pharmacogenetic variants in the development of polygenic models. The most common polygenic models were constructed for drug dosing predictions in anticoagulant therapies (n = 27). While nearly all studies found a significant association between their polygenic model and the investigated drug outcome (93.3%), less than half (47.2%) compared the performance of the polygenic model against clinical predictors, and even fewer (40.4%) sought to validate model predictions in an independent cohort. Additionally, the heterogeneity of the reported performance measures makes the comparison of models across studies challenging. These findings highlight key considerations for future work in developing polygenic models in pharmacogenomic research.
13

Rau, Cheng-Shyuan, Shao-Chun Wu, Jung-Fang Chuang, Chun-Ying Huang, Hang-Tsung Liu, Peng-Chen Chien, and Ching-Hua Hsieh. "Machine Learning Models of Survival Prediction in Trauma Patients." Journal of Clinical Medicine 8, no. 6 (June 5, 2019): 799. http://dx.doi.org/10.3390/jcm8060799.

Abstract:
Background: We aimed to build a model using machine learning for the prediction of survival in trauma patients and compared these model predictions to those predicted by the most commonly used algorithm, the Trauma and Injury Severity Score (TRISS). Methods: Enrolled hospitalized trauma patients from 2009 to 2016 were divided into a training dataset (70% of the original data set) for generation of a plausible model under supervised classification, and a test dataset (30% of the original data set) to test the performance of the model. The training and test datasets comprised 13,208 (12,871 survival and 337 mortality) and 5603 (5473 survival and 130 mortality) patients, respectively. With the provision of additional information such as pre-existing comorbidity status or laboratory data, logistic regression (LR), support vector machine (SVM), and neural network (NN) (with the Stuttgart Neural Network Simulator (RSNNS)) were used to build models of survival prediction and compared to the predictive performance of TRISS. Predictive performance was evaluated by accuracy, sensitivity, and specificity, as well as by area under the curve (AUC) measures of receiver operating characteristic curves. Results: In the validation dataset, NN and the TRISS presented the highest score (82.0%) for balanced accuracy, followed by the SVM (75.2%) and LR (71.8%) models. In the test dataset, NN had the highest balanced accuracy (75.1%), followed by the SVM (70.6%), TRISS (70.2%), and LR (68.9%) models. All four models (LR, SVM, NN, and TRISS) exhibited a high accuracy of more than 97.5% and a sensitivity of more than 98.6%. However, NN exhibited the highest specificity (51.5%), followed by the TRISS (41.5%), SVM (40.8%), and LR (38.5%) models. Conclusions: These four models (LR, SVM, NN, and TRISS) exhibited a similarly high accuracy and sensitivity in predicting the survival of the trauma patients. In the test dataset, the NN model had the highest balanced accuracy and predictive specificity.
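The metrics reported above follow directly from a confusion matrix, and the gap between raw accuracy and balanced accuracy is why the latter matters for this heavily imbalanced outcome. The counts below are invented for illustration — they loosely echo the test set's survival/mortality imbalance but are not the study's actual confusion matrix.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and balanced accuracy
    from confusion-matrix counts (positives = survivors here)."""
    sens = tp / (tp + fn)                    # recall on the positive class
    spec = tn / (tn + fp)                    # recall on the negative class
    acc = (tp + tn) / (tp + fp + tn + fn)
    return {"accuracy": acc, "sensitivity": sens,
            "specificity": spec, "balanced_accuracy": (sens + spec) / 2}

# Hypothetical counts: 5473 survivors vs 130 deaths, as in the test set,
# with most survivors caught but most deaths missed.
m = confusion_metrics(tp=5400, fp=76, tn=54, fn=73)
```

With 98% of cases in one class, a model can post ~97% accuracy while identifying barely 40% of deaths; balanced accuracy averages the two class-wise recalls and exposes exactly that weakness.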
14

Salehi, Mahdi, Mahmoud Lari Dashtbayaz, and Masomeh Heydari. "Audit fees prediction using fuzzy models." Problems and Perspectives in Management 14, no. 2 (May 11, 2016): 104–17. http://dx.doi.org/10.21511/ppm.14(2).2016.11.

Abstract:
The current study aims to predict the optimal amount of independent audit fees based on the factors influencing them. To identify these factors, stakeholders of 30 randomly selected auditing firms, members of the Iranian Association of Certified Public Accountants in Tehran, were interviewed. A linear programming model relating audit fees to their determinants is then defined, and the sum of squared errors is minimized to solve it. Given that the data are quantitative, comparative, and normally distributed, Pearson's correlation coefficient is used to test the research hypotheses. The results show a significant positive correlation among the variables of expected time to perform audit procedures, the number of accounting documents, audit operation risk, complexity of operations, and the existence of specific rules and regulations governing the activities of the entity.
15

Siek, M., and D. P. Solomatine. "Nonlinear chaotic model for predicting storm surges." Nonlinear Processes in Geophysics 17, no. 5 (September 6, 2010): 405–20. http://dx.doi.org/10.5194/npg-17-405-2010.

Abstract:
This paper addresses the use of the methods of nonlinear dynamics and chaos theory for building a predictive chaotic model from time series. The chaotic model predictions are made by adaptive local models based on the dynamical neighbors found in the reconstructed phase space of the observables. We implemented univariate and multivariate chaotic models with direct and multi-step prediction techniques and optimized these models using an exhaustive search method. The built models were tested on predicting storm surge dynamics for different stormy conditions in the North Sea and compared to neural network models. The results show that the chaotic models can generally provide reliable and accurate short-term storm surge predictions.
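The core mechanism — reconstructing the phase space by delay embedding and forecasting from "dynamical neighbours" — can be sketched with a toy local model. This is a minimal analog (nearest-neighbour) forecaster on an invented periodic series; the paper's storm-surge models are far richer (multivariate embeddings, optimized parameters, multi-step schemes).

```python
def delay_embed(series, dim, tau):
    """Reconstruct phase-space vectors (s_t, s_{t-tau}, ..., s_{t-(dim-1)tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - j * tau] for j in range(dim))
            for t in range(start, len(series))]

def local_model_forecast(series, dim=3, tau=1, k=3):
    """Predict the next value as the average next step of the k nearest
    dynamical neighbours of the current state in the reconstructed space."""
    vecs = delay_embed(series, dim, tau)
    current = vecs[-1]
    # Candidate states are earlier vectors, each of which has a known successor.
    cands = list(enumerate(vecs[:-1]))
    cands.sort(key=lambda iv: sum((a - b) ** 2 for a, b in zip(iv[1], current)))
    start = (dim - 1) * tau
    return sum(series[start + i + 1] for i, _ in cands[:k]) / k

# Toy periodic "surge" signal: the state (1, 2, 1) is always followed by 0.
forecast = local_model_forecast([0, 1, 2, 1] * 5)
```

On a truly chaotic series, nearby states diverge over time, which is why such local models work well for short horizons — matching the abstract's finding of reliable short-term predictions.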
16

Lan, Yu, and Daniel F. Heitjan. "Adaptive parametric prediction of event times in clinical trials." Clinical Trials 15, no. 2 (January 29, 2018): 159–68. http://dx.doi.org/10.1177/1740774517750633.

Abstract:
Background: In event-based clinical trials, it is common to conduct interim analyses at planned landmark event counts. Accurate prediction of the timing of these events can support logistical planning and the efficient allocation of resources. As the trial progresses, one may wish to use the accumulating data to refine predictions. Purpose: Available methods to predict event times include parametric cure and non-cure models and a nonparametric approach involving Bayesian bootstrap simulation. The parametric methods work well when their underlying assumptions are met, and the nonparametric method gives calibrated but inefficient predictions across a range of true models. In the early stages of a trial, when predictions have high marginal value, it is difficult to infer the form of the underlying model. We seek to develop a method that will adaptively identify the best-fitting model and use it to create robust predictions. Methods: At each prediction time, we repeat the following steps: (1) resample the data; (2) identify, from among a set of candidate models, the one with the highest posterior probability; and (3) sample from the predictive posterior of the data under the selected model. Results: A Monte Carlo study demonstrates that the adaptive method produces prediction intervals whose coverage is robust within the family of selected models. The intervals are generally wider than those produced assuming the correct model, but narrower than nonparametric prediction intervals. We demonstrate our method with applications to two completed trials: The International Chronic Granulomatous Disease study and Radiation Therapy Oncology Group trial 0129. Limitations: Intervals produced under any method can be badly calibrated when the sample size is small and unhelpfully wide when predicting the remote future. Early predictions can be inaccurate if there are changes in enrollment practices or trends in survival. 
Conclusions: An adaptive event-time prediction method that selects the model given the available data can give improved robustness compared to methods based on less flexible parametric models.
17

Wang, Debby D., Haoran Xie, and Hong Yan. "Proteo-chemometrics interaction fingerprints of protein–ligand complexes predict binding affinity." Bioinformatics 37, no. 17 (February 27, 2021): 2570–79. http://dx.doi.org/10.1093/bioinformatics/btab132.

Abstract:
Motivation: Reliable predictive models of protein–ligand binding affinity are required in many areas of biomedical research. Accurate prediction based on current descriptors or molecular fingerprints (FPs) remains a challenge. We develop novel interaction FPs (IFPs) to encode protein–ligand interactions and use them to improve the prediction. Results: Proteo-chemometrics IFPs (PrtCmm IFPs) were formed by combining extended connectivity fingerprints (ECFPs) with the proteo-chemometrics concept. Combining PrtCmm IFPs with machine-learning models led to efficient scoring models, which were validated on the PDBbind v2019 core set and the CSAR-HiQ sets. The PrtCmm IFP Score outperformed several other models in predicting protein–ligand binding affinities. In addition, conventional ECFPs were simplified to generate new IFPs, which provided consistent but faster predictions. The relationship between the base atom properties of ECFPs and the accuracy of predictions was also investigated. Availability: PrtCmm IFP has been implemented in the IFP Score Toolkit on GitHub (https://github.com/debbydanwang/IFPscore). Supplementary information: Supplementary data are available at Bioinformatics online.
18

Hu, Xiaoping, Laurence V. Madden, Simon Edwards, and Xiangming Xu. "Combining Models is More Likely to Give Better Predictions than Single Models." Phytopathology® 105, no. 9 (September 2015): 1174–82. http://dx.doi.org/10.1094/phyto-11-14-0315-r.

Abstract:
In agricultural research, it is often difficult to construct a single "best" predictive model from data collected under field conditions. We studied the prediction performance of combining empirical linear models, relative to the single best model, in relation to the number of models to be combined, the number of variates in the models, the magnitude of residual errors, and the weighting scheme. Two scenarios were simulated: the modeler did or did not know the relative performance of the models to be combined. In the former case, model averaging is achieved either through weights based on the Akaike Information Criterion (AIC) statistic or with arithmetic averaging; in the latter case, only arithmetic averaging is possible (because the relative model predictive performance is not known for a common dataset). In addition to two experimental datasets on oat mycotoxins in relation to environmental variables, two datasets were generated assuming a consistent correlation structure among explanatory variates with two magnitudes of residual errors. In the majority of cases, model averaging resulted in improved prediction performance over single-model predictions, especially when a modeler does not have information on relative model performance. The fewer the variates in the models to be combined, the greater the improvement of model averaging over single-model predictions; combining models led to very little improvement when the individual models contained many variates. Overall, simple arithmetic averaging resulted in slightly better performance than AIC-based weighted averaging, and the advantage of model averaging was also noticeable for larger residual errors. This study suggests that model averaging generally performs better than single-model prediction, especially when a modeler does not have information on the relative performance of the candidate models.
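The two weighting schemes compared above — AIC-based Akaike weights and plain arithmetic averaging — can be written down in a few lines. The predictions and AIC values below are invented for illustration; the standard Akaike-weight formula is w_i ∝ exp(-(AIC_i - AIC_min)/2).

```python
import math

def akaike_weights(aics):
    """Akaike weights from a list of AIC values (lower AIC = heavier weight)."""
    best = min(aics)
    raw = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

def averaged_prediction(predictions, aics=None):
    """AIC-weighted average when AICs are known; arithmetic average otherwise
    (the only option when relative model performance is unknown)."""
    if aics is None:
        weights = [1 / len(predictions)] * len(predictions)
    else:
        weights = akaike_weights(aics)
    return sum(w * p for w, p in zip(weights, predictions))

# Invented point predictions from three candidate models, with their AICs.
preds = [12.0, 10.0, 15.0]
aics = [100.0, 102.0, 110.0]
w = akaike_weights(aics)
avg = averaged_prediction(preds, aics)
```

An AIC difference of 2 halves a model's relative weight (e^-1 ≈ 0.37), so AIC weighting concentrates mass on the front-runners, while arithmetic averaging spreads it evenly — which, per the study, was often the slightly better choice.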
19

Klunder, Jet H., Sofie L. Panneman, Emma Wallace, Ralph de Vries, Karlijn J. Joling, Otto R. Maarsingh, and Hein P. J. van Hout. "Prediction models for the prediction of unplanned hospital admissions in community-dwelling older adults: A systematic review." PLOS ONE 17, no. 9 (September 23, 2022): e0275116. http://dx.doi.org/10.1371/journal.pone.0275116.

Abstract:
Background: Identification of community-dwelling older adults at risk of unplanned hospitalizations is important to facilitate preventive interventions. Our objective was to review and appraise the methodological quality and predictive performance of prediction models for predicting unplanned hospitalizations in community-dwelling older adults. Methods and findings: We searched MEDLINE, EMBASE and CINAHL from August 2013 to January 2021. Additionally, we checked the references of the identified articles for relevant publications and added studies from two previous reviews that fulfilled the eligibility criteria. We included prospective and retrospective studies with any follow-up period that recruited adults aged 65 and over and developed a prediction model for unplanned hospitalizations. We included models with at least one (internal or external) validation cohort. The models had to be intended for use in a primary care setting. Two authors independently assessed studies for inclusion and undertook data extraction following the recommendations of the CHARMS checklist, while quality assessment was performed using the PROBAST tool. A total of 19 studies met the inclusion criteria. The prediction horizon ranged from 4.5 months to 4 years. The most frequently included variables were specific medical diagnoses (n = 11), previous hospital admission (n = 11), age (n = 11), and sex or gender (n = 8). Predictive performance in terms of area under the curve ranged from 0.61 to 0.78. Models developed to predict potentially preventable hospitalizations tended to have better predictive performance than models predicting hospitalizations in general. Overall, risk of bias was high, predominantly in the analysis domain. Conclusions: Models developed to predict preventable hospitalizations tended to have better predictive performance than models to predict all-cause hospitalizations. There is, however, substantial room for improvement in the reporting and analysis of studies. We recommend better adherence to the TRIPOD guidelines.
20

Kim, Donghyun, Heechan Han, Wonjoon Wang, Yujin Kang, Hoyong Lee, and Hung Soo Kim. "Application of Deep Learning Models and Network Method for Comprehensive Air-Quality Index Prediction." Applied Sciences 12, no. 13 (July 1, 2022): 6699. http://dx.doi.org/10.3390/app12136699.

Abstract:
Accurate pollutant prediction is essential in fields such as meteorology, meteorological disasters, and climate change studies. In this study, long short-term memory (LSTM) and deep neural network (DNN) models were applied to predictions of six pollutants and the comprehensive air-quality index (CAI) from 2015 to 2020 in Korea. In addition, we used the network method to find the best data sources providing the factors that affect comprehensive air-quality index behavior. This study had two steps: (1) predicting the six pollutants, namely fine dust (PM10), fine particulate matter (PM2.5), ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), and carbon monoxide (CO), using the LSTM model; and (2) forecasting the CAI using the six predicted pollutants from the first step as predictors for DNNs. The predictive ability of each model for the six pollutants and the CAI was evaluated by comparison with the observed air-quality data. This study showed that combining a DNN model with the network method provided high predictive power, and this combination could be a remarkable strength in CAI prediction. As the need for disaster management increases, it is anticipated that the LSTM and DNN models with the network method have ample potential to track the dynamics of air pollution behaviors.
APA, Harvard, Vancouver, ISO, and other styles
21

Lyu, Xiaozhong, Cuiqing Jiang, Yong Ding, Zhao Wang, and Yao Liu. "Sales Prediction by Integrating the Heat and Sentiments of Product Dimensions." Sustainability 11, no. 3 (February 11, 2019): 913. http://dx.doi.org/10.3390/su11030913.

Full text
Abstract:
Online word-of-mouth (eWOM) disseminated on social media contains a considerable amount of important information that can predict sales. However, the accuracy of sales prediction models using big data on eWOM is still unsatisfactory. We argue that eWOM contains the heat and sentiments of product dimensions, which can improve the accuracy of prediction models based on multiattribute attitude theory. In this paper, we propose a dynamic topic analysis (DTA) framework to extract the heat and sentiments of product dimensions from big data on eWOM. Ultimately, we propose an autoregressive heat-sentiment (ARHS) model that integrates the heat and sentiments of dimensions into the benchmark predictive model to forecast daily sales. We conduct an empirical study of the movie industry and confirm that the ARHS model is better than other models in predicting movie box-office revenues. The robustness check with regard to predicting opening-week revenues based on a back-propagation neural network also suggests that the heat and sentiments of dimensions can improve the accuracy of sales predictions when the machine-learning method is used.
APA, Harvard, Vancouver, ISO, and other styles
22

Rejovitzky, Elisha, and Eli Altus. "On single damage variable models for fatigue." International Journal of Damage Mechanics 22, no. 2 (April 16, 2012): 268–84. http://dx.doi.org/10.1177/1056789512443902.

Full text
Abstract:
This study focuses on an analytical investigation of the common characteristics of fatigue models based on a single damage variable. The general single damage variable constitutive equation is used to extract several fundamental properties. It is shown that at constant amplitude loads, damage evolution results are sufficient for predicting fatigue life under any load history. Two-level fatigue envelopes constitute an indirect measure of the damage evolution and form an alternative basis for life prediction. In addition, high-to-low and low-to-high envelopes are anti-symmetrical with respect to each other. A new integral formula for life prediction under random loads is verified with the models of Manson and Hashin, and also developed analytically for other models including Chaboche, resulting in analytical predictions. The Palmgren–Miner rule is found to yield an upper bound for fatigue life predictions under random loads, regardless of the load distribution and the specific single damage variable model.
APA, Harvard, Vancouver, ISO, and other styles
23

Merrill, Zachary, Subashan Perera, and Rakié Cham. "Torso Segment Parameter Prediction in Working Adults." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 1257–61. http://dx.doi.org/10.1177/1541931218621289.

Full text
Abstract:
Body segment parameters (BSPs) such as segment mass, center of mass, and radius of gyration are used as inputs in static and dynamic ergonomic and biomechanical models used to predict joint and muscle forces, and related risks of musculoskeletal injury. Because these models are sensitive to BSP values, accurate and representative parameters are necessary for injury risk prediction. While previous studies have determined segment parameters in the general population, as well as the impact of age and obesity levels on these parameters, estimated errors in the prediction of BSPs can be as large as 40% (Durkin, 2003). Thus, more precise values are required for attempting to predict injury risk in individuals. This study aims to provide statistical models for predicting torso segment parameters in working adults using whole body dual energy x-ray absorptiometry (DXA) scan data along with a set of anthropometric measurements. The statistical models were developed on a training subset of the study population, and validated on a testing subset. When comparing the model predictions to the actual BSPs of the testing subset, the predictions were, on average, within 5% of the calculated parameters, while previously developed predictions (de Leva, 1996) had average errors of up to 30%, indicating that the new statistical models greatly increase the accuracy in predicting BSPs.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Yitan, Thomas Brettin, Yvonne A. Evrard, Fangfang Xia, Alexander Partin, Maulik Shukla, Hyunseung Yoo, James H. Doroshow, and Rick L. Stevens. "Enhanced Co-Expression Extrapolation (COXEN) Gene Selection Method for Building Anti-Cancer Drug Response Prediction Models." Genes 11, no. 9 (September 11, 2020): 1070. http://dx.doi.org/10.3390/genes11091070.

Full text
Abstract:
The co-expression extrapolation (COXEN) method has been successfully used in multiple studies to select genes for predicting the response of tumor cells to a specific drug treatment. Here, we enhance the COXEN method to select genes that are predictive of the efficacies of multiple drugs for building general drug response prediction models that are not specific to a particular drug. The enhanced COXEN method first ranks the genes according to their prediction power for each individual drug and then takes a union of top predictive genes of all the drugs, among which the algorithm further selects genes whose co-expression patterns are well preserved between cancer cases for building prediction models. We apply the proposed method on benchmark in vitro drug screening datasets and compare the performance of prediction models built based on the genes selected by the enhanced COXEN method to that of models built on genes selected by the original COXEN method and randomly picked genes. Models built with the enhanced COXEN method always present a statistically significantly improved prediction performance (adjusted p-value ≤ 0.05). Our results demonstrate the enhanced COXEN method can dramatically increase the power of gene expression data for predicting drug response.
APA, Harvard, Vancouver, ISO, and other styles
25

Lema, Guillermo. "Risk Prediction Models." Mayo Clinic Proceedings 96, no. 4 (April 2021): 1095. http://dx.doi.org/10.1016/j.mayocp.2021.02.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hadayeghi, Alireza, Amer S. Shalaby, and Bhagwant N. Persaud. "Safety Prediction Models." Transportation Research Record: Journal of the Transportation Research Board 2019, no. 1 (January 2007): 225–36. http://dx.doi.org/10.3141/2019-27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

DETSKY, ALLAN S. "Clinical prediction models." Acta Anaesthesiologica Scandinavica 39 (June 1995): 134–35. http://dx.doi.org/10.1111/j.1399-6576.1995.tb04295.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Steyerberg, Ewout W., and Hester F. Lingsma. "Validating prediction models." BMJ 336, no. 7648 (April 10, 2008): 789.2–789. http://dx.doi.org/10.1136/bmj.39542.610000.3a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Liao, Lawrence, and Daniel B. Mark. "Clinical prediction models." Journal of the American College of Cardiology 42, no. 5 (September 2003): 851–53. http://dx.doi.org/10.1016/s0735-1097(03)00836-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Tripepi, G., G. Heinze, K. J. Jager, V. S. Stel, F. W. Dekker, and C. Zoccali. "Risk prediction models." Nephrology Dialysis Transplantation 28, no. 8 (May 7, 2013): 1975–80. http://dx.doi.org/10.1093/ndt/gft095.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ranstam, J., J. A. Cook, and G. S. Collins. "Clinical prediction models." British Journal of Surgery 103, no. 13 (November 30, 2016): 1886. http://dx.doi.org/10.1002/bjs.10242.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mijderwijk, Hendrik-Jan, Thomas Beez, Daniel Hänggi, and Daan Nieboer. "Clinical prediction models." Child's Nervous System 36, no. 5 (March 17, 2020): 895–97. http://dx.doi.org/10.1007/s00381-020-04577-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Safaei, Nima, Babak Safaei, Seyedhouman Seyedekrami, Mojtaba Talafidaryani, Arezoo Masoud, Shaodong Wang, Qing Li, and Mahdi Moqri. "E-CatBoost: An efficient machine learning framework for predicting ICU mortality using the eICU Collaborative Research Database." PLOS ONE 17, no. 5 (May 5, 2022): e0262895. http://dx.doi.org/10.1371/journal.pone.0262895.

Full text
Abstract:
Improving the Intensive Care Unit (ICU) management network and building cost-effective and well-managed healthcare systems are high priorities for healthcare units. Creating accurate and explainable mortality prediction models helps identify the most critical risk factors in the patients’ survival/death status and early detect the most in-need patients. This study proposes a highly accurate and efficient machine learning model for predicting ICU mortality status upon discharge using the information available during the first 24 hours of admission. The most important features in mortality prediction are identified, and the effects of changing each feature on the prediction are studied. We used supervised machine learning models and illness severity scoring systems to benchmark the mortality prediction. We also implemented a combination of SHAP, LIME, partial dependence, and individual conditional expectation plots to explain the predictions made by the best-performing model (CatBoost). We proposed E-CatBoost, an optimized and efficient patient mortality prediction model, which can accurately predict the patients’ discharge status using only ten input features. We used eICU-CRD v2.0 to train and validate the models; the dataset contains information on over 200,000 ICU admissions. The patients were divided into twelve disease groups, and models were fitted and tuned for each group. The models’ predictive performance was evaluated using the area under the receiver operating characteristic curve (AUROC). The AUROC scores were 0.86 [std:0.02] to 0.92 [std:0.02] for CatBoost and 0.83 [std:0.02] to 0.91 [std:0.03] for E-CatBoost models across the defined disease groups; if measured over the entire patient population, their AUROC scores were 7 to 18 and 2 to 12 percent higher than the baseline models, respectively. 
Based on SHAP explanations, we found age, heart rate, respiratory rate, blood urea nitrogen, and creatinine level to be the most critical cross-disease features in mortality predictions.
APA, Harvard, Vancouver, ISO, and other styles
34

Muckli, Lars, Lucy S. Petro, and Fraser W. Smith. "Backwards is the way forward: Feedback in the cortical hierarchy predicts the expected future." Behavioral and Brain Sciences 36, no. 3 (May 10, 2013): 221. http://dx.doi.org/10.1017/s0140525x12002361.

Full text
Abstract:
Clark offers a powerful description of the brain as a prediction machine, which offers progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
APA, Harvard, Vancouver, ISO, and other styles
35

Masdiantini, Putu Riesty, and Ni Made Sindy Warasniasih. "Laporan Keuangan dan Prediksi Kebangkrutan Perusahaan." Jurnal Ilmiah Akuntansi 5, no. 1 (June 25, 2020): 196. http://dx.doi.org/10.23887/jia.v5i1.25119.

Full text
Abstract:
This study aims to determine differences in bankruptcy predictions for companies in the cosmetics and household sub-sector listed on the Indonesia Stock Exchange (IDX) using the Altman, Springate, Zmijewski, Taffler, and Fulmer models, and to determine which of the five bankruptcy prediction models is the most accurate. This study uses secondary data in the form of company financial statements for the period 2014-2018. Data analysis in this study used the Kruskal-Wallis test. The results showed there were differences in the bankruptcy predictions of the Altman, Springate, Zmijewski, Taffler, and Fulmer models. The Zmijewski, Taffler, and Fulmer models had the same accuracy level of 100%, so these three models are the most accurate for predicting potential bankruptcy among cosmetics and household sub-sector companies listed on the IDX.
APA, Harvard, Vancouver, ISO, and other styles
36

Rudrappa, Gujanatti. "Machine Learning Models Applied for Rainfall Prediction." Revista Gestão Inovação e Tecnologias 11, no. 3 (June 30, 2021): 179–87. http://dx.doi.org/10.47059/revistageintec.v11i3.1926.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Sentas, Antonis, Lina Karamoutsou, Nikos Charizopoulos, Thomas Psilovikos, Aris Psilovikos, and Athanasios Loukas. "The Use of Stochastic Models for Short-Term Prediction of Water Parameters of the Thesaurus Dam, River Nestos, Greece." Proceedings 2, no. 11 (July 30, 2018): 634. http://dx.doi.org/10.3390/proceedings2110634.

Full text
Abstract:
The scope of this paper is to evaluate the short-term predictive capacity of the stochastic models ARIMA, Transfer Function (TF), and Artificial Neural Networks for water parameters, specifically for 1, 2, and 3 steps ahead (m = 1, 2, and 3). The comparison of statistical parameters indicated that ARIMA models could be proposed as short-term prediction models. In the cases where TF models resulted in better predictions, the difference from ARIMA was minimal, and since the latter are simpler in their construction, they are proposed for short-term prediction. Artificial Neural Networks did not show good short-term predictive capacity in comparison with the aforementioned models.
APA, Harvard, Vancouver, ISO, and other styles
38

Lira Cortes, Ana Laura, and Carlos Fuentes Silva. "Artificial Intelligence Models for Crime Prediction in Urban Spaces." Machine Learning and Applications: An International Journal 8, no. 1 (March 31, 2021): 1–13. http://dx.doi.org/10.5121/mlaij.2021.8101.

Full text
Abstract:
This work presents evidence-based research on neural networks for the development of predictive crime models, finding that the data sets used are focused on historical crime data, crime classification, types of theft at different scales of space and time, and counts of crime and conflict points in urban areas. Among the results, 81% precision is observed for the neural network algorithm's predictions, and the prediction of crime occurrence at a space-time point ranges between 75% and 90% using LSTM (long short-term memory) models. This review also observes that, in the field of justice, systems based on intelligent technologies have been incorporated to carry out activities such as legal advice, prediction and decision-making, national and international cooperation in the fight against crime, police and intelligence services, control systems with facial recognition, search and processing of legal information, predictive surveillance, and the definition of criminal models based on criminal records, histories of incidents in different regions of the city, the location of the police force, established businesses, etc.; that is, they make predictions in the urban context of public security and justice. Finally, the ethical considerations and principles related to predictive developments based on artificial intelligence are presented, which seek to guarantee aspects such as privacy, confidentiality, and the impartiality of the algorithms, as well as to avoid the processing of data under biases or distinctions. 
Therefore, it is concluded that the scenario for the development, research, and operation of predictive crime solutions with neural networks and artificial intelligence in urban contexts is viable and necessary in Mexico, representing an innovative and effective alternative that contributes to addressing insecurity, since the rates of intentional homicide, organized crime, and firearm violence, according to statistics from INEGI, the Global Peace Index, and the Government of Mexico, continue to increase.
APA, Harvard, Vancouver, ISO, and other styles
39

Hier Majumder, C. A., E. Bélanger, S. DeRosier, D. A. Yuen, and A. P. Vincent. "Data assimilation for plume models." Nonlinear Processes in Geophysics 12, no. 2 (February 9, 2005): 257–67. http://dx.doi.org/10.5194/npg-12-257-2005.

Full text
Abstract:
Abstract. We use a four-dimensional variational data assimilation (4D-VAR) algorithm to observe the growth of 2-D plumes from a point heat source. In order to test the predictability of the 4D-VAR technique for 2-D plumes, we perturb the initial conditions and compare the resulting predictions to the predictions given by a direct numerical simulation (DNS) without any 4D-VAR correction. We have studied plumes in fluids with Rayleigh numbers between 10^6 and 10^7 and Prandtl numbers between 0.7 and 70, and we find the quality of the prediction to have a definite dependence on both the Rayleigh and Prandtl numbers. As the Rayleigh number is increased, so is the quality of the prediction, due to an increase of the inertial effects in the adjoint equations for momentum and energy. The horizon predictability time, or how far into the future the 4D-VAR method can predict, decreases as Rayleigh number increases. The quality of the prediction is decreased as Prandtl number increases, however. Quality also decreases with increased prediction time.
APA, Harvard, Vancouver, ISO, and other styles
40

Brüdigam, Tim, Johannes Teutsch, Dirk Wollherr, Marion Leibold, and Martin Buss. "Probabilistic model predictive control for extended prediction horizons." at - Automatisierungstechnik 69, no. 9 (September 1, 2021): 759–70. http://dx.doi.org/10.1515/auto-2021-0025.

Full text
Abstract:
Detailed prediction models with robust constraints and small sampling times in Model Predictive Control yield conservative behavior and large computational effort, especially for longer prediction horizons. Here, we extend and combine previous Model Predictive Control methods that account for prediction uncertainty and reduce computational complexity. The proposed method uses robust constraints on a detailed model for short-term predictions, while probabilistic constraints are employed on a simplified model with increased sampling time for long-term predictions. The underlying methods are introduced before presenting the proposed Model Predictive Control approach. The advantages of the proposed method are shown in a mobile robot simulation example.
APA, Harvard, Vancouver, ISO, and other styles
41

Olusanya, Micheal O., Ropo Ebenezer Ogunsakin, Meenu Ghai, and Matthew Adekunle Adeleke. "Accuracy of Machine Learning Classification Models for the Prediction of Type 2 Diabetes Mellitus: A Systematic Survey and Meta-Analysis Approach." International Journal of Environmental Research and Public Health 19, no. 21 (November 1, 2022): 14280. http://dx.doi.org/10.3390/ijerph192114280.

Full text
Abstract:
Soft-computing and statistical learning models have gained substantial momentum in predicting type 2 diabetes mellitus (T2DM) disease. This paper reviews recent soft-computing and statistical learning models in T2DM using a meta-analysis approach. We searched for papers using soft-computing and statistical learning models focused on T2DM published between 2010 and 2021 on three different search engines. Of 1215 studies identified, 34 studies with 136,952 patients met our inclusion criteria. The pooled algorithm’s performance was able to predict T2DM with an overall accuracy of 0.86 (95% confidence interval [CI] of [0.82, 0.89]). The classification of diabetes prediction was significantly greater in models with a screening and diagnosis (pooled proportion [95% CI] = 0.91 [0.74, 0.97]) when compared to models with nephropathy (pooled proportion = 0.48 [0.76, 0.89] to 0.88 [0.83, 0.91]). For the prediction of T2DM, the decision trees (DT) models had a pooled accuracy of 0.88 [95% CI: 0.82, 0.92], and the neural network (NN) models had a pooled accuracy of 0.85 [95% CI: 0.79, 0.89]. Meta-regression did not provide any statistically significant findings for the heterogeneous accuracy in studies with different diabetes predictions, sample sizes, and impact factors. Additionally, ML models showed high accuracy for the prediction of T2DM. The predictive accuracy of ML algorithms in T2DM is promising, mainly through DT and NN models. However, there is heterogeneity among ML models. We compared the results and models and concluded that this evidence might help clinicians interpret data and implement optimum models for their dataset for T2DM prediction.
APA, Harvard, Vancouver, ISO, and other styles
42

Wynants, L., Y. Vergouwe, S. Van Huffel, D. Timmerman, and B. Van Calster. "Does ignoring clustering in multicenter data influence the performance of prediction models? A simulation study." Statistical Methods in Medical Research 27, no. 6 (September 19, 2016): 1723–36. http://dx.doi.org/10.1177/0962280216668555.

Full text
Abstract:
Clinical risk prediction models are increasingly being developed and validated on multicenter datasets. In this article, we present a comprehensive framework for the evaluation of the predictive performance of prediction models at the center level and the population level, considering population-averaged predictions, center-specific predictions, and predictions assuming an average random center effect. We demonstrated in a simulation study that calibration slopes do not only deviate from one because of over- or underfitting of patterns in the development dataset, but also as a result of the choice of the model (standard versus mixed effects logistic regression), the type of predictions (marginal versus conditional versus assuming an average random effect), and the level of model validation (center versus population). In particular, when data is heavily clustered (ICC 20%), center-specific predictions offer the best predictive performance at the population level and the center level. We recommend that models should reflect the data structure, while the level of model validation should reflect the research question.
APA, Harvard, Vancouver, ISO, and other styles
43

Carballo, Alba, María Durbán, and Dae-Jin Lee. "Out-of-Sample Prediction in Multidimensional P-Spline Models." Mathematics 9, no. 15 (July 26, 2021): 1761. http://dx.doi.org/10.3390/math9151761.

Full text
Abstract:
The prediction of out-of-sample values is an interesting problem in any regression model. In the context of penalized smoothing using a mixed-model reparameterization, a general framework has been proposed for predicting in additive models but without interaction terms. The aim of this paper is to generalize this work, extending the methodology proposed in the multidimensional case, to models that include interaction terms, i.e., when prediction is carried out in a multidimensional setting. Our method fits the data, predicts new observations at the same time, and uses constraints to ensure a consistent fit or impose further restrictions on predictions. We have also developed this method for the so-called smooth-ANOVA model, which allows us to include interaction terms that can be decomposed into the sum of several smooth functions. To illustrate the method, two real data sets were used, one for predicting the mortality of the U.S. population on a logarithmic scale, and the other for predicting the aboveground biomass of Populus trees as a smooth function of height and diameter. We examine the performance of interaction and the smooth-ANOVA model through simulation studies.
APA, Harvard, Vancouver, ISO, and other styles
44

Rahmandad, Hazhir, Ran Xu, and Navid Ghaffarzadegan. "Enhancing long-term forecasting: Learning from COVID-19 models." PLOS Computational Biology 18, no. 5 (May 19, 2022): e1010100. http://dx.doi.org/10.1371/journal.pcbi.1010100.

Full text
Abstract:
While much effort has gone into building predictive models of the COVID-19 pandemic, some have argued that early exponential growth combined with the stochastic nature of epidemics make the long-term prediction of contagion trajectories impossible. We conduct two complementary studies to assess model features supporting better long-term predictions. First, we leverage the diverse models contributing to the CDC repository of COVID-19 USA death projections to identify factors associated with prediction accuracy across different projection horizons. We find that better long-term predictions correlate with: (1) capturing the physics of transmission (instead of using black-box models); (2) projecting human behavioral reactions to an evolving pandemic; and (3) resetting state variables to account for randomness not captured in the model before starting projection. Second, we introduce a very simple model, SEIRb, that incorporates these features and a few other nuances, and offers informative predictions for as far as 20 weeks ahead, with accuracy comparable to the best models in the CDC set. Key to the long-term predictive power of multi-wave COVID-19 trajectories is capturing behavioral responses endogenously: balancing feedbacks where the perceived risk of death continuously changes transmission rates through the adoption and relaxation of various Non-Pharmaceutical Interventions (NPIs).
APA, Harvard, Vancouver, ISO, and other styles
45

Halabi, Susan, Cai Li, and Sheng Luo. "Developing and Validating Risk Assessment Models of Clinical Outcomes in Modern Oncology." JCO Precision Oncology, no. 3 (December 2019): 1–12. http://dx.doi.org/10.1200/po.19.00068.

Full text
Abstract:
The identification of prognostic factors and building of risk assessment prognostic models will continue to play a major role in 21st century medicine in patient management and decision making. Investigators often are interested in examining the relationship among host, tumor-related, and environmental variables in predicting clinical outcomes. We distinguish between static and dynamic prediction models. In static prediction modeling, variables collected at baseline typically are used in building models. On the other hand, dynamic predictive models leverage the longitudinal data of covariates collected during treatment or follow-up and hence provide accurate predictions of patients’ prognoses. To date, most risk assessment models in oncology have been based on static models. In this article, we cover topics related to the analysis of prognostic factors, centering on factors that are both relevant at the time of diagnosis or initial treatment and during treatment. We describe the types of risk prediction and then provide a brief description of the penalized regression methods. We then review the state-of-the art methods for dynamic prediction and compare the strengths and limitations of these methods. Although static models will continue to play an important role in oncology, developing and validating dynamic models of clinical outcomes need to take a higher priority. A framework for developing and validating dynamic tools in oncology seems to still be needed. One of the limitations in oncology that may constrain modelers is the lack of access to longitudinal biomarker data. It is highly recommended that the next generation of risk assessments consider longitudinal biomarker data and outcomes so that prediction can be continually updated.
APA, Harvard, Vancouver, ISO, and other styles
46

Ławryńczuk, Maciej, and Piotr Tatjewski. "Nonlinear predictive control based on neural multi-models." International Journal of Applied Mathematics and Computer Science 20, no. 1 (March 1, 2010): 7–21. http://dx.doi.org/10.2478/v10006-010-0001-y.

Full text
Abstract:
This paper discusses neural multi-models based on Multi Layer Perceptron (MLP) networks and a computationally efficient nonlinear Model Predictive Control (MPC) algorithm which uses such models. Thanks to the nature of the model, it calculates future predictions without using previous predictions. This means that, unlike the classical Nonlinear Auto Regressive with eXternal input (NARX) model, the multi-model is not used recurrently in MPC, and the prediction error is not propagated. In order to avoid nonlinear optimisation, in the discussed suboptimal MPC algorithm the neural multi-model is linearised on-line and, as a result, the future control policy is found by solving a quadratic programming problem.
APA, Harvard, Vancouver, ISO, and other styles
47

Butler, Éadaoin M., José G. B. Derraik, Rachael W. Taylor, and Wayne S. Cutfield. "Prediction Models for Early Childhood Obesity: Applicability and Existing Issues." Hormone Research in Paediatrics 90, no. 6 (2018): 358–67. http://dx.doi.org/10.1159/000496563.

Full text
Abstract:
Statistical models have been developed for the prediction or diagnosis of a wide range of outcomes. However, to our knowledge, only 7 published studies have reported models to specifically predict overweight and/or obesity in early childhood. These models were developed using known risk factors and vary greatly in terms of their discrimination and predictive capacities. There are currently no established guidelines on what constitutes an acceptable level of risk (i.e., risk threshold) for childhood obesity prediction models, but these should be set following consideration of the consequences of false-positive and false-negative predictions, as well as any relevant clinical guidelines. To date, no studies have examined the impact of using early childhood obesity prediction models as intervention tools. While these are potentially valuable to inform targeted interventions, the heterogeneity of the existing models and the lack of consensus on adequate thresholds limit their usefulness in practice.
APA, Harvard, Vancouver, ISO, and other styles
48

Mehdipour, Farhad, Wisanu Boonrat, April Naviza, Vimita Vidhya, and Marianne Cherrington. "Reducing profiling bias in crime risk prediction models." Rere Āwhio - The Journal of Applied Research and Practice, no. 1 (2021): 86–93. http://dx.doi.org/10.34074/rere.00108.

Full text
Abstract:
Crime risk prediction and predictive policing can lead to safer communities, by focusing on crime hotspots. Yet predictive tools should be reliable, and their outputs should be valid, especially across diverse cultures. Machine learning methods in policing systems are topical as they seem to be causing unintended consequences that exacerbate social injustice. Research into machine learning algorithm bias is prevalent, but bias, as it relates to predictive policing, is limited. In this paper, we summarise the findings of nascent scholarship on the topic of bias in predictive policing. The unique contribution of this paper is in the use of a typical police prediction modelling process to unpack how and why such bias can creep into algorithms that have high predictive accuracy. Our research finds that especially when resources are limited, trust in machine learning outputs is elevated; systemic bias of preceding assumptions may replicate. Recommendations include a call for human oversight in machine learning methods with sensitive applications such as automated crime prediction methods. Routine reviews of prediction outputs can ensure unwarranted community targeting is not magnified.
APA, Harvard, Vancouver, ISO, and other styles
49

Yaniv, Ilan, and Robin M. Hogarth. "Judgmental Versus Statistical Prediction: Information Asymmetry and Combination Rules." Psychological Science 4, no. 1 (January 1993): 58–62. http://dx.doi.org/10.1111/j.1467-9280.1993.tb00558.x.

Full text
Abstract:
The relative predictive accuracy of humans and statistical models has long been the subject of controversy even though models have demonstrated superior performance in many studies. We propose that relative performance depends on the amount of contextual information available and whether it is distributed symmetrically to humans and models. Given their different strengths, human and statistical predictions can be profitably combined to improve prediction.
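The combination idea in this abstract can be sketched minimally as a weighted average of a human forecast and a model forecast. The forecasts and equal weighting below are invented illustrations, not the authors' actual combination rules, which are discussed in the cited article.

```python
# Minimal sketch of combining judgmental and statistical forecasts via a
# weighted average. Forecast values and weights are hypothetical.

def combine(human, model, w_human=0.5):
    """Weighted average of paired human and model predictions."""
    return [w_human * h + (1 - w_human) * m for h, m in zip(human, model)]

human_preds = [3.0, 5.0, 8.0]   # judgmental forecasts
model_preds = [4.0, 4.0, 6.0]   # statistical model forecasts
print(combine(human_preds, model_preds))  # equal weights
```

The intuition matching the abstract: when the human and the model see partly different (asymmetric) information, their errors are imperfectly correlated, so even a simple average tends to outperform either source alone.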
50

Jeong, Jiseok, and Changwan Kim. "Comparison of Machine Learning Approaches for Medium-to-Long-Term Financial Distress Predictions in the Construction Industry." Buildings 12, no. 10 (October 20, 2022): 1759. http://dx.doi.org/10.3390/buildings12101759.

Full text
Abstract:
A method for predicting the financial status of construction companies after a medium-to-long-term period can help stakeholders in large construction projects select an appropriate company for the project. This study compares the performances of various prediction models and proposes an appropriate model for predicting the financial distress of construction companies three, five, and seven years ahead of the prediction point. To establish the prediction model, a financial ratio adopted in existing studies on medium-to-long-term prediction in other industries was selected as an additional input variable. The performances of single machine learning models and ensemble models were compared, based on the average prediction performance and the results of the Friedman test. The comparison determined that the random subspace (RS) model exhibited the best performance in predicting the financial status of construction companies after a medium-to-long-term period. The proposed model can be effectively employed to help large-scale project stakeholders avoid damage caused by the financial distress of construction companies during project implementation.
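The Friedman test mentioned in this abstract ranks k models on each of n datasets and asks whether the mean ranks differ more than chance would allow. The sketch below computes the classical Friedman chi-square statistic in plain Python; the error scores are invented for illustration and the implementation ignores tied ranks for simplicity.

```python
# Sketch of the Friedman test statistic for comparing k models across n
# datasets (lower score = better model). Scores are hypothetical; ties
# are not handled in this simplified version.

def friedman_statistic(scores):
    """scores[i][j] = error of model j on dataset i; returns chi^2_F."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])  # best model first
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    # chi^2_F = 12 / (n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

scores = [
    [0.20, 0.25, 0.30],  # dataset 1: model A best, C worst
    [0.18, 0.22, 0.27],
    [0.21, 0.24, 0.26],
    [0.19, 0.23, 0.28],
]
print(friedman_statistic(scores))
```

A large statistic (compared against the chi-square distribution with k-1 degrees of freedom) indicates the models' rankings are consistent across datasets, which is the kind of evidence the study uses before singling out the random subspace model.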