Journal articles on the topic 'PREDICTION MODELS APPLICATIONS'

Consult the top 50 journal articles for your research on the topic 'PREDICTION MODELS APPLICATIONS.'

1

Chung, Chang-Jo. "Spatial Prediction Models and Applications." GEOINFORMATICS 12, no. 2 (2001): 58–59. http://dx.doi.org/10.6010/geoinformatics.12.58.

2

Dammann, Maximilian Peter, Wolfgang Steger, and Kristin Paetzold-Byhain. "OPTIMISED MODELS FOR AR/VR BY USING GEOMETRIC COMPLEXITY METRICS TO CONTROL TESSELLATION." Proceedings of the Design Society 3 (June 19, 2023): 2855–64. http://dx.doi.org/10.1017/pds.2023.286.

Abstract:
AR/VR applications are a valuable tool across product design and the product lifecycle, but their integration is not seamless: CAD models must first be prepared for AR/VR use. One necessary data transformation is the tessellation of the analytically described geometry. To ensure the usability, visual quality and evaluability of the AR/VR application, time-consuming optimisation is needed, depending on product complexity and the performance of the target device. Widespread approaches to this problem are based on iterative mesh decimation, which ignores the varying importance of geometries and the visual quality required in engineering applications. Our predictive approach is an alternative that enables optimisation without iterative process steps on the tessellated geometry. The contribution presents an approach that uses surface-based prediction and enables predictions of the perceived visual quality of the geometries. This includes the investigation of different geometric complexity metrics gathered from the literature as the basis for prediction models. The approach is implemented in a geometry preparation tool and the results are compared with other approaches.
3

Lei, Xiangdong, Changhui Peng, Haiyan Wang, and Xiaolu Zhou. "Individual height–diameter models for young black spruce (Picea mariana) and jack pine (Pinus banksiana) plantations in New Brunswick, Canada." Forestry Chronicle 85, no. 1 (January 1, 2009): 43–56. http://dx.doi.org/10.5558/tfc85043-1.

Abstract:
Historically, height–diameter models have mainly been developed for mature trees; consequently, few height–diameter models have been calibrated for young forest stands. In order to develop equations predicting the height of trees with small diameters, 46 individual height–diameter models were fitted and tested in young black spruce (Picea mariana) and jack pine (Pinus banksiana) plantations between 4 and 8 years of age, measured from 182 plots in New Brunswick, Canada. The models were divided into 2 groups: a diameter-only group and a second group applying both diameter and additional stand- or tree-level variables (composite models). There was little difference in predicting tree height among the former models (Group I), while the latter models (Group II) generally provided better predictions. Based on goodness of fit (R2 and MSE), prediction ability (the bias and its associated prediction and tolerance intervals in absolute and relative terms), and ease of application, 2 Group II models were recommended for predicting individual tree heights within young black spruce and jack pine forest stands. Mean stand height was required for application of these models. The resultant tolerance intervals indicated that most errors (95%) associated with height predictions would be within the following limits (at a 95% confidence level): [-0.54 m, 0.54 m] or [-14.7%, 15.9%] for black spruce and [-0.77 m, 0.77 m] or [-17.1%, 18.6%] for jack pine. The recommended models are statistically reliable for growth and yield applications, regeneration assessment and management planning. Key words: composite model, linear model, model calibration, model validation, prediction interval, tolerance interval
4

Pintelas, Emmanuel, Meletis Liaskos, Ioannis E. Livieris, Sotiris Kotsiantis, and Panagiotis Pintelas. "Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction." Journal of Imaging 6, no. 6 (May 28, 2020): 37. http://dx.doi.org/10.3390/jimaging6060037.

Abstract:
Image classification is a very popular machine learning domain, one now dominated by deep convolutional neural networks. These networks achieve remarkable prediction accuracy but are considered black-box models, since they offer no insight into their inner working mechanism or the reasoning behind their predictions. In a variety of real-world tasks, such as medical applications, interpretability and explainability play a significant role. When decisions on critical issues such as cancer prediction are made by black-box models that deliver high accuracy but no explanation, that accuracy cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential for trusting these models and supporting such critical predictions. Nevertheless, defining and validating the quality of a prediction model's explanation is, in general, highly subjective and unclear. In this work, an accurate and interpretable machine learning framework for image classification problems is proposed, able to produce high-quality explanations. To this end, a feature-extraction and explanation-extraction framework is developed, together with three basic general conditions that validate the quality of any model's prediction explanation in any application domain. The feature-extraction framework extracts and creates transparent, meaningful high-level features for images, while the explanation-extraction framework creates good explanations from these extracted features and the prediction model's inner function, with respect to the proposed conditions. As a case study, brain tumor magnetic resonance images were used to predict glioma cancer. Our results demonstrate the efficiency of the proposed model, which achieved sufficient prediction accuracy while remaining interpretable and explainable in simple human terms.
5

Moskolaï, Waytehad Rose, Wahabou Abdou, Albert Dipanda, and Kolyang. "Application of Deep Learning Architectures for Satellite Image Time Series Prediction: A Review." Remote Sensing 13, no. 23 (November 27, 2021): 4822. http://dx.doi.org/10.3390/rs13234822.

Abstract:
Satellite image time series (SITS) is a sequence of satellite images that record a given area at several consecutive times. The aim of such sequences is to use not only spatial information but also the temporal dimension of the data, which is used for multiple real-world applications, such as classification, segmentation, anomaly detection, and prediction. Several traditional machine learning algorithms have been developed and successfully applied to time series for predictions. However, these methods have limitations in some situations, thus deep learning (DL) techniques have been introduced to achieve the best performance. Reviews of machine learning and DL methods for time series prediction problems have been conducted in previous studies. However, to the best of our knowledge, none of these surveys have addressed the specific case of works using DL techniques and satellite images as datasets for predictions. Therefore, this paper concentrates on the DL applications for SITS prediction, giving an overview of the main elements used to design and evaluate the predictive models, namely the architectures, data, optimization functions, and evaluation metrics. The reviewed DL-based models are divided into three categories, namely recurrent neural network-based models, hybrid models, and feed-forward-based models (convolutional neural networks and multi-layer perceptron). The main characteristics of satellite images and the major existing applications in the field of SITS prediction are also presented in this article. These applications include weather forecasting, precipitation nowcasting, spatio-temporal analysis, and missing data reconstruction. Finally, current limitations and proposed workable solutions related to the use of DL for SITS prediction are also highlighted.
6

Kim, Donghyun, Heechan Han, Wonjoon Wang, Yujin Kang, Hoyong Lee, and Hung Soo Kim. "Application of Deep Learning Models and Network Method for Comprehensive Air-Quality Index Prediction." Applied Sciences 12, no. 13 (July 1, 2022): 6699. http://dx.doi.org/10.3390/app12136699.

Abstract:
Accurate pollutant prediction is essential in fields such as meteorology, disaster management, and climate change research. In this study, long short-term memory (LSTM) and deep neural network (DNN) models were applied to six pollutants and comprehensive air-quality index (CAI) predictions from 2015 to 2020 in Korea. In addition, we used the network method to find the best data sources that provide factors affecting comprehensive air-quality index behaviors. This study had two steps: (1) predicting the six pollutants, namely fine dust (PM10), fine particulate matter (PM2.5), ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), and carbon monoxide (CO), using the LSTM model; (2) forecasting the CAI using the six predicted pollutants from the first step as predictors for DNNs. The predictive ability of each model for the six pollutants and the CAI was evaluated by comparison with the observed air-quality data. This study showed that combining a DNN model with the network method provided high predictive power, and this combination could be a remarkable strength in CAI prediction. As the need for disaster management increases, the LSTM and DNN models with the network method have ample potential to track the dynamics of air pollution behavior.
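The CAI forecast in step (2) targets a composite built from per-pollutant sub-indices. How such a composite aggregates sub-indices can be sketched as follows; the breakpoint table and the "penalty when two or more pollutants are bad" rule are illustrative assumptions in the style of CAI/AQI-type indices, not Korea's official tables:

```python
def sub_index(conc, breakpoints):
    """Map a pollutant concentration onto index units by linear
    interpolation within its breakpoint segment (AQI-style)."""
    for c_lo, c_hi, i_lo, i_hi in breakpoints:
        if c_lo <= conc <= c_hi:
            return i_lo + (i_hi - i_lo) * (conc - c_lo) / (c_hi - c_lo)
    raise ValueError("concentration outside breakpoint table")

# Illustrative two-segment table for PM10 (NOT the official breakpoints).
PM10_BP = [(0, 30, 0, 50), (30, 150, 50, 250)]

def composite_index(sub_indices, bad_threshold=100, penalty=50):
    """Composite index: the worst sub-index, plus a fixed penalty for
    each additional pollutant in the 'bad' range (assumed rule)."""
    worst = max(sub_indices)
    n_bad = sum(1 for s in sub_indices if s > bad_threshold)
    return worst + penalty * max(0, n_bad - 1)
```

For example, sub-indices of 120, 110 and 40 give 120 + 50 = 170 under this rule, since two pollutants exceed the assumed "bad" threshold.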
7

Colditz, Graham A., and Esther K. Wei. "Risk Prediction Models: Applications in Cancer Prevention." Current Epidemiology Reports 2, no. 4 (September 30, 2015): 245–50. http://dx.doi.org/10.1007/s40471-015-0057-1.

8

He, Jianqin, Yong Hu, Xiangzhou Zhang, Lijuan Wu, Lemuel R. Waitman, and Mei Liu. "Multi-perspective predictive modeling for acute kidney injury in general hospital populations using electronic medical records." JAMIA Open 2, no. 1 (November 15, 2018): 115–22. http://dx.doi.org/10.1093/jamiaopen/ooy043.

Abstract:
Abstract Objectives Acute kidney injury (AKI) in hospitalized patients puts them at much higher risk of developing future health problems such as chronic kidney disease, stroke, and heart disease. Accurate AKI prediction would allow timely prevention and intervention. However, current AKI prediction research pays little attention to model-building strategies that fit complex clinical application scenarios. This study aims to build and evaluate AKI prediction models from multiple perspectives that reflect different clinical applications. Materials and Methods A retrospective cohort of 76 957 encounters and relevant clinical variables were extracted from a tertiary care, academic hospital electronic medical record (EMR) system between November 2007 and December 2016. Five machine learning methods were used to build prediction models. Prediction tasks from 4 clinical perspectives with different modeling and evaluation strategies were designed to build and evaluate the models. Results Experimental analysis of the AKI prediction models built from the 4 clinical perspectives suggests a realistic prediction performance, with cross-validated area under the curve ranging from 0.720 to 0.764. Discussion Results show that models built at admission are effective for predicting AKI events in the next day; models built using data with a fixed lead time to AKI onset remain effective in the dynamic clinical scenario in which each patient’s lead time to AKI onset differs. Conclusion To the best of our knowledge, this is the first systematic study to explore multiple clinical perspectives in building predictive models for AKI in the general inpatient population to reflect real performance in clinical application.
9

Hong, Feng, Lu Tian, and Viswanath Devanarayan. "Improving the Robustness of Variable Selection and Predictive Performance of Regularized Generalized Linear Models and Cox Proportional Hazard Models." Mathematics 11, no. 3 (January 20, 2023): 557. http://dx.doi.org/10.3390/math11030557.

Abstract:
High-dimensional data applications often entail the use of various statistical and machine-learning algorithms to identify an optimal signature based on biomarkers and other patient characteristics that predicts the desired clinical outcome in biomedical research. Both the composition and predictive performance of such biomarker signatures are critical in various biomedical research applications. In the presence of a large number of features, however, a conventional regression analysis approach fails to yield a good prediction model. A widely used remedy is to introduce regularization in fitting the relevant regression model. In particular, an L1 penalty on the regression coefficients is extremely useful, and very efficient numerical algorithms have been developed for fitting such models with different types of responses. This L1-based regularization tends to generate a parsimonious prediction model with promising prediction performance, i.e., feature selection is achieved along with construction of the prediction model. The variable selection, and hence the composition of the signature, as well as the prediction performance of the model depend on the choice of the penalty parameter used in the L1 regularization. The penalty parameter is often chosen by K-fold cross-validation. However, such an algorithm tends to be unstable and may yield very different choices of the penalty parameter across multiple runs on the same dataset. In addition, the predictive performance estimates from the internal cross-validation procedure in this algorithm tend to be inflated. In this paper, we propose a Monte Carlo approach to improve the robustness of regularization parameter selection, along with an additional cross-validation wrapper for objectively evaluating the predictive performance of the final model. We demonstrate the improvements via simulations and illustrate the application via a real dataset.
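The instability of a single K-fold run, and the Monte Carlo idea of repeating the split and aggregating the chosen penalties, can be sketched with a toy one-feature lasso (soft-thresholding the least-squares slope). The data, penalty grid and fold count below are all illustrative, not the paper's actual procedure:

```python
import random
import statistics

def soft_threshold(b, lam):
    """Lasso solution for one standardized feature: shrink the
    least-squares slope toward zero by lam."""
    return (b - lam) if b > lam else (b + lam) if b < -lam else 0.0

def fit_lasso(xs, ys, lam):
    b = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return soft_threshold(b, lam)

def cv_best_lambda(xs, ys, grid, k, rng):
    """One K-fold CV run: return the penalty with lowest held-out MSE."""
    idx = list(range(len(xs)))
    rng.shuffle(idx)                      # random fold assignment
    folds = [idx[i::k] for i in range(k)]
    def cv_mse(lam):
        err = 0.0
        for fold in folds:
            tr = [i for i in idx if i not in fold]
            beta = fit_lasso([xs[i] for i in tr], [ys[i] for i in tr], lam)
            err += sum((ys[i] - beta * xs[i]) ** 2 for i in fold)
        return err / len(xs)
    return min(grid, key=cv_mse)

rng = random.Random(0)
xs = [rng.uniform(-1, 1) for _ in range(60)]
ys = [2.0 * x + rng.gauss(0, 0.1) for x in xs]
grid = [0.0, 0.01, 0.05, 0.1, 0.5]

# Single CV runs can disagree across random splits; the Monte Carlo
# wrapper repeats the split and takes the median chosen penalty.
picks = [cv_best_lambda(xs, ys, grid, k=5, rng=rng) for _ in range(20)]
lam_star = statistics.median(picks)
```

A second, outer cross-validation loop (not shown) would then be wrapped around this whole selection to estimate predictive performance without the optimistic bias the abstract describes.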
10

Zhao, Zeyuan, Ping Li, Yongjie Dai, Zhaoe Min, and Lei Chen. "Multi-Task Deep Evidential Sequence Learning for Trustworthy Alzheimer’s Disease Progression Prediction." Applied Sciences 13, no. 15 (August 3, 2023): 8953. http://dx.doi.org/10.3390/app13158953.

Abstract:
Alzheimer’s disease (AD) is an irreversible neurodegenerative disease. Providing trustworthy AD progression predictions for at-risk individuals contributes to the early identification of AD patients and holds significant value for discovering effective treatments and empowering patients to take proactive care. Recently, numerous machine-learning-based disease progression models have emerged, but they often focus solely on enhancing predictive accuracy and ignore the measurement of result reliability. This oversight adversely affects the recognition and acceptance of these models in clinical applications. To address these problems, we propose a multi-task evidential sequence learning model for the trustworthy prediction of disease progression. Specifically, we incorporate evidential deep learning into a multi-task learning framework based on recurrent neural networks. We simultaneously perform AD clinical diagnosis and cognitive score predictions while quantifying the uncertainty of each prediction, without incurring additional computational costs, by leveraging the Dirichlet and Normal-Inverse-Gamma distributions. Moreover, an adaptive weighting scheme is introduced to automatically balance the tasks for more effective training. Finally, experimental results on the TADPOLE dataset validate that our model not only has predictive performance comparable to similar models but also offers reliable quantification of prediction uncertainties, providing a crucial supplementary factor for risk-sensitive AD progression prediction applications.
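The Dirichlet-based uncertainty mentioned here follows the standard evidential deep learning algebra: per-class evidence e_k gives Dirichlet parameters alpha_k = e_k + 1 with strength S = sum(alpha), belief masses b_k = e_k / S, and an uncertainty mass u = K / S, so that sum(b_k) + u = 1. A minimal sketch of that bookkeeping (the evidence values themselves would come from a trained network):

```python
def evidential_summary(evidence):
    """Map non-negative per-class evidence to belief masses, an
    uncertainty mass, and expected class probabilities."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)                       # Dirichlet strength
    belief = [e / S for e in evidence]   # b_k = e_k / S
    u = K / S                            # uncertainty mass
    prob = [a / S for a in alpha]        # E[p_k] = alpha_k / S
    return belief, u, prob
```

With no evidence at all the uncertainty mass is exactly 1; strong evidence for one class drives it toward 0, which is what makes the prediction's reliability readable off the same forward pass.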
11

Daigger, Glen T., and Daniel Nolasco. "Evaluation and design of full-scale wastewater treatment plants using biological process models." Water Science and Technology 31, no. 2 (January 1, 1995): 245–55. http://dx.doi.org/10.2166/wst.1995.0112.

Abstract:
Results from application of the IAWQ Activated Sludge Model No. 1, either with or without the excess biological phosphorus removal model of Dold, to thirteen full-scale wastewater treatment plants are presented. For nitrogen removal applications the model is capable of accurately predicting full-scale plant performance and trends in performance, even using model default parameters. Additional work is needed to allow accurate predictions of the effect of reactor configuration and oxygen transfer systems on plant performance. The model of Dold accurately characterized the steady-state performance of biological nitrogen and phosphorus removal systems, but not their dynamic behavior. Detailed wastewater characterization is necessary to allow accurate prediction of the steady-state performance of biological phosphorus removal systems. Further work is necessary to demonstrate its applicability to dynamic applications.
12

Abdullah, Radhwan M., Abedallah Zaid Abualkishik, Najla Matti Isaacc, Ali A. Alwan, and Yonis Gulzar. "An investigation study for risk calculation of security vulnerabilities on android applications." Indonesian Journal of Electrical Engineering and Computer Science 25, no. 3 (March 1, 2022): 1736. http://dx.doi.org/10.11591/ijeecs.v25.i3.pp1736-1748.

Abstract:
Applications within mobile devices, although useful and entertaining, come with security risks to private information stored within the device, such as name, address, and date of birth. Standards, frameworks, models, and metrics have been proposed and implemented to combat these security vulnerabilities, yet vulnerabilities persist today. In this review, we discuss the risk calculation of Android applications, which is used to determine the overall security of an application. We also present and discuss the permission-based access control models that can be used to evaluate application access to user data. The study further examines the predictive analysis of security risks using machine learning. We conduct a comprehensive review of the leading studies investigating the vulnerabilities of applications for the Android mobile platform. The review examines various well-known vulnerability prediction models and highlights the sources of the vulnerabilities, the prediction techniques, the applications, and the performance of these models. Some models and frameworks prove promising, but much more research on security for Android applications is still needed.
13

Mohammed, Mohammed Ali. "Investigation of financial applications with blockchain technology." Journal of Computer & Electrical and Electronics Engineering Sciences 1, no. 1 (April 28, 2023): 10–14. http://dx.doi.org/10.51271/jceees-0003.

Abstract:
Aims: This article investigates recent advancements in machine learning and blockchain technology for cryptocurrency price prediction. The study presents an ML system using various techniques applied to six different datasets. The findings highlight that simpler models can outperform complex ones in predicting cryptocurrency prices. Methods: The methods used in this study include applying diverse ML techniques such as LSTM, CNN, SVM, KNN, XGBoost, Astro ML, LASSO, RIDGE, linear regression, DT, and GP to six cryptocurrency datasets to predict prices. Results: The research evaluated various machine learning techniques for predicting cryptocurrency prices and reported the following RMSE values: Bitcoin prediction using Nadaraya-Watson kernel regression yielded an RMSE of 0.17, while Dogecoin prediction with linear regression resulted in an RMSE of 0.032. Ethereum price prediction using Gaussian regression achieved an RMSE of 0.02. For USD Coin, a combination of XGBoost, Gaussian regression, and Ridge techniques led to an RMSE of 0.014. Binance Coin price prediction using Gaussian regression had an RMSE of 0.032, and Cardano Coin prediction employing LSTM reached an RMSE of 0.059. Conclusion: This study demonstrated the effectiveness of various machine learning techniques in predicting cryptocurrency prices and revealed that simpler models can outperform complex ones in certain cases. The research contributes valuable insights to the field and can guide future work in cryptocurrency price prediction. The proposed model achieved promising results as evaluated by the RMSE metric.
14

Matsuzaka, Yasunari, and Yoshihiro Uesawa. "Computational Models That Use a Quantitative Structure–Activity Relationship Approach Based on Deep Learning." Processes 11, no. 4 (April 21, 2023): 1296. http://dx.doi.org/10.3390/pr11041296.

Abstract:
In the toxicological testing of new small-molecule compounds, it is desirable to establish in silico test methods to predict toxicity instead of relying on animal testing. Since quantitative structure–activity relationships (QSARs) can predict the biological activity from structural information for small-molecule compounds, QSAR applications for in silico toxicity prediction have been studied for a long time. However, in recent years, the remarkable predictive performance of deep learning has attracted attention for practical applications. In this review, we summarize the application of deep learning to QSAR for constructing prediction models, including a discussion of parameter optimization for deep learning.
15

Hasaballah, Mustafa M., Abdulhakim A. Al-Babtain, Md Moyazzem Hossain, and Mahmoud E. Bakr. "Theoretical Aspects for Bayesian Predictions Based on Three-Parameter Burr-XII Distribution and Its Applications in Climatic Data." Symmetry 15, no. 8 (August 7, 2023): 1552. http://dx.doi.org/10.3390/sym15081552.

Abstract:
Symmetry and asymmetry play vital roles in prediction. Symmetrical data, which follow a predictable pattern, are easier to predict than asymmetrical data, which lack one. Symmetry helps identify patterns within data that can be utilized in predictive models, while asymmetry aids in identifying outliers or anomalies that should be accounted for in the predictive model. Among the various factors associated with storms and their impact on surface temperatures, wind speed stands out as significant. This paper focuses on predicting wind speed by utilizing unified hybrid censored data from the three-parameter Burr-XII distribution. Bayesian prediction bounds for future observations are obtained using both one-sample and two-sample prediction techniques. As explicit expressions for Bayesian predictions of one and two samples are unavailable, we propose the use of the Gibbs sampling process in the Markov chain Monte Carlo framework to obtain estimated predictive distributions. Furthermore, we present a climatic data application to demonstrate the developed uncertainty procedures. Additionally, a simulation study is carried out to examine and contrast the effectiveness of the suggested methods. The results reveal that the Bayes estimates for the parameters outperformed the maximum likelihood estimators.
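Because closed-form predictive expressions are unavailable for the Burr-XII model, the authors sample instead. The two-step logic, first drawing parameters from the posterior and then drawing a future observation given each draw, can be illustrated with a conjugate gamma-exponential toy model standing in for the Burr-XII; the distributions, prior values and "wind speed" sample below are all illustrative assumptions:

```python
import random

def predictive_draws(data, a0=1.0, b0=1.0, n_draws=2000, seed=1):
    """Posterior predictive sampling for Exponential(rate) data with a
    Gamma(a0, b0) prior on the rate: the posterior is Gamma(a0+n, b0+sum)."""
    rng = random.Random(seed)
    shape = a0 + len(data)
    rate = b0 + sum(data)
    draws = []
    for _ in range(n_draws):
        lam = rng.gammavariate(shape, 1.0 / rate)  # step 1: posterior draw
        draws.append(rng.expovariate(lam))         # step 2: future observation
    return draws

def prediction_bounds(draws, level=0.95):
    """Equal-tailed Bayesian prediction interval from predictive draws."""
    s = sorted(draws)
    lo = s[int((1 - level) / 2 * len(s))]
    hi = s[int((1 + level) / 2 * len(s)) - 1]
    return lo, hi

rng = random.Random(0)
data = [rng.expovariate(2.0) for _ in range(200)]  # toy "wind speed" sample
draws = predictive_draws(data)
lo, hi = prediction_bounds(draws)
```

A Gibbs sampler differs only in how step 1 is carried out: the parameters are drawn one at a time from their full conditionals rather than from a closed-form posterior.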
16

Lan, Yu, and Daniel F. Heitjan. "Adaptive parametric prediction of event times in clinical trials." Clinical Trials 15, no. 2 (January 29, 2018): 159–68. http://dx.doi.org/10.1177/1740774517750633.

Abstract:
Background: In event-based clinical trials, it is common to conduct interim analyses at planned landmark event counts. Accurate prediction of the timing of these events can support logistical planning and the efficient allocation of resources. As the trial progresses, one may wish to use the accumulating data to refine predictions. Purpose: Available methods to predict event times include parametric cure and non-cure models and a nonparametric approach involving Bayesian bootstrap simulation. The parametric methods work well when their underlying assumptions are met, and the nonparametric method gives calibrated but inefficient predictions across a range of true models. In the early stages of a trial, when predictions have high marginal value, it is difficult to infer the form of the underlying model. We seek to develop a method that will adaptively identify the best-fitting model and use it to create robust predictions. Methods: At each prediction time, we repeat the following steps: (1) resample the data; (2) identify, from among a set of candidate models, the one with the highest posterior probability; and (3) sample from the predictive posterior of the data under the selected model. Results: A Monte Carlo study demonstrates that the adaptive method produces prediction intervals whose coverage is robust within the family of selected models. The intervals are generally wider than those produced assuming the correct model, but narrower than nonparametric prediction intervals. We demonstrate our method with applications to two completed trials: The International Chronic Granulomatous Disease study and Radiation Therapy Oncology Group trial 0129. Limitations: Intervals produced under any method can be badly calibrated when the sample size is small and unhelpfully wide when predicting the remote future. Early predictions can be inaccurate if there are changes in enrollment practices or trends in survival. 
Conclusions: An adaptive event-time prediction method that selects the model given the available data can give improved robustness compared to methods based on less flexible parametric models.
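The three-step loop listed under Methods can be sketched end-to-end. Here maximum likelihood stands in for the posterior-probability model choice, and the candidate set (exponential versus uniform event times) is purely illustrative, not the parametric family considered in the paper:

```python
import math
import random

def loglik_exponential(times):
    rate = len(times) / sum(times)        # MLE of the rate
    return len(times) * math.log(rate) - rate * sum(times), rate

def loglik_uniform(times):
    theta = max(times)                    # MLE of the upper bound
    return -len(times) * math.log(theta), theta

def adaptive_predict(times, n_boot=50, seed=0):
    """Per bootstrap resample: (1) resample the data, (2) pick the
    best-fitting candidate model, (3) draw a future event time from it."""
    rng = random.Random(seed)
    draws, chosen = [], []
    for _ in range(n_boot):
        boot = [rng.choice(times) for _ in times]       # step 1
        ll_e, rate = loglik_exponential(boot)
        ll_u, theta = loglik_uniform(boot)
        if ll_e >= ll_u:                                # step 2
            chosen.append("exponential")
            draws.append(rng.expovariate(rate))         # step 3
        else:
            chosen.append("uniform")
            draws.append(rng.uniform(0.0, theta))
    return draws, chosen

rng = random.Random(1)
times = [rng.expovariate(1.0) for _ in range(100)]
draws, chosen = adaptive_predict(times)
```

Pooling the draws across resamples yields a predictive distribution whose spread reflects both parameter and model-selection uncertainty, which is the robustness property the abstract reports.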
17

Elish, Mahmoud. "Enhanced prediction of vulnerable Web components using Stochastic Gradient Boosting Trees." International Journal of Web Information Systems 15, no. 2 (June 17, 2019): 201–14. http://dx.doi.org/10.1108/ijwis-05-2018-0041.

Abstract:
Purpose Effective and efficient software security inspection is crucial, as the existence of vulnerabilities represents severe risks to software users. The purpose of this paper is to empirically evaluate the potential application of Stochastic Gradient Boosting Trees (SGBT) as a novel model for enhanced prediction of vulnerable Web components compared to common, popular and recent machine learning models. Design/methodology/approach An empirical study was conducted in which the SGBT and 16 other prediction models were trained, optimized and cross-validated using vulnerability data sets from multiple versions of two open-source Web applications written in PHP. The prediction performance of these models has been evaluated and compared based on accuracy, precision, recall and F-measure. Findings The results indicate that the SGBT models offer improved prediction over the other 16 models and thus are more effective and reliable in predicting vulnerable Web components. Originality/value This paper proposed a novel application of SGBT for enhanced prediction of vulnerable Web components and showed its effectiveness.
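The four evaluation measures named in the abstract all derive from the confusion matrix. A small helper, with the "vulnerable" class treated as positive (an assumption for illustration):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F-measure from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}
```

For example, 8 vulnerable components caught, 2 false alarms, 4 missed and 6 correctly cleared give precision 0.8 and recall 2/3, so the F-measure sits between the two at 8/11.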
18

Mehdipour, Farhad, Wisanu Boonrat, April Naviza, Vimita Vidhya, and Marianne Cherrington. "Reducing profiling bias in crime risk prediction models." Rere Āwhio - The Journal of Applied Research and Practice, no. 1 (2021): 86–93. http://dx.doi.org/10.34074/rere.00108.

Abstract:
Crime risk prediction and predictive policing can lead to safer communities by focusing on crime hotspots. Yet predictive tools should be reliable, and their outputs should be valid, especially across diverse cultures. Machine learning methods in policing systems are topical, as they seem to be causing unintended consequences that exacerbate social injustice. Research into machine learning algorithm bias is prevalent, but research on bias as it relates to predictive policing is limited. In this paper, we summarise the findings of nascent scholarship on the topic of bias in predictive policing. The unique contribution of this paper is the use of a typical police prediction modelling process to unpack how and why such bias can creep into algorithms that have high predictive accuracy. Our research finds that, especially when resources are limited, trust in machine learning outputs is elevated and the systemic bias of preceding assumptions may be replicated. Recommendations include a call for human oversight of machine learning methods with sensitive applications such as automated crime prediction. Routine reviews of prediction outputs can ensure unwarranted community targeting is not magnified.
19

Wang, Debby D., Haoran Xie, and Hong Yan. "Proteo-chemometrics interaction fingerprints of protein–ligand complexes predict binding affinity." Bioinformatics 37, no. 17 (February 27, 2021): 2570–79. http://dx.doi.org/10.1093/bioinformatics/btab132.

Abstract:
Abstract Motivation Reliable predictive models of protein–ligand binding affinity are required in many areas of biomedical research. Accurate prediction based on current descriptors or molecular fingerprints (FPs) remains a challenge. We develop novel interaction FPs (IFPs) to encode protein–ligand interactions and use them to improve the prediction. Results Proteo-chemometrics IFPs (PrtCmm IFPs) were formed by combining extended connectivity fingerprints (ECFPs) with the proteo-chemometrics concept. Combining PrtCmm IFPs with machine-learning models led to efficient scoring models, which were validated on the PDBbind v2019 core set and the CSAR-HiQ sets. The PrtCmm IFP Score outperformed several other models in predicting protein–ligand binding affinities. In addition, conventional ECFPs were simplified to generate new IFPs, which provided consistent but faster predictions. The relationship between the base atom properties of ECFPs and the accuracy of predictions was also investigated. Availability PrtCmm IFP has been implemented in the IFP Score Toolkit on GitHub (https://github.com/debbydanwang/IFPscore). Supplementary information Supplementary data are available at Bioinformatics online.
20

Lekea, Angella, and Wynand J. vdM Steyn. "Performance of Pavement Temperature Prediction Models." Applied Sciences 13, no. 7 (March 24, 2023): 4164. http://dx.doi.org/10.3390/app13074164.

Full text
Abstract:
Appropriate asphalt binder selection is dependent on the correct determination of maximum and minimum pavement temperatures. Temperature prediction models have been developed to determine pavement design temperatures. Accordingly, accurate temperature prediction is necessary to ensure the correct design of climate-resilient pavements and for suitable pavement overlay design. Research has shown that the complexity of the model, input variables, and geographical location, among other factors, affect the accuracy of temperature prediction models. Calibration has also proved to improve the accuracy of the predicted temperature. In this paper, the performance of three pavement temperature prediction models with a sample of materials, including asphalt, was examined. Furthermore, the effect of calibration on model accuracy was evaluated. Temperature data sourced from Pretoria were used to calibrate and test the models. The performance of both the calibrated and uncalibrated models in a different geographical location was also assessed. Asphalt temperature data from two locations in Ghana were used. The determination coefficient (R2), Variance Accounted For (VAF), Maximum Relative Error (MRE) and Root Mean Square Error (RMSE) statistical methods were used in the analysis. It was observed that the models performed better at predicting maximum temperature, while minimum temperature predictions were highly variable. The performance of the models varied for the maximum temperature prediction depending on the material. Calibration improved the accuracy of the models, but test data relevant to each location ought to be used for calibration to be effective. There is also a need for the models to be tested with data sourced from other continents.
APA, Harvard, Vancouver, ISO, and other styles
21

Kolaghassi, Rania, Gianluca Marcelli, and Konstantinos Sirlantzis. "Effect of Gait Speed on Trajectory Prediction Using Deep Learning Models for Exoskeleton Applications." Sensors 23, no. 12 (June 18, 2023): 5687. http://dx.doi.org/10.3390/s23125687.

Full text
Abstract:
Gait speed is an important biomechanical determinant of gait patterns, with joint kinematics being influenced by it. This study aims to explore the effectiveness of fully connected neural networks (FCNNs), with a potential application for exoskeleton control, in predicting gait trajectories at varying speeds (specifically, hip, knee, and ankle angles in the sagittal plane for both limbs). This study is based on a dataset from 22 healthy adults walking at 28 different speeds ranging from 0.5 to 1.85 m/s. Four FCNNs (a generalised-speed model, a low-speed model, a high-speed model, and a low-high-speed model) are evaluated to assess their predictive performance on gait speeds included in the training speed range and on speeds that have been excluded from it. The evaluation involves short-term (one-step-ahead) predictions and long-term (200-time-step) recursive predictions. The results show that the performance of the low- and high-speed models, measured using the mean absolute error (MAE), decreased by approximately 43.7% to 90.7% when tested on the excluded speeds. Meanwhile, when tested on the excluded medium speeds, the performance of the low-high-speed model improved by 2.8% for short-term predictions and 9.8% for long-term predictions. These findings suggest that FCNNs are capable of interpolating to speeds within the maximum and minimum training speed ranges, even if not explicitly trained on those speeds. However, their predictive performance decreases for gaits at speeds beyond or below the maximum and minimum training speed ranges.
APA, Harvard, Vancouver, ISO, and other styles
22

Brüdigam, Tim, Johannes Teutsch, Dirk Wollherr, Marion Leibold, and Martin Buss. "Probabilistic model predictive control for extended prediction horizons." at - Automatisierungstechnik 69, no. 9 (September 1, 2021): 759–70. http://dx.doi.org/10.1515/auto-2021-0025.

Full text
Abstract:
Abstract Detailed prediction models with robust constraints and small sampling times in Model Predictive Control yield conservative behavior and large computational effort, especially for longer prediction horizons. Here, we extend and combine previous Model Predictive Control methods that account for prediction uncertainty and reduce computational complexity. The proposed method uses robust constraints on a detailed model for short-term predictions, while probabilistic constraints are employed on a simplified model with increased sampling time for long-term predictions. The underlying methods are introduced before presenting the proposed Model Predictive Control approach. The advantages of the proposed method are shown in a mobile robot simulation example.
APA, Harvard, Vancouver, ISO, and other styles
23

Jiang, Zhe. "Spatial Structured Prediction Models: Applications, Challenges, and Techniques." IEEE Access 8 (2020): 38714–27. http://dx.doi.org/10.1109/access.2020.2975584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Liski, Erkki P., and Tapio Nummi. "Prediction in Repeated-Measures Models With Engineering Applications." Technometrics 38, no. 1 (February 1996): 25–36. http://dx.doi.org/10.1080/00401706.1996.10484413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Halim, Muhammad, Muslihah Wook, Nor Hasbullah, Noor Razali, and Hasmeda Hamid. "Comparative Assessment of Data Mining Techniques for Flash Flood Prediction." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (March 28, 2022): 126–45. http://dx.doi.org/10.15849/ijasca.220328.09.

Full text
Abstract:
Abstract Data mining techniques have recently drawn considerable attention from the research community for their ability to predict flash flood phenomena. These techniques can bring large-scale flood data into real practice and have become the necessary tools for impact assessment, societal resilience, and disaster control. Although numerous studies have been conducted on data mining techniques and flash flood predictions, domain-specific flash flood prediction models based on existing data mining techniques are still lacking. Notably, this study has focused on the performance of four data mining techniques, namely, logistic regression (LR), artificial neural networks (ANN), k-nearest neighbour (kNN), and support vector machine (SVM) in a comparative assessment as prediction models. The area under the curve (AUC) was utilised to validate these models. The value of AUC was higher than 0.9 for all models. Accordingly, the outcomes outlined in this study can contribute to the current literature by boosting the performance of data mining techniques for predicting flash floods through a comparison of the most recent data mining techniques. Keywords: Artificial neural networks (ANN), Flash flood, k-nearest neighbor (kNN), Logistic regression (LR), Support vector machine (SVM)
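The AUC validation described in this abstract can be illustrated with a short, self-contained sketch. The labels and scores below are invented for illustration (not the study's flood data); AUC is computed through the Mann–Whitney rank identity:

```python
import numpy as np

def auc_score(y_true, y_score):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) identity."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Probability that a random positive outranks a random negative
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical flood labels (1 = flash flood) and scores from two models
y = [0, 0, 0, 1, 1, 0, 1, 1]
model_a = [0.1, 0.3, 0.2, 0.8, 0.9, 0.4, 0.7, 0.95]  # e.g. SVM-style scores
model_b = [0.2, 0.6, 0.1, 0.5, 0.9, 0.3, 0.4, 0.8]   # e.g. LR-style scores
print(auc_score(y, model_a))  # 1.0 (every positive outranks every negative)
print(auc_score(y, model_b))
```

An AUC above 0.9, as reported for all four models in the study, indicates that the classifier ranks flood cases above non-flood cases in over 90% of positive/negative pairs.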
APA, Harvard, Vancouver, ISO, and other styles
26

Che, Tong, Xiaofeng Liu, Site Li, Yubin Ge, Ruixiang Zhang, Caiming Xiong, and Yoshua Bengio. "Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 7002–10. http://dx.doi.org/10.1609/aaai.v35i8.16862.

Full text
Abstract:
AI Safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's prediction. In this paper, we propose a novel framework --- deep verifier networks (DVN) to detect unreliable inputs or predictions of deep discriminative models, using separately trained deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints to separate the label information from the latent representation. We give both intuitive and theoretical justifications for the model. Our verifier network is trained independently with the prediction model, which eliminates the need of retraining the verifier network for a new model. We test the verifier network on both out-of-distribution detection and adversarial example detection problems, as well as anomaly detection problems in structured prediction tasks such as image caption generation. We achieve state-of-the-art results in all of these problems.
APA, Harvard, Vancouver, ISO, and other styles
27

Yu, Jiaqi, Wen-Shao Chang, and Yu Dong. "Building Energy Prediction Models and Related Uncertainties: A Review." Buildings 12, no. 8 (August 21, 2022): 1284. http://dx.doi.org/10.3390/buildings12081284.

Full text
Abstract:
Building energy usage has been an important issue in recent decades, and energy prediction models are important tools for analysing this problem. This study provides a comprehensive review of building energy prediction models and uncertainties in the models. First, this paper introduces three types of prediction methods: white-box models, black-box models, and grey-box models. The principles, strengths, shortcomings, and applications of every model are discussed systematically. Second, this paper analyses prediction model uncertainties in terms of human, building, and weather factors. Finally, the research gaps in predicting building energy consumption are summarised in order to guide the optimisation of building energy prediction methods.
APA, Harvard, Vancouver, ISO, and other styles
28

Jekabsons, Gints, and Marina Uhanova. "Adaptive Regression and Classification Models with Applications in Insurance." Applied Computer Systems 15, no. 1 (July 1, 2014): 28–31. http://dx.doi.org/10.2478/acss-2014-0004.

Full text
Abstract:
Abstract Nowadays, in the insurance industry the use of predictive modeling by means of regression and classification techniques is becoming increasingly important and popular. The success of an insurance company largely depends on the ability to perform such tasks as credibility estimation, determination of insurance premiums, estimation of probability of claim, detecting insurance fraud, and managing insurance risk. This paper discusses regression and classification modeling for such types of prediction problems using the method of Adaptive Basis Function Construction.
APA, Harvard, Vancouver, ISO, and other styles
29

Staffa, Steven J., and David Zurakowski. "Statistical Development and Validation of Clinical Prediction Models." Anesthesiology 135, no. 3 (July 30, 2021): 396–405. http://dx.doi.org/10.1097/aln.0000000000003871.

Full text
Abstract:
Summary Clinical prediction models in anesthesia and surgery research have many clinical applications including preoperative risk stratification with implications for clinical utility in decision-making, resource utilization, and costs. It is imperative that predictive algorithms and multivariable models are validated in a suitable and comprehensive way in order to establish the robustness of the model in terms of accuracy, predictive ability, reliability, and generalizability. The purpose of this article is to educate anesthesia researchers at an introductory level on important statistical concepts involved with development and validation of multivariable prediction models for a binary outcome. Methods covered include assessments of discrimination and calibration through internal and external validation. An anesthesia research publication is examined to illustrate the process and presentation of multivariable prediction model development and validation for a binary outcome. Properly assessing the statistical and clinical validity of a multivariable prediction model is essential for reassuring the generalizability and reproducibility of the published tool.
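As a minimal illustration of the calibration assessment discussed in this summary, the sketch below groups patients by predicted risk and compares the mean predicted risk with the observed event rate per group. The risks and outcomes are invented for illustration, not drawn from any study:

```python
import numpy as np

def calibration_table(y_true, p_pred, bins=4):
    """Group subjects by predicted risk; compare mean predicted risk
    with the observed event rate in each group."""
    y_true = np.asarray(y_true, float)
    p_pred = np.asarray(p_pred, float)
    order = np.argsort(p_pred)                 # sort by predicted risk
    groups = np.array_split(order, bins)       # roughly equal-sized risk groups
    return [(float(p_pred[g].mean()), float(y_true[g].mean())) for g in groups]

# Hypothetical predicted risks and binary outcomes (1 = event)
p = [0.05, 0.10, 0.20, 0.25, 0.40, 0.45, 0.70, 0.80]
y = [0, 0, 0, 1, 0, 1, 1, 1]
for pred_mean, obs_rate in calibration_table(y, p, bins=4):
    print(f"predicted {pred_mean:.2f} vs observed {obs_rate:.2f}")
```

A well-calibrated model shows predicted and observed values tracking each other across risk groups; discrimination (e.g., the c-statistic) is assessed separately.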
APA, Harvard, Vancouver, ISO, and other styles
30

Alqahtani, Norah Dhafer, Bander Alzahrani, and Muhammad Sher Ramzan. "Deep Learning Applications for Dyslexia Prediction." Applied Sciences 13, no. 5 (February 22, 2023): 2804. http://dx.doi.org/10.3390/app13052804.

Full text
Abstract:
Dyslexia is a neurological problem that leads to obstacles and difficulties in the learning process, especially in reading. Generally, people with dyslexia suffer from weak reading, writing, spelling, and fluency abilities. However, these difficulties are not related to their intelligence. An early diagnosis of this disorder will help dyslexic children improve their abilities using appropriate tools and specialized software. Machine learning and deep learning methods have been implemented to recognize dyslexia with various datasets related to dyslexia acquired from medical and educational organizations. This review paper analyzed the prediction performance of deep learning models for dyslexia and summarizes the challenges researchers face when they use deep learning models for classification and diagnosis. Using the PRISMA protocol, 19 articles were reviewed and analyzed, with a focus on data acquisition, preprocessing, feature extraction, and the prediction model performance. The purpose of this review was to aid researchers in building a predictive model for dyslexia based on available dyslexia-related datasets. The paper demonstrated some challenges that researchers encounter in this field and must overcome.
APA, Harvard, Vancouver, ISO, and other styles
31

Loukili, Manal. "Supervised Learning Algorithms for Predicting Customer Churn with Hyperparameter Optimization." International Journal of Advances in Soft Computing and its Applications 14, no. 3 (November 28, 2022): 50–63. http://dx.doi.org/10.15849/ijasca.221128.04.

Full text
Abstract:
Abstract Churn risk is one of the most worrying issues in the telecommunications industry. The methods for predicting churn have been improved to a great extent by the remarkable developments in the world of artificial intelligence and machine learning. In this context, a comparative study of four machine learning models was conducted. The first phase consists of data preprocessing, followed by feature analysis. The third phase involves feature selection. Then, the data is split into the training set and the test set. During the prediction phase, some of the commonly used predictive models were adopted, namely k-nearest neighbor, logistic regression, random forest, and support vector machine. Furthermore, we used cross-validation on the training set for hyperparameter adjustment and for avoiding model overfitting. Next, the hyperparameters were adjusted to increase the models' performance. The results obtained on the test set were evaluated using the feature weights, confusion matrix, accuracy score, precision, recall, error rate, and f1 score. Finally, it was found that the support vector machine model outperformed the other prediction models with an accuracy equal to 96.92%. Keywords: Churn Prediction, Classification Algorithms, Hyperparameter Optimization, Machine Learning, Telecommunications.
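The cross-validated hyperparameter adjustment described above can be sketched in a few lines. This toy example (a hand-rolled 1-D k-nearest-neighbour classifier on synthetic "usage" data, not the paper's pipeline) selects k by maximizing mean k-fold accuracy on the training set:

```python
import numpy as np

def knn_predict(X_tr, y_tr, X_te, k):
    """1-D k-nearest-neighbour majority vote."""
    d = np.abs(X_te[:, None] - X_tr[None, :])           # pairwise distances
    idx = np.argsort(d, axis=1)[:, :k]                  # k nearest training points
    return (y_tr[idx].mean(axis=1) >= 0.5).astype(int)  # majority vote

def cv_accuracy(X, y, k, folds=4):
    """Mean accuracy over k-fold cross-validation on the training set."""
    splits = np.array_split(np.arange(len(X)), folds)
    accs = []
    for test_idx in splits:
        train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
        pred = knn_predict(X[train_idx], y[train_idx], X[test_idx], k)
        accs.append((pred == y[test_idx]).mean())
    return float(np.mean(accs))

# Synthetic "usage" feature; churn (1) occurs below a usage threshold
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 40)
y = (X < 0.5).astype(int)

# Hyperparameter adjustment: pick the k with the best cross-validated accuracy
best_k = max([1, 3, 5], key=lambda k: cv_accuracy(X, y, k))
print(best_k, cv_accuracy(X, y, best_k))
```

Because the folds are carved out of the training set only, the test set remains untouched until final evaluation, which is what guards against the overfitting the abstract mentions.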
APA, Harvard, Vancouver, ISO, and other styles
32

Negi, Pankaj. "Application of Machine Learning in Predicting the Fatigue behaviour of Materials Using Deep Learning." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 9, no. 2 (December 30, 2018): 541–53. http://dx.doi.org/10.17762/turcomat.v9i2.13858.

Full text
Abstract:
Accurate prediction of the fatigue behaviour of materials is crucial for ensuring the reliability and durability of structural components in various engineering applications. Machine learning (ML) techniques have demonstrated significant potential in predicting fatigue behaviour by analysing complex datasets. This research paper explores the application of deep learning, a subset of ML, for predicting the fatigue behaviour of materials. The study focuses on the development and optimization of deep learning models to accurately predict fatigue life and failure modes based on material properties, loading conditions, and other relevant factors. The research aims to improve the understanding and prediction of fatigue behaviour, leading to enhanced design and optimization of materials and structures. The prediction of fatigue behaviour in materials is a critical aspect in engineering design and structural integrity assessment. Traditional approaches rely on empirical models and physical testing, which can be time-consuming and resource-intensive. In recent years, the application of machine learning, particularly deep learning techniques, has shown promising results in predicting the fatigue behaviour of materials. This paper presents an analysis of the application of machine learning, specifically deep learning, in predicting the fatigue behaviour of materials. The study focuses on the use of deep learning models, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to analyse the complex relationships between material properties, loading conditions, and fatigue life. The paper discusses the methodology for training and validating the deep learning models using available fatigue data sets. Furthermore, it examines the performance and accuracy of the models in predicting fatigue life compared to traditional approaches. 
The findings suggest that deep learning models can effectively capture the nonlinear and intricate patterns in fatigue data, leading to accurate predictions of fatigue life. The practical implications of integrating machine learning into fatigue prediction are discussed, including the potential for accelerated design optimization, reduced testing requirements, and enhanced structural reliability. The contribution of this study lies in the exploration and evaluation of deep learning techniques for predicting the fatigue behaviour of materials, providing insights into the capabilities and limitations of machine learning approaches in this domain. The study positions machine learning, particularly deep learning, as a valuable tool in predicting the fatigue behaviour of materials, enabling more efficient and reliable engineering design processes.
APA, Harvard, Vancouver, ISO, and other styles
33

Brunbauer, Julia, and Gerald Pinter. "Stiffness and Strength Based Models for the Fatigue-Life Prediction of Continuously Fiber Reinforced Composites." Materials Science Forum 825-826 (July 2015): 960–67. http://dx.doi.org/10.4028/www.scientific.net/msf.825-826.960.

Full text
Abstract:
The fatigue-life prediction of continuously fiber reinforced carbon/epoxy composites is of importance in order to support or partially replace the extensive amount of mechanical testing necessary for safe structural applications. However, the factors influencing the damage behaviour and the degradation of mechanical properties under real applications are numerous. To be able to predict fatigue-life of composites in an application-oriented way in the future, two novel approaches towards fatigue-life predictions have been studied by the authors in the last years. In this work, the promising approaches based on fatigue stiffness and fatigue strength and their potentials are introduced briefly.
APA, Harvard, Vancouver, ISO, and other styles
34

Bart, Evgeniy, Rui Zhang, and Muzammil Hussain. "Where Would You Go this Weekend? Time-Dependent Prediction of User Activity Using Social Network Data." Proceedings of the International AAAI Conference on Web and Social Media 7, no. 1 (August 3, 2021): 669–72. http://dx.doi.org/10.1609/icwsm.v7i1.14453.

Full text
Abstract:
Predicting user activities and interests has many applications, for example, in contextual recommendations. Although the problem of predicting interests in general has been studied extensively, the problem of predicting when the users are likely to act on those interests has received considerably less attention. Such predictions of timing are extremely important when the application itself is time-sensitive (e.g., travel recommendations are irrelevant too far in advance and after reservations have already been made). Particularly important is the ability to predict likely future activities long in advance (as opposed to short-term prediction of imminent activities). In this paper we describe a comprehensive study that addresses this problem of making long-term time-dependent predictions of user interest. We have conducted this study on a large collection of visits to various venues of interest performed by users of Foursquare. We have built models that, given a user's history, can predict whether or not the user will visit a venue of a particular type on a given day. These models provide useful prediction accuracy of up to 75% for up to several weeks into the future. Our study explores and compares various feature sets and prediction methods. Of particular interest is the fact that venues interact with each other: to predict visits to one type of venue, it helps to use the history of visits to all venue types.
APA, Harvard, Vancouver, ISO, and other styles
35

de Zarzà, I., J. de Curtò, Enrique Hernández-Orallo, and Carlos T. Calafate. "Cascading and Ensemble Techniques in Deep Learning." Electronics 12, no. 15 (August 5, 2023): 3354. http://dx.doi.org/10.3390/electronics12153354.

Full text
Abstract:
In this study, we explore the integration of cascading and ensemble techniques in Deep Learning (DL) to improve prediction accuracy on diabetes data. The primary approach involves creating multiple Neural Networks (NNs), each predicting the outcome independently, and then feeding these initial predictions into another set of NNs. Our exploration starts from an initial preliminary study and extends to various ensemble techniques including bagging, stacking, and finally cascading. The cascading ensemble involves training a second layer of models on the predictions of the first. This cascading structure, combined with ensemble voting for the final prediction, aims to exploit the strengths of multiple models while mitigating their individual weaknesses. Our results demonstrate significant improvement in prediction accuracy, providing a compelling case for the potential utility of these techniques in healthcare applications, specifically for the prediction of diabetes, where we achieve a compelling model accuracy of 91.5% on the test set of a particularly challenging dataset and compare thoroughly against many other methodologies.
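The cascading structure described in the abstract — a second layer of models trained on the predictions of the first, with an ensemble vote at the end — can be sketched with trivial base learners. Decision stumps stand in for the paper's neural networks here, and the data are synthetic:

```python
import numpy as np

class Stump:
    """Minimal base learner: predicts 1 when one feature exceeds a threshold."""
    def __init__(self, feature):
        self.f = feature
    def fit(self, X, y):
        # choose the threshold with the best training accuracy on this feature
        candidates = np.unique(X[:, self.f])
        self.t = max(candidates, key=lambda t: ((X[:, self.f] >= t) == y).mean())
        return self
    def predict(self, X):
        return (X[:, self.f] >= self.t).astype(int)

def cascade_predict(X, y, X_new):
    # Layer 1: independent base models, one per input feature
    layer1 = [Stump(f).fit(X, y) for f in range(X.shape[1])]
    Z = np.column_stack([m.predict(X) for m in layer1])          # meta-features
    Z_new = np.column_stack([m.predict(X_new) for m in layer1])
    # Layer 2: models trained on the first layer's predictions
    layer2 = [Stump(f).fit(Z, y) for f in range(Z.shape[1])]
    votes = np.column_stack([m.predict(Z_new) for m in layer2])
    return (votes.mean(axis=1) >= 0.5).astype(int)               # ensemble vote

# Synthetic binary-outcome data (a stand-in for the diabetes features)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
X_new = rng.uniform(0, 1, (50, 2))
y_new = (X_new[:, 0] + X_new[:, 1] > 1.0).astype(int)

pred = cascade_predict(X, y, X_new)
print("held-out accuracy:", (pred == y_new).mean())
```

The key design point is that the second layer never sees the raw features, only the first layer's outputs, so it learns how to weigh and combine the base models.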
APA, Harvard, Vancouver, ISO, and other styles
36

SCHOPF, JENNIFER M., and FRANCINE BERMAN. "USING STOCHASTIC INFORMATION TO PREDICT APPLICATION BEHAVIOR ON CONTENDED RESOURCES." International Journal of Foundations of Computer Science 12, no. 03 (June 2001): 341–63. http://dx.doi.org/10.1142/s0129054101000527.

Full text
Abstract:
Prediction is a critical component in the achievement of application execution performance. The development of adequate and accurate prediction models is especially difficult in local-area clustered environments where resources are distributed and performance varies due to the presence of other users in the system. This paper discusses the use of stochastic values to parameterize cluster application performance models. Stochastic values represent a range of likely behavior and can be used effectively as model parameters. We describe two representations for stochastic model parameters and demonstrate their effectiveness in predicting the behavior of several applications under different workloads on a contended network of workstations.
APA, Harvard, Vancouver, ISO, and other styles
37

Myasnikova, Ekaterina, and Alexander Spirov. "Relative sensitivity analysis of the predictive properties of sloppy models." Journal of Bioinformatics and Computational Biology 16, no. 02 (April 2018): 1840008. http://dx.doi.org/10.1142/s0219720018400085.

Full text
Abstract:
Common among the model parameters characterizing complex biological systems are those that do not significantly influence the quality of the fit to experimental data, so-called “sloppy” parameters. The sloppiness can be mathematically expressed through saturating response functions (Hill’s, sigmoid), thereby embodying biological mechanisms responsible for the system robustness to external perturbations. However, if a sloppy model is used for the prediction of the system behavior at the altered input (e.g. knock-out mutations, natural expression variability), it may demonstrate poor predictive power due to the ambiguity in the parameter estimates. We introduce a method of predictive power evaluation under parameter estimation uncertainty, Relative Sensitivity Analysis. The prediction problem is addressed in the context of gene circuit models describing the dynamics of segmentation gene expression in the Drosophila embryo. Gene regulation in these models is introduced by a saturating sigmoid function of the concentrations of the regulatory gene products. We show how our approach can be applied to characterize the essential difference between the sensitivity properties of robust and non-robust solutions and select among the existing solutions those providing the correct system behavior at any reasonable input. In general, the method allows us to uncover the sources of incorrect predictions and proposes a way to overcome the estimation uncertainties.
APA, Harvard, Vancouver, ISO, and other styles
38

Xue, Han, and Yanmin Niu. "Multi-Output Based Hybrid Integrated Models for Student Performance Prediction." Applied Sciences 13, no. 9 (April 26, 2023): 5384. http://dx.doi.org/10.3390/app13095384.

Full text
Abstract:
In higher education, student learning relies increasingly on autonomy. With the rise in blended learning, both online and offline, students need to further improve their online learning effectiveness. Therefore, predicting students’ performance and identifying students who are struggling in real time to intervene is an important way to improve learning outcomes. However, machine learning in grade prediction applications currently tends to employ only single-output prediction methods and suffers from lagging issues. To advance the prediction time and enhance the predictive attributes, as well as address the aforementioned issues, this study proposes a multi-output hybrid ensemble model that utilizes data from the Superstar Learning Communication Platform (SLCP) to predict grades. Experimental results show that using the first six weeks of SLCP data and the Xgboost model to predict mid-term and final grades achieved an accuracy of 78.37%, which was 3–8% higher than the comparison models. Using the Gdbt model to predict homework and experiment grades, the average mean squared error was 16.76, which is better than the comparison models. This study shows how using a multi-output hybrid ensemble model to predict grades can help improve student learning quality and teacher teaching effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Meng-Wei, Meng-Shiuh Chang, Yuehua Mao, Shuyin Hu, and Chih-Chun Kung. "Machine learning in the evaluation and prediction models of biochar application: A review." Science Progress 106, no. 1 (January 2023): 003685042211488. http://dx.doi.org/10.1177/00368504221148842.

Full text
Abstract:
This article reviews recent studies applying machine learning (ML) approaches to biochar applications. We first briefly introduce the general biochar production process. Various aspects are covered, including the biochar application in the elimination of heavy metals and/or organic compounds and the biochar application in environmental and economic scopes, for instance, food security, energy, and carbon emission. The utilization of ML methods, including ANN, RF, and NN, plays a vital role in evaluating and predicting the efficiency of biochar adsorption. It has been proved that ML methods can validly predict the adsorption effectiveness of biochar for heavy metals in water with higher accuracy. Moreover, the literature proposed a comprehensive data-driven model to forecast biochar yield and compositions under various biomass input feedstocks and different pyrolysis criteria, reporting a 12.7% improvement in prediction accuracy compared to the existing literature, although further optimization in this direction may still be needed. In summary, this review concludes from a growing number of studies that a well-trained ML method can substantially reduce the number of experiment trials and working time while achieving higher prediction accuracy. Moreover, further studies on ML applications are needed to optimize the trade-off between biochar yield and its composition.
APA, Harvard, Vancouver, ISO, and other styles
40

Muneer, Rizwan, Muhammad Rehan Hashmet, Peyman Pourafshary, and Mariam Shakeel. "Unlocking the Power of Artificial Intelligence: Accurate Zeta Potential Prediction Using Machine Learning." Nanomaterials 13, no. 7 (March 29, 2023): 1209. http://dx.doi.org/10.3390/nano13071209.

Full text
Abstract:
Nanoparticles have gained significance in modern science due to their unique characteristics and diverse applications in various fields. Zeta potential is critical in assessing the stability of nanofluids and colloidal systems but measuring it can be time-consuming and challenging. The current research proposes the use of cutting-edge machine learning techniques, including multiple regression analyses (MRAs), support vector machines (SVM), and artificial neural networks (ANNs), to simulate the zeta potential of silica nanofluids and colloidal systems, while accounting for affecting parameters such as nanoparticle size, concentration, pH, temperature, brine salinity, monovalent ion type, and the presence of sand, limestone, or nano-sized fine particles. Zeta potential data from different literature sources were used to develop and train the models using machine learning techniques. Performance indicators were employed to evaluate the models’ predictive capabilities. The correlation coefficient (r) for the ANN, SVM, and MRA models was found to be 0.982, 0.997, and 0.68, respectively. The mean absolute percentage error for the ANN model was 5%, whereas, for the MRA and SVM models, it was greater than 25%. ANN models were more accurate than SVM and MRA models at predicting zeta potential, and the trained ANN model achieved an accuracy of over 97% in zeta potential predictions. ANN models are more accurate and faster at predicting zeta potential than conventional methods. The model developed in this research is the first ever to predict the zeta potential of silica nanofluids, dispersed kaolinite, sand–brine system, and coal dispersions considering several influencing parameters. This approach eliminates the need for time-consuming experimentation and provides a highly accurate and rapid prediction method with broad applications across different fields.
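The two performance indicators quoted above, the correlation coefficient r and the mean absolute percentage error, are straightforward to compute. The sketch below uses invented zeta-potential values, not the study's data:

```python
import numpy as np

def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient between measured and predicted values."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def mape(y_true, y_pred):
    """Mean absolute percentage error (in %)."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical zeta-potential measurements (mV) and two models' predictions
measured = [-35.0, -28.0, -20.0, -12.0, -5.0]
model_a = [-34.0, -27.5, -21.0, -11.5, -5.2]   # close fit (ANN-like)
model_b = [-30.0, -20.0, -25.0, -18.0, -9.0]   # poorer fit
print(pearson_r(measured, model_a), mape(measured, model_a))
print(pearson_r(measured, model_b), mape(measured, model_b))
```

As in the study, the two metrics need not agree: a model can rank samples well (high r) while still missing the magnitudes (high MAPE), which is why both are reported.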
APA, Harvard, Vancouver, ISO, and other styles
41

Levinson, Rich, Samantha Niemoeller, Sreeja Nag, and Vinay Ravindra. "Planning Satellite Swarm Measurements for Earth Science Models: Comparing Constraint Processing and MILP Methods." Proceedings of the International Conference on Automated Planning and Scheduling 32 (June 13, 2022): 471–79. http://dx.doi.org/10.1609/icaps.v32i1.19833.

Full text
Abstract:
We compare two planner solutions for a challenging Earth science application to plan coordinated measurements (observations) for a constellation of satellites. This problem is combinatorially explosive, involving many degrees of freedom for planner choices. Each satellite carries two different sensors and is maneuverable to 61 pointing angle options. The sensors collect data to update the predictions made by a high-fidelity global soil moisture prediction model. Soil moisture is an important geophysical variable whose knowledge is used in applications such as crop health monitoring and predictions of floods, droughts, and fires. The global soil-moisture model produces soil-moisture predictions with associated prediction errors over the globe represented by a grid of 1.67 million Ground Positions (GPs). The prediction error varies over space and time and can change drastically with events like rain/fire. The planner's goal is to select measurements which reduce prediction errors to improve future predictions. This is done by targeting high-quality observations at locations of high prediction-error. Observations can be made in multiple ways, such as by using one or more instruments or different pointing angles; the planner seeks to select the way with the least measurement-error (higher observation quality). In this paper we compare two planning approaches to this problem: Dynamic Constraint Processing (DCP) and Mixed Integer Linear Programming (MILP). We match inputs and metrics for both DCP and MILP algorithms to enable a direct apples-to-apples comparison. DCP uses domain heuristics to find solutions within a reasonable time for our application but cannot be proven optimal, while the MILP produces provably optimal solutions. We demonstrate and discuss the trades between DCP flexibility and performance vs. MILP's promise of provable optimality.
APA, Harvard, Vancouver, ISO, and other styles
42

de-Miguel, Sergio, Lauri Mehtätalo, and Ali Durkaya. "Developing generalized, calibratable, mixed-effects meta-models for large-scale biomass prediction." Canadian Journal of Forest Research 44, no. 6 (June 2014): 648–56. http://dx.doi.org/10.1139/cjfr-2013-0385.

Full text
Abstract:
Large-scale prediction of forest biomass is of interest for forest science, ecology, and issues related to climate change. Previous research has attempted to provide allometric models suitable for large-scale biomass prediction using different methods. We present a new approach for meta-analysis of existing biomass equations using mixed-effects modelling on simulated data. The resulting generalized meta-models can be calibrated for local conditions. This meta-analytical approach allows for directly benefiting from previous research to minimize data collection and properly take into account the unknown differences between different locations within large areas. The approach is demonstrated by developing pan-Mediterranean mixed-effects meta-models for Pinus brutia Ten. The fixed part of the meta-models enables sound aboveground biomass predictions throughout practically the full native range of the species. Significant improvement in the predictive performance can be further gained by using small local datasets for model calibration. The calibration procedure for location-specific biomass prediction is based on best linear unbiased predictor of random effects. The predictive performance of the meta-models under different sampling strategies is validated in an independent dataset. The results show that mixed-effects meta-models may enable accurate and robust large-scale biomass predictions. Calibration for specific locations based on minimal data collection effort performs better than fitting location-specific equations based on much larger samples. The advantages of mixed-effects meta-models are of interest not only for further biomass-related research and applications, but also for other modelling disciplines within forest science.
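The calibration step the abstract describes, a best linear unbiased predictor (BLUP) of the random effects from a small local sample, can be sketched for the simplest case of a random intercept. The coefficients and variances below are hypothetical placeholders, not the paper's fitted Pinus brutia meta-model; for a random intercept the BLUP reduces to shrinking the mean local residual toward zero.

```python
import numpy as np

# Hypothetical fixed part of a log-log biomass meta-model and assumed
# variance components (random-intercept variance TAU2, residual SIGMA2).
BETA = (-2.0, 2.4)          # ln(biomass) = b0 + b1 * ln(dbh), illustrative
TAU2, SIGMA2 = 0.04, 0.09

def fixed_prediction(dbh):
    """Population-level prediction from the fixed effects only."""
    return BETA[0] + BETA[1] * np.log(dbh)

def calibrate_intercept(dbh_sample, y_sample):
    """BLUP of the location's random intercept from a small local sample:
    the mean residual, shrunk by n*TAU2 / (n*TAU2 + SIGMA2)."""
    resid = y_sample - fixed_prediction(dbh_sample)
    n = len(resid)
    return n * TAU2 / (n * TAU2 + SIGMA2) * resid.mean()

# A local sample whose biomass runs ~0.2 (log units) above the curve:
dbh = np.array([10.0, 15.0, 20.0, 30.0])
y = fixed_prediction(dbh) + 0.2
b_hat = calibrate_intercept(dbh, y)           # shrunk: 0 < b_hat < 0.2
calibrated = fixed_prediction(25.0) + b_hat   # location-specific prediction
```

With only four local trees the estimated intercept is pulled part of the way toward the population mean, which is the "minimal data collection effort" behaviour the validation in the paper exploits.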
APA, Harvard, Vancouver, ISO, and other styles
43

Ma, Xin. "Research on a Novel Kernel Based Grey Prediction Model and Its Applications." Mathematical Problems in Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/5471748.

Full text
Abstract:
Discrete grey prediction models have attracted considerable research interest due to their effectiveness in improving the modelling accuracy of traditional grey prediction models. The autoregressive GM(1,1) model, abbreviated as ARGM(1,1), is a novel discrete grey model which is easy to use and accurate in predicting approximately nonhomogeneous exponential time series. However, the ARGM(1,1) is essentially a linear model; thus, its applicability is still limited. In this paper a novel kernel-based ARGM(1,1) model is proposed, abbreviated as KARGM(1,1). The KARGM(1,1) has a nonlinear function which can be expressed by a kernel function using the kernel method, and its modelling procedures are presented in detail. Two case studies of predicting monthly gas well production are carried out with real-world production data. The results of the KARGM(1,1) model are compared to existing discrete univariate grey prediction models, including ARGM(1,1), NDGM(1,1,k), DGM(1,1), and NGBMOP, and it is shown that the KARGM(1,1) outperforms the other four models.
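The paper's kernel extension is not reproduced here, but the linear core that ARGM(1,1) and KARGM(1,1) generalise can be shown with a minimal discrete grey model, DGM(1,1): fit the recursion x1(k+1) = b1·x1(k) + b2 on the accumulated series by least squares, roll it forward, and difference back to the original scale.

```python
import numpy as np

# Minimal DGM(1,1): the linear discrete grey model underlying ARGM(1,1).
def dgm11_forecast(x0, steps=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # accumulated generating series
    # Least-squares fit of x1(k+1) = b1 * x1(k) + b2:
    A = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(A, x1[1:], rcond=None)[0]
    x1_ext = list(x1)
    for _ in range(steps):                    # roll the recursion forward
        x1_ext.append(b1 * x1_ext[-1] + b2)
    return np.diff(np.array(x1_ext))[-steps:]  # back-difference to x0 scale

series = [2 * 1.1 ** k for k in range(1, 9)]  # homogeneous exponential data
print(dgm11_forecast(series, steps=1))        # close to 2 * 1.1 ** 9
```

For a homogeneous exponential series the recursion is exact (b1 equals the growth ratio), which is why the abstract stresses that the linear ARGM(1,1) suits approximately exponential data while genuinely nonlinear series motivate the kernel version.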
APA, Harvard, Vancouver, ISO, and other styles
44

Anand, Mayank, Arun Velu, and Pawan Whig. "Prediction of Loan Behaviour with Machine Learning Models for Secure Banking." Journal of Computer Science and Engineering (JCSE) 3, no. 1 (February 15, 2022): 1–13. http://dx.doi.org/10.36596/jcse.v3i1.237.

Full text
Abstract:
Because loan default prediction has such a large impact on earnings, it is one of the most influential credit-scoring problems that banks and other financial organisations face. There have been several traditional methods for mining information from a loan application, as well as newer machine learning methods; most of these methods appear to be failing as the number of loan defaults has increased. For loan default prediction, a variety of techniques such as Multiple Logistic Regression, Decision Tree, Random Forests, Gaussian Naive Bayes, Support Vector Machines, and other ensemble methods are presented in this research work. The prediction is based on loan data from multiple internet sources such as Kaggle, as well as data sets from the applicant's loan application. Significant evaluation measures including the Confusion Matrix, Accuracy, Recall, Precision, F1-Score, ROC analysis area, and Feature Importance have been calculated and are shown in the results section. It is found that the Extra Trees Classifier and Random Forest have the highest accuracy. Using predictive modelling, this research provides effective results for loan credit disapproval of vulnerable consumers among a large number of loan applications.
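A model-comparison loop of the kind the paper runs can be sketched with scikit-learn. The features (`income`, `debt_ratio`) and the data-generating rule below are synthetic stand-ins, not the Kaggle loan data; the point is the shape of the comparison, not the numbers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)         # hypothetical applicant features
debt_ratio = rng.uniform(0, 1, n)
# Default is more likely with low income and a high debt ratio:
default = (debt_ratio * 60 - income + rng.normal(0, 8, n) > 0).astype(int)

X = np.column_stack([income, debt_ratio])
X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)

models = {
    "logistic": LogisticRegression(),
    "random_forest": RandomForestClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, round(accuracy_score(y_te, pred), 3),
          round(f1_score(y_te, pred), 3))
```

On real loan data the same loop would be extended with the confusion matrix, ROC area, and feature importances the abstract lists.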
APA, Harvard, Vancouver, ISO, and other styles
45

Becker, Steffen, and Vishy Karri. "Implementation of Neural Network Models for Parameter Estimation of a PEM-Electrolyzer." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 6 (September 20, 2010): 735–45. http://dx.doi.org/10.20965/jaciii.2010.p0735.

Full text
Abstract:
Predictive models were built using neural networks for hydrogen flow rate, electrolyzer system efficiency, and stack efficiency, respectively. A comprehensive experimental database forms the foundation for the predictive models. It is argued that, due to the high costs associated with hydrogen measuring equipment, these reliable predictive models can be implemented as virtual sensors. These models can also be used online for monitoring and safety of hydrogen equipment. The quantitative accuracy of the predictive models is appraised using statistical techniques. These mathematical models are found to be reliable predictive tools with an excellent accuracy of ±3% compared with experimental values. The predictive nature of these models did not show any significant bias toward either over-prediction or under-prediction. These predictive models, built on a sound mathematical and quantitative basis, can be seen as a step towards establishing hydrogen performance prediction models as generic virtual sensors for wider safety and monitoring applications.
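The virtual-sensor idea can be sketched with a small neural regressor: train on cheap process measurements, then use the model in place of the expensive hydrogen flow meter. The feature names and the data-generating function below are hypothetical, not the authors' experimental database or network architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
current = rng.uniform(10, 100, 500)      # stack current, A (assumed input)
temperature = rng.uniform(30, 70, 500)   # cell temperature, degC (assumed)
# Hypothetical flow-rate response with a mild interaction term and noise:
flow = 0.05 * current * (1 + 0.004 * temperature) + rng.normal(0, 0.05, 500)

# Scale inputs to comparable ranges before feeding the network:
X = np.column_stack([current / 100.0, temperature / 70.0])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(X, flow)
print(model.score(X, flow))              # R^2 of the virtual-sensor fit
```

Once trained, `model.predict` plays the role of the physical sensor for online monitoring; in practice the fit would be validated against held-out experimental data, as the paper does with its ±3% accuracy claim.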
APA, Harvard, Vancouver, ISO, and other styles
46

Gao, Jian, and Tang-Wei Kuo. "Toward the accurate prediction of soot in engine applications." International Journal of Engine Research 20, no. 7 (May 14, 2018): 706–17. http://dx.doi.org/10.1177/1468087418773937.

Full text
Abstract:
Soot emissions from internal combustion engines represent a major challenge to engine manufacturers facing ever more stringent emission regulations, which limit not only the soot mass yielded but also the soot particle number. For example, a particulate number standard was introduced in 2011 with Euro 5b for diesel engines and in 2014 with Euro 6 for petrol engines (a limit of 6 × 10¹¹/km). Soot models provide a detailed insight into soot evolution processes and are thus an essential tool in today’s advanced engine designs. Therefore, continuous efforts are made to develop more physically based engine soot models and improve their prediction accuracy. The primary objective of this work is to identify and demonstrate, from an engineering point of view, the critical parameters for accurate soot predictions in internal combustion engine applications using a high-fidelity detailed soot model. A detailed soot model based on the sectional method was used to solve the soot process in diesel and spark ignition direct injection gasoline engines. A series of sensitivity analyses were carried out to evaluate the importance and significance of wall boundary conditions, wall film formation and vaporization, multi-component fuel surrogates, and the soot transport process in engine exhaust on soot predictions. The predicted results were compared in detail to engine-out measurements in terms of soot mass, number density, and size distributions under various operating conditions. The model results demonstrate that the correct description of the spray–wall interaction and wall film vaporization, as well as of the soot transport processes over the full engine cycle, is critical for achieving reliable predictive capabilities in engine simulations, especially for spark ignition direct injection gasoline engines. The findings should help engineers in this field make more accurate soot predictions in engine simulations.
APA, Harvard, Vancouver, ISO, and other styles
47

Habib, M. A., J. J. O’Sullivan, and M. Salauddin. "Prediction of Wave Overtopping Characteristics at Coastal Flood Defences Using Machine Learning Algorithms: A Systematic Review." IOP Conference Series: Earth and Environmental Science 1072, no. 1 (September 1, 2022): 012003. http://dx.doi.org/10.1088/1755-1315/1072/1/012003.

Full text
Abstract:
The assessment of coastal defences requires reliable prediction of mean overtopping discharges and acceptable overtopping rates for defined design conditions, a process of increasing importance given that global and regional climate change and associated sea level rises are becoming more acute. Prediction of overtopping discharge is usually computed from physical, analytical, and numerical models. However, the ongoing development of soft computing techniques now offers the potential for rapid, relatively simple, and economically attractive methods of predicting overtopping. The application of Machine Learning (ML) algorithms has become increasingly prominent in models for estimating wave overtopping at flood defences. Here we review ML methods as tools for accurate prediction of overtopping and overtopping parameters. A systematic review of 32 publications, published between 2001 and 2021 (the last twenty years), identified Decision Trees and Artificial Neural Networks (ANNs) as the most popular ML methods for the analysis of wave overtopping datasets. A comparison of estimates of overtopping and overtopping parameters from these models with those from commonly used (empirical) prediction models highlights the potential of ML methods for these applications. The review, however, highlights important limitations of the methods and identifies future research needs that may serve as an impetus for further development of these ML algorithms for wave overtopping, particularly in applications characterised by complex geometrical configurations.
APA, Harvard, Vancouver, ISO, and other styles
48

Moreira, Gabriel S., Heeseung Jo, and Jinkyu Jeong. "NAP: Natural App Processing for Predictive User Contexts in Mobile Smartphones." Applied Sciences 10, no. 19 (September 23, 2020): 6657. http://dx.doi.org/10.3390/app10196657.

Full text
Abstract:
The resource management of an application is an essential task in smartphones. Optimizing the application launch process results in a faster and more efficient system, directly impacting the user experience. Predicting the next application to be used lets the smartphone direct system resources to the correct application, making the system more intelligent and efficient. Neural networks have been producing outstanding state-of-the-art results in mapping large sequences of data, outperforming previous classification and prediction models. A recurrent neural network (RNN) is an artificial neural network suited to sequence models, and it can recognize patterns in sequences. One of the areas that uses RNNs is language modeling (LM). Given an arrangement of words, LM can learn how the words are organized in sentences, making it possible to predict the next word given a group of previous words. We propose building a predictive model inspired by LM; however, instead of using words, we use previous applications to predict the next application. Moreover, context features such as timestamp and energy record are included in the prediction model to evaluate their impact on performance. We provide the next-application prediction result and extend it to the top-k candidates for the next application.
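The language-model analogy can be made concrete with a deliberately simplified stand-in for the paper's RNN: treat the app-launch history as a "sentence" and rank the next app by bigram counts. The RNN generalises this to long contexts and to the extra features (timestamp, energy record) the abstract mentions; the launch history below is made up.

```python
from collections import Counter, defaultdict

# Bigram next-app predictor: a simplified stand-in for the RNN language
# model. following[prev] counts which apps were launched right after prev.
class NextAppPredictor:
    def __init__(self):
        self.following = defaultdict(Counter)

    def fit(self, history):
        for prev, nxt in zip(history, history[1:]):
            self.following[prev][nxt] += 1

    def top_k(self, current_app, k=3):
        """Return the k most likely next apps after current_app."""
        return [app for app, _ in self.following[current_app].most_common(k)]

history = ["mail", "browser", "mail", "music", "mail", "browser", "camera"]
p = NextAppPredictor()
p.fit(history)
print(p.top_k("mail"))  # "browser" ranked first (seen twice after "mail")
```

Returning the top-k candidates rather than a single app mirrors the paper's evaluation, and lets the OS pre-warm several likely applications at once.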
APA, Harvard, Vancouver, ISO, and other styles
49

Rashid, M., and Jafri Din. "Effects of reduction factor on rain attenuation predictions over millimeter-wave links for 5G applications." Bulletin of Electrical Engineering and Informatics 9, no. 5 (October 1, 2020): 1907–15. http://dx.doi.org/10.11591/eei.v9i5.2188.

Full text
Abstract:
Millimeter-wave bands are strong contenders for the terrestrial links used in 5G networks, so it is imperative to examine these frequency bands to ensure uninterrupted service when 5G networks are deployed in tropical regions. A critical challenge of link budgeting in mm-wave 5G networks is the precise estimation of rain attenuation for short-path links. The difficulties are further intensified in tropical areas, where the rainfall rate is very high. Different models have been proposed to predict rain attenuation; however, recent measurements show huge discrepancies with predictions for shorter links at mm-wave frequencies. The path reduction factor is the main parameter in the prediction model for converting specific rain attenuation into total attenuation. This study investigates four path reduction factor models for the prediction of rain attenuation. A comparison was made between these models based on rain attenuation data measured at 26 GHz on 300 m and 1.3 km links in Malaysia. All models are found to predict rain attenuation on the 1.3 km link with minimal error, while tremendous discrepancies are observed for the 300 m link. Hence it is highly recommended to further investigate the reduction factor model for links shorter than 1 km.
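The role of the path reduction factor can be illustrated with one classic formulation, r = 1/(1 + d/d0) with d0 = 35·exp(−0.015·R), used in earlier revisions of ITU-R P.530. This is a sketch, not one of the four models the paper tests, and the k and alpha regression coefficients below are rough placeholders for 26 GHz rather than values from ITU-R P.838.

```python
import math

# Illustrative rain-attenuation estimate for a short terrestrial link.
# k and alpha are placeholder specific-attenuation coefficients, not
# the tabulated ITU-R P.838 values for 26 GHz.
def rain_attenuation_db(d_km, rain_rate_mmh, k=0.17, alpha=1.0):
    gamma = k * rain_rate_mmh ** alpha               # dB/km (specific)
    d0 = 35.0 * math.exp(-0.015 * min(rain_rate_mmh, 100.0))
    r = 1.0 / (1.0 + d_km / d0)                      # path reduction factor
    return gamma * d_km * r                          # total attenuation, dB

# Tropical 0.01%-exceeded rain rate on the order of 100 mm/h:
for d in (0.3, 1.3):
    print(f"{d} km link: {rain_attenuation_db(d, 100.0):.1f} dB")
```

Because r shrinks as the path lengthens, total attenuation grows sub-linearly with distance; the paper's finding is that this factor, tuned on kilometre-scale links, misbehaves when extrapolated down to a 300 m path.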
APA, Harvard, Vancouver, ISO, and other styles
50

Tunthanathip, Thara, Sakchai Sae-heng, Thakul Oearsakul, Ittichai Sakarunchai, Anukoon Kaewborisutsakul, and Chin Taweesomboonyat. "Machine learning applications for the prediction of surgical site infection in neurological operations." Neurosurgical Focus 47, no. 2 (August 2019): E7. http://dx.doi.org/10.3171/2019.5.focus19241.

Full text
Abstract:
OBJECTIVE Surgical site infection (SSI) following a neurosurgical operation is a complication that impacts morbidity, mortality, and economics. Currently, machine learning (ML) algorithms are used for outcome prediction in various neurosurgical aspects. The implementation of ML algorithms to learn from medical data may help in obtaining prognostic information on diseases, especially SSIs. The purpose of this study was to compare the performance of various ML models for predicting surgical infection after neurosurgical operations. METHODS A retrospective cohort study was conducted on patients who had undergone neurosurgical operations at tertiary care hospitals between 2010 and 2017. Supervised ML algorithms, which included decision tree, naive Bayes with Laplace correction, k-nearest neighbors, and artificial neural networks, were trained and tested as binary classifiers (infection or no infection). To evaluate the ML models on the testing data set, their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), as well as their accuracy, receiver operating characteristic curve, and area under the receiver operating characteristic curve (AUC), were analyzed. RESULTS Data were available for 1471 patients in the study period. The SSI rate was 4.6%, and the type of SSI was superficial, deep, and organ/space in 1.2%, 0.8%, and 2.6% of cases, respectively. Using the backward stepwise method, the authors determined that the significant predictors of SSI in the multivariable Cox regression analysis were postoperative CSF leakage/subgaleal collection (HR 4.24, p < 0.001) and postoperative fever (HR 1.67, p = 0.04). Compared with the other ML algorithms, naive Bayes had the highest performance, with sensitivity of 63%, specificity of 87%, PPV of 29%, NPV of 96%, and AUC of 76%. CONCLUSIONS The naive Bayes algorithm is highlighted as an accurate ML method for predicting SSI after neurosurgical operations because of its reasonable accuracy. Thus, it can be used to effectively predict SSI in individual neurosurgical patients, and close monitoring and allocation of treatment strategies can be informed by ML predictions in general practice.
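The winning classifier, naive Bayes with Laplace correction, is simple enough to sketch from scratch. The binary features below echo the abstract's significant predictors (postoperative CSF leakage, fever), but the tiny cohort is invented for illustration, not the study's 1471-patient data set.

```python
from collections import defaultdict

# Minimal Bernoulli naive Bayes with Laplace correction for binary
# features; the training counts below are synthetic.
class LaplaceNaiveBayes:
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = {c: y.count(c) / len(y) for c in self.classes}
        self.cond = defaultdict(dict)
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            for j in range(len(X[0])):
                ones = sum(r[j] for r in rows)
                # Laplace correction: add one count to each of the two
                # feature outcomes so no probability is ever zero.
                self.cond[c][j] = (ones + 1) / (len(rows) + 2)
        return self

    def predict(self, x):
        def score(c):
            p = self.prior[c]
            for j, v in enumerate(x):
                p *= self.cond[c][j] if v else 1 - self.cond[c][j]
            return p
        return max(self.classes, key=score)

# Synthetic cohort: [csf_leak, fever] -> infection (1) or no infection (0)
X = [[1, 1], [1, 0], [0, 1], [0, 0], [0, 0], [0, 0], [1, 1], [0, 1]]
y = [1, 1, 0, 0, 0, 0, 1, 0]
clf = LaplaceNaiveBayes().fit(X, y)
print(clf.predict([1, 1]))  # -> 1 (infection predicted)
```

With an SSI rate under 5%, the class prior dominates unless the evidence is strong, which is why the study reports a high NPV (96%) alongside a modest PPV (29%).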
APA, Harvard, Vancouver, ISO, and other styles