Journal articles on the topic 'Prediction and analysis'

To see the other types of publications on this topic, follow the link: Prediction and analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Prediction and analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Gaur, Varun, Sharad Bhardwaj, Utsav Gaur, and Sushant Gupta. "Stock Market Prediction & Analysis." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 4404–8. http://dx.doi.org/10.22214/ijraset.2022.43403.

Full text
Abstract:
Stock trading is one of the most essential activities in the financial sector. Stock market prediction is the act of attempting to anticipate the future value of a stock or other exchange-traded financial instrument. This paper illustrates how machine learning can be used to predict a stock: most stockbrokers rely on time series analysis or on technical and fundamental analysis when deciding on stock predictions, and here the Python programming language is used to build the forecasting models. We propose a cost-effective machine learning (ML) strategy that is trained on publicly available stock data and then applies what it has learned to make accurate predictions. In this setting, a Support Vector Machine (SVM) is used to predict stock prices for large- and small-cap companies in three different markets, using up-to-date daily and weekly pricing frequencies. Keywords: Support Vector Machine, Stock Market, Machine Learning, Predictions
APA, Harvard, Vancouver, ISO, and other styles
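As a concrete illustration of the kind of pipeline this entry describes, the following is a minimal, hedged sketch of an SVM predicting next-day price direction from lagged returns with scikit-learn. It is not the authors' code; the input file, feature choices, and parameters are illustrative assumptions.

```python
# Illustrative sketch of an SVM stock-direction predictor (not the cited paper's code).
# Assumes a CSV with a 'Close' column; features and parameters are arbitrary examples.
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

prices = pd.read_csv("daily_prices.csv")      # hypothetical input file
returns = prices["Close"].pct_change()

# Lagged daily returns as features; next-day direction (up = 1) as the target.
features = pd.concat({f"lag_{k}": returns.shift(k) for k in range(1, 6)}, axis=1)
target = (returns.shift(-1) > 0).astype(int)

data = features.assign(target=target).dropna().iloc[:-1]   # drop incomplete rows and the last day
X, y = data.drop(columns="target"), data["target"]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = TimeSeriesSplit(n_splits=5)              # preserve temporal order during validation
print("Directional accuracy:", cross_val_score(model, X, y, cv=cv).mean())
```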
2

Carlsson, Leo S., Mikael Vejdemo-Johansson, Gunnar Carlsson, and Pär G. Jönsson. "Fibers of Failure: Classifying Errors in Predictive Processes." Algorithms 13, no. 6 (June 23, 2020): 150. http://dx.doi.org/10.3390/a13060150.

Full text
Abstract:
Predictive models are used in many different fields of science and engineering and are always prone to make faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe fibers of failure (FiFa), a method to classify failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction errors. We demonstrate two ways to use the failure mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes; or to inspect members of the failure modes to illustrate and investigate what characterizes each failure mode. We demonstrate FiFa on two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Jae Kwon, and Sanggil Kang. "Neural Network-Based Coronary Heart Disease Risk Prediction Using Feature Correlation Analysis." Journal of Healthcare Engineering 2017 (2017): 1–13. http://dx.doi.org/10.1155/2017/2780501.

Full text
Abstract:
Background. Of the machine learning techniques used in predicting coronary heart disease (CHD), the neural network (NN) is popularly used to improve performance accuracy. Objective. Even though NN-based systems provide meaningful results based on clinical experiments, medical experts are not satisfied with their predictive performance because an NN is trained in a “black-box” style. Method. We sought to devise an NN-based prediction of CHD risk using feature correlation analysis (NN-FCA) in two stages: first, a feature selection stage, in which features are ranked according to their importance in predicting CHD risk, and second, a feature correlation analysis stage, in which the correlations between feature relations and the output of each NN predictor are determined. Result. Of the 4146 individuals in the Korean dataset evaluated, 3031 had low CHD risk and 1115 had high CHD risk. The area under the receiver operating characteristic (ROC) curve of the proposed model (0.749 ± 0.010) was larger than that of the Framingham risk score (FRS) (0.393 ± 0.010). Conclusions. The proposed NN-FCA, which utilizes feature correlation analysis, was found to be better than the FRS in terms of CHD risk prediction. Furthermore, the proposed model resulted in a larger area under the ROC curve and more accurate predictions of CHD risk in the Korean population than the FRS.
APA, Harvard, Vancouver, ISO, and other styles
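A minimal sketch of the general two-stage pattern described above (rank features, then train a neural network and score it by ROC AUC). This is not the NN-FCA implementation from the paper; mutual information is used here only as a stand-in ranking step, on synthetic data.

```python
# Hedged sketch: feature ranking followed by a neural-network risk model scored by ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)  # stand-in cohort

# Stage 1: rank features by importance (mutual information as a stand-in criterion).
scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:10]

# Stage 2: train a small neural network on the selected features and evaluate AUC.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```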
4

Wade, Bruce A., Krishnendu Ghosh, and Peter J. Tonellato. "Optimization of a Gene Analysis Application." Computing Letters 2, no. 1-2 (March 6, 2006): 81–88. http://dx.doi.org/10.1163/157404006777491927.

Full text
Abstract:
MetaGene is a software environment for gene analysis developed at the Bioinformatics Research Center, Medical College of Wisconsin. In this work, a new neural network optimization module is developed to enhance the prediction of gene features developed by MetaGene. The input of the neural network consists of gene feature predictions from several gene analysis engines used by MetaGene. When compared, these predictions are often in conflict. The output from the neural net is a synthesis of these individual predictions taking into account the degree of conflict detected. This optimized prediction provides a more accurate answer when compared to the default prediction of MetaGene or any single prediction engine’s solution.
APA, Harvard, Vancouver, ISO, and other styles
5

Dall’Aglio, John. "Sex and Prediction Error, Part 3: Provoking Prediction Error." Journal of the American Psychoanalytic Association 69, no. 4 (August 2021): 743–65. http://dx.doi.org/10.1177/00030651211042059.

Full text
Abstract:
In parts 1 and 2 of this Lacanian neuropsychoanalytic series, surplus prediction error was presented as a neural correlate of the Lacanian concept of jouissance. Affective consciousness (a key source of prediction error in the brain) impels the work of cognition, the predictive work of explaining what is foreign and surprising. Yet this arousal is the necessary bedrock of all consciousness. Although the brain’s predictive model strives for homeostatic explanation of prediction error, jouissance “drives a hole” in the work of homeostasis. Some residual prediction error always remains. Lacanian clinical technique attends to this surplus and the failed predictions to which this jouissance “sticks.” Rather than striving to eliminate prediction error, clinical practice seeks its metabolization. Analysis targets one’s mode of jouissance to create a space for the subject to enjoy in some other way. This entails working with prediction error, not removing or tolerating it. Analysis aims to shake the very core of the subject by provoking prediction error—this drives clinical change. Brief clinical examples illustrate this view.
APA, Harvard, Vancouver, ISO, and other styles
6

Yin, Tao, and Yiming Wang. "Nonlinear analysis and prediction of soybean futures." Agricultural Economics (Zemědělská ekonomika) 67, No. 5 (May 20, 2021): 200–207. http://dx.doi.org/10.17221/480/2020-agricecon.

Full text
Abstract:
We use chaotic artificial neural network (CANN) technology to predict the price of the most widely traded agricultural futures – soybean futures. The nonlinearity test results show that the soybean futures time series exhibits multifractal dynamics, long-range dependence, self-similarity, and chaotic characteristics. This also provides a basis for the construction of a CANN model. Compared with the artificial neural network (ANN) structure used as our benchmark system, the predictability of the CANN is much higher. The ANN is based on a Gaussian kernel function and is only suitable for local approximation of nonstationary signals, so it cannot approach the global nonlinear chaotic hidden pattern. Improving the prediction accuracy of soybean futures prices is of great significance for investors, soybean producers, and decision makers.
APA, Harvard, Vancouver, ISO, and other styles
7

Panchal, D. S., M. B. Shelke, S. S. Kawathekar, and S. N. Deshmukh. "Prediction of Healthcare Quality Using Sentiment Analysis." Indian Journal Of Science And Technology 16, no. 21 (June 3, 2023): 1603–13. http://dx.doi.org/10.17485/ijst/v16i21.2506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

N, Sathyanarayana, Anjani Lahoty, Anubhav ., Archana S, and Dhanush Rao H S. "PREDICTIVE ANALYSIS OF SPORTS DATA USING MACHINE LEARNING." International Research Journal of Computer Science 9, no. 8 (August 13, 2022): 240–44. http://dx.doi.org/10.26562/irjcs.2022.v0908.17.

Full text
Abstract:
There are numerous methods for making sports predictions, and data analysis is crucial to predicting. Previous attempts in sports data analysis have resulted in the prediction of sports such as football, tennis next shot location prediction, Olympic athlete performance, basketball slam dunk shot frequency, and many more. Cricket prediction is tough due to the numerous variables that might affect the result or outcome of a cricket match. Previously, simple cricket match prediction systems focused on the venue, ignoring aspects such as weather, stadium size, captaincy, etc. Factors such as the match's location, pitch, weather conditions, first-pitch batting, and fielding all play a role in forecasting the match's outcome. To predict, suitable models are required, and data mining allows the required information to be extracted from data sets. This paper is a review of techniques used for predicting the winners of three different games. In order to anticipate various facts linked to a certain match, such as the outcome of the match, an injured player's performance in the match, the discovery of new talents in the game, etc., various machine learning algorithms can be used to exploit the statistical data of the game. The objective is to correctly forecast the outcome of a specific game.
APA, Harvard, Vancouver, ISO, and other styles
9

Zain, Zuhaira Muhammad, Mona Alshenaifi, Abeer Aljaloud, Tamadhur Albednah, Reham Alghanim, Alanoud Alqifari, and Amal Alqahtani. "Predicting breast cancer recurrence using principal component analysis as feature extraction: an unbiased comparative analysis." International Journal of Advances in Intelligent Informatics 6, no. 3 (November 6, 2020): 313. http://dx.doi.org/10.26555/ijain.v6i3.462.

Full text
Abstract:
Breast cancer recurrence is among the most noteworthy fears faced by women. Nevertheless, with modern innovations in data mining technology, early recurrence prediction can help relieve these fears. Although medical information is typically complicated, and simplifying searches to the most relevant input is challenging, new sophisticated data mining techniques promise accurate predictions from high-dimensional data. In this study, the performances of three established data mining algorithms: Naïve Bayes (NB), k-nearest neighbor (KNN), and fast decision tree (REPTree), adopting the feature extraction algorithm principal component analysis (PCA), were contrasted for predicting breast cancer recurrence. The comparison was conducted between models built in the absence and presence of PCA. The results showed that KNN produced better prediction without PCA (F-measure = 72.1%), whereas the other two techniques, NB and REPTree, improved when used with PCA (F-measure = 76.1% and 72.8%, respectively). This study can benefit the healthcare industry by assisting physicians in predicting breast cancer recurrence precisely.
APA, Harvard, Vancouver, ISO, and other styles
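The with/without-PCA comparison can be sketched as follows. This is not the authors' code or data: scikit-learn's DecisionTreeClassifier stands in for REPTree, and a built-in dataset stands in for the recurrence data.

```python
# Hedged sketch of comparing classifiers with and without PCA, scored by F-measure.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the recurrence dataset
models = {
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "Tree": DecisionTreeClassifier(random_state=0),   # REPTree approximated by a CART tree
}

for name, clf in models.items():
    plain = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    with_pca = cross_val_score(make_pipeline(PCA(n_components=5), clf), X, y, cv=5, scoring="f1").mean()
    print(f"{name}: F1 without PCA={plain:.3f}, with PCA={with_pca:.3f}")
```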
10

Prahmana, I. Gusti, and Kristina Annatasia Br Sitepu. "Knearst Algorithm Analysis – Neighbor Breast Cancer Prediction Coimbra." Journal of Artificial Intelligence and Engineering Applications (JAIEA) 1, no. 3 (June 15, 2022): 226–30. http://dx.doi.org/10.59934/jaiea.v1i3.97.

Full text
Abstract:
A process to explain the results of the KNN algorithm analysis with the prediction of Breast Cancer Coimbra disease (Breast Cancer). The prediction output of the KNN algorithm will be added with the Simple Linear Regression algorithm modeling to measure the predictive data through a straight line as an illustration of the correlation relationship between 2 or more variables. Linear regression prediction is used as a technique for the relationship between variables in the prediction process of the Breast Cancer Coimbra data set (Breast Cancer). for the value of K in analyzing the KNN algorithm, take the nearest neighbor with the ranking results with K = 5 nearest neighbors which are taken in the KNN calculation. Which is where the output of the KNN algorithm classification will be analyzed with the Simple Linear Regression algorithm with Dependent (Cause) and Independent (effect) variables. The test results determine that the patient has breast cancer and the number of predictions based on age with glucose means that the patient is predicted to have breast cancer. analyze the KNN algorithm with Simple Liner Regression modeling with Python programming language.
APA, Harvard, Vancouver, ISO, and other styles
11

Sumner, Joel, and Adel Alaeddini. "Analysis of Feature Extraction Methods for Prediction of 30-Day Hospital Readmissions." Methods of Information in Medicine 58, no. 06 (December 2019): 213–21. http://dx.doi.org/10.1055/s-0040-1702159.

Full text
Abstract:
Objectives. This article aims to determine possible improvements made by feature extraction methods to the machine learning prediction methods for predicting 30-day hospital readmissions. Methods. The study evaluates five feature extraction methods including principal component analysis (PCA), kernel principal component analysis (KPCA), isomap, Laplacian eigenmaps, and locality preserving projections (LPPs) for improving the accuracy of nine machine learning prediction methods in predicting 30-day hospital readmissions. The specific prediction methods considered include logistic regression, Cox regression, linear discriminant analysis, k-nearest neighbor (KNN), support vector machines (SVMs), bagged trees, boosted trees, random forest, and artificial neural networks. All models are developed in MATLAB and validated using area under the curve based on two population-based data sets from partner hospitals. Results. Laplacian eigenmaps and isomap feature extraction provide the most improvement to the readmission predictive accuracy of KNN, SVM, bagged trees, boosted trees, and linear discriminant analysis methods. The results for artificial neural networks, random forest, Cox regression, and logistic regression show improvement for only one of the data sets. Also, PCA and LPP provided the best computation efficiency followed by KPCA, Laplacian eigenmaps, and isomap. Conclusion. Feature extraction methods can improve the predictive performance of machine learning methods for predicting readmissions. However, the improvement depended on the specific choice of the prediction method, feature extraction method, and the complexity of the data set features.
APA, Harvard, Vancouver, ISO, and other styles
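A hedged sketch of comparing feature-extraction front ends ahead of a classifier, scored by AUC. Laplacian eigenmaps and LPP are omitted because they lack an out-of-sample transform in scikit-learn, so only PCA, kernel PCA, and Isomap are shown, on synthetic data in place of the readmission records.

```python
# Illustrative sketch: feature extraction in a pipeline before a classifier, AUC-scored.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, KernelPCA
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

extractors = {
    "PCA": PCA(n_components=10),
    "KPCA": KernelPCA(n_components=10, kernel="rbf"),
    "Isomap": Isomap(n_components=10),
}
for name, ext in extractors.items():
    pipe = make_pipeline(ext, KNeighborsClassifier())
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC={auc:.3f}")
```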
12

Baranov, L. A., E. P. Balakina, and Yungqiang Zhang. "Prediction error analysis for intelligent management and predictive diagnostics systems." Dependability 23, no. 2 (June 5, 2023): 12–18. http://dx.doi.org/10.21683/1729-2646-2023-23-2-12-18.

Full text
Abstract:
Random signal prediction is efficient for intelligent management and predictive diagnostics systems. Aim. The paper aims to analyse the error of random signal prediction and to develop recommendations for the selection of random signal extrapolator parameters. Methods. The paper uses the mathematics of the theory of random functions, the formalization adopted in the theory of pulse systems, and a mathematical description of extrapolators with Chebyshev polynomials orthogonal over a set of equally spaced points. The coefficients of the predicting polynomial are selected by the least squares criterion. Results. The paper describes the mathematical model of the extrapolator. Design ratios were obtained for prediction error assessments. The maximum and prediction interval-averaged relative mean square errors of extrapolation were defined. The authors analyse the error of extrapolation of random processes defined by the sum of a centred stationary random process and a deterministic time function. Based on diverse calculations, recommendations were defined that allow selecting the parameters of the extrapolator (degree of the extrapolating polynomial, number of test points that precede the prediction interval, discretisation interval of the predicting function) under the specified input signal models. Conclusion. The use of extrapolators based on Chebyshev polynomials orthogonal on a set of equally spaced points and the least squares method allows implementing a procedure for calculating predicted values of a random process with the required accuracy. Under the specified models of the predicting signal, a method was developed that allows selecting the extrapolator's parameters (order, number of points involved in the generation of the prediction, sample spacing) for the purpose of ensuring the required accuracy.
APA, Harvard, Vancouver, ISO, and other styles
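The extrapolation idea can be illustrated with a short sketch: fit the last few samples with a low-degree Chebyshev polynomial by least squares and evaluate it beyond the observation window. This is a simplified stand-in for the extrapolator analysed in the paper; the window length, degree, and test signal are arbitrary.

```python
# Minimal sketch of least-squares Chebyshev-polynomial extrapolation of a sampled signal.
import numpy as np
from numpy.polynomial import chebyshev as C

def extrapolate(samples, n_points=8, degree=2, n_ahead=3):
    """Fit the last n_points samples with a Chebyshev polynomial (least squares)
    and evaluate it n_ahead steps past the last sample."""
    t = np.arange(n_points)
    coefs = C.chebfit(t, samples[-n_points:], degree)
    t_future = np.arange(n_points, n_points + n_ahead)
    return C.chebval(t_future, coefs)

rng = np.random.default_rng(0)
signal = np.sin(0.2 * np.arange(50)) + 0.05 * rng.standard_normal(50)   # noisy test signal
print(extrapolate(signal))
```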
13

Pace, Michael L. "Prediction and the aquatic sciences." Canadian Journal of Fisheries and Aquatic Sciences 58, no. 1 (January 1, 2001): 63–72. http://dx.doi.org/10.1139/f00-151.

Full text
Abstract:
The need for prediction is now widely recognized and frequently articulated as an objective of research programs in aquatic science. This recognition is partly the legacy of earlier advocacy by the school of empirical limnologists. This school, however, presented prediction narrowly and failed to account for the diversity of predictive approaches, as well as to set prediction within the proper scientific context. Examples from time series analysis and probabilistic models oriented toward management provide an expanded view of approaches and prospects for prediction. The context and rationale for prediction is enhanced understanding. Thus, prediction is correctly viewed as an aid to building scientific knowledge, with better understanding leading to improved predictions. Experience, however, suggests that the most effective predictive models represent condensed models of key features in aquatic systems. Prediction remains important for the future of aquatic sciences. Predictions are required in the assessment of environmental concerns and for testing scientific fundamentals. Technology is driving enormous advances in the ability to study aquatic systems. If these advances are not accompanied by improvements in predictive capability, aquatic research will have failed in delivering on promised objectives. This situation should spark discomfort in aquatic scientists and foster creative approaches toward prediction.
APA, Harvard, Vancouver, ISO, and other styles
14

S, Sabarinath, Thirumalaivasan R, Shiam S, Mohamed Aashik M. S, K. Sudhakar, and Dr P. Rama. "Analysis of Stock Price Prediction Using ML Techniques." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (April 30, 2023): 1533–37. http://dx.doi.org/10.22214/ijraset.2023.50378.

Full text
Abstract:
Time series forecasting has been widely used to determine the future prices of stocks, and the analysis and modelling of financial time series importantly guide investors' decisions and trades. This work proposes an intelligent time series prediction system that uses sliding-window optimization for the purpose of predicting stock prices using data science techniques. The system has a graphical user interface and functions as a stand-alone application. The proposed model is a promising predictive technique for highly non-linear time series, whose patterns are difficult to capture by traditional models.
APA, Harvard, Vancouver, ISO, and other styles
15

Abdullahi, Dauda Sani, Dr Muhammad Sirajo Aliyu, and Usman Musa Abdullahi. "Comparative analysis of resampling algorithms in the prediction of stroke diseases." UMYU Scientifica 2, no. 1 (March 30, 2023): 88–94. http://dx.doi.org/10.56919/usci.2123.011.

Full text
Abstract:
Stroke is a serious cause of death globally. Early prediction of the disease would save many lives, but most clinical datasets, including the stroke dataset, are imbalanced in nature, making predictive algorithms biased towards the majority class. The objective of this research is to compare different data resampling algorithms on the stroke dataset to improve the prediction performance of machine learning models. This paper considered five (5) resampling algorithms, namely Random Over-Sampling (ROS), Synthetic Minority Oversampling Technique (SMOTE), Adaptive Synthetic sampling (ADASYN), and hybrid techniques such as SMOTE with Edited Nearest Neighbor (SMOTE-ENN) and SMOTE with Tomek Links (SMOTE-TOMEK), and trained six (6) machine learning classifiers, namely Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), Support Vector Machines (SVM), Random Forest (RF), and XGBoost (XGB). The hybrid technique SMOTE-ENN improves the machine learning classifiers the most, followed by the SMOTE technique, while the combination of SMOTE and XGB performs best with an accuracy of 97.99%, a G-mean score of 0.99, and an AUC-ROC score of 0.99. Resampling algorithms balance the dataset and enhance the predictive power of machine learning algorithms. Therefore, we recommend resampling the stroke dataset when predicting stroke disease rather than modeling on the imbalanced dataset.
APA, Harvard, Vancouver, ISO, and other styles
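A minimal sketch of the winning combination described above (SMOTE-ENN resampling followed by XGBoost, scored by G-mean and AUC). It assumes the imbalanced-learn and xgboost packages and uses a synthetic imbalanced dataset in place of the stroke data.

```python
# Hedged sketch: resample an imbalanced training set with SMOTE-ENN, then fit XGBoost.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.combine import SMOTEENN
from imblearn.metrics import geometric_mean_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample only the training split, then fit the classifier.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)
clf = XGBClassifier().fit(X_res, y_res)

pred = clf.predict(X_te)
print("G-mean:", geometric_mean_score(y_te, pred))
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```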
16

Tehrani, Payam, and Denis Mitchell. "Investigating the Use of Natural and Artificial Records for Prediction of Seismic Response of Regular and Irregular RC Bridges Considering Displacement Directions." Applied Sciences 11, no. 3 (January 20, 2021): 906. http://dx.doi.org/10.3390/app11030906.

Full text
Abstract:
The seismic responses of continuous multi-span reinforced concrete (RC) bridges were predicted using inelastic time history analyses (ITHA) and incremental dynamic analysis (IDA). Some important issues in ITHA were studied in this research, including: the effects of using artificial and natural records on predictions of the mean seismic demands, effects of displacement directions on predictions of the mean seismic response, the use of 2D analysis with combination rules for prediction of the response obtained using 3D analysis, and prediction of the maximum radial displacement demands compared to the displacements obtained along the principal axes of the bridges. In addition, IDA was conducted and predictions were obtained at different damage states. These issues were investigated for the case of regular and irregular bridges using three different sets of natural and artificial records. The results indicated that the use of natural and artificial records typically resulted in similar predictions for the cases studied. The effect of displacement direction was important in predicting the mean seismic response. It was shown that 2D analyses with the combination rules resulted in good predictions of the radial displacement demands obtained from 3D analyses. The use of artificial records in IDA resulted in good prediction of the median collapse capacity.
APA, Harvard, Vancouver, ISO, and other styles
17

Chimote, Vaishnavi, and Prof Vrushali D. Dharmale. "Analytic System Based on Prediction Analysis of Social Emotions from Users : A Review." International Journal of Trend in Scientific Research and Development Volume-2, Issue-3 (April 30, 2018): 1608–12. http://dx.doi.org/10.31142/ijtsrd11441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Šebalj, Dario, Josip Mesarić, and Davor Dujak. "Analysis of methods and techniques for prediction of natural gas consumption." Journal of information and organizational sciences 43, no. 1 (June 21, 2019): 99–117. http://dx.doi.org/10.31341/jios.43.1.6.

Full text
Abstract:
Due to its many advantages, demand for natural gas has increased considerably and many models for predicting natural gas consumption have been developed. The aim of this paper is to present an overview and systematic analysis of the latest research papers that deal with prediction of natural gas consumption for residential and commercial use from 2002 to 2017. The literature overview analysis was conducted using the two most relevant scientific databases, Web of Science Core Collection and Scopus. The results indicate neural networks as the most common method used for prediction of natural gas consumption, while the most accurate methods are genetic algorithms, support vector machines, and ANFIS. The most used input variables are past natural gas consumption data and weather data, and prediction is most commonly made at the daily and annual level for country-sized areas. Limitations of the research arise from the relatively small number of analyzed papers, but the research could still be used to significantly improve prediction models for natural gas consumption.
APA, Harvard, Vancouver, ISO, and other styles
19

Bierkens, M. F. P., and L. P. H. van Beek. "Seasonal Predictability of European Discharge: NAO and Hydrological Response Time." Journal of Hydrometeorology 10, no. 4 (August 1, 2009): 953–68. http://dx.doi.org/10.1175/2009jhm1034.1.

Full text
Abstract:
In this paper the skill of seasonal prediction of river discharge and how this skill varies between the branches of European rivers across Europe is assessed. A prediction system of seasonal (winter and summer) discharge is evaluated using 1) predictions of the average North Atlantic Oscillation (NAO) index for the coming winter based on May SST anomalies of the North Atlantic; 2) a global-scale hydrological model; and 3) 40-yr European Centre for Medium-Range Weather Forecasts Re-Analysis (ERA-40) data. The skill of seasonal discharge predictions is investigated with a numerical experiment. Also Europe-wide patterns of predictive skill are related to the use of NAO-based seasonal weather prediction, the hydrological properties of the river basin, and a correct assessment of initial hydrological states. These patterns, which are also corroborated by observations, show that in many parts of Europe the skill of predicting winter discharge can, in theory, be quite large. However, this achieved skill mainly comes from knowing the correct initial conditions of the hydrological system (i.e., groundwater, surface water, soil water storage of the basin) rather than from the use of NAO-based seasonal weather prediction. These factors are equally important for predicting subsequent summer discharge.
APA, Harvard, Vancouver, ISO, and other styles
20

Rajput, Prashant, Priyanka Sapkal, and Shefali Sinha. "Box Office Revenue Prediction Using Dual Sentiment Analysis." International Journal of Machine Learning and Computing 7, no. 4 (October 2017): 72–75. http://dx.doi.org/10.18178/ijmlc.2017.7.4.623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lim, Chia Yean, Vincent K. T. Khoo, and Bahari Belaton. "A Methodology for Deliberating Prediction Criteria." Applied Mechanics and Materials 130-134 (October 2011): 1758–61. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.1758.

Full text
Abstract:
The economic downturn has been forcing many companies to use predictive analysis for spotting emerging product and technology trends and also future customer needs. Since every company is unique, without the assistance of some methodologies and tools, decision makers encounter great difficulties in conducting predictive analysis, especially in the deliberation and prioritization of new prediction criteria derived from the publicly available unstructured information. This paper proposes a unique methodology which attempts to integrate the personalization and visualization of new prediction criteria. The challenging iterative tasks are achieved through a rule-based inconsistency detection triad-based comparison algorithm, supported by sophisticated visual displays of the relative importance among the prediction criteria. It is hoped that the proposed methodology will intuitively support the decision makers in exploring and deliberating new criteria for making better predictions.
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Mengxuan, Jinglin Zhao, and Heidan Shang. "Building Energy Consumption Prediction with Principal Component Analysis and Artificial Neural Network." International Journal of Electronics and Electrical Engineering 8, no. 2 (June 2020): 36–39. http://dx.doi.org/10.18178/ijeee.8.2.36-39.

Full text
Abstract:
The implementation of the smart grid will greatly improve the efficiency of energy supply by detecting, predicting, and reacting to real-time local changes in energy use. To this end, energy usage prediction for household buildings is critically important to facilitate the implementation of the smart grid. This study used a single house as a prototype and employed different observed features, an advanced data analysis approach, and an artificial neural network model to predict the real-time dynamics of house energy usage. Data analysis revealed that, among the 26 observed features, only the top ten most important features were helpful and could maximize the neural network model's performance. The resultant model has great predictive capability for energy usage and thus provides a promising framework to improve smart grid implementation.
APA, Harvard, Vancouver, ISO, and other styles
23

Tang, Li, Ping He Pan, and Yong Yi Yao. "EPAK: A Computational Intelligence Model for 2-level Prediction of Stock Indices." International Journal of Computers Communications & Control 13, no. 2 (April 13, 2018): 268–79. http://dx.doi.org/10.15837/ijccc.2018.2.3187.

Full text
Abstract:
This paper proposes a new computational intelligence model for predicting univariate time series, called EPAK, and a complex prediction model for the stock market index that synthesizes all the sector index predictions using EPAK as a kernel. The EPAK model uses a complex nonlinear feature extraction procedure integrating a forward rolling Empirical Mode Decomposition (EMD) for financial time series signal analysis and Principal Component Analysis (PCA) for dimension reduction to generate information-rich features as input to a new two-layer K-Nearest Neighbor (KNN) with Affinity Propagation (AP) clustering for prediction via regression. The EPAK model is then used as a kernel for predicting each of the sector indices of the stock market. The sector index predictions are then synthesized via a weighted average to generate the prediction of the stock market index, yielding a complex prediction model for the stock market index. The EPAK model and the complex prediction model for the stock index are tested on real historical financial time series from the Chinese stock market, including the CSI 300 and ten sector indices, with results confirming the effectiveness of the proposed models.
APA, Harvard, Vancouver, ISO, and other styles
24

Xiao, Hai Ping, Lan Lan Chen, Yi Qiang Chen, and Zhong Qun Guo. "Research and Application of Grey Predictive Model Based on Wavelet Analysis." Applied Mechanics and Materials 170-173 (May 2012): 2912–16. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.2912.

Full text
Abstract:
Deformation monitoring provides the scientific basis for guiding the production and operation of a project, and deformation analysis and prediction during construction and operation is one of the important tasks. In order to analyze and predict project deformation in a more timely and accurate manner, this paper establishes a wavelet-grey prediction model, building on grey system theory, its modeling limitations, and the characteristics of the wavelet transform. A comparison of the predictions from the two kinds of models shows that the wavelet-grey model is more accurate than the grey model alone; it achieved good results in engineering prediction and is a feasible method.
APA, Harvard, Vancouver, ISO, and other styles
25

Kouadri, Wissam Mammar, Mourad Ouziri, Salima Benbernou, Karima Echihabi, Themis Palpanas, and Iheb Ben Amor. "Quality of sentiment analysis tools." Proceedings of the VLDB Endowment 14, no. 4 (December 2020): 668–81. http://dx.doi.org/10.14778/3436905.3436924.

Full text
Abstract:
In this paper, we present a comprehensive study that evaluates six state-of-the-art sentiment analysis tools on five public datasets, based on the quality of predictive results in the presence of semantically equivalent documents, i.e., how consistent existing tools are in predicting the polarity of documents based on paraphrased text. We observe that sentiment analysis tools exhibit intra-tool inconsistency, which is the prediction of different polarity for semantically equivalent documents by the same tool, and inter-tool inconsistency, which is the prediction of different polarity for semantically equivalent documents across different tools. We introduce a heuristic to assess the data quality of an augmented dataset and a new set of metrics to evaluate tool inconsistencies. Our results indicate that tool inconsistency is still an open problem, and they point towards promising research directions and accuracy improvements that can be obtained if such inconsistencies are resolved.
APA, Harvard, Vancouver, ISO, and other styles
26

Zhou, Xiaobing, Youmin Tang, Yanjie Cheng, and Ziwang Deng. "Improved ENSO Prediction by Singular Vector Analysis in a Hybrid Coupled Model." Journal of Atmospheric and Oceanic Technology 26, no. 3 (March 1, 2009): 626–34. http://dx.doi.org/10.1175/2008jtecho599.1.

Full text
Abstract:
In this study, a method based on singular vector analysis is proposed to improve El Niño–Southern Oscillation (ENSO) predictions. Its essential idea is that the initial errors are projected onto their optimal growth patterns, which are propagated by the tangent linear model (TLM) of the original prediction model. The forecast errors at a given lead time of predictions are obtained, and then removed from the raw predictions. This method is applied to a realistic ENSO prediction model for improving prediction skill for the period from 1980 to 1999. This correction method considerably improves the ENSO prediction skill, compared with the original predictions without the correction.
APA, Harvard, Vancouver, ISO, and other styles
27

Poelman, M. C., A. Hegyi, A. Verbraeck, and J. W. C. van Lint. "Sensitivity Analysis to Define Guidelines for Predictive Control Design." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 6 (May 18, 2020): 385–98. http://dx.doi.org/10.1177/0361198120919114.

Full text
Abstract:
Signalized traffic control is important in traffic management to reduce congestion in urban areas. With recent technological developments, more data have become available to the controllers and advanced state estimation and prediction methods have been developed that use these data. To fully benefit from these techniques in the design of signalized traffic controllers, it is important to look at the quality of the estimated and predicted input quantities in relation to the performance of the controllers. Therefore, in this paper, a general framework for sensitivity analysis is proposed, to analyze the effect of erroneous input quantities on the performance of different types of signalized traffic control. The framework is illustrated for predictive control with different adaptivity levels. Experimental relations between the performance of the control system and the prediction horizon are obtained for perfect and erroneous predictions. The results show that prediction improves the performance of a signalized traffic controller, even in most of the cases with erroneous input data. Moreover, controllers with high adaptivity seem to outperform controllers with low adaptivity, under both perfect and erroneous predictions. The outcome of the sensitivity analysis contributes to understanding the relations between information quality and performance of signalized traffic control. In the design phase of a controller, this insight can be used to make choices on the length of the prediction horizon, the level of adaptivity of the controller, the representativeness of the objective of the control system, and the input quantities that need to be estimated and predicted the most accurately.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Ningyan. "House Price Prediction Model of Zhaoqing City Based on Correlation Analysis and Multiple Linear Regression Analysis." Wireless Communications and Mobile Computing 2022 (May 5, 2022): 1–18. http://dx.doi.org/10.1155/2022/9590704.

Full text
Abstract:
Situated in southern China, Zhaoqing City is a part of Guangdong Province, China. The total administrative area of the city covers 14,891 square kilometers. The data of China's seventh population census in 2020 showed that the permanent resident population in Zhaoqing City reached up to 4,413,594. Meanwhile, Zhaoqing is one of the cities in the Guangdong-Hong Kong-Macao Greater Bay Area. House price analysis and prediction carried out for Zhaoqing City will have guiding significance for relevant policies formulated by the local government, residential investment or purchase decisions of consumers, and prediction of the house price trend, as well as for business decisions made by enterprises. By virtue of machine learning and statistical theory, the house price in Zhaoqing City from 2010 to 2020 will be researched, and the house price prediction model of Zhaoqing City will be constructed in this paper with several variables including GDP, proportion of tertiary industry, income of urban residents, fiscal revenue, land price, investment volume in real estate development, permanent resident population, population density, and proportion of urban population in net migration. First of all, correlation analysis will be utilized to select variables that are highly correlated with house price data based on correlation coefficients. Then, the model for predicting the house price will be constructed on the basis of multiple linear regression analysis conducted with the selected variables. Finally, the prediction model will be adjusted gradually based on data with different correlations selected from the available data, to realize a better fit and more precise predictive effect and to select the optimum prediction model. By means of the above model, the house prices of Zhaoqing City in 2021 and beyond will be predicted accurately, with preferable fitting and prediction effects.
APA, Harvard, Vancouver, ISO, and other styles
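The correlation-screening plus multiple-linear-regression workflow can be sketched as below. The CSV path, column names, and correlation threshold are hypothetical, not taken from the paper.

```python
# Illustrative sketch of correlation-based variable selection followed by multiple linear regression.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("zhaoqing_indicators.csv")        # hypothetical yearly data, 2010-2020
target = "house_price"

# Keep predictors whose absolute Pearson correlation with the house price exceeds a threshold.
corr = df.corr(numeric_only=True)[target].drop(target)
selected = corr[corr.abs() > 0.6].index.tolist()

model = LinearRegression().fit(df[selected], df[target])
print(dict(zip(selected, model.coef_)), model.intercept_)

# A future year would then be predicted from hypothetical indicator values:
# future = pd.DataFrame([{...}])[selected]; model.predict(future)
```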
29

Ge, Shaojia, Erkki Tomppo, Yrjö Rauste, Ronald E. McRoberts, Jaan Praks, Hong Gu, Weimin Su, and Oleg Antropov. "Sentinel-1 Time Series for Predicting Growing Stock Volume of Boreal Forest: Multitemporal Analysis and Feature Selection." Remote Sensing 15, no. 14 (July 11, 2023): 3489. http://dx.doi.org/10.3390/rs15143489.

Full text
Abstract:
Copernicus Sentinel-1 images are widely used for forest mapping and predicting forest growing stock volume (GSV) due to their accessibility. However, certain important aspects related to the use of Sentinel-1 time series have not been thoroughly explored in the literature. These include the impact of image time series length on prediction accuracy, the optimal feature selection approaches, and the best prediction methods. In this study, we conduct an in-depth exploration of the potential of long time series of Sentinel-1 SAR data to predict forest GSV and evaluate the temporal dynamics of the predictions using extensive reference data. Our boreal coniferous forests study site is located near the Hyytiälä forest station in central Finland and covers an area of 2500 km2 with nearly 17,000 stands. We considered several prediction approaches and fine-tuned them to predict GSV in various evaluation scenarios. Our analyses used 96 Sentinel-1 images acquired over three years. Different approaches for aggregating SAR images and choosing feature (predictor) variables were evaluated. Our results demonstrate a considerable decrease in the root mean squared errors (RMSEs) of GSV predictions as the number of images increases. While prediction accuracy using individual Sentinel-1 images varied from 85 to 91 m3/ha RMSE, prediction accuracy with combined images decreased to 75.6 m3/ha. Feature extraction and dimension reduction techniques facilitated the achievement of near-optimal prediction accuracy using only 8–10 images. Examined methods included radiometric contrast, mutual information, improved k-Nearest Neighbors, random forests selection, Lasso, and Wrapper approaches. Lasso was the most optimal, with RMSE reaching 77.1 m3/ha. Finally, we found that using assemblages of eight consecutive images resulted in the greatest accuracy in predicting GSV when initial acquisitions started between September and January.
APA, Harvard, Vancouver, ISO, and other styles
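A hedged sketch of the Lasso-based feature selection and RMSE evaluation step mentioned above. Synthetic regressors stand in for the multitemporal SAR features, and LassoCV picks the regularisation strength by cross-validation.

```python
# Minimal sketch of Lasso feature selection for volume regression, scored by RMSE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=2000, n_features=96, n_informative=10, noise=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)              # regularisation strength chosen by CV
kept = np.flatnonzero(lasso.coef_)                 # features with non-zero coefficients survive
rmse = np.sqrt(mean_squared_error(y_te, lasso.predict(X_te)))
print(f"{kept.size} features kept, RMSE={rmse:.1f}")
```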
30

Son, Hyesook, Seokyeon Kim, Hanbyul Yeon, Yejin Kim, Yun Jang, and Seung-Eock Kim. "Visual Analysis of Spatiotemporal Data Predictions with Deep Learning Models." Applied Sciences 11, no. 13 (June 24, 2021): 5853. http://dx.doi.org/10.3390/app11135853.

Full text
Abstract:
The output of a deep-learning model delivers different predictions depending on the input of the deep learning model. In particular, the input characteristics might affect the output of a deep learning model. When predicting data that are measured with sensors in multiple locations, it is necessary to train a deep learning model with spatiotemporal characteristics of the data. Additionally, since not all of the data measured together result in increasing the accuracy of the deep learning model, we need to utilize the correlation characteristics between the data features. However, it is difficult to interpret the deep learning output, depending on the input characteristics. Therefore, it is necessary to analyze how the input characteristics affect prediction results to interpret deep learning models. In this paper, we propose a visualization system to analyze deep learning models with air pollution data. The proposed system visualizes the predictions according to the input characteristics. The input characteristics include space-time and data features, and we apply temporal prediction networks, including gated recurrent units (GRU), long short term memory (LSTM), and spatiotemporal prediction networks (convolutional LSTM) as deep learning models. We interpret the output according to the characteristics of input to show the effectiveness of the system.
APA, Harvard, Vancouver, ISO, and other styles
31

Huang, Wanqi, Yizhuo Li, Yuhang Zhao, and Lanfeng Zheng. "Time Series Analysis and Prediction on Bitcoin." BCP Business & Management 34 (December 14, 2022): 1223–34. http://dx.doi.org/10.54691/bcpbm.v34i.3163.

Full text
Abstract:
Bitcoin is the most famous digital currency in the world and has become an investment asset. Prediction is one of the important matters in the investment market. In the economic field, there are different studies on the reasons for changes in the price of Bitcoin, on how to predict its price trend, and on how Bitcoin behaves in the market. Therefore, predicting the trend of the Bitcoin price can effectively help Bitcoin investors. Using data from CoinGecko, the price of Bitcoin is sorted in time order. Using a time series model, the change in the Bitcoin price over a specific period, from 28 April 2013 to 22 August 2022, is analyzed to predict the future trend of the price. Data preprocessing includes attribute removal, stationarity testing, and differencing. To predict the price of Bitcoin, the ARIMA method, which can produce high accuracy in short-term prediction, is adopted. AIC and residual checks are used to select the best prediction model among the candidate models. The model testing results show that the AIC of ARIMA(5,1,2) is the smallest among all candidate models, and the residual checks also show that the ARIMA(5,1,2) model is the best model for predicting four periods ahead.
APA, Harvard, Vancouver, ISO, and other styles
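The AIC-driven order search and short-horizon forecast can be sketched with statsmodels as follows. A synthetic random-walk series stands in for the Bitcoin price history, and the search grid is illustrative.

```python
# Hedged sketch of AIC-based ARIMA order selection and a four-period forecast.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
price = pd.Series(100 + np.cumsum(rng.normal(0, 1, 500)))   # stand-in price series

best = None
for p in range(6):
    for q in range(3):
        res = ARIMA(price, order=(p, 1, q)).fit()           # difference once (d = 1)
        if best is None or res.aic < best[0]:
            best = (res.aic, (p, 1, q), res)

aic, order, res = best
print("Selected order:", order, "AIC:", round(aic, 1))
print(res.forecast(steps=4))                                 # next four periods
```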
32

Amini, Mohammad Reza, Yiheng Feng, Zhen Yang, Ilya Kolmanovsky, and Jing Sun. "Long-Term Vehicle Speed Prediction via Historical Traffic Data Analysis for Improved Energy Efficiency of Connected Electric Vehicles." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 11 (August 20, 2020): 17–29. http://dx.doi.org/10.1177/0361198120941508.

Full text
Abstract:
Connected and automated vehicles (CAVs) are expected to provide enhanced safety, mobility, and energy efficiency. While abundant evidence has been accumulated showing substantial energy saving potentials of CAVs through eco-driving, traffic condition prediction has remained to be the main challenge in capitalizing the gains. The coupled power and thermal subsystems of CAVs necessitate the use of different speed preview windows for effective and integrated power and thermal management. Real-time vehicle-to-infrastructure (V2I) communications can provide an accurate speed prediction over a short prediction horizon (e.g., 30 s to 60 s), but not for a long range (e.g., over 180 s). Therefore, advanced approaches are required to develop detailed speed prediction for robust optimization-based energy management of CAVs. This paper presents an integrated speed prediction framework based on historical traffic data classification and real-time V2I communications for efficient energy management of electrified CAVs. The proposed framework provides multi-range speed predictions with different fidelity over short and long horizons. The proposed multi-range speed prediction is integrated with an economic model predictive control (MPC) strategy for the battery thermal management (BTM) of connected and automated electric vehicles (EVs). The simulation results over real-world urban driving cycles confirm the enhanced prediction performance of the proposed data classification strategy over a long prediction horizon. Despite the uncertainty in long-range CAVs’ speed predictions, the vehicle-level simulation results show that 14% and 19% energy savings can be accumulated sequentially through eco-driving and BTM optimization (eco-cooling), respectively, when compared with normal driving (i.e., human driver) and conventional BTM strategy.
APA, Harvard, Vancouver, ISO, and other styles
33

Ooka, Tadao, Hisashi Johno, Kazunori Nakamoto, Yoshioki Yoda, Hiroshi Yokomichi, and Zentaro Yamagata. "Random forest approach for determining risk prediction and predictive factors of type 2 diabetes: large-scale health check-up data in Japan." BMJ Nutrition, Prevention & Health 4, no. 1 (March 11, 2021): 140–48. http://dx.doi.org/10.1136/bmjnph-2020-000200.

Full text
Abstract:
Introduction. Early intervention in type 2 diabetes can prevent exacerbation of insulin resistance. More effective interventions can be implemented by early and precise prediction of the change in glycated haemoglobin A1c (HbA1c). Artificial intelligence (AI), which has been introduced into various medical fields, may be useful in predicting changes in HbA1c. However, the inability to explain the predictive factors has been a problem in the use of deep learning, the leading AI technology. Therefore, we applied a highly interpretable AI method, random forest (RF), to large-scale health check-up data and examined whether there was an advantage over a conventional prediction model. Research design and methods. This study included a cumulative total of 42 908 subjects not receiving treatment for diabetes with an HbA1c <6.5%. The objective variable was the change in HbA1c in the next year. Each prediction model was created with 51 health-check items and part of their change values from the previous year. We used two analytical methods to compare the predictive powers: RF as a new model and multivariate logistic regression (MLR) as a conventional model. We also created models excluding the change values to determine whether it positively affected the predictions. In addition, variable importance was calculated in the RF analysis, and standard regression coefficients were calculated in the MLR analysis to identify the predictors. Results. The RF model showed a higher predictive power for the change in HbA1c than MLR in all models. The RF model including change values showed the highest predictive power. In the RF prediction model, HbA1c, fasting blood glucose, body weight, alkaline phosphatase and platelet count were factors with high predictive power. Conclusions. Correct use of the RF method may enable highly accurate risk prediction for the change in HbA1c and may allow the identification of new diabetes risk predictors.
APA, Harvard, Vancouver, ISO, and other styles
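A minimal sketch of the random-forest-with-variable-importance pattern the study applies. It is not the study's code or data; a synthetic regression target stands in for the change in HbA1c, and the feature count mirrors the 51 check-up items only for illustration.

```python
# Hedged sketch: random forest regression with held-out evaluation and feature importances.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 51 check-up items predicting next-year change in a lab value.
X, y = make_regression(n_samples=5000, n_features=51, n_informative=8, noise=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(rf.score(X_te, y_te), 3))

# Variable importance: which "check-up items" drive the prediction.
top = sorted(enumerate(rf.feature_importances_), key=lambda t: t[1], reverse=True)[:5]
print("Top features (index, importance):", top)
```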
34

Rasero, Javier, Amy Isabella Sentis, Fang-Cheng Yeh, and Timothy Verstynen. "Integrating across neuroimaging modalities boosts prediction accuracy of cognitive ability." PLOS Computational Biology 17, no. 3 (March 5, 2021): e1008347. http://dx.doi.org/10.1371/journal.pcbi.1008347.

Full text
Abstract:
Variation in cognitive ability arises from subtle differences in underlying neural architecture. Understanding and predicting individual variability in cognition from the differences in brain networks requires harnessing the unique variance captured by different neuroimaging modalities. Here we adopted a multi-level machine learning approach that combines diffusion, functional, and structural MRI data from the Human Connectome Project (N = 1050) to provide unitary prediction models of various cognitive abilities: global cognitive function, fluid intelligence, crystallized intelligence, impulsivity, spatial orientation, verbal episodic memory and sustained attention. Out-of-sample predictions of each cognitive score were first generated using a sparsity-constrained principal component regression on individual neuroimaging modalities. These individual predictions were then aggregated and submitted to a LASSO estimator that removed redundant variability across channels. This stacked prediction led to a significant improvement in accuracy, relative to the best single modality predictions (approximately 1% to more than 3% boost in variance explained), across a majority of the cognitive abilities tested. Further analysis found that diffusion and brain surface properties contribute the most to the predictive power. Our findings establish a lower bound to predict individual differences in cognition using multiple neuroimaging measures of brain architecture, both structural and functional, quantify the relative predictive power of the different imaging modalities, and reveal how each modality provides unique and complementary information about individual differences in cognitive function.
APA, Harvard, Vancouver, ISO, and other styles
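The stacking idea (per-modality principal-component regressions whose out-of-sample predictions are combined by a LASSO) can be sketched as below. Random matrices stand in for the imaging modalities, and ridge regression after PCA stands in for the sparsity-constrained principal component regression.

```python
# Hedged sketch of two-level stacking: per-modality PCR predictions combined by a Lasso.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge, LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 600
modalities = [rng.normal(size=(n, 200)) for _ in range(3)]        # three "imaging modalities"
y = modalities[0][:, :5].sum(axis=1) + 0.5 * rng.normal(size=n)   # synthetic cognitive score

# Level 1: out-of-sample principal-component-regression predictions per modality.
level1 = np.column_stack([
    cross_val_predict(make_pipeline(PCA(n_components=20), Ridge()), X, y, cv=5)
    for X in modalities
])

# Level 2: a Lasso stacks the per-modality predictions into one estimate.
stacker = LassoCV(cv=5).fit(level1, y)
print("Stacked R^2:", round(stacker.score(level1, y), 3))
```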
35

Dermadi, Yedi, and Yoanes Bandung. "Tsunami Impact Prediction System Based on TsunAWI Inundation Data." Journal of ICT Research and Applications 15, no. 1 (June 29, 2021): 21–40. http://dx.doi.org/10.5614/itbj.ict.res.appl.2021.15.1.2.

Full text
Abstract:
It is very important for tsunami early warning systems to provide inundation predictions within a short period of time. Inundation is one of the factors that directly cause destruction and damage from tsunamis. This research proposes a tsunami impact prediction system based on inundation data analysis. The inundation data used in this analysis were obtained from the tsunami modeling called TsunAWI. The inundation data analysis refers to the coastal forecast zones for each city/regency that are currently used in the Indonesia Tsunami Early Warning System (InaTEWS). The data analysis process comprises data collection, data transformation, data analysis (through GIS analysis, predictive analysis, and simple statistical analysis), and data integration, ultimately producing a pre-calculated inundation database for inundation prediction and tsunami impact prediction. As the outcome, the tsunami impact prediction system provides estimations of the flow depth and inundation distance for each city/regency incorporated into generated tsunami warning bulletins and impact predictions based on the Integrated Tsunami Intensity Scale (ITIS-2012). In addition, the system provides automatic sea level anomaly detection from tide gauge sensors by applying a tsunami detection algorithm. Finally, the contribution of this research is expected to bring enhancements to the tsunami warning products of InaTEWS.
APA, Harvard, Vancouver, ISO, and other styles
36

Seabe, Phumudzo Lloyd, Claude Rodrigue Bambe Moutsinga, and Edson Pindza. "Forecasting Cryptocurrency Prices Using LSTM, GRU, and Bi-Directional LSTM: A Deep Learning Approach." Fractal and Fractional 7, no. 2 (February 18, 2023): 203. http://dx.doi.org/10.3390/fractalfract7020203.

Full text
Abstract:
Highly accurate cryptocurrency price predictions are of paramount interest to investors and researchers. However, owing to the nonlinearity of the cryptocurrency market, it is difficult to assess the distinct nature of time-series data, resulting in challenges in generating appropriate price predictions. Numerous studies have been conducted on cryptocurrency price prediction using different Deep Learning (DL) based algorithms. This study proposes three types of Recurrent Neural Networks (RNNs): namely, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bi-Directional LSTM (Bi-LSTM) for exchange rate predictions of three major cryptocurrencies in the world, as measured by their market capitalization—Bitcoin (BTC), Ethereum (ETH), and Litecoin (LTC). The experimental results on the three major cryptocurrencies using both Root Mean Squared Error (RMSE) and the Mean Absolute Percentage Error (MAPE) show that the Bi-LSTM performed better in prediction than LSTM and GRU. Therefore, it can be considered the best algorithm. Bi-LSTM presented the most accurate prediction compared to GRU and LSTM, with MAPE values of 0.036, 0.041, and 0.124 for BTC, LTC, and ETH, respectively. The paper suggests that the prediction models presented in it are accurate in predicting cryptocurrency prices and can be beneficial for investors and traders. Additionally, future research should focus on exploring other factors that may influence cryptocurrency prices, such as social media and trading volumes.
APA, Harvard, Vancouver, ISO, and other styles
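A minimal sketch of a Bi-LSTM forecaster trained on sliding windows and scored by MAPE, in the spirit of the entry above. It requires TensorFlow/Keras, uses a synthetic series in place of exchange-rate data, and the window size and hyperparameters are illustrative.

```python
# Hedged sketch of a Bi-LSTM one-step-ahead forecaster on sliding windows (TensorFlow/Keras).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Bidirectional, Dense

rng = np.random.default_rng(0)
series = 100 + np.cumsum(rng.normal(0, 1, 1000))       # stand-in for a price history

window = 30
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                                  # shape (samples, timesteps, 1)

model = Sequential([
    Input(shape=(window, 1)),
    Bidirectional(LSTM(32)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-100], y[:-100], epochs=5, verbose=0)      # hold out the last 100 windows

pred = model.predict(X[-100:], verbose=0).ravel()
mape = np.mean(np.abs((y[-100:] - pred) / y[-100:]))
print("MAPE:", round(float(mape), 4))
```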
37

S.S., Sayyidkosimov `., Kazakov A.N, Khakberdiev M.R., Tursunbayev D.A., and Tuxsariyev B.B. "Analysis Of Methods And Means Of Bump Hazard Prediction." American Journal of Applied sciences 03, no. 04 (April 22, 2021): 47–54. http://dx.doi.org/10.37547/tajas/volume03issue04-06.

Full text
Abstract:
The article analyzes methods and tools for predicting the impact hazard under the conditions of underground mining of gold deposits. To assess the stress state of a rock mass, the core disking method is proposed as the basic method. The degree and categories of impact hazard for sections of the rock mass are estimated. Because many geomechanical problems cannot be solved by field studies alone, the reliable efficiency of using the finite element method and the boundary element method to predict the impact hazard of field sites a priori is shown.
APA, Harvard, Vancouver, ISO, and other styles
38

V., Haribaabu. "Analysis of Filters in ECG Signal for Emotion Prediction." Journal of Advanced Research in Dynamical and Control Systems 12, SP4 (March 31, 2020): 896–902. http://dx.doi.org/10.5373/jardcs/v12sp4/20201559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Dönmez, İlknur. "Human Activity Analysis and Prediction Using Google n-Grams." International Journal of Future Computer and Communication 7, no. 2 (June 2018): 32–36. http://dx.doi.org/10.18178/ijfcc.2018.7.2.516.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Dixon, A. A., R. O. Holness, W. J. Howes, and J. B. Garner. "Spontaneous Intracerebral Haemorrhage: An Analysis of Factors Affecting Prognosis." Canadian Journal of Neurological Sciences / Journal Canadien des Sciences Neurologiques 12, no. 3 (August 1985): 267–71. http://dx.doi.org/10.1017/s0317167100047144.

Full text
Abstract:
A retrospective study of 100 patients with spontaneous intracerebral haemorrhage was carried out to identify clinical factors which have a predictive value for outcome. Numerical equivalents for the admission level of consciousness (the Glasgow Coma Scale), ventricular rupture, partial pressure of oxygen in the blood, the electrocardiogram, clot location, and clot size were combined into equations predicting outcome. The best single parameter for prediction was the Glasgow Coma Scale.
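As a rough illustration of the general approach of combining numerical equivalents of admission findings into a single predictive equation, the sketch below fits a logistic model on synthetic data. The variables, scales, and coefficients are hypothetical and are not the equations reported in the study.

```python
# Hedged sketch: combine numerical admission variables into one predictive
# equation via logistic regression. All values below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([
    rng.integers(3, 16, n),          # Glasgow Coma Scale (3-15)
    rng.integers(0, 2, n),           # ventricular rupture (0/1)
    rng.normal(35, 15, n).clip(1),   # clot size, hypothetical scale
])
# Hypothetical outcome: poor outcome more likely with low GCS and large clots
logit = -0.4 * X[:, 0] + 1.2 * X[:, 1] + 0.05 * X[:, 2] + 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)
```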
APA, Harvard, Vancouver, ISO, and other styles
41

Chen, Gang, Donglin Zhu, Xiao Wang, Changjun Zhou, and Xiangyu Chen. "Prediction of Concrete Compressive Strength Based on the BP Neural Network Optimized by Random Forest and ISSA." Journal of Function Spaces 2022 (August 27, 2022): 1–20. http://dx.doi.org/10.1155/2022/8799429.

Full text
Abstract:
In modern engineering construction, the compressive strength of concrete determines the safety of the engineering structure. A BP neural network (BPNN) tends to converge to different local minima, and its accuracy in predicting the compressive strength of concrete is not high. Therefore, a prediction model based on a BPNN optimized by an improved sparrow search algorithm (ISSA) and random forest (RF) is proposed to enhance the generalization ability and prediction accuracy of the BPNN for the compressive strength of concrete. In terms of algorithm improvement, three improvements are proposed for SSA: Latin hypercube sampling is introduced to initialize the locations of sparrows and increase their diversity; the somersault foraging strategy is used to enrich the optimal position of producers; and, combined with the cyclone foraging mechanism, the position-updating process of the scroungers is optimized to obtain a better foraging position. In terms of performance evaluation, an ablation experiment verifies that the three improved strategies are effective in SSA, and the performance of ISSA on the CEC2017 benchmark functions is better than that of its peers. In terms of predictive index screening, the important features are selected by random forest as the input variables of the model. The prediction results show that, compared with the RF-BPNN model and models optimized by other algorithms, the RF-ISSA-BPNN model has the lowest prediction error, and its predicted values fit the real values better.
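The following is a simplified sketch of the RF-to-network pipeline described above: random-forest feature importances select the inputs, and a plain MLP stands in for the BP neural network. The ISSA hyperparameter optimization from the paper is omitted, and the data, feature count, and network size are assumptions.

```python
# Simplified RF -> BPNN pipeline sketch; ISSA optimization is omitted and a
# generic MLP stands in for the BP neural network. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.random((500, 8))                     # e.g. cement, water, aggregates, age...
y = 40 * X[:, 0] - 25 * X[:, 1] + 10 * X[:, 3] + rng.normal(0, 2, 500)

rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
keep = np.argsort(rf.feature_importances_)[::-1][:4]   # keep the 4 most important
print("selected feature indices:", keep)

bpnn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
bpnn.fit(X[:, keep], y)
print("R^2 on training data:", round(bpnn.score(X[:, keep], y), 3))
```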
APA, Harvard, Vancouver, ISO, and other styles
42

Sajjadian, Mehri, Raymond W. Lam, Roumen Milev, Susan Rotzinger, Benicio N. Frey, Claudio N. Soares, Sagar V. Parikh, et al. "Machine learning in the prediction of depression treatment outcomes: a systematic review and meta-analysis." Psychological Medicine 51, no. 16 (October 12, 2021): 2742–51. http://dx.doi.org/10.1017/s0033291721003871.

Full text
Abstract:
Background: Multiple treatments are effective for major depressive disorder (MDD), but the outcomes of each treatment vary broadly among individuals. Accurate prediction of outcomes is needed to help select a treatment that is likely to work for a given person. We aim to examine the performance of machine learning methods in delivering replicable predictions of treatment outcomes. Methods: Of 7732 non-duplicate records identified through literature search, we retained 59 eligible reports and extracted data on sample, treatment, predictors, machine learning method, and treatment outcome prediction. A minimum sample size of 100 and an adequate validation method were used to identify adequate-quality studies. The effects of study features on prediction accuracy were tested with mixed-effects models. Fifty-four of the studies provided accuracy estimates or other estimates that allowed calculation of balanced accuracy of predicting outcomes of treatment. Results: Eight adequate-quality studies reported a mean accuracy of 0.63 [95% confidence interval (CI) 0.56–0.71], which was significantly lower than a mean accuracy of 0.75 (95% CI 0.72–0.78) in the other 46 studies. Among the adequate-quality studies, accuracies were higher when predicting treatment resistance (0.69) and lower when predicting remission (0.60) or response (0.56). The choice of machine learning method, feature selection, and the ratio of features to individuals were not associated with reported accuracy. Conclusions: The negative relationship between study quality and prediction accuracy, combined with a lack of independent replication, invites caution when evaluating the potential of machine learning applications for personalizing the treatment of depression.
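For readers unfamiliar with the metric used to pool the studies, balanced accuracy is simply the mean of sensitivity and specificity, which corrects for imbalanced outcome rates (e.g. response vs. non-response). A brief sketch with made-up counts:

```python
# Balanced accuracy = (sensitivity + specificity) / 2. Counts are illustrative.
def balanced_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

print(balanced_accuracy(tp=30, fn=20, tn=40, fp=10))  # 0.7
```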
APA, Harvard, Vancouver, ISO, and other styles
43

Wadi, Faska Aris Y. K., Putu Sugiartawan, and Ni Nengah Dita Adriani. "Analisa Prediksi Time Series Jumlah Kasus Covid-19 Dengan Metode BPNN Di Bali." Jurnal Sistem Informasi dan Komputer Terapan Indonesia (JSIKTI) 4, no. 1 (January 14, 2022): 24–33. http://dx.doi.org/10.33173/jsikti.124.

Full text
Abstract:
The COVID-19 pandemic has not yet subsided, and the epidemic has spread to almost all countries in the world. This analysis was carried out for Indonesia, especially the province of Bali, which experienced large additions of positive cases, recoveries, and deaths from COVID-19. The purpose of the analysis is to obtain accurate predictions of the additional COVID-19 cases, recoveries, and deaths in the province of Bali, with the predictions made from COVID-19 time-series data. The experiments yielded both the best and the worst prediction accuracies: prediction using one input and one output obtained the best model accuracy of 72%, while prediction using three inputs and one output obtained the poorest model accuracy of 33% in the process of predicting COVID-19 in Bali.
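A rough sketch of the input/output comparison described above is given below: lagged daily counts feed a small backpropagation network, with one lag versus three lags as in the abstract. Everything else (data, network size, split) is an assumption.

```python
# Lagged-input BPNN sketch comparing 1-lag vs 3-lag inputs; a generic MLP
# stands in for the BPNN and the case series is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged_dataset(series, n_lags):
    X = [series[i:i + n_lags] for i in range(len(series) - n_lags)]
    y = series[n_lags:]
    return np.array(X), np.array(y)

cases = np.abs(np.cumsum(np.random.randn(300))) * 10   # stand-in daily counts
for n_lags in (1, 3):
    X, y = lagged_dataset(cases, n_lags)
    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    model.fit(X[:250], y[:250])
    print(n_lags, "lag(s): R^2 =", round(model.score(X[250:], y[250:]), 3))
```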
APA, Harvard, Vancouver, ISO, and other styles
44

Nan, Libin, Kai Guo, Mingmin Li, Qi Wu, and Shaojun Huo. "Development and validation of a multi-parameter nomogram for predicting prostate cancer: a retrospective analysis from Handan Central Hospital in China." PeerJ 10 (March 2, 2022): e12912. http://dx.doi.org/10.7717/peerj.12912.

Full text
Abstract:
Background: To explore the possible predictive factors related to prostate cancer and develop a validated nomogram for predicting the probability of patients having prostate cancer. Method: Clinical data of 697 patients who underwent prostate biopsy in Handan Central Hospital from January 2014 to January 2020 were retrospectively collected. Cases were randomized into two groups: 80% (548 cases) as the development group and 20% (149 cases) as the validation group. Univariate and multivariate logistic regression analyses were performed to determine the independent risk factors for prostate cancer. The nomogram prediction model was generated using the finalized independent risk factors. Decision curve analysis (DCA) and the area under the receiver operating characteristic curve (ROC) of both the development group and the validation group were calculated and compared to validate the accuracy and efficiency of the nomogram prediction model. A clinical utility curve (CUC) helped to decide the desired cut-off value for the prediction model. The established nomogram was compared with the Prostate Cancer Prevention Trial Derived Cancer Risk Calculator (PCPT-CRC) and other domestic prediction models using the entire study population. Results: The independent risk factors determined through univariate and multivariate logistic regression analyses were age, tPSA, fPSA, PV, DRE, TRUS, and BMI. The nomogram prediction model was developed with a cut-off value of 0.31. The AUCs of the development group and the validation group were 0.856 and 0.797, respectively. DCA exhibited observations consistent with these findings. When our prediction model and three other domestic prediction models were validated on the entire study population of 697 cases, our model demonstrated a significantly higher predictive value than all the other models. Conclusion: The nomogram for predicting prostate cancer can facilitate more accurate evaluation of the probability of having prostate cancer and provide better grounds for prostate biopsy.
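A schematic sketch of this kind of workflow is shown below: a multivariable logistic model is fitted on a development split and its discrimination is checked on a validation split with the AUC. The feature values and coefficients are synthetic stand-ins for age, tPSA, fPSA, PV, and BMI, not the study's data.

```python
# Development/validation split + logistic model + validation AUC, on synthetic
# stand-in data for age, tPSA, fPSA, prostate volume and BMI.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 697
X = np.column_stack([
    rng.normal(68, 8, n),       # age
    rng.lognormal(2, 0.6, n),   # tPSA
    rng.lognormal(1, 0.6, n),   # fPSA
    rng.normal(45, 15, n),      # prostate volume
    rng.normal(24, 3, n),       # BMI
])
logit = 0.04 * X[:, 0] + 0.08 * X[:, 1] - 0.05 * X[:, 3] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=2)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print("validation AUC:", round(auc, 3))
```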
APA, Harvard, Vancouver, ISO, and other styles
45

Venskus, Julius, Povilas Treigys, and Jurita Markevičiūtė. "Unsupervised marine vessel trajectory prediction using LSTM network and wild bootstrapping techniques." Nonlinear Analysis: Modelling and Control 26, no. 4 (July 1, 2021): 718–37. http://dx.doi.org/10.15388/namc.2021.26.23056.

Full text
Abstract:
Increasing intensity of maritime traffic raises the requirement for a better prevention-oriented incident management system. Observed regularities in the data could help to predict vessel movement from previous vessel trajectory data and make further movement predictions under specific traffic and weather conditions. However, the task is burdened by the fact that vessels behave differently in different geographical sea regions and sea ports, and their trajectories depend on the vessel type as well. The model must learn spatio-temporal patterns representing vessel trajectories and should capture the vessel's position in relation to both space and time. The authors of the paper propose new unsupervised trajectory prediction with prediction regions at arbitrary probabilities using two methods: LSTM prediction region learning and wild bootstrapping. The results show that both the autoencoder-based and the wild bootstrapping region prediction algorithms can predict vessel trajectories and can be applied to abnormal marine traffic detection by evaluating the obtained prediction region in an unsupervised manner with the desired prediction probability.
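Below is a condensed, loose illustration of the wild-bootstrap idea for prediction regions: residuals of a fitted point predictor are reweighted by random signs and added back to form a band at a chosen probability. The trajectory model here is a trivial linear extrapolation on synthetic latitudes, not the paper's LSTM autoencoder, and the resampling scheme is simplified.

```python
# Wild-bootstrap-style prediction region around a toy linear trajectory
# predictor; everything here is a simplified illustration on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(100, dtype=float)
lat = 55.0 + 0.001 * t + rng.normal(0, 0.0005, len(t))   # toy vessel latitudes

coef = np.polyfit(t, lat, 1)               # point predictor: linear trend
residuals = lat - np.polyval(coef, t)

t_next = 105.0
point = np.polyval(coef, t_next)
boot = [point + rng.choice([-1.0, 1.0]) * rng.choice(residuals)
        for _ in range(2000)]
lo, hi = np.quantile(boot, [0.025, 0.975])  # 95% prediction region
print(round(point, 4), (round(lo, 4), round(hi, 4)))
```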
APA, Harvard, Vancouver, ISO, and other styles
46

Mahmudi, Bambang, and Enis Khaerunnisa. "Bankruptcy Prediction Analysis Using the Altman Z-Score and Springate Models in Insurance Companies Which Go Public in the Indonesia Stock Exchange." Management Science Research Journal 2, no. 1 (February 9, 2023): 28–45. http://dx.doi.org/10.56548/msr.v2i1.45.

Full text
Abstract:
The research aims to analyze potential bankruptcy using the Altman Z-Score and Springate models and to find out the accuracy of each prediction model for insurance companies that have gone public on the Indonesia Stock Exchange. The population used in this research is all insurance companies listed on the Indonesia Stock Exchange in the period 2016-2019, with a sample of 12 companies. The research method employs descriptive analysis using secondary data and a paired t-test. The bankruptcy prediction using the Altman Z-Score model showed that 11 insurance companies were in the healthy category and 1 company was in a condition prone to bankruptcy (grey area). Based on the average Z-Score prediction compared to the actual condition, 10 companies matched the prediction and 2 did not, so the Altman Z-Score model has an accuracy rate of 83.33%. The Springate model predicted 4 companies in good health and 8 insurance companies with the potential to go bankrupt. Comparing the average Springate prediction score with the actual condition, 11 companies matched the predictions and 1 did not, giving an accuracy rate of 91.67%. The paired sample t-test showed that there were significant differences between the Altman and Springate models in predicting bankruptcy.
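For reference, here is a sketch of the classic scoring formulas named in the abstract. The coefficients are the widely cited original Altman (1968) and Springate (1978) versions; the study may use a modified variant suited to non-manufacturing (insurance) firms, so treat these as illustrative, and the sample ratios are made up.

```python
# Classic Altman Z-Score and Springate S-Score formulas with illustrative
# financial ratios; the cited study may use modified coefficients.
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    # working capital/TA, retained earnings/TA, EBIT/TA,
    # market value of equity/total liabilities, sales/TA
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

def springate_s(wc_ta, ebit_ta, ebt_cl, sales_ta):
    # working capital/TA, EBIT/TA, earnings before taxes/current liabilities, sales/TA
    return 1.03 * wc_ta + 3.07 * ebit_ta + 0.66 * ebt_cl + 0.4 * sales_ta

z = altman_z(0.15, 0.20, 0.10, 1.5, 0.8)
s = springate_s(0.15, 0.10, 0.25, 0.8)
print(z, "healthy" if z > 2.99 else "grey area" if z > 1.81 else "distress")
print(s, "healthy" if s > 0.862 else "potential bankruptcy")
```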
APA, Harvard, Vancouver, ISO, and other styles
47

Lin, Wan-Ju, Shih-Hsuan Lo, Hong-Tsu Young, and Che-Lun Hung. "Evaluation of Deep Learning Neural Networks for Surface Roughness Prediction Using Vibration Signal Analysis." Applied Sciences 9, no. 7 (April 8, 2019): 1462. http://dx.doi.org/10.3390/app9071462.

Full text
Abstract:
The use of surface roughness (Ra) to indicate product quality in the milling process within an in-process intelligent monitoring system has been developing. Considering convenient installation and cost-effectiveness, accelerometer vibration signals combined with deep learning predictive models are a potential tool for predicting surface roughness. In this paper, three models, namely Fast Fourier Transform-Deep Neural Networks (FFT-DNN), Fast Fourier Transform-Long Short Term Memory Network (FFT-LSTM), and a one-dimensional convolutional neural network (1-D CNN), are used to explore training and prediction performance. Feature extraction plays an important role in the training and prediction results. FFT and the one-dimensional convolution filter, known as 1-D CNN, are employed to extract features from the raw vibration signals. The results show the following: (1) the LSTM model presents temporal modeling ability and achieves good performance at higher Ra values, and (2) the 1-D CNN, which is better at extracting features, exhibits highly accurate prediction performance in the lower Ra ranges. Based on these results, vibration signals combined with a deep learning predictive model can be applied to predict surface roughness in the milling process. Based on this experimental study, the use of FFT-LSTM or 1-D CNN for predicting surface roughness from vibration signals is recommended for developing an intelligent system.
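A compact sketch of the 1-D CNN variant, regressing Ra directly from raw vibration windows, is shown below. Layer sizes, window length, and the synthetic data are assumptions, not the paper's exact network.

```python
# Small 1-D CNN regressing surface roughness from raw vibration windows;
# architecture and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

X = np.random.randn(200, 1024, 1).astype("float32")       # 200 vibration windows
y = np.random.uniform(0.2, 3.0, 200).astype("float32")    # stand-in Ra values (um)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1024, 1)),
    tf.keras.layers.Conv1D(16, kernel_size=64, strides=8, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=3, batch_size=16, verbose=0)
```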
APA, Harvard, Vancouver, ISO, and other styles
48

Karpac, Dusan, and Viera Bartosova. "The verification of prediction and classification ability of selected Slovak prediction models and their emplacement in forecasts of financial health of a company in aspect of globalization." SHS Web of Conferences 74 (2020): 06010. http://dx.doi.org/10.1051/shsconf/20207406010.

Full text
Abstract:
Predicting the financial health of a company is, in this globalized world, necessary for every business entity, especially international ones, as it is very important to know financial stability. Forecasting business failure is a term known worldwide, and many prediction models have been constructed to compute the financial health of a company and thereby state whether the company inclines toward financial prosperity or bankruptcy. Globalized prediction models compute the financial health of companies, but the vast majority of models predicting business failure are constructed solely for the conditions of a particular country or even just for a specific sector of a national economy. The field of financial prediction from an international viewpoint rests on elementary, widely used models, such as Altman's Z-score or Beerman's index, which are globally known and used as the basis of many other modified models. The following article deals with selected Slovak prediction models designed for Slovak conditions, states how these models stand in this global world and what their international connection to worldwide economies is, and verifies their prediction ability in a specific sector. The verification of the predictive ability of the models is performed through ROC analysis, and through the results the paper identifies the most suitable prediction models to use in the selected sector.
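As a minimal illustration of the ROC-based verification step, the sketch below compares two candidate failure-prediction scores on the same set of firms by their AUC; the scores and failure labels are synthetic.

```python
# Comparing two bankruptcy-prediction scores by ROC AUC on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
failed = rng.integers(0, 2, 300)                         # 1 = business failure
score_model_a = failed * 0.8 + rng.normal(0, 0.6, 300)   # more informative score
score_model_b = failed * 0.3 + rng.normal(0, 0.6, 300)   # less informative score
print("model A AUC:", round(roc_auc_score(failed, score_model_a), 3))
print("model B AUC:", round(roc_auc_score(failed, score_model_b), 3))
```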
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Weihang. "Application of Market Cycle Analysis and LSTM in Prediction of Stock Price Movements." BCP Business & Management 38 (March 2, 2023): 856–61. http://dx.doi.org/10.54691/bcpbm.v38i.3787.

Full text
Abstract:
Stock market prediction has been carried out in several ways in data science, using deep learning approaches to capture profitable trading opportunities and make trading plans. However, it is widely believed that there are two main issues involved: the efficient market hypothesis and a low information-to-noise ratio. Therefore, a prediction-based model will be affected by noise and thus find it hard to produce a reliable prediction. In this paper, two methods are presented for forecasting future stock performance. To be specific, LSTM (long short-term memory) and cycle analysis are implemented to predict the future periods that give a higher return than average. According to the analysis, introducing the time analysis as an input variable can significantly increase the accuracy of predicting the return for the next few weeks. These results shed light on further exploration of different ways of extracting periodic behaviors of the market and making predictions based on the analysis.
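The sketch below is a loose illustration of feeding a cycle-derived time variable into an LSTM alongside price: the dominant period is estimated from the FFT of the series and encoded as a phase feature. The window, layer size, and synthetic series are assumptions, not the paper's setup.

```python
# LSTM on price plus an FFT-derived cycle phase feature; illustrative only.
import numpy as np
import tensorflow as tf

prices = np.cumsum(np.random.randn(600)) + np.sin(2 * np.pi * np.arange(600) / 50)
spectrum = np.abs(np.fft.rfft(prices - prices.mean()))
freqs = np.fft.rfftfreq(len(prices))
period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]        # dominant cycle length
phase = np.sin(2 * np.pi * np.arange(len(prices)) / period)

window = 30
feats = np.stack([prices, phase], axis=-1)
X = np.stack([feats[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 2)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, verbose=0)
```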
APA, Harvard, Vancouver, ISO, and other styles
50

Lu, Ying, Xiaopeng Fan, Zhipan Zhao, and Xuepeng Jiang. "Dynamic Fire Risk Classification Prediction of Stadiums: Multi-Dimensional Machine Learning Analysis Based on Intelligent Perception." Applied Sciences 12, no. 13 (June 29, 2022): 6607. http://dx.doi.org/10.3390/app12136607.

Full text
Abstract:
Stadium fires can easily cause massive casualties and property damage. Early risk prediction for stadiums can reduce the incidence of fires by enabling targeted fire safety management and decision making at an early stage. In the field of building fires, some studies apply data mining techniques and machine learning algorithms to collected risk hazard data for fire risk prediction. However, most of these studies use all attributes in the dataset, which may degrade the performance of predictive models due to data redundancy. Furthermore, machine learning algorithms are numerous and have rarely been applied to stadium fires, so it is crucial to explore models suitable for predicting stadium fire risk. The purpose of this study was to identify salient features and build a model for stadium fire risk prediction. In this study, we designed index attribute threshold intervals to classify and quantify different fire risk data. We then used Gradient Boosting-Recursive Feature Elimination (GB-RFE) and Pearson correlation analysis to perform efficient feature selection on the risk feature attributes and find the most informative salient feature subsets. Two cross-validation strategies were employed to address the dataset imbalance problem. Using the smart stadium fire risk dataset provided by the Wuhan Emergency Rescue Detachment, the optimal prediction model was obtained from the identified significant features and 12 combinations of six machine learning methods, with full-feature inputs used as an experimental comparison. Five performance evaluation metrics were used to evaluate and compare the combined models. The results show that the best performing model had an F1 score of 81.9% and an accuracy of 93.2%. Meanwhile, by introducing a precision-recall curve to explain the actual classification performance of each model, AdaBoost achieved the highest Auprc score (0.78), followed by SVM (0.77), revealing more stable performance under such imbalanced data.
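A trimmed sketch of the GB-RFE step followed by a classifier evaluated with the area under the precision-recall curve (the Auprc mentioned above) is given below. The feature counts, class balance, and hyperparameters are assumptions, and the data are synthetic.

```python
# GB-RFE feature selection, then AdaBoost evaluated by average precision
# (area under the precision-recall curve) on imbalanced synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           weights=[0.85, 0.15], random_state=5)  # imbalanced
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=5)

selector = RFE(GradientBoostingClassifier(random_state=5), n_features_to_select=8)
selector.fit(X_tr, y_tr)

clf = AdaBoostClassifier(random_state=5).fit(X_tr[:, selector.support_], y_tr)
probs = clf.predict_proba(X_te[:, selector.support_])[:, 1]
print("Auprc:", round(average_precision_score(y_te, probs), 3))
```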
APA, Harvard, Vancouver, ISO, and other styles
