Journal articles on the topic 'Prediction'

To see the other types of publications on this topic, follow the link: Prediction.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'Prediction.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mtimkulu, Zimele, and Mfowabo Maphosa. "Flight Delay Prediction Using Machine Learning: A Comparative Study of Ensemble Techniques." International Conference on Artificial Intelligence and its Applications 2023 (November 9, 2023): 212–18. http://dx.doi.org/10.59200/icarti.2023.030.

Full text
Abstract:
Machine learning is a promising tool for predicting flight delays, and accurate delay predictions are critical to improving operational efficiency and passenger satisfaction in aviation. The study aims to develop a robust predictive model for domestic flights and identify key variables affecting delays. It moves beyond traditional single-model prediction methods by employing ensemble techniques, enabling the model to capture intricate patterns and dependencies within the dataset. Adopting a comparative approach, the study systematically evaluates a spectrum of ensemble methods, unravelling their strengths and weaknesses in the context of flight delay prediction. The results highlight the strong predictive performance of stacking (92.4%) and random forest (91.2%), which effectively capture patterns, while cautioning that the AdaBoost classifier (51.6%) is sensitive to noisy data. This research has the potential to improve the precision and applicability of flight delay prediction, fostering operational enhancements within the aviation industry while increasing passenger satisfaction.
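As a rough illustration of the kind of ensemble comparison this abstract describes (not the authors' code: the synthetic data, model settings, and split below are all assumptions), a scikit-learn sketch might look like:

```python
# Illustrative only: synthetic data stands in for the paper's flight records.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
    "stacking": StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                    ("ab", AdaBoostClassifier(random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000)),
}
scores = {name: accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
          for name, model in models.items()}
```

Comparing several ensembles on one held-out split, as here, is the simplest version of the study's comparative setup.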
APA, Harvard, Vancouver, ISO, and other styles
2

Carlsson, Leo S., Mikael Vejdemo-Johansson, Gunnar Carlsson, and Pär G. Jönsson. "Fibers of Failure: Classifying Errors in Predictive Processes." Algorithms 13, no. 6 (June 23, 2020): 150. http://dx.doi.org/10.3390/a13060150.

Full text
Abstract:
Predictive models are used in many different fields of science and engineering and are always prone to make faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe fibers of failure (FiFa), a method to classify failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction errors. We demonstrate two ways to use the failure mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes; or to inspect members of the failure modes to illustrate and investigate what characterizes each failure mode. We demonstrate FiFa on two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
3

Yang, Ke. "Predicting Student Performance Using Artificial Neural Networks." Journal of Arts, Society, and Education Studies 6, no. 1 (May 15, 2024): 45–77. http://dx.doi.org/10.69610/j.ases.20240515.

Full text
Abstract:
This paper explores machine learning approaches to predicting student performance using artificial neural networks. By employing educational data mining and predictive modeling techniques, accurate predictions of student outcomes were achieved. The results indicate that artificial neural networks exhibit high accuracy and reliability in forecasting student academic performance. Through comprehensive analysis and empirical testing, this approach significantly enhances the effectiveness of student performance predictions. Future research directions may include further optimization of the model's algorithms and expansion of the data sample size to improve prediction accuracy and applicability. The method demonstrated exceptional performance in predicting student outcomes, offering high accuracy and efficacy. By mining and analyzing extensive educational data, a predictive model was established and validated through experiments. We introduce a novel predictive model to the field of education, providing robust support for student learning and educational decision-making. Future enhancements can optimize the model, increase prediction precision, and expand application fields to better serve the development of educational endeavors.
4

Wang, Hsin-Yao, Yu-Hsin Liu, Yi-Ju Tseng, Chia-Ru Chung, Ting-Wei Lin, Jia-Ruei Yu, Yhu-Chering Huang, and Jang-Jih Lu. "Investigating Unfavorable Factors That Impede MALDI-TOF-Based AI in Predicting Antibiotic Resistance." Diagnostics 12, no. 2 (February 5, 2022): 413. http://dx.doi.org/10.3390/diagnostics12020413.

Full text
Abstract:
The combination of Matrix-Assisted Laser Desorption/Ionization Time-of-Flight (MALDI-TOF) spectra data and artificial intelligence (AI) has been introduced for rapid prediction of antibiotic susceptibility testing (AST) of Staphylococcus aureus. Based on the AI predictive probability, cases with probabilities between the low and high cut-offs are defined as being in the “grey zone”. We aimed to investigate the underlying reasons for unconfident (grey zone) or wrong predictive AST. In total, 479 S. aureus isolates were collected and analyzed by MALDI-TOF, and AST prediction and standard AST were obtained in a tertiary medical center. The predictions were categorized as correct-prediction group, wrong-prediction group, and grey-zone group. We analyzed the association between the predictive results and the demographic data, spectral data, and strain types. For methicillin-resistant S. aureus (MRSA), a larger cefoxitin zone size was found in the wrong-prediction group. Multilocus sequence typing of the MRSA isolates in the grey-zone group revealed that uncommon strain types comprised 80%. Of the methicillin-susceptible S. aureus (MSSA) isolates in the grey-zone group, the majority (60%) comprised over 10 different strain types. In predicting AST based on MALDI-TOF AI, uncommon strains and high diversity contribute to suboptimal predictive performance.
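The grey-zone rule the abstract describes (predictions whose probability falls between a low and a high cut-off are flagged as unconfident) can be sketched in a few lines; the cut-off values and labels here are illustrative assumptions, not the study's calibrated thresholds:

```python
def triage(prob, low=0.4, high=0.6):
    """Categorise an AI susceptibility prediction by its predictive probability.

    Probabilities between the low and high cut-offs fall in the "grey zone"
    (an unconfident prediction); the cut-offs here are illustrative, not the
    study's calibrated values.
    """
    if prob >= high:
        return "resistant"
    if prob <= low:
        return "susceptible"
    return "grey zone"
```

Isolates routed to the grey zone would then go to conventional AST rather than being auto-reported.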
5

Pace, Michael L. "Prediction and the aquatic sciences." Canadian Journal of Fisheries and Aquatic Sciences 58, no. 1 (January 1, 2001): 63–72. http://dx.doi.org/10.1139/f00-151.

Full text
Abstract:
The need for prediction is now widely recognized and frequently articulated as an objective of research programs in aquatic science. This recognition is partly the legacy of earlier advocacy by the school of empirical limnologists. This school, however, presented prediction narrowly and failed to account for the diversity of predictive approaches as well as to set prediction within the proper scientific context. Examples from time series analysis and probabilistic models oriented toward management provide an expanded view of approaches and prospects for prediction. The context and rationale for prediction is enhanced understanding. Thus, prediction is correctly viewed as an aid to building scientific knowledge with better understanding leading to improved predictions. Experience, however, suggests that the most effective predictive models represent condensed models of key features in aquatic systems. Prediction remains important for the future of aquatic sciences. Predictions are required in the assessment of environmental concerns and for testing scientific fundamentals. Technology is driving enormous advances in the ability to study aquatic systems. If these advances are not accompanied by improvements in predictive capability, aquatic research will have failed in delivering on promised objectives. This situation should spark discomfort in aquatic scientists and foster creative approaches toward prediction.
6

Фокина, Элла, and Георгий Елизарьев. "M&A Prediction Model-Based Investment Strategies." Journal of Corporate Finance Research / Корпоративные Финансы 17, no. 2 (September 4, 2023): 5–26. http://dx.doi.org/10.17323/j.jcfr.2073-0438.17.2.2023.5-26.

Full text
Abstract:
In this paper, we study the development of investment strategies by predicting M&A deals using a logistic model with the financial and non-financial indicators of public companies. A random sample of 1510 acquired and non-acquired companies in Germany, the United Kingdom, France, Sweden, and Russia over the period 2000-2021 was used to design an M&A logit prediction model with high predictive power. The use of interaction variables significantly improved the model’s predictive power and allowed it to obtain more than 70% of correct out-of-sample predictions. Then the model’s ability to generate abnormal returns was tested with the help of an event study using share price data over the period 2011-2021. We show that an M&A prediction model can also efficiently generate abnormal returns (up to 49% on average) for a portfolio of companies that are expected to be acquired. Moreover, we uncover evidence that reduction in false positive and negative predictions has a positive effect on abnormal returns due to the added model flexibility resulting from interaction terms. Our positive theoretical and empirical results can help both private and institutional investors to design investment strategies. In addition, there are indirect implications that support the practical importance of an efficient M&A prediction model.
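A minimal sketch of a logit model with and without interaction terms, of the kind the abstract credits for the accuracy gain; the synthetic data, feature count, and pipeline choices are assumptions, not the paper's company sample:

```python
# Compare a plain logit with one augmented by pairwise interaction terms.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

X, y = make_classification(n_samples=500, n_features=6, random_state=1)

base = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
inter = make_pipeline(
    StandardScaler(),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(max_iter=1000))

acc_base = cross_val_score(base, X, y, cv=5).mean()   # no interactions
acc_inter = cross_val_score(inter, X, y, cv=5).mean()  # with x_i * x_j terms
```

`PolynomialFeatures(interaction_only=True)` adds every pairwise product of indicators, which is one standard way to encode the interaction variables the paper describes.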
7

Burbey, Ingrid, and Thomas L. Martin. "A survey on predicting personal mobility." International Journal of Pervasive Computing and Communications 8, no. 1 (March 30, 2012): 5–22. http://dx.doi.org/10.1108/17427371211221063.

Full text
Abstract:
Purpose: Location-prediction enables the next generation of location-based applications. The purpose of this paper is to provide a historical summary of research in personal location-prediction. Location-prediction began as a tool for network management, predicting the load on particular cellular towers or WiFi access points. With the increasing popularity of mobile devices, location-prediction turned personal, predicting individuals' next locations given their current locations. Design/methodology/approach: This paper includes an overview of prediction techniques and reviews several location-prediction projects, comparing the raw location data, feature extraction, choice of prediction algorithms and their results. Findings: A new trend has emerged, that of employing additional context to improve or expand predictions. Incorporating temporal information enables location-predictions farther out into the future. Appending place types or place names can improve predictions or develop prediction applications that could be used in any locale. Finally, the authors explore research into diverse types of context, such as people's personal contacts or health activities. Originality/value: This overview provides a broad background for future research in prediction.
8

Bierkens, M. F. P., and L. P. H. van Beek. "Seasonal Predictability of European Discharge: NAO and Hydrological Response Time." Journal of Hydrometeorology 10, no. 4 (August 1, 2009): 953–68. http://dx.doi.org/10.1175/2009jhm1034.1.

Full text
Abstract:
In this paper the skill of seasonal prediction of river discharge and how this skill varies between the branches of European rivers across Europe is assessed. A prediction system of seasonal (winter and summer) discharge is evaluated using 1) predictions of the average North Atlantic Oscillation (NAO) index for the coming winter based on May SST anomalies of the North Atlantic; 2) a global-scale hydrological model; and 3) 40-yr European Centre for Medium-Range Weather Forecasts Re-Analysis (ERA-40) data. The skill of seasonal discharge predictions is investigated with a numerical experiment. Also, Europe-wide patterns of predictive skill are related to the use of NAO-based seasonal weather prediction, the hydrological properties of the river basin, and a correct assessment of initial hydrological states. These patterns, which are also corroborated by observations, show that in many parts of Europe the skill of predicting winter discharge can, in theory, be quite large. However, this achieved skill mainly comes from knowing the correct initial conditions of the hydrological system (i.e., groundwater, surface water, soil water storage of the basin) rather than from the use of NAO-based seasonal weather prediction. These factors are equally important for predicting subsequent summer discharge.
9

Juneja, Dr Sonia. "House Price Prediction Using Machine Learning Algorithms." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (June 30, 2023): 3156–64. http://dx.doi.org/10.22214/ijraset.2023.54259.

Full text
Abstract:
House price prediction is the process of using learning-based techniques to predict the future sale price of a house. This paper explores the use of predictive models to accurately forecast house prices and examines the effectiveness of machine learning algorithms for the task. In particular, our research investigates the impact of features such as the location, age, and dimensions of the house on the accuracy of the predictions. Finally, a discussion of the implications of using machine learning algorithms for price prediction for consumers and real estate professionals is presented.
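A minimal regression sketch of the setup this abstract describes, fitting price against location, age, and size; the synthetic features, coefficients, and noise level below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
size = rng.uniform(50, 250, n)        # floor area, m^2 (assumed feature)
age = rng.uniform(0, 40, n)           # age of the house, years
central = rng.integers(0, 2, n)       # location dummy: 1 = central

# invented ground truth: bigger and more central raise price, age lowers it
price = 3000 * size - 800 * age + 50000 * central + rng.normal(0, 20000, n)

X = np.column_stack([size, age, central])
model = LinearRegression().fit(X, price)
pred = float(model.predict([[120.0, 10.0, 1.0]])[0])  # one hypothetical house
```

Inspecting `model.coef_` then shows the direction and size of each feature's impact, which is the kind of analysis the paper reports.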
10

Wei, Chih-Chiang, and Wei-Jen Kao. "Establishing a Real-Time Prediction System for Fine Particulate Matter Concentration Using Machine-Learning Models." Atmosphere 14, no. 12 (December 13, 2023): 1817. http://dx.doi.org/10.3390/atmos14121817.

Full text
Abstract:
With the rapid urbanization and industrialization in Taiwan, pollutants generated from industrial processes, coal combustion, and vehicle emissions have led to severe air pollution issues. This study focuses on predicting the fine particulate matter (PM2.5) concentration. This enables individuals to be aware of their immediate surroundings in advance, reducing their exposure to high concentrations of fine particulate matter. The research area includes Keelung City and Xizhi District in New Taipei City, located in northern Taiwan. This study establishes five PM2.5 prediction models based on machine-learning algorithms, namely, the deep neural network (DNN), M5’ decision tree algorithm (M5P), M5’ rules decision tree algorithm (M5Rules), alternating model tree (AMT), and multiple linear regression (MLR). Based on the predictive results from these five models, the study evaluates the optimal model for each forecast horizon and proposes a real-time PM2.5 concentration prediction system by integrating the various models. The results demonstrate that the prediction errors vary across different models at different forecast horizons, with no single model consistently outperforming the others. Therefore, the establishment of a hybrid prediction system proves to be more accurate in predicting future PM2.5 concentration compared to a single model. To assess the practicality of the system, the study process involved simulating data, with a particular focus on the winter season when high PM2.5 concentrations are prevalent. The predictive system generated excellent results, even though errors increased in long-term predictions. The system can promptly adjust its predictions over time, effectively forecasting the PM2.5 concentration for the next 12 h.
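The hybrid idea (route each forecast horizon to whichever model validated best at that horizon) can be sketched as follows; the model names echo the abstract, but every RMSE number here is hypothetical:

```python
# Hypothetical validation RMSEs (ug/m3) per model and forecast horizon (hours).
val_rmse = {
    "dnn": {1: 3.1, 6: 6.0, 12: 9.5},
    "m5p": {1: 3.4, 6: 5.6, 12: 9.9},
    "mlr": {1: 3.8, 6: 6.2, 12: 9.1},
}

# The hybrid system routes each horizon to whichever model validated best.
best_per_horizon = {h: min(val_rmse, key=lambda name: val_rmse[name][h])
                    for h in (1, 6, 12)}
```

With these made-up numbers no single model wins at every horizon, which is exactly the situation that motivates the paper's hybrid system.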
11

Brüdigam, Tim, Johannes Teutsch, Dirk Wollherr, Marion Leibold, and Martin Buss. "Probabilistic model predictive control for extended prediction horizons." at - Automatisierungstechnik 69, no. 9 (September 1, 2021): 759–70. http://dx.doi.org/10.1515/auto-2021-0025.

Full text
Abstract:
Detailed prediction models with robust constraints and small sampling times in Model Predictive Control yield conservative behavior and large computational effort, especially for longer prediction horizons. Here, we extend and combine previous Model Predictive Control methods that account for prediction uncertainty and reduce computational complexity. The proposed method uses robust constraints on a detailed model for short-term predictions, while probabilistic constraints are employed on a simplified model with increased sampling time for long-term predictions. The underlying methods are introduced before presenting the proposed Model Predictive Control approach. The advantages of the proposed method are shown in a mobile robot simulation example.
12

Ouenniche, Jamal, Kais Bouslah, Blanca Perez-Gladish, and Bing Xu. "A new VIKOR-based in-sample-out-of-sample classifier with application in bankruptcy prediction." Annals of Operations Research 296, no. 1-2 (April 9, 2019): 495–512. http://dx.doi.org/10.1007/s10479-019-03223-0.

Full text
Abstract:
Nowadays, business analytics has become a common buzzword in a range of industries, as companies are increasingly aware of the importance of high-quality predictions to guide their pro-active planning exercises. The financial industry is amongst those industries where predictive analytics techniques are widely used to predict both continuous and discrete variables. Conceptually, the prediction of discrete variables comes down to addressing sorting problems, classification problems, or clustering problems. The focus of this paper is on classification problems as they are the most relevant in risk-class prediction in the financial industry. The contribution of this paper lies in proposing a new classifier that performs both in-sample and out-of-sample predictions, where in-sample predictions are devised with a new VIKOR-based classifier and out-of-sample predictions are devised with a CBR-based classifier trained on the risk class predictions provided by the proposed VIKOR-based classifier. The performance of this new non-parametric classification framework is tested on a dataset of firms in predicting bankruptcy. Our findings conclude that the proposed new classifier can deliver a very high predictive performance, which makes it a real contender in industry applications in finance and investment.
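For readers unfamiliar with VIKOR, here is a sketch of the standard VIKOR compromise-ranking scores only (not the paper's full in-sample/out-of-sample classifier); the toy criteria matrix and equal weights are assumptions:

```python
import numpy as np

def vikor_scores(F, weights, v=0.5):
    """Standard VIKOR compromise scores: lower Q means a better alternative.

    F: (n_alternatives, n_criteria) matrix; all criteria are to be MAXIMISED.
    v weighs group utility against individual regret.
    """
    f_best = F.max(axis=0)
    f_worst = F.min(axis=0)
    span = np.where(f_best > f_worst, f_best - f_worst, 1.0)  # avoid 0-division
    regret = weights * (f_best - F) / span   # normalised regret per criterion
    S = regret.sum(axis=1)                   # group utility
    R = regret.max(axis=1)                   # individual (worst-case) regret
    Q = (v * (S - S.min()) / max(S.max() - S.min(), 1e-12)
         + (1 - v) * (R - R.min()) / max(R.max() - R.min(), 1e-12))
    return Q

# toy example: three firms scored on two (hypothetical) financial ratios
F = np.array([[0.9, 0.8],
              [0.2, 0.1],
              [0.6, 0.5]])
Q = vikor_scores(F, np.array([0.5, 0.5]))
ranking = np.argsort(Q)   # best (lowest-Q) alternative first
```

In the paper, such rankings are thresholded into risk classes; that classification layer is omitted here.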
13

Oh, Cheol, Stephen G. Ritchie, and Jun-Seok Oh. "Exploring the Relationship between Data Aggregation and Predictability to Provide Better Predictive Traffic Information." Transportation Research Record: Journal of the Transportation Research Board 1935, no. 1 (January 2005): 28–36. http://dx.doi.org/10.1177/0361198105193500104.

Full text
Abstract:
Providing reliable predictive traffic information is a crucial element for successful operation of intelligent transportation systems. However, there are difficulties in providing accurate predictions mainly because of limitations in processing data associated with existing traffic surveillance systems and the lack of suitable prediction techniques. This study examines different aggregation intervals to characterize various levels of traffic dynamic representations and to investigate their effects on prediction accuracy. The relationship between data aggregation and predictability is explored by predicting travel times obtained from the inductive signature–based vehicle reidentification system on the I-405 freeway detector test bed in Irvine, California. For travel time prediction, this study employs three techniques: adaptive exponential smoothing, adaptive autoregressive model using Kalman filtering, and recurrent neural network with genetically optimized parameters. Finally, findings are discussed on suggestions for applying prediction techniques effectively.
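Of the three techniques named, exponential smoothing is the simplest to sketch; this is the fixed-alpha building block, whereas the paper's adaptive variant additionally tunes alpha from recent errors:

```python
def exp_smooth_forecast(series, alpha=0.5):
    """One-step-ahead exponential smoothing for, e.g., travel times.

    forecast_{t+1} = alpha * x_t + (1 - alpha) * forecast_t
    A fixed-alpha sketch; the adaptive variant used in the paper adjusts
    alpha over time, which is not shown here.
    """
    forecast = series[0]
    for x in series:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast
```

Larger alpha reacts faster to new observations (useful under dynamic traffic), smaller alpha smooths out sensor noise.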
14

Drisya, G. V., D. C. Kiplangat, K. Asokan, and K. Satheesh Kumar. "Deterministic prediction of surface wind speed variations." Annales Geophysicae 32, no. 11 (November 19, 2014): 1415–25. http://dx.doi.org/10.5194/angeo-32-1415-2014.

Full text
Abstract:
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
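Deterministic forecasting of a chaotic series typically works by time-delay embedding plus nearest-neighbour lookup; the sketch below illustrates that idea on the logistic map rather than wind data, and the embedding dimension and delay are arbitrary assumptions:

```python
import numpy as np

def embed(series, dim=3, tau=1):
    """Time-delay embedding: reconstruct phase-space vectors from a scalar series."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def nn_forecast(series, dim=3, tau=1):
    """Predict the next value as the successor of the nearest past neighbour."""
    vecs = embed(series, dim, tau)
    dists = np.linalg.norm(vecs[:-1] - vecs[-1], axis=1)  # exclude the query itself
    j = int(np.argmin(dists))
    return series[j + (dim - 1) * tau + 1]

# the logistic map, a standard chaotic test series (stays in (0, 1))
x = [0.4]
for _ in range(500):
    x.append(3.9 * x[-1] * (1 - x[-1]))
x = np.array(x)

pred = nn_forecast(x[:-1])   # one-step-ahead forecast of the final value
```

Such methods need only past records of the series itself, which matches the abstract's claim of simplicity and computational efficiency.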
15

Watson-Daniels, Jamelle, David C. Parkes, and Berk Ustun. "Predictive Multiplicity in Probabilistic Classification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10306–14. http://dx.doi.org/10.1609/aaai.v37i9.26227.

Full text
Abstract:
Machine learning models are often used to inform real world risk assessment tasks: predicting consumer default risk, predicting whether a person suffers from a serious illness, or predicting a person's risk to appear in court. Given multiple models that perform almost equally well for a prediction task, to what extent do predictions vary across these models? If predictions are relatively consistent for similar models, then the standard approach of choosing the model that optimizes a penalized loss suffices. But what if predictions vary significantly for similar models? In machine learning, this is referred to as predictive multiplicity i.e. the prevalence of conflicting predictions assigned by near-optimal competing models. In this paper, we present a framework for measuring predictive multiplicity in probabilistic classification (predicting the probability of a positive outcome). We introduce measures that capture the variation in risk estimates over the set of competing models, and develop optimization-based methods to compute these measures efficiently and reliably for convex empirical risk minimization problems. We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks. Further, we provide insight into how predictive multiplicity arises by analyzing the relationship between predictive multiplicity and data set characteristics (outliers, separability, and majority-minority structure). Our results emphasize the need to report predictive multiplicity more widely.
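Predictive multiplicity can be illustrated by measuring how much risk estimates spread across a set of competing models; here bootstrap refits serve as a crude stand-in for the paper's near-optimal model set, which is instead built by optimization over an epsilon-level set of the loss:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=8, random_state=2)

# Crude stand-in for near-optimal competing models: bootstrap refits.
rng = np.random.default_rng(2)
probs = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    probs.append(model.predict_proba(X)[:, 1])
probs = np.array(probs)                      # shape: (n_models, n_samples)

# per-example spread of risk estimates across the competing models
spread = probs.max(axis=0) - probs.min(axis=0)
mean_multiplicity = float(spread.mean())
```

A large spread for an individual means that equally plausible models assign that person very different risks, which is the concern the paper formalizes.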
16

Yu, Bin. "Bicycle Sales Prediction Based on Ensemble Learning." Advances in Economics, Management and Political Sciences 59, no. 1 (January 5, 2024): 293–99. http://dx.doi.org/10.54254/2754-1169/59/20231135.

Full text
Abstract:
In the field of sales forecasting, there are still various challenges in conducting comprehensive analysis and accurate predictions for bicycle sales, including the diversity of sample data, the range of research scope, and the methods employed. This study aims to fill this research gap by applying a bicycle sales dataset and two ensemble learning methods to investigate the factors influencing bicycle sales and conduct sales predictions and analysis. The research findings indicate that cost, profit, and income are the most significant factors influencing bicycle profit predictions. Compared to the Random Forest model, the Gradient Boosting model performs better in predicting bicycle profits. This paper discusses the relevance and predictive performance of the bicycle sales dataset, providing opportunities for improvement and further optimization in future research to enhance the accuracy and reliability of bicycle sales predictions and offer valuable insights for decision-making and planning. Overall, these results help guide further exploration of sales prediction.
17

Dall’Aglio, John. "Sex and Prediction Error, Part 3: Provoking Prediction Error." Journal of the American Psychoanalytic Association 69, no. 4 (August 2021): 743–65. http://dx.doi.org/10.1177/00030651211042059.

Full text
Abstract:
In parts 1 and 2 of this Lacanian neuropsychoanalytic series, surplus prediction error was presented as a neural correlate of the Lacanian concept of jouissance. Affective consciousness (a key source of prediction error in the brain) impels the work of cognition, the predictive work of explaining what is foreign and surprising. Yet this arousal is the necessary bedrock of all consciousness. Although the brain’s predictive model strives for homeostatic explanation of prediction error, jouissance “drives a hole” in the work of homeostasis. Some residual prediction error always remains. Lacanian clinical technique attends to this surplus and the failed predictions to which this jouissance “sticks.” Rather than striving to eliminate prediction error, clinical practice seeks its metabolization. Analysis targets one’s mode of jouissance to create a space for the subject to enjoy in some other way. This entails working with prediction error, not removing or tolerating it. Analysis aims to shake the very core of the subject by provoking prediction error—this drives clinical change. Brief clinical examples illustrate this view.
18

Genç, Onur, Bilal Gonen, and Mehmet Ardıçlıoğlu. "A comparative evaluation of shear stress modeling based on machine learning methods in small streams." Journal of Hydroinformatics 17, no. 5 (April 28, 2015): 805–16. http://dx.doi.org/10.2166/hydro.2015.142.

Full text
Abstract:
Predicting shear stress distribution has proved to be a critical problem to solve. Hence, the basic objective of this paper is to develop predictions of shear stress distribution with machine learning algorithms, including artificial neural networks, classification and regression trees, and generalized linear models. The data set, which is large and feature-rich, is utilized to improve machine learning-based predictive models and extract the most important predictive factors. The 10-fold cross-validation approach was used to determine the performance of the prediction methods. The predictive performances of the proposed models were found to be very close to each other. However, the results indicated that the artificial neural network, which has an R value of 0.92 ± 0.03, achieved the best overall predictive performance on the 10-fold holdout sample. The predictions of all machine learning models were well correlated with measurement data.
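The 10-fold comparison the abstract describes can be sketched generically; synthetic data replaces the stream measurements, and the estimator choices below are assumed stand-ins for the paper's ANN, CART, and GLM:

```python
# 10-fold cross-validated comparison of three regressor families.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10, random_state=3)
cv = KFold(n_splits=10, shuffle=True, random_state=3)

models = {
    "ann": MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=3),
    "cart": DecisionTreeRegressor(random_state=3),
    "glm": LinearRegression(),
}
r2 = {name: cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
      for name, model in models.items()}
```

Averaging a score over 10 held-out folds, as here, is what makes the model comparison in the paper fair.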
19

Bozorg-Haddad, Omid, Mohammad Delpasand, and Hugo A. Loáiciga. "Self-optimizer data-mining method for aquifer level prediction." Water Supply 20, no. 2 (December 31, 2019): 724–36. http://dx.doi.org/10.2166/ws.2019.204.

Full text
Abstract:
Groundwater management requires accurate methods for simulating and predicting groundwater processes. Data-based methods can be applied to serve this purpose. Support vector regression (SVR) is a novel and powerful data-based method for predicting time series. This study proposes the genetic algorithm (GA)–SVR hybrid algorithm that combines the GA for parameter calibration and the SVR method for the simulation and prediction of groundwater levels. The GA–SVR algorithm is applied to three observation wells in the Karaj plain aquifer, a strategic water source for municipal water supply in Iran. The GA–SVR's groundwater-level predictions were compared to those from genetic programming (GP). Results show that the randomized approach of GA–SVR prediction yields R2 values ranging between 0.88 and 0.995, and root mean square error (RMSE) values ranging between 0.13 and 0.258 m, which indicates better groundwater-level predictive skill of GA–SVR compared to GP, whose R2 and RMSE values range between 0.48–0.91 and 0.15–0.44 m, respectively.
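A simplified sketch of calibrating SVR hyper-parameters for a lagged groundwater-level series; random search stands in here for the paper's genetic algorithm, and the synthetic seasonal series and search ranges are assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(4)
t = np.arange(240)
# synthetic monthly "groundwater level": seasonal cycle plus noise
level = np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, t.size)

lags = 3  # the previous three months predict the next one
X = np.column_stack([level[i:len(level) - lags + i] for i in range(lags)])
y = level[lags:]

# stand-in for the GA: random search over the SVR hyper-parameters
best_r2, best_params = -np.inf, None
for _ in range(20):
    params = {"C": 10 ** rng.uniform(-1, 2),
              "epsilon": 10 ** rng.uniform(-3, -1),
              "gamma": 10 ** rng.uniform(-2, 1)}
    r2 = cross_val_score(SVR(**params), X, y, cv=3, scoring="r2").mean()
    if r2 > best_r2:
        best_r2, best_params = r2, params
```

A GA would evolve these parameter triples with selection and crossover instead of sampling them independently, but the calibrate-then-score loop is the same.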
20

Rasero, Javier, Amy Isabella Sentis, Fang-Cheng Yeh, and Timothy Verstynen. "Integrating across neuroimaging modalities boosts prediction accuracy of cognitive ability." PLOS Computational Biology 17, no. 3 (March 5, 2021): e1008347. http://dx.doi.org/10.1371/journal.pcbi.1008347.

Full text
Abstract:
Variation in cognitive ability arises from subtle differences in underlying neural architecture. Understanding and predicting individual variability in cognition from the differences in brain networks requires harnessing the unique variance captured by different neuroimaging modalities. Here we adopted a multi-level machine learning approach that combines diffusion, functional, and structural MRI data from the Human Connectome Project (N = 1050) to provide unitary prediction models of various cognitive abilities: global cognitive function, fluid intelligence, crystallized intelligence, impulsivity, spatial orientation, verbal episodic memory and sustained attention. Out-of-sample predictions of each cognitive score were first generated using a sparsity-constrained principal component regression on individual neuroimaging modalities. These individual predictions were then aggregated and submitted to a LASSO estimator that removed redundant variability across channels. This stacked prediction led to a significant improvement in accuracy, relative to the best single modality predictions (approximately 1% to more than 3% boost in variance explained), across a majority of the cognitive abilities tested. Further analysis found that diffusion and brain surface properties contribute the most to the predictive power. Our findings establish a lower bound to predict individual differences in cognition using multiple neuroimaging measures of brain architecture, both structural and functional, quantify the relative predictive power of the different imaging modalities, and reveal how each modality provides unique and complementary information about individual differences in cognitive function.
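The two-level scheme (per-modality principal-component regression, then a LASSO stack over the out-of-fold predictions) can be sketched on synthetic "modalities"; everything below, including the shared latent factors, is an invented stand-in for the HCP data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 300
latent = rng.normal(size=(n, 4))                      # shared "neural" factors
modalities = [latent @ rng.normal(size=(4, p)) + rng.normal(0, 1.0, (n, p))
              for p in (30, 40, 50)]                  # three synthetic modalities
cognition = latent @ rng.normal(size=4) + rng.normal(0, 0.5, n)

# level 1: out-of-fold principal-component regression per modality
pcr = make_pipeline(PCA(n_components=4), LinearRegression())
level1 = np.column_stack([cross_val_predict(pcr, Xm, cognition, cv=5)
                          for Xm in modalities])

# level 2: LASSO stacks the per-modality predictions, dropping redundancy
stacked = cross_val_predict(Lasso(alpha=0.01), level1, cognition, cv=5)
r2_stacked = r2_score(cognition, stacked)
r2_single = max(r2_score(cognition, level1[:, j]) for j in range(level1.shape[1]))
```

Using out-of-fold predictions at level 1 prevents the stacker from being fit on leaked in-sample fits, which is the key design choice in stacked prediction.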
APA, Harvard, Vancouver, ISO, and other styles
21

Sabarinath U S and Ashly Mathew. "Medical Insurance Cost Prediction." Indian Journal of Data Communication and Networking 4, no. 4 (June 30, 2024): 1–4. http://dx.doi.org/10.54105/ijdcn.d5037.04040624.

Full text
Abstract:
This is a medical insurance cost prediction model that uses a linear regression algorithm to predict a person's medical insurance charges from the given data. The project addresses several needs. Accurate pricing: insurance companies need accurate predictions of medical insurance costs to set appropriate premiums for policyholders. Predictive models can analyse historical data and factors such as age, gender, pre-existing conditions, lifestyle habits, and geographic location to estimate future healthcare expenses accurately. The prediction model evaluates three regression methods: linear regression achieves an accuracy of 74.45%, whereas Ridge regression and Support Vector Regression achieve a state-of-the-art accuracy of 82.59%. The Medical Insurance Cost Prediction project proposes a comprehensive approach to predicting medical costs, aiming to develop a robust and accurate system capable of estimating the cost for a particular individual. The proposed system builds upon the successes of existing models, comparing different types of regression: linear regression, Ridge regression and Support Vector Regression. By comparing the performance of these three methodologies, the project aims to identify the most effective approach for medical insurance cost prediction. Through rigorous evaluation and validation processes, the selected model will provide valuable insights for insurance companies, policymakers, and individuals seeking to optimize healthcare resource allocation and financial planning strategies.
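The three-regressor comparison can be sketched on synthetic data; the feature names and coefficients below are invented stand-ins for the insurance dataset, and R-squared on a held-out split stands in for the paper's accuracy figures.

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
n = 500
# Hypothetical stand-ins for typical insurance features
age = rng.integers(18, 65, n).astype(float)
bmi = rng.normal(28, 5, n)
smoker = rng.integers(0, 2, n).astype(float)
children = rng.integers(0, 4, n).astype(float)
charges = 250 * age + 400 * bmi + 20000 * smoker + 500 * children \
          + rng.normal(0, 2000, n)

X = np.column_stack([age, bmi, smoker, children])
X_tr, X_te, y_tr, y_te = train_test_split(X, charges, random_state=0)

models = {
    "linear": LinearRegression(),
    "ridge": make_pipeline(StandardScaler(), Ridge(alpha=1.0)),
    # SVR needs both feature and target scaling to cope with dollar amounts
    "svr": TransformedTargetRegressor(
        regressor=make_pipeline(StandardScaler(), SVR(C=10.0)),
        transformer=StandardScaler()),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: R^2 = {s:.3f}")
```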
APA, Harvard, Vancouver, ISO, and other styles
22

Zhao, Wenbo, and Ling Fan. "Short-Term Load Forecasting Method for Industrial Buildings Based on Signal Decomposition and Composite Prediction Model." Sustainability 16, no. 6 (March 19, 2024): 2522. http://dx.doi.org/10.3390/su16062522.

Full text
Abstract:
Accurately predicting the cold load of industrial buildings is a crucial step in establishing an energy consumption management system for industrial constructions, which plays a significant role in advancing sustainable development. However, due to diverse influencing factors and the complex nonlinear patterns exhibited by cold load data in industrial buildings, predicting these loads poses significant challenges. This study proposes a hybrid prediction approach combining the Improved Snake Optimization Algorithm (ISOA), Variational Mode Decomposition (VMD), random forest (RF), and BiLSTM-attention. Initially, the ISOA optimizes the parameters of the VMD method, obtaining the best decomposition results for cold load data. Subsequently, RF is employed to predict components with higher frequencies, while BiLSTM-attention is utilized for components with lower frequencies. The final cold load prediction results are obtained by combining these predictions. The proposed method is validated using actual cold load data from an industrial building, and experimental results demonstrate its excellent predictive performance, making it more suitable for cold load prediction in industrial constructions compared to traditional methods. By enhancing the accuracy of cold load predictions, this approach not only improves the energy efficiency of industrial buildings but also promotes the reduction in energy consumption and carbon emissions, thus contributing to the sustainable development of the industrial sector.
APA, Harvard, Vancouver, ISO, and other styles
23

Ouenniche, Jamal, Oscar Javier Uvalle Perez, and Aziz Ettouhami. "A new EDAS-based in-sample-out-of-sample classifier for risk-class prediction." Management Decision 57, no. 2 (February 11, 2019): 314–23. http://dx.doi.org/10.1108/md-04-2018-0397.

Full text
Abstract:
Purpose: Nowadays, the field of data analytics is witnessing an unprecedented interest from a variety of stakeholders. The purpose of this paper is to contribute to the subfield of predictive analytics by proposing a new non-parametric classifier. Design/methodology/approach: The proposed new non-parametric classifier performs both in-sample and out-of-sample predictions, where in-sample predictions are devised with a new Evaluation Based on Distance from Average Solution (EDAS)-based classifier, and out-of-sample predictions are devised with a CBR-based classifier trained on the class predictions provided by the proposed EDAS-based classifier. Findings: The performance of the proposed new non-parametric classification framework is tested on a data set of UK firms in predicting bankruptcy. Numerical results demonstrate an outstanding predictive performance, which is robust to the implementation decisions’ choices. Practical implications: The exceptional predictive performance of the proposed new non-parametric classifier makes it a real contender in actual applications in areas such as finance and investment, internet security, fraud and medical diagnosis, where the accuracy of the risk-class predictions has serious consequences for the relevant stakeholders. Originality/value: Over and above the design elements of the new integrated in-sample-out-of-sample classification framework and its non-parametric nature, it delivers an outstanding predictive performance for a bankruptcy prediction application.
APA, Harvard, Vancouver, ISO, and other styles
24

Hall, Andrew N., and Sandra C. Matz. "Targeting Item–level Nuances Leads to Small but Robust Improvements in Personality Prediction from Digital Footprints." European Journal of Personality 34, no. 5 (September 2020): 873–84. http://dx.doi.org/10.1002/per.2253.

Full text
Abstract:
In the past decade, researchers have demonstrated that personality can be accurately predicted from digital footprint data, including Facebook likes, tweets, blog posts, pictures, and transaction records. Such computer–based predictions from digital footprints can complement—and in some circumstances even replace—traditional self–report measures, which suffer from well–known response biases and are difficult to scale. However, these previous studies have focused on the prediction of aggregate trait scores (i.e. a person's extroversion score), which may obscure prediction–relevant information at theoretical levels of the personality hierarchy beneath the Big 5 traits. Specifically, new research has demonstrated that personality may be better represented by so–called personality nuances—item–level representations of personality—and that utilizing these nuances can improve predictive performance. The present work examines the hypothesis that personality predictions from digital footprint data can be improved by first predicting personality nuances and subsequently aggregating to scores, rather than predicting trait scores outright. To examine this hypothesis, we employed least absolute shrinkage and selection operator regression and random forest models to predict both items and traits using out–of–sample cross–validation. In nine out of 10 cases across the two modelling approaches, nuance–based models improved the prediction of personality over the trait–based approaches to a small, but meaningful degree (4.25% or 1.69% on average, depending on method). Implications for personality prediction and personality nuances are discussed. © 2020 European Association of Personality Psychology
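The nuance-based idea — predict individual items, then aggregate, rather than predicting the trait score outright — can be sketched as follows. The item structure is simulated and `LassoCV` stands in for the paper's LASSO models; whether and by how much the nuance-based route wins depends on the simulated structure, just as the paper's gains were small and dataset-dependent.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n, p, n_items = 400, 60, 10
X = rng.normal(size=(n, p))  # stand-in for digital-footprint features

# Illustrative assumption: each item loads on its own few features
W = np.zeros((p, n_items))
for j in range(n_items):
    W[rng.choice(p, 3, replace=False), j] = rng.normal(1.0, 0.2, 3)
items = X @ W + rng.normal(size=(n, n_items))
trait = items.sum(axis=1)  # the aggregate trait score

def r2(y, yhat):
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Trait-based: predict the aggregate score outright
direct = cross_val_predict(LassoCV(cv=5), X, trait, cv=5)

# Nuance-based: predict each item, then aggregate the item predictions
item_preds = np.column_stack([
    cross_val_predict(LassoCV(cv=5), X, items[:, j], cv=5)
    for j in range(n_items)
])
aggregated = item_preds.sum(axis=1)

print(f"trait-based R^2:  {r2(trait, direct):.3f}")
print(f"nuance-based R^2: {r2(trait, aggregated):.3f}")
```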
APA, Harvard, Vancouver, ISO, and other styles
25

Kaufman, Aaron Russell, Peter Kraft, and Maya Sen. "Improving Supreme Court Forecasting Using Boosted Decision Trees." Political Analysis 27, no. 3 (February 19, 2019): 381–87. http://dx.doi.org/10.1017/pan.2018.59.

Full text
Abstract:
Though used frequently in machine learning, boosted decision trees are largely unused in political science, despite many useful properties. We explain how to use one variant of boosted decision trees, AdaBoosted decision trees (ADTs), for social science predictions. We illustrate their use by examining a well-known political prediction problem, predicting U.S. Supreme Court rulings. We find that our ADT approach outperforms existing predictive models. We also provide two additional examples of the approach, one predicting the onset of civil wars and the other predicting county-level vote shares in U.S. presidential elections.
APA, Harvard, Vancouver, ISO, and other styles
26

Wu, Jianjun, Yuxue Hu, Zhongqiang Huang, Junsong Li, Xiang Li, and Ying Sha. "Enhancing Predictive Expert Method for Link Prediction in Heterogeneous Information Social Networks." Applied Sciences 13, no. 22 (November 17, 2023): 12437. http://dx.doi.org/10.3390/app132212437.

Full text
Abstract:
Link prediction is a critical prerequisite and foundation task for social network security that involves predicting the potential relationship between nodes within a network or graph. Although the existing methods show promising performance, they often ignore the unique attributes of each link type and the impact of diverse node differences on network topology when dealing with heterogeneous information networks (HINs), resulting in inaccurate predictions of unobserved links. To overcome this hurdle, we propose the Enhancing Predictive Expert Method (EPEM), a comprehensive framework that includes an individual feature projector, a predictive expert constructor, and a trustworthiness investor. The individual feature projector extracts the distinct characteristics associated with each link type, eliminating shared attributes that are common across all links. The predictive expert constructor then creates enhancing predictive experts, which improve predictive precision by incorporating the individual feature representations unique to each node category. Finally, the trustworthiness investor evaluates the reliability of each enhancing predictive expert and adjusts their contributions to the prediction outcomes accordingly. Our empirical evaluations on three diverse heterogeneous social network datasets demonstrate the effectiveness of EPEM in forecasting unobserved links, outperforming the state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
27

Rust, Nicole C., and Stephanie E. Palmer. "Remembering the Past to See the Future." Annual Review of Vision Science 7, no. 1 (September 15, 2021): 349–65. http://dx.doi.org/10.1146/annurev-vision-093019-112249.

Full text
Abstract:
In addition to the role that our visual system plays in determining what we are seeing right now, visual computations contribute in important ways to predicting what we will see next. While the role of memory in creating future predictions is often overlooked, efficient predictive computation requires the use of information about the past to estimate future events. In this article, we introduce a framework for understanding the relationship between memory and visual prediction and review the two classes of mechanisms that the visual system relies on to create future predictions. We also discuss the principles that define the mapping from predictive computations to predictive mechanisms and how downstream brain areas interpret the predictive signals computed by the visual system.
APA, Harvard, Vancouver, ISO, and other styles
28

Stoodley, Catherine J., and Peter T. Tsai. "Adaptive Prediction for Social Contexts: The Cerebellar Contribution to Typical and Atypical Social Behaviors." Annual Review of Neuroscience 44, no. 1 (July 8, 2021): 475–93. http://dx.doi.org/10.1146/annurev-neuro-100120-092143.

Full text
Abstract:
Social interactions involve processes ranging from face recognition to understanding others’ intentions. To guide appropriate behavior in a given context, social interactions rely on accurately predicting the outcomes of one's actions and the thoughts of others. Because social interactions are inherently dynamic, these predictions must be continuously adapted. The neural correlates of social processing have largely focused on emotion, mentalizing, and reward networks, without integration of systems involved in prediction. The cerebellum forms predictive models to calibrate movements and adapt them to changing situations, and cerebellar predictive modeling is thought to extend to nonmotor behaviors. Primary cerebellar dysfunction can produce social deficits, and atypical cerebellar structure and function are reported in autism, which is characterized by social communication challenges and atypical predictive processing. We examine the evidence that cerebellar-mediated predictions and adaptation play important roles in social processes and argue that disruptions in these processes contribute to autism.
APA, Harvard, Vancouver, ISO, and other styles
29

Asiah, Mat, Khidzir Nik Zulkarnaen, Deris Safaai, Mat Yaacob Nik Nurul Hafzan, Mohamad Mohd Saberi, and Safaai Siti Syuhaida. "A Review on Predictive Modeling Technique for Student Academic Performance Monitoring." MATEC Web of Conferences 255 (2019): 03004. http://dx.doi.org/10.1051/matecconf/201925503004.

Full text
Abstract:
Despite providing high-quality education, institutions face a growing demand for predicting student academic performance in order to improve quality and assist students in achieving strong results in their studies. The lack of an efficient and accurate prediction model is one of the major issues. Predictive analytics can provide institutions with intuitive and better decision making. The objective of this paper is to review current research activities related to academic analytics, focusing on predicting student academic performance. Various methods have been proposed by previous researchers to develop the best-performing model using a variety of student data, techniques, algorithms and tools. Predictive modeling used in predicting student performance relates to several learning tasks such as classification, regression and clustering. To achieve the best prediction model, many variables have been chosen and tested to find the most influential attributes for prediction. Accurate performance prediction helps provide guidance in the learning process, benefiting students by helping them avoid poor scores. The predictive model can furthermore help instructors forecast course completion, including the student's final grade, which is directly correlated with student success. To harvest an effective predictive model requires good input data and variables, a suitable predictive method, and a powerful and robust prediction model.
APA, Harvard, Vancouver, ISO, and other styles
30

Hasaballah, Mustafa M., Abdulhakim A. Al-Babtain, Md Moyazzem Hossain, and Mahmoud E. Bakr. "Theoretical Aspects for Bayesian Predictions Based on Three-Parameter Burr-XII Distribution and Its Applications in Climatic Data." Symmetry 15, no. 8 (August 7, 2023): 1552. http://dx.doi.org/10.3390/sym15081552.

Full text
Abstract:
Symmetry and asymmetry play vital roles in prediction. Symmetrical data, which follows a predictable pattern, is easier to predict compared to asymmetrical data, which lacks a predictable pattern. Symmetry helps identify patterns within data that can be utilized in predictive models, while asymmetry aids in identifying outliers or anomalies that should be considered in the predictive model. Among the various factors associated with storms and their impact on surface temperatures, wind speed stands out as a significant factor. This paper focuses on predicting wind speed by utilizing unified hybrid censoring data from the three-parameter Burr-XII distribution. Bayesian prediction bounds for future observations are obtained using both one-sample and two-sample prediction techniques. As explicit expressions for Bayesian predictions of one and two samples are unavailable, we propose the use of the Gibbs sampling process in the Markov chain Monte Carlo framework to obtain estimated predictive distributions. Furthermore, we present a climatic data application to demonstrate the developed uncertainty procedures. Additionally, a simulation study is carried out to examine and contrast the effectiveness of the suggested methods. The results reveal that the Bayes estimates for the parameters outperformed the maximum likelihood estimators.
APA, Harvard, Vancouver, ISO, and other styles
31

Iftikhar, Taqdees, Naila Qazi, Naushaba Malik, Nida Hamid, Nadia Gul, and Razia Rauf. "Prediction of Successful Induction of Labour jointly using Bishop Score and Transvaginal Sonography in Primigravida Women in Pakistan." Annals of PIMS-Shaheed Zulfiqar Ali Bhutto Medical University 19, no. 2 (May 31, 2023): 141–46. http://dx.doi.org/10.48036/apims.v19i2.783.

Full text
Abstract:
Objective: To assess the diagnostic efficacy of the Bishop Score and Transvaginal Ultrasonography (TVS) in predicting successful labor induction in primigravida women in a peri-urban population in Islamabad. Additionally, the study aimed to evaluate the effectiveness of combining the predictions from both methods to enhance accuracy in predicting successful labor induction. Methodology: A prospective comparative study was conducted at the Departments of Obstetrics and Gynecology, Rawal Institute of Health Sciences, and Farooq Hospital, Islamabad, from December 2021 to December 2022. A total of 520 pregnant, primigravida women undergoing labor induction were included, and they were randomly divided into two groups for assessment using either the Bishop Score or Transvaginal ultrasonography. The outcome of interest was documented as the initiation of active labor within 24 hours. The efficacy of each method was validated separately and jointly, and the predictive accuracy of all three predictors was compared. Results: The two groups demonstrated that both TVS and the Bishop Score were individually effective at predicting successful labor induction (p<0.00001 for both methods). TVS outperformed the Bishop Score in several key predictive measures, such as accuracy and the F1 Score. However, combining predictions from both the Bishop Score and TVS significantly improved both positive and negative predictive values (by more than 10% for each metric), resulting in a more reliable prediction. Conclusion: Both the Bishop Score and TVS are effective methods for predicting successful labor induction in the peri-urban population of Islamabad, Pakistan. While TVS showed significant quantitative advantages over the Bishop Score, combining both predictors yielded even better performance, suggesting that using both methods together should be prioritized for prediction.
APA, Harvard, Vancouver, ISO, and other styles
32

Karpac, Dusan, and Viera Bartosova. "The verification of prediction and classification ability of selected Slovak prediction models and their emplacement in forecasts of financial health of a company in aspect of globalization." SHS Web of Conferences 74 (2020): 06010. http://dx.doi.org/10.1051/shsconf/20207406010.

Full text
Abstract:
Predicting the financial health of a company is, in this global world, necessary for every business entity, especially international ones, as knowing financial stability is very important. Forecasting business failure is a globally known term, and many prediction models have been constructed to compute the financial health of a company and thereby state whether it inclines toward financial prosperity or bankruptcy. Globalized prediction models compute the financial health of companies, but the vast majority of models predicting business failure are constructed solely for the conditions of a particular country or even just for a specific sector of a national economy. The field of financial prediction, viewed internationally, rests on a few elementary models, such as Altman's Z-score or Beerman's index, which are globally known and serve as the basis of many modified models. The following article deals with selected Slovak prediction models designed for Slovak conditions, states how these models stand in this global world and what their international connection to worldwide economies is, and verifies their prediction ability in a specific sector. The verification of the models' predictive ability is performed by ROC analysis, and through the results the paper identifies the most suitable prediction models to use in the selected sector.
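ROC-based verification of predictive ability, as used in this study, can be sketched with two synthetic scoring models; the labels and scores below are invented, with AUC as the summary of discriminative power.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 1000
failed = rng.integers(0, 2, n)  # 1 = financial distress (synthetic labels)

# Two hypothetical model scores: a stronger and a weaker discriminator
score_a = 1.5 * failed + rng.normal(size=n)
score_b = 0.5 * failed + rng.normal(size=n)

auc_a = roc_auc_score(failed, score_a)
auc_b = roc_auc_score(failed, score_b)
print(f"model A AUC: {auc_a:.3f}, model B AUC: {auc_b:.3f}")
```

The model with the higher AUC discriminates failing from healthy firms better across all classification thresholds, which is what makes ROC analysis a natural tool for ranking prediction models within a sector.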
APA, Harvard, Vancouver, ISO, and other styles
33

Lan, Yu, and Daniel F. Heitjan. "Adaptive parametric prediction of event times in clinical trials." Clinical Trials 15, no. 2 (January 29, 2018): 159–68. http://dx.doi.org/10.1177/1740774517750633.

Full text
Abstract:
Background: In event-based clinical trials, it is common to conduct interim analyses at planned landmark event counts. Accurate prediction of the timing of these events can support logistical planning and the efficient allocation of resources. As the trial progresses, one may wish to use the accumulating data to refine predictions. Purpose: Available methods to predict event times include parametric cure and non-cure models and a nonparametric approach involving Bayesian bootstrap simulation. The parametric methods work well when their underlying assumptions are met, and the nonparametric method gives calibrated but inefficient predictions across a range of true models. In the early stages of a trial, when predictions have high marginal value, it is difficult to infer the form of the underlying model. We seek to develop a method that will adaptively identify the best-fitting model and use it to create robust predictions. Methods: At each prediction time, we repeat the following steps: (1) resample the data; (2) identify, from among a set of candidate models, the one with the highest posterior probability; and (3) sample from the predictive posterior of the data under the selected model. Results: A Monte Carlo study demonstrates that the adaptive method produces prediction intervals whose coverage is robust within the family of selected models. The intervals are generally wider than those produced assuming the correct model, but narrower than nonparametric prediction intervals. We demonstrate our method with applications to two completed trials: The International Chronic Granulomatous Disease study and Radiation Therapy Oncology Group trial 0129. Limitations: Intervals produced under any method can be badly calibrated when the sample size is small and unhelpfully wide when predicting the remote future. Early predictions can be inaccurate if there are changes in enrollment practices or trends in survival. 
Conclusions: An adaptive event-time prediction method that selects the model given the available data can give improved robustness compared to methods based on less flexible parametric models.
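A simplified, frequentist analogue of the adaptive scheme (resample, select the best-fitting candidate model, predict) can be sketched with scipy. The paper's Bayesian posterior sampling is replaced here by maximum-likelihood selection on bootstrap resamples, and the event times are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic event times observed so far (the true model is Weibull)
times = stats.weibull_min.rvs(1.5, scale=10.0, size=120, random_state=3)

candidates = {
    "exponential": stats.expon,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
}

def select_and_predict(sample):
    """Fit each candidate by maximum likelihood, keep the best, predict median."""
    best_ll, best_frozen = -np.inf, None
    for dist in candidates.values():
        params = dist.fit(sample, floc=0)
        ll = np.sum(dist.logpdf(sample, *params))
        if ll > best_ll:
            best_ll, best_frozen = ll, dist(*params)
    return best_frozen.median()

# Resample, select the best-fitting model, predict -- repeated many times
medians = np.array([
    select_and_predict(rng.choice(times, size=times.size, replace=True))
    for _ in range(100)
])
lo, hi = np.percentile(medians, [2.5, 97.5])
print(f"predicted median event time: {medians.mean():.2f} "
      f"(95% interval {lo:.2f} to {hi:.2f})")
```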
APA, Harvard, Vancouver, ISO, and other styles
34

Shi, Yongsheng, Tailin Li, Leicheng Wang, Hongzhou Lu, Yujun Hu, Beichen He, and Xinran Zhai. "A Method for Predicting the Life of Lithium-Ion Batteries Based on Successive Variational Mode Decomposition and Optimized Long Short-Term Memory." Energies 16, no. 16 (August 12, 2023): 5952. http://dx.doi.org/10.3390/en16165952.

Full text
Abstract:
Accurately predicting the remaining lifespan of lithium-ion batteries is critical for the efficient and safe use of these devices. Predicting a lithium-ion battery’s remaining lifespan is challenging due to the non-linear changes in capacity that occur throughout the battery’s life. This study proposes a fused prediction model that employs a multimodal decomposition approach to address the problem of non-linear fluctuations during the degradation process of lithium-ion batteries. Specifically, the capacity attenuation signal is decomposed into multiple mode functions using successive variational mode decomposition (SVMD), which captures capacity fluctuations and a primary attenuation mode function to account for the degradation of lithium-ion batteries. The hyperparameters of the long short-term memory network (LSTM) are optimized using the tuna swarm optimization (TSO) technique. Subsequently, the trained prediction model is used to forecast various mode functions, which are then successfully integrated to obtain the capacity prediction result. The predictions show that the maximum percentage error for the projected results of five unique lithium-ion batteries, each with varying capacities and discharge rates, did not exceed 1%. Additionally, the average relative error remained within 2.1%. The fused lifespan prediction model, which integrates SVMD and the optimized LSTM, exhibited robustness, high predictive accuracy, and a degree of generalizability.
APA, Harvard, Vancouver, ISO, and other styles
35

Zhao, Nana, Wenming Lu, and Zhongyuan Zhang. "The Application of Time Series Models in the Prediction of Hand-Foot-Mouth Disease Incidence." International Journal of Biology and Life Sciences 6, no. 3 (July 30, 2024): 24–29. http://dx.doi.org/10.54097/jq7es934.

Full text
Abstract:
Hand-Foot-Mouth Disease (HFMD) is a contagious illness predominantly affecting infants and children under five years old, caused by human enteroviruses. Over the past five decades, HFMD has rapidly spread across the Asia-Pacific region, gradually evolving into a significant public health challenge for many countries within this area. Currently, HFMD has emerged as an increasingly severe public health issue in our country. Therefore, analyzing the influencing factors of HFMD and predicting its incidence trends are of paramount importance for the prevention of the disease. With the rapid advancement of artificial intelligence technology, predictive models employing deep learning techniques have demonstrated superior performance among various infectious disease prediction models. This paper aims to construct a predictive model using deep learning methods to further enhance the accuracy of HFMD incidence predictions. We compared the effectiveness of Long Short-Term Memory (LSTM) networks, Transformer, and Informer models in HFMD prediction. The research findings indicate that the Informer model, by utilizing self-attention mechanisms and convolutional neural networks, can more effectively address long-term dependencies in time series data, thereby showing better performance in HFMD prediction compared to the LSTM and Transformer models. This has led to improvements in prediction accuracy and generalization capability.
APA, Harvard, Vancouver, ISO, and other styles
36

Oñate, Angelo, Juan Pablo Sanhueza, Gleydis Dueña, Diego Wackerling, Sergio Sauceda, Christopher Salvo, Marian Valenzuela, et al. "Sigma Phase Stabilization by Nb Doping in a New High-Entropy Alloy in the FeCrMnNiCu System: A Study of Phase Prediction and Nanomechanical Response." Metals 14, no. 1 (January 8, 2024): 74. http://dx.doi.org/10.3390/met14010074.

Full text
Abstract:
The development of high-entropy alloys has been hampered by the challenge of effectively and verifiably predicting phases using predictive methods for functional design. This study validates remarkable phase prediction capability in complex multicomponent alloys by microstructurally predicting two novel high-entropy alloys in the FCC + BCC and FCC + BCC + IM systems using a novel analytical method based on valence electron concentration (VEC). The results are compared with machine learning, CALPHAD, and experimental data. The key findings highlight the high predictive accuracy of the analytical method and its strong correlation with more intricate prediction methods such as random forest machine learning and CALPHAD. Furthermore, the experimental results validate the predictions with a range of techniques, including SEM-BSE, EDS, elemental mapping, XRD, microhardness, and nanohardness measurements. This study reveals that the addition of Nb enhances the formation of the sigma (σ) intermetallic phase, resulting in increased alloy strength, as demonstrated by microhardness and nanohardness measurements. Lastly, the overlapping VEC ranges in high-entropy alloys are identified as potential indicators of phase transitions at elevated temperatures.
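The VEC referenced here is an atomic-fraction-weighted average of elemental valence electron counts; a minimal sketch follows. The FCC/BCC thresholds in the comment are the commonly cited rule of thumb, not the paper's full analytical method, and the doped composition is an illustrative example rather than the alloy studied.

```python
# Group-number valence electron counts for the elements in this system
VEC_TABLE = {"Fe": 8, "Cr": 6, "Mn": 7, "Ni": 10, "Cu": 11, "Nb": 5}

def vec(composition):
    """VEC = atomic-fraction-weighted average of elemental VEC values."""
    assert abs(sum(composition.values()) - 1.0) < 1e-9
    return sum(frac * VEC_TABLE[el] for el, frac in composition.items())

# Equiatomic FeCrMnNiCu
equiatomic = {el: 0.2 for el in ("Fe", "Cr", "Mn", "Ni", "Cu")}
print(f"FeCrMnNiCu VEC = {vec(equiatomic):.2f}")

# Doping with Nb (VEC = 5) lowers the average VEC
# Rule of thumb: VEC >= 8 favors FCC, VEC < 6.87 favors BCC, in between mixed
doped = {"Fe": 0.19, "Cr": 0.19, "Mn": 0.19, "Ni": 0.19, "Cu": 0.19, "Nb": 0.05}
print(f"(FeCrMnNiCu)95Nb5 VEC = {vec(doped):.2f}")
```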
APA, Harvard, Vancouver, ISO, and other styles
37

Ma, Junwei, Xiaoxu Niu, Huiming Tang, Yankun Wang, Tao Wen, and Junrong Zhang. "Displacement Prediction of a Complex Landslide in the Three Gorges Reservoir Area (China) Using a Hybrid Computational Intelligence Approach." Complexity 2020 (January 28, 2020): 1–15. http://dx.doi.org/10.1155/2020/2624547.

Full text
Abstract:
Displacement prediction of reservoir landslide remains inherently uncertain since a complete understanding of the complex nonlinear, dynamic landslide system is still lacking. An appropriate quantification of predictive uncertainties is a key underpinning of displacement prediction and mitigation of reservoir landslide. A density prediction, offering a full estimation of the probability density for future outputs, is promising for quantification of the uncertainty of landslide displacement. In the present study, a hybrid computational intelligence approach is proposed to build a density prediction model of landslide displacement and quantify the associated predictive uncertainties. The hybrid computational intelligence approach consists of two steps: first, the input variables are selected through copula analysis; second, kernel-based support vector machine quantile regression (KSVMQR) is employed to perform density prediction. The copula-KSVMQR approach is demonstrated through a complex landslide in the Three Gorges Reservoir Area (TGRA), China. The experimental study suggests that the copula-KSVMQR approach is capable of constructing density predictions, providing full probability density distributions of the prediction with strong performance. In addition, different types of predictions, including interval prediction and point prediction, can be derived from the obtained density predictions with excellent performance. The results show that the mean prediction interval widths of the proposed approach at ZG287 and ZG289 are 27.30 and 33.04, respectively, which are approximately 60 percent lower than that obtained using the traditional bootstrap-extreme learning machine-artificial neural network (Bootstrap-ELM-ANN). Moreover, the obtained point predictions show great consistency with the observations, with correlation coefficients of 0.9998.
Given the satisfactory performance, the presented copula-KSVMQR approach shows a great ability to predict landslide displacement.
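The density/interval prediction idea can be sketched with quantile regression. Gradient-boosted quantile models stand in here for the paper's kernel SVM quantile regression, and the displacement data is synthetic with deliberately heteroscedastic noise.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 800
# Synthetic stand-ins for displacement triggers (e.g. reservoir level, rainfall)
X = rng.uniform(0, 1, size=(n, 2))
displacement = 10 * X[:, 0] + 5 * X[:, 1] ** 2 \
               + rng.normal(scale=1 + 2 * X[:, 0], size=n)  # heteroscedastic

X_tr, X_te, y_tr, y_te = train_test_split(X, displacement, random_state=0)

# One model per quantile; together they trace out a predictive distribution
preds = {}
for q in (0.05, 0.5, 0.95):
    gbr = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
    preds[q] = gbr.fit(X_tr, y_tr).predict(X_te)

coverage = np.mean((y_te >= preds[0.05]) & (y_te <= preds[0.95]))
width = np.mean(preds[0.95] - preds[0.05])
print(f"90% interval coverage: {coverage:.2f}, mean width: {width:.2f}")
```

Fitting more quantiles traces out an ever finer predictive density, from which interval predictions (as above) and point predictions (the 0.5 quantile) both fall out, mirroring the derivations described in the abstract.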
APA, Harvard, Vancouver, ISO, and other styles
38

Guo, Shengnan, and Jianqiu Xu. "CPRQ: Cost Prediction for Range Queries in Moving Object Databases." ISPRS International Journal of Geo-Information 10, no. 7 (July 8, 2021): 468. http://dx.doi.org/10.3390/ijgi10070468.

Full text
Abstract:
Predicting query cost plays an important role in moving object databases. Accurate predictions help database administrators effectively schedule workloads and achieve optimal resource allocation strategies. There are some works focusing on query cost prediction, but most of them employ analytical methods to obtain an index-based cost prediction model. The accuracy can be seriously challenged as the workload of the database management system becomes more and more complex. Differing from the previous work, this paper proposes a method called CPRQ (Cost Prediction of Range Query) which is based on machine-learning techniques. The proposed method contains four learning models: the polynomial regression model, the decision tree regression model, the random forest regression model, and the KNN (k-Nearest Neighbor) regression model. Using R-squared and MSE (Mean Squared Error) as measurements, we perform an extensive experimental evaluation. The results demonstrate that CPRQ achieves high accuracy and the random forest regression model obtains the best predictive performance (R-squared is 0.9695 and MSE is 0.154).
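The four CPRQ learners and the R-squared/MSE evaluation can be sketched in scikit-learn; the range-query features and cost function below are invented stand-ins for the database workload.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(9)
n = 1000
# Synthetic range-query features: window width, window height, index depth
X = rng.uniform(0, 1, size=(n, 3))
cost = 50 * X[:, 0] * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)

models = {
    "polynomial": make_pipeline(PolynomialFeatures(2), LinearRegression()),
    "decision tree": DecisionTreeRegressor(random_state=0),
    "random forest": RandomForestRegressor(random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
}
results = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    results[name] = (r2_score(y_te, pred), mean_squared_error(y_te, pred))
    print(f"{name}: R^2={results[name][0]:.3f}, MSE={results[name][1]:.3f}")
```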
APA, Harvard, Vancouver, ISO, and other styles
39

Effraimidis, Grigoris. "MANAGEMENT OF ENDOCRINE DISEASE: Predictive scores in autoimmune thyroid disease: are they useful?" European Journal of Endocrinology 181, no. 3 (September 2019): R119–R131. http://dx.doi.org/10.1530/eje-19-0234.

Full text
Abstract:
Prediction models are of great assistance in predicting the development of a disease, detecting or screening undiagnosed patients, predicting the effectiveness of a treatment and supporting better decision-making. Recently, three predictive scores in the field of autoimmune thyroid disease (AITD) have been introduced: the Thyroid Hormones Event Amsterdam (THEA) score, a predictive score of the development of overt AITD; the Graves’ Events After Therapy (GREAT) score, a prediction score for the risk of recurrence after antithyroid drug withdrawal; and the Prediction Graves’ Orbitopathy (PREDIGO) score, a prediction score for the development of Graves’ orbitopathy in newly diagnosed patients with Graves’ hyperthyroidism. Their construction, their clinical applicability, the possible preventive measures which can be taken to diminish the risks and the potential future developments which can improve the accuracy of the predictive scores are discussed in this review.
APA, Harvard, Vancouver, ISO, and other styles
40

Ooka, Tadao, Hisashi Johno, Kazunori Nakamoto, Yoshioki Yoda, Hiroshi Yokomichi, and Zentaro Yamagata. "Random forest approach for determining risk prediction and predictive factors of type 2 diabetes: large-scale health check-up data in Japan." BMJ Nutrition, Prevention & Health 4, no. 1 (March 11, 2021): 140–48. http://dx.doi.org/10.1136/bmjnph-2020-000200.

Full text
Abstract:
Introduction: Early intervention in type 2 diabetes can prevent exacerbation of insulin resistance. More effective interventions can be implemented by early and precise prediction of the change in glycated haemoglobin A1c (HbA1c). Artificial intelligence (AI), which has been introduced into various medical fields, may be useful in predicting changes in HbA1c. However, the inability to explain the predictive factors has been a problem in the use of deep learning, the leading AI technology. Therefore, we applied a highly interpretable AI method, random forest (RF), to large-scale health check-up data and examined whether it offered an advantage over a conventional prediction model. Research design and methods: This study included a cumulative total of 42 908 subjects not receiving treatment for diabetes with an HbA1c <6.5%. The objective variable was the change in HbA1c in the next year. Each prediction model was created with 51 health-check items and a subset of their change values from the previous year. We used two analytical methods to compare the predictive powers: RF as a new model and multivariate logistic regression (MLR) as a conventional model. We also created models excluding the change values to determine whether they positively affected the predictions. In addition, variable importance was calculated in the RF analysis, and standard regression coefficients were calculated in the MLR analysis to identify the predictors. Results: The RF model showed higher predictive power for the change in HbA1c than MLR in all models. The RF model including change values showed the highest predictive power. In the RF prediction model, HbA1c, fasting blood glucose, body weight, alkaline phosphatase and platelet count were factors with high predictive power. Conclusions: Correct use of the RF method may enable highly accurate risk prediction for the change in HbA1c and may allow the identification of new diabetes risk predictors.
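The study's two notions of predictor relevance, RF variable importance and MLR standardized coefficients, can be sketched side by side. Everything below is a synthetic stand-in for the health check-up data, and the binarized outcome is an assumption made for the illustration.

```python
# Hedged sketch (synthetic data, not the check-up dataset): random forest vs.
# multivariate logistic regression, each exposing its own predictor ranking.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))                     # stand-in health-check items
# outcome: "HbA1c worsened next year", driven mainly by features 0 and 1
logit = 1.5 * X[:, 0] + 0.8 * X[:, 1]
y = (logit + rng.logistic(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
mlr = LogisticRegression().fit(StandardScaler().fit_transform(X_tr), y_tr)

rf_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
importances = rf.feature_importances_           # RF's variable importance
std_coefs = mlr.coef_[0]                        # standardized regression coefficients
```

Comparing `importances` with `std_coefs` mirrors how the study identified HbA1c, fasting glucose, and the other high-power predictors.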
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Haobing, Yanmin Zhu, Tianzi Zang, Yanan Xu, Jiadi Yu, and Feilong Tang. "Jointly Modeling Heterogeneous Student Behaviors and Interactions among Multiple Prediction Tasks." ACM Transactions on Knowledge Discovery from Data 16, no. 1 (July 3, 2021): 1–24. http://dx.doi.org/10.1145/3458023.

Full text
Abstract:
Prediction tasks about students have practical significance for both students and colleges. Making multiple predictions about students is an important part of a smart campus. For instance, predicting whether a student will fail to graduate can alert the student affairs office to take preventive measures to help the student improve his/her academic performance. With the development of information technology in colleges, we can continuously collect digital footprints that encode heterogeneous behaviors. In this article, we focus on modeling heterogeneous behaviors and making multiple predictions together, since some prediction tasks are related and learning the model for a specific task may suffer from data sparsity. To this end, we propose a variant of Long Short-Term Memory (LSTM) and a soft-attention mechanism. The proposed LSTM is able to learn student profile-aware representations from heterogeneous behavior sequences. The proposed soft-attention mechanism can dynamically learn a different importance degree for each day for every student. In this way, heterogeneous behaviors can be well modeled. In order to model interactions among multiple prediction tasks, we propose a unit based on a co-attention mechanism. With the help of the stacked units, we can explicitly control the knowledge transfer among multiple tasks. We design three motivating behavior prediction tasks based on a real-world dataset collected from a college. Qualitative and quantitative experiments on the three prediction tasks have demonstrated the effectiveness of our model.
APA, Harvard, Vancouver, ISO, and other styles
42

Tang, Youmin, Richard Kleeman, and Andrew M. Moore. "Comparison of Information-Based Measures of Forecast Uncertainty in Ensemble ENSO Prediction." Journal of Climate 21, no. 2 (January 15, 2008): 230–47. http://dx.doi.org/10.1175/2007jcli1719.1.

Full text
Abstract:
In this study, ensemble predictions of the El Niño–Southern Oscillation (ENSO) were conducted for the period 1981–98 using two hybrid coupled models. Several recently proposed information-based measures of predictability, including relative entropy (R), predictive information (PI), predictive power (PP), and mutual information (MI), were explored in terms of their ability to estimate a priori the predictive skill of the ENSO ensemble predictions. The emphasis was placed on examining the relationship between the measures of predictability, which do not use observations, and the model prediction skills of correlation and root-mean-square error (RMSE), which make use of observations. The relationship identified here offers a practical means of estimating the potential predictability and the confidence level of an individual prediction. It was found that MI is a good indicator of overall skill. When it is large, the prediction system has high prediction skill, whereas small MI often corresponds to low prediction skill. This suggests that MI is a good indicator of the actual skill of the models. R and PI have a nearly identical average (over all predictions), as should be the case in theory. Comparing the different information-based measures reveals that R is a better predictor of prediction skill than PI and PP, especially when correlation-based metrics are used to evaluate model skill. A “triangular relationship” emerges between R and the model skill, namely, that when R is large, the prediction is likely to be reliable, whereas when R is small the prediction skill is quite variable. A small R is often accompanied by relatively weak ENSO variability. The possible reasons why R is superior to PI and PP as a measure of ENSO predictability are also discussed.
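For Gaussian forecast and climatology distributions, two of the measures named above have simple closed forms, which makes the contrast between them concrete. This is a textbook Gaussian illustration, not the paper's coupled-model computation.

```python
# Hedged illustration: for a normal forecast N(mu_p, s_p^2) against a normal
# climatology N(0, s_c^2), relative entropy R and predictive information PI
# reduce to closed-form expressions (in nats).
import numpy as np

def relative_entropy(mu_p, s_p, s_c):
    """KL divergence of the forecast from the climatology."""
    return 0.5 * (np.log(s_c**2 / s_p**2) + (s_p**2 + mu_p**2) / s_c**2 - 1.0)

def predictive_information(s_p, s_c):
    """Entropy of the climatology minus entropy of the forecast."""
    return 0.5 * np.log(s_c**2 / s_p**2)

# a sharp, shifted forecast carries more information than a broad, centered one;
# note R is sensitive to the ensemble-mean shift mu_p while PI is not
r_sharp = relative_entropy(mu_p=1.0, s_p=0.5, s_c=1.0)
r_broad = relative_entropy(mu_p=0.0, s_p=0.9, s_c=1.0)
```

The mean-shift term in R is one intuition for why it can outperform PI and PP when skill is judged by correlation-based metrics.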
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Hongmei, Tatiana Ilyina, Tammas Loughran, Aaron Spring, and Julia Pongratz. "Reconstructions and predictions of the global carbon budget with an emission-driven Earth system model." Earth System Dynamics 14, no. 1 (February 1, 2023): 101–19. http://dx.doi.org/10.5194/esd-14-101-2023.

Full text
Abstract:
The global carbon budget (GCB) – including the fluxes of CO2 between the atmosphere, land, and ocean and the atmospheric growth rate – shows large interannual to decadal variations. Reconstructing and predicting the variable GCB is essential for tracing the fate of carbon and understanding the global carbon cycle in a changing climate. We use a novel approach to reconstruct and predict the variations in the GCB over the next few years based on our decadal prediction system enhanced with an interactive carbon cycle. By assimilating physical atmospheric and oceanic data products into the Max Planck Institute Earth System Model (MPI-ESM), we are able to reproduce the annual mean historical GCB variations from 1970–2018, with high correlations of 0.75, 0.75, and 0.97 for atmospheric CO2 growth, air–land CO2 fluxes, and air–sea CO2 fluxes, respectively, relative to the assessments from the Global Carbon Project (GCP). Such a fully coupled decadal prediction system, with an interactive carbon cycle, enables the representation of the GCB within a closed Earth system and therefore provides an additional line of evidence for the ongoing assessments of the anthropogenic GCB. Retrospective predictions initialized from the simulation in which physical atmospheric and oceanic data products are assimilated show high confidence in predicting the following year's GCB. The predictive skill is up to 5 years for the air–sea CO2 fluxes and 2 years for the air–land CO2 fluxes and the atmospheric carbon growth rate. This is the first study investigating GCB variations and predictions with an emission-driven prediction system. Such a system also enables the reconstruction of the past and prediction of the near-future evolution of atmospheric CO2 concentration changes. The Earth system predictions in this study provide valuable inputs for understanding the global carbon cycle and informing climate-relevant policy.
APA, Harvard, Vancouver, ISO, and other styles
44

Csillag, Daniel, Lucas Monteiro Paes, Thiago Ramos, João Vitor Romano, Rodrigo Schuller, Roberto B. Seixas, Roberto I. Oliveira, and Paulo Orenstein. "AmnioML: Amniotic Fluid Segmentation and Volume Prediction with Uncertainty Quantification." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15494–502. http://dx.doi.org/10.1609/aaai.v37i13.26837.

Full text
Abstract:
Accurately predicting the volume of amniotic fluid is fundamental to assessing pregnancy risks, though the task usually requires many hours of laborious work by medical experts. In this paper, we present AmnioML, a machine learning solution that leverages deep learning and conformal prediction to output fast and accurate volume estimates and segmentation masks from fetal MRIs with a Dice coefficient over 0.9. We also make available a novel, curated dataset of 853 fetal MRI exams and benchmark the performance of many recent deep learning architectures. In addition, we introduce a conformal prediction tool that yields narrow predictive intervals with theoretically guaranteed coverage, thus aiding doctors in detecting pregnancy risks and saving lives. A successful case study of AmnioML deployed in a medical setting is also reported. Real-world clinical benefits include up to a 20x segmentation time reduction, with most segmentations deemed by doctors as not needing any further manual refinement. Furthermore, AmnioML's volume predictions were found to be highly accurate in practice, with a mean absolute error below 56 mL and tight predictive intervals, showcasing its impact in reducing pregnancy complications.
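The coverage guarantee behind such predictive intervals comes from split conformal prediction, which can be sketched in a few lines. The linear model and synthetic data below are assumptions for the sketch; the paper pairs the idea with a deep segmentation network.

```python
# Illustrative split-conformal sketch (synthetic data, not AmnioML): calibrate a
# residual quantile on held-out data to get intervals with guaranteed marginal
# coverage, regardless of the underlying regressor.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(size=600)

fit, cal = slice(0, 300), slice(300, 600)       # fitting vs. calibration split
model = LinearRegression().fit(X[fit], y[fit])

# conformal quantile of absolute calibration residuals at level 1 - alpha
alpha = 0.1
resid = np.abs(y[cal] - model.predict(X[cal]))
n_cal = resid.size
q = np.quantile(resid, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

x_new = rng.normal(size=(1, 4))
center = model.predict(x_new)[0]
interval = (center - q, center + q)             # covers with prob >= 1 - alpha
```

The guarantee is distribution-free: it holds for any base predictor, which is what lets it wrap a volume-estimating neural network.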
APA, Harvard, Vancouver, ISO, and other styles
45

Oubelaid, Adel, Abdelhameed Ibrahim, and Ahmed M. Elshewey. "Bridging the Gap: An Explainable Methodology for Customer Churn Prediction in Supply Chain Management." Journal of Artificial Intelligence and Metaheuristics 4, no. 1 (2023): 16–23. http://dx.doi.org/10.54216/jaim.040102.

Full text
Abstract:
Customer churn prediction is a critical task for businesses aiming to retain their valuable customers. Nevertheless, the lack of transparency and interpretability in machine learning models hinders their implementation in real-world applications. In this paper, we introduce a novel methodology for customer churn prediction in supply chain management that addresses the need for explainability. Our approach takes advantage of XGBoost as the underlying predictive model. We recognize the importance of not only accurately predicting churn but also providing actionable insights into the key factors driving customer attrition. To achieve this, we employ Local Interpretable Model-agnostic Explanations (LIME), a state-of-the-art technique for generating intuitive and understandable explanations. By applying LIME to the predictions made by XGBoost, we enable decision-makers to gain insight into the model's decision process and the reasons behind churn predictions. Through a comprehensive case study on customer churn data, we demonstrate the success of our explainable ML approach. Our methodology not only achieves high prediction accuracy but also offers interpretable explanations that highlight the underlying drivers of customer churn. These insights provide valuable guidance for decision-making processes within supply chain management.
APA, Harvard, Vancouver, ISO, and other styles
46

Grover, Arman, Debajyoti Roy Burman, Priyansh Kapaida, and Neelamani Samal. "Stock Market Price Prediction." International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (December 31, 2023): 591–95. http://dx.doi.org/10.22214/ijraset.2023.57184.

Full text
Abstract:
Investing in the stock market can be a convoluted and sophisticated way of conducting business. Stock prediction is an extremely difficult and complex endeavor, since stock values can fluctuate abruptly owing to a variety of factors, making the stock market incredibly unpredictable. This paper explores predictive models for the stock market, aiming to forecast stock prices using machine learning algorithms. By analyzing historical market data and employing various predictive techniques, the study aims to enhance accuracy in predicting future stock movements. The paper contributes insights into the potential of LSTM models for enhancing stock market prediction accuracy and reliability.
APA, Harvard, Vancouver, ISO, and other styles
47

Varma, Gautam Kumar. "Managing Risk Using Prediction Markets." Journal of Prediction Markets 7, no. 3 (January 8, 2014): 45–60. http://dx.doi.org/10.5750/jpm.v7i3.804.

Full text
Abstract:
Prediction markets have emerged fairly recently as a promising forecasting mechanism for efficiently handling the dynamic aggregation of information dispersed among various agents. The interest that this mechanism attracts seems to be increasing at a steady rate, in terms of both business interest and academic work. Applications of prediction markets span political prediction, sports prediction, and governance, to name a few. This paper makes a bold attempt to explore the use of prediction markets for an effective risk management process that derives certainty in projects. The key components of prediction markets, i.e., the specification of the contracts traded in a prediction market, the trading mechanism and the incentives provided to ensure information revelation, are then presented. The innovative concept of fusing prediction markets with risk management is then outlined. How the probability of risk occurrence can be predicted with a greater degree of certainty using prediction markets and the aggregated wisdom of project stakeholders is described. Further areas to be explored, such as a framework for deploying prediction markets in a project context and guidelines for use, are then presented. We close with conclusions and areas for further investigation. Case studies complement the paper and the proposed model.
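One concrete trading mechanism commonly used in prediction markets is Hanson's logarithmic market scoring rule (LMSR); the abstract discusses mechanisms generally, so this is an illustrative choice, not necessarily the paper's design. The liquidity parameter `b` and the two-outcome "risk occurs / does not occur" contract are assumptions for the sketch.

```python
# Hedged LMSR sketch: an automated market maker whose instantaneous prices can
# be read as the crowd's probability estimate of each outcome.
import math

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices, interpretable as event probabilities."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def trade_cost(q, outcome, shares, b=100.0):
    """Amount a trader pays to buy `shares` of `outcome`."""
    q_new = list(q)
    q_new[outcome] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# buying "risk occurs" shares pushes its price (the probability estimate) up
q0 = [0.0, 0.0]
cost = trade_cost(q0, outcome=0, shares=20)
```

Aggregating stakeholder trades through such a market maker is what turns dispersed project knowledge into a single probability of risk occurrence.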
APA, Harvard, Vancouver, ISO, and other styles
48

Okasha, Hassan M., Chuanmei Wang, and Jianhua Wang. "E-Bayesian Prediction for the Burr XII Model Based on Type-II Censored Data with Two Samples." Advances in Mathematical Physics 2020 (February 1, 2020): 1–13. http://dx.doi.org/10.1155/2020/3510673.

Full text
Abstract:
Type-II censored data is an important data scheme in lifetime studies. The purpose of this paper is to obtain E-Bayesian predictive functions based on observed order statistics with two samples from the two-parameter Burr XII model. Predictive functions are developed to derive both point predictions and interval predictions based on type-II censored data, where median Bayesian estimation is a novel formulation for obtaining Bayesian sample predictions, as the integral for calculating the Bayesian prediction directly does not exist. All predictions are obtained with symmetric and asymmetric loss functions. Two-sample techniques are considered, and a gamma conjugate prior density is assumed. Illustrative examples are provided for all the scenarios considered in this article. Both illustrative examples with real data and Monte Carlo simulations are carried out to show that the new method is acceptable. The results show that Bayesian and E-Bayesian predictions with the two kinds of loss functions differ little for point prediction, and that E-Bayesian confidence intervals (CIs) with the two kinds of loss functions are almost identical and more accurate for interval prediction.
APA, Harvard, Vancouver, ISO, and other styles
49

Timonidis, Nestor, Rembrandt Bakker, and Paul Tiesinga. "Prediction of a Cell-Class-Specific Mouse Mesoconnectome Using Gene Expression Data." Neuroinformatics 18, no. 4 (May 24, 2020): 611–26. http://dx.doi.org/10.1007/s12021-020-09471-x.

Full text
Abstract:
Reconstructing brain connectivity at sufficient resolution for computational models designed to study the biophysical mechanisms underlying cognitive processes is extremely challenging. For such a purpose, a mesoconnectome that includes laminar and cell-class specificity would be a major step forward. We analyzed the ability of gene expression patterns to predict cell-class- and layer-specific projection patterns and assessed the functional annotations of the most predictive groups of genes. To achieve our goal, we used publicly available volumetric gene expression and connectivity data, and we trained computational models to learn and predict cell-class- and layer-specific axonal projections from gene expression data. Predictions were done in two ways, namely predicting projection strengths using the expression of individual genes and using the co-expression of genes organized in spatial modules, as well as predicting binary forms of projection. For predicting the strength of projections, we found that ridge (L2-regularized) regression had the highest cross-validated accuracy, with a median r2 score of 0.54, which corresponded for binarized predictions to a median area under the ROC curve of 0.89. Next, we identified 200 spatial gene modules using a dictionary learning and sparse coding approach. We found that these modules yielded predictions of comparable accuracy, with a median r2 score of 0.51. Finally, a gene ontology enrichment analysis of the most predictive gene groups resulted in significant annotations related to postsynaptic function. Taken together, we have demonstrated a prediction workflow that can be used to perform multimodal data integration to improve the accuracy of the predicted mesoconnectome and support other neuroscience use cases.
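The two evaluations described above, cross-validated r2 on projection strength and ROC AUC after binarizing the target, can be sketched with ridge regression on synthetic stand-ins. The region counts, gene counts, and median-based binarization below are assumptions for the illustration, not the atlas data or the paper's thresholds.

```python
# Minimal sketch (synthetic stand-ins, not the volumetric atlas data): ridge
# regression predicting projection strength from gene expression, scored by
# cross-validated r2 and, after binarizing the target, ROC AUC.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_regions, n_genes = 300, 50
expression = rng.normal(size=(n_regions, n_genes))   # gene expression per region
weights = rng.normal(size=n_genes)
strength = expression @ weights + 0.5 * rng.normal(size=n_regions)  # projection strength

pred = cross_val_predict(Ridge(alpha=1.0), expression, strength, cv=5)
ss_res = np.sum((strength - pred) ** 2)
ss_tot = np.sum((strength - strength.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# binarized evaluation: does the region project at all (strength above median)?
binary = (strength > np.median(strength)).astype(int)
auc = roc_auc_score(binary, pred)
```

Replacing the raw expression matrix with module activations from dictionary learning is the paper's second variant of the same pipeline.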
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Kui, Gang Hu, Zhonghua Wu, Vladimir N. Uversky, and Lukasz Kurgan. "Assessment of Disordered Linker Predictions in the CAID2 Experiment." Biomolecules 14, no. 3 (February 28, 2024): 287. http://dx.doi.org/10.3390/biom14030287.

Full text
Abstract:
Disordered linkers (DLs) are intrinsically disordered regions that facilitate movement between adjacent functional regions/domains, contributing to many key cellular functions. The recently completed second Critical Assessment of protein Intrinsic Disorder prediction (CAID2) experiment evaluated DL predictions by considering a rather narrow scenario: predicting 40 proteins that are already known to have DLs. We expand this evaluation by using a much larger set of nearly 350 test proteins from CAID2 and by investigating three distinct scenarios: (1) prediction of residues in DLs vs. in non-DL regions (the typical use of DL predictors); (2) prediction of residues in DLs vs. other disordered residues (to evaluate whether predictors can differentiate residues in DLs from other types of intrinsically disordered residues); and (3) prediction of proteins harboring DLs. We find that several methods provide relatively accurate predictions of DLs in the first scenario. However, only one method, APOD, accurately identifies DLs among other types of disordered residues (scenario 2) and predicts proteins harboring DLs (scenario 3). We also find that APOD’s predictive performance is modest, motivating further research into the development of new and more accurate DL predictors. We note that these efforts will benefit from a growing amount of training data and the availability of sophisticated deep network models, and emphasize that future methods should provide accurate results across the three scenarios.
APA, Harvard, Vancouver, ISO, and other styles
