Journal articles on the topic "XGBOOST MODEL"

To see other types of publications on this topic, follow the link: XGBOOST MODEL.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles.

Consult the top 50 journal articles for your research on the topic "XGBOOST MODEL".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, when these are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Yang, Hao, Jiaxi Li, Siru Liu, Xiaoling Yang, and Jialin Liu. "Predicting Risk of Hypoglycemia in Patients With Type 2 Diabetes by Electronic Health Record–Based Machine Learning: Development and Validation." JMIR Medical Informatics 10, no. 6 (June 16, 2022): e36958. http://dx.doi.org/10.2196/36958.

Abstract:
Background Hypoglycemia is a common adverse event in the treatment of diabetes. To efficiently cope with hypoglycemia, effective hypoglycemia prediction models need to be developed. Objective The aim of this study was to develop and validate machine learning models to predict the risk of hypoglycemia in adult patients with type 2 diabetes. Methods We used the electronic health records of all adult patients with type 2 diabetes admitted to West China Hospital between November 2019 and December 2021. The prediction model was developed based on XGBoost and natural language processing. F1 score, area under the receiver operating characteristic curve (AUC), and decision curve analysis (DCA) were used as the main criteria to evaluate model performance. Results We included 29,843 patients with type 2 diabetes, of whom 2804 patients (9.4%) developed hypoglycemia. In this study, the embedding machine learning model (XGBoost3) showed the best performance among all the models. The AUC and the accuracy of XGBoost are 0.82 and 0.93, respectively. XGBoost3 was also superior to the other models in DCA. Conclusions The Paragraph Vector–Distributed Memory model can effectively extract features and improve the performance of the XGBoost model, which can then effectively predict hypoglycemia in patients with type 2 diabetes.
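As an editorial illustration of the pipeline this abstract describes at a high level (document embeddings feeding a gradient-boosted classifier), here is a minimal Python sketch using gensim's PV-DM (Doc2Vec) and xgboost on synthetic tokenized notes. It is not the authors' implementation: the data, vector size, and hyperparameters are placeholders.

```python
# Sketch only: PV-DM document embeddings + XGBoost classifier on synthetic notes.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Hypothetical tokenized clinical notes and binary hypoglycemia labels.
notes = [["metformin", "insulin", "dizziness"], ["diet", "exercise", "stable"]] * 200
labels = np.array([1, 0] * 200)

# Train a PV-DM (dm=1) Doc2Vec model and embed each note into a dense vector.
docs = [TaggedDocument(words=w, tags=[i]) for i, w in enumerate(notes)]
d2v = Doc2Vec(docs, vector_size=64, window=5, min_count=1, dm=1, epochs=20)
X = np.vstack([d2v.infer_vector(w) for w in notes])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels, random_state=0)

# Gradient-boosted tree classifier on the document embeddings.
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="auc")
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```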
2

Oukhouya, Hassan, Hamza Kadiri, Khalid El Himdi, and Raby Guerbaz. "Forecasting International Stock Market Trends: XGBoost, LSTM, LSTM-XGBoost, and Backtesting XGBoost Models." Statistics, Optimization & Information Computing 12, no. 1 (November 3, 2023): 200–209. http://dx.doi.org/10.19139/soic-2310-5070-1822.

Abstract:
Forecasting time series is crucial for financial research and decision-making in business. The nonlinearity of stock market prices profoundly impacts global economic and financial sectors. This study focuses on modeling and forecasting the daily prices of key stock indices - MASI, CAC 40, DAX, FTSE 250, NASDAQ, and HKEX, representing the Moroccan, French, German, British, US, and Hong Kong markets, respectively. We compare the performance of machine learning models, including Long Short-Term Memory (LSTM), eXtreme Gradient Boosting (XGBoost), and the hybrid LSTM-XGBoost, and utilize the skforecast library for backtesting. Results show that the hybrid LSTM-XGBoost model, optimized using Grid Search (GS), outperforms other models, achieving high accuracy in forecasting daily prices. This contribution offers financial analysts and investors valuable insights, facilitating informed decision-making through precise forecasts of international stock prices.
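The abstract above relies on backtesting to compare forecasters. The sketch below only illustrates the general idea with a hand-rolled expanding-window (walk-forward) backtest of an XGBoost regressor on lagged synthetic prices; the authors use the skforecast library and an LSTM-XGBoost hybrid, neither of which is reproduced here.

```python
# Sketch: expanding-window backtest of an XGBoost forecaster on lag features.
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
prices = pd.Series(np.cumsum(rng.normal(0, 1, 600)) + 100)  # synthetic daily index

def make_lags(series, n_lags=5):
    df = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(1, n_lags + 1)})
    df["y"] = series
    return df.dropna()

data = make_lags(prices)
X, y = data.drop(columns="y").values, data["y"].values

preds, actuals = [], []
start = 500  # size of the initial training window
for t in range(start, len(y)):
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X[:t], y[:t])              # refit on all observations up to t-1
    preds.append(model.predict(X[t:t + 1])[0])
    actuals.append(y[t])

print("Backtest MAE:", mean_absolute_error(actuals, preds))
```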
3

Gu, Kai, Jianqi Wang, Hong Qian, and Xiaoyan Su. "Study on Intelligent Diagnosis of Rotor Fault Causes with the PSO-XGBoost Algorithm." Mathematical Problems in Engineering 2021 (April 26, 2021): 1–17. http://dx.doi.org/10.1155/2021/9963146.

Abstract:
On the basis of fault category detection, the diagnosis of rotor fault causes is proposed, which contributes greatly to the field of intelligent operation and maintenance. To improve the diagnostic accuracy and practical efficiency, a hybrid model based on the particle swarm optimization-extreme gradient boosting algorithm, namely PSO-XGBoost, is designed. XGBoost is used as a classifier to diagnose rotor fault causes, having good performance due to the second-order Taylor expansion and the explicit regularization term. PSO is used to automatically optimize the adjustment of XGBoost's parameters, which overcomes the shortcomings of tuning the XGBoost model's parameters empirically or by trial and error. The hybrid model combines the advantages of the two algorithms and can diagnose nine rotor fault causes accurately. Following diagnostic results, maintenance measures referring to the corresponding knowledge base are provided intelligently. Finally, the proposed PSO-XGBoost model is compared with five state-of-the-art intelligent classification methods. The experimental results demonstrate that the proposed method has higher diagnostic accuracy and practical efficiency in diagnosing rotor fault causes.
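A minimal sketch of PSO-driven hyperparameter tuning for XGBoost, in the spirit of the PSO-XGBoost model above. The swarm update rule, search ranges, and the bundled wine dataset are illustrative assumptions, not the authors' setup.

```python
# Sketch: a tiny particle swarm searching three XGBoost hyperparameters,
# scored by cross-validated accuracy.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = load_wine(return_X_y=True)
rng = np.random.default_rng(42)

# Search space: max_depth in [2, 10], learning_rate in [0.01, 0.3], n_estimators in [50, 400].
low, high = np.array([2, 0.01, 50]), np.array([10, 0.3, 400])

def fitness(p):
    clf = XGBClassifier(max_depth=int(round(p[0])), learning_rate=float(p[1]),
                        n_estimators=int(round(p[2])))
    return cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

n_particles, n_iter = 8, 10
pos = rng.uniform(low, high, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("Best (max_depth, learning_rate, n_estimators):", gbest, "CV accuracy:", pbest_val.max())
```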
4

Liu, Jialin, Jinfa Wu, Siru Liu, Mengdie Li, Kunchang Hu, and Ke Li. "Predicting mortality of patients with acute kidney injury in the ICU using XGBoost model." PLOS ONE 16, no. 2 (February 4, 2021): e0246306. http://dx.doi.org/10.1371/journal.pone.0246306.

Abstract:
Purpose The goal of this study is to construct a mortality prediction model using the XGBoost (eXtreme Gradient Boosting) decision tree model for AKI (acute kidney injury) patients in the ICU (intensive care unit), and to compare its performance with that of three other machine learning models. Methods We used the eICU Collaborative Research Database (eICU-CRD) for model development and performance comparison. The prediction performance of the XGBoost model was compared with that of three other machine learning models: LR (logistic regression), SVM (support vector machines), and RF (random forest). In the model comparison, the AUROC (area under the receiver operating characteristic curve), accuracy, precision, recall, and F1 score were used to evaluate the predictive performance of each model. Results A total of 7548 AKI patients were analyzed in this study. The overall in-hospital mortality of AKI patients was 16.35%. The best performing algorithm in this study was XGBoost, with the highest AUROC (0.796, p < 0.01), F1 score (0.922, p < 0.01), and accuracy (0.860). The precision (0.860) and recall (0.994) of the XGBoost model rank second among the four models. Conclusion The XGBoost model had clear performance advantages over the other machine learning models. This will be helpful for risk identification and early intervention for AKI patients at risk of death.
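The comparison protocol in this abstract (XGBoost versus LR, SVM, and RF, scored by AUROC, accuracy, precision, recall, and F1) can be sketched as follows; a bundled scikit-learn dataset stands in for the eICU-CRD cohort, which cannot be redistributed here.

```python
# Sketch: compare four classifiers with the metrics named in the abstract.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=1),
    "XGBoost": XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = model.predict(X_te)
    print(f"{name}: AUROC={roc_auc_score(y_te, proba):.3f} "
          f"acc={accuracy_score(y_te, pred):.3f} precision={precision_score(y_te, pred):.3f} "
          f"recall={recall_score(y_te, pred):.3f} F1={f1_score(y_te, pred):.3f}")
```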
5

Ji, Shouwen, Xiaojing Wang, Wenpeng Zhao, and Dong Guo. "An Application of a Three-Stage XGBoost-Based Model to Sales Forecasting of a Cross-Border E-Commerce Enterprise." Mathematical Problems in Engineering 2019 (September 16, 2019): 1–15. http://dx.doi.org/10.1155/2019/8503252.

Abstract:
Sales forecasting is ever more vital for supply chain management in e-commerce, with a huge amount of transaction data generated every minute. In order to enhance the logistics service experience of customers and optimize inventory management, e-commerce enterprises focus more on improving the accuracy of sales prediction with machine learning algorithms. In this study, a C-A-XGBoost forecasting model is proposed, based on the XGBoost model, which takes the sales features of commodities and the tendency of the data series into account. A C-XGBoost model is first established to forecast sales for each cluster produced by a two-step clustering algorithm, incorporating sales features into the C-XGBoost model as influencing factors of forecasting. Secondly, an A-XGBoost model is used to forecast the tendency, with the ARIMA model for the linear part and the XGBoost model for the nonlinear part. The final results are obtained by assigning weights to the forecasting results of the C-XGBoost and A-XGBoost models. By comparison with the ARIMA, XGBoost, C-XGBoost, and A-XGBoost models using data from the Jollychic cross-border e-commerce platform, the C-A-XGBoost model is shown to outperform the other four models.
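A rough sketch of the A-XGBoost idea mentioned above: ARIMA captures the linear part of a series and XGBoost models the nonlinear residuals. Synthetic data, a fixed ARIMA order, and a simple recursive residual forecast are assumed; the clustering (C-XGBoost) stage and the weighting scheme are not reproduced.

```python
# Sketch: ARIMA for the linear part, XGBoost on lagged ARIMA residuals for the
# nonlinear part, and the two forecasts summed.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
t = np.arange(400)
sales = 50 + 0.1 * t + 10 * np.sin(t / 7) + rng.normal(0, 2, t.size)
series = pd.Series(sales)
train, test = series.iloc[:350], series.iloc[350:]

# 1) Linear part: ARIMA forecast.
arima = ARIMA(train, order=(2, 1, 2)).fit()
linear_fc = arima.forecast(steps=len(test))

# 2) Nonlinear part: XGBoost on lagged residuals of the ARIMA fit.
resid = arima.resid
n_lags = 7
X = np.column_stack([resid.shift(i) for i in range(1, n_lags + 1)])[n_lags:]
y = resid.values[n_lags:]
xgb = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05).fit(X, y)

# Recursive one-step forecasts of the residual series.
history = list(resid.values[-n_lags:])
resid_fc = []
for _ in range(len(test)):
    x = np.array(history[-n_lags:][::-1]).reshape(1, -1)  # [lag1, lag2, ...]
    r = float(xgb.predict(x)[0])
    resid_fc.append(r)
    history.append(r)

hybrid_fc = linear_fc.values + np.array(resid_fc)
print("Hybrid MAE:", np.mean(np.abs(hybrid_fc - test.values)))
```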
6

Zhu, Yiming. "Stock Price Prediction based on LSTM and XGBoost Combination Model." Transactions on Computer Science and Intelligent Systems Research 1 (October 12, 2023): 94–109. http://dx.doi.org/10.62051/z6dere47.

Abstract:
In recent years, many machine learning and deep learning algorithms have been applied to stock prediction, providing a reference basis for stock trading, and LSTM neural network and XGBoost algorithm are two typical representatives, each with advantages and disadvantages in prediction. In view of this, we propose a combination model based on LSTM and XGBoost, which combines the advantages of LSTM in processing time series data and the ability of XGBoost to evaluate the importance of features. The combination model first selects feature variables with high importance through XGBoost, performs data dimensionality reduction, and then uses LSTM to make predictions. In order to verify the feasibility of the combination model, we built XGBoost, LSTM and LSTM-XGBoost models, and carried out experiments on three data sets of China Eastern Airlines, China Merchants Bank and Kweichow Moutai respectively. Finally, we concluded that the proposed LSTM-XGBoost model has good feasibility and universality in stock price prediction by comparing the accuracy of the predicted images and their performance in RMSE, RMAE, and MAPE indicators.
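The combination described above (XGBoost for feature-importance screening, LSTM for prediction) might be sketched as follows. The data, window length, number of selected features, and network size are placeholder assumptions, not the authors' configuration.

```python
# Sketch: rank features with XGBoost, keep the top ones, train an LSTM on
# short windows of the reduced feature set.
import numpy as np
import tensorflow as tf
from xgboost import XGBRegressor

rng = np.random.default_rng(7)
n, n_features, window = 500, 12, 10
X_raw = rng.normal(size=(n, n_features))          # e.g. technical indicators
price = X_raw[:, 0] * 2 + X_raw[:, 3] - X_raw[:, 5] + rng.normal(0, 0.1, n)

# Step 1: rank features with XGBoost and keep the most important ones.
ranker = XGBRegressor(n_estimators=200, max_depth=3).fit(X_raw, price)
top = np.argsort(ranker.feature_importances_)[::-1][:5]
X_sel = X_raw[:, top]

# Step 2: build sliding windows and train the LSTM.
Xw = np.stack([X_sel[i:i + window] for i in range(n - window)])
yw = price[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, len(top))),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(Xw, yw, epochs=5, batch_size=32, verbose=0)
print("Selected feature indices:", top)
```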
7

Xiong, Shuai, Zhixiang Liu, Chendi Min, Ying Shi, Shuangxia Zhang, and Weijun Liu. "Compressive Strength Prediction of Cemented Backfill Containing Phosphate Tailings Using Extreme Gradient Boosting Optimized by Whale Optimization Algorithm." Materials 16, no. 1 (December 28, 2022): 308. http://dx.doi.org/10.3390/ma16010308.

Abstract:
Unconfined compressive strength (UCS) is the most significant mechanical index for cemented backfill, and it is mainly determined by traditional mechanical tests. This study optimized the extreme gradient boosting (XGBoost) model by utilizing the whale optimization algorithm (WOA) to construct a hybrid model for the UCS prediction of cemented backfill. The PT proportion, the OPC proportion, the FA proportion, the solid concentration, and the curing age were selected as input variables, and the UCS of the cemented PT backfill was selected as the output variable. The original XGBoost model, the XGBoost model optimized by particle swarm optimization (PSO-XGBoost), and the decision tree (DT) model were also constructed for comparison with the WOA-XGBoost model. The results showed that the values of the root mean square error (RMSE), coefficient of determination (R2), and mean absolute error (MAE) obtained from the WOA-XGBoost model, XGBoost model, PSO-XGBoost model, and DT model were equal to (0.241, 0.967, 0.184), (0.426, 0.917, 0.336), (0.316, 0.943, 0.258), and (0.464, 0.852, 0.357), respectively. The results show that the proposed WOA-XGBoost has better prediction accuracy than the other machine learning models, confirming the ability of the WOA to enhance XGBoost in cemented PT backfill strength prediction. The WOA-XGBoost model could be a fast and accurate method for the UCS prediction of cemented PT backfill.
8

Wang, Yu, Li Guo, Yanrui Zhang, and Xinyue Ma. "Research on CSI 300 Stock Index Price Prediction Based On EMD-XGBoost." Frontiers in Computing and Intelligent Systems 3, no. 1 (March 17, 2023): 72–77. http://dx.doi.org/10.54097/fcis.v3i1.6027.

Abstract:
The combination of artificial intelligence techniques and quantitative investment has given birth to various price prediction models based on machine learning algorithms. In this study, we verify the applicability of machine learning fused with statistical methods through the EMD-XGBoost model for stock price prediction; in the modeling process, specific solutions are proposed for the overfitting problems that arise. The stock prediction model fusing machine learning with statistical learning was constructed from an empirical perspective, and an XGBoost model based on empirical mode decomposition (EMD) was proposed. The dataset selected for the experiment was the closing price of the CSI 300 index, and the model was judged by indicators including mean absolute error, mean error, and root mean square error. The EMD-XGBoost model used for the experiment has the following advantages: first, combining the empirical mode decomposition method with the XGBoost model helps mine the characteristics of the time series data; second, decomposing the CSI 300 index data with empirical mode decomposition helps improve the accuracy of the XGBoost model for time-series prediction. The experiments show that the EMD-XGBoost model outperforms single ARIMA or LSTM models as well as the EMD-LSTM model in terms of mean absolute error, mean error, and root mean square error.
9

Harriz, Muhammad Alfathan, Nurhaliza Vania Akbariani, Harlis Setiyowati, and Handri Santoso. "Enhancing the Efficiency of Jakarta's Mass Rapid Transit System with XGBoost Algorithm for Passenger Prediction." Jambura Journal of Informatics 5, no. 1 (April 27, 2023): 1–6. http://dx.doi.org/10.37905/jji.v5i1.18814.

Abstract:
This study is based on a machine learning algorithm known as XGBoost. We used the XGBoost algorithm to forecast the capacity of Jakarta's mass transit system. Using preprocessed raw data obtained from the Jakarta Open Data website for the period 2020-2021 as a training medium, we achieved a mean absolute percentage error of 69. However, after the model was fine-tuned, the MAPE was significantly reduced by 28.99% to 49.97. The XGBoost algorithm was found to be effective in detecting patterns and trends in the data, which can be used to improve routes and plan future studies by providing valuable insights. It is possible that additional data points, such as holidays and weather conditions, will further enhance the accuracy of the model in future research. As a result of implementing XGBoost, Jakarta's transportation system can optimize resource utilization and improve customer service in order to improve passenger satisfaction. Future studies may benefit from additional data points, such as holidays and weather conditions, in order to improve XGBoost's efficiency.
10

Siringoringo, Rimbun, Resianta Perangin-angin, and Jamaluddin Jamaluddin. "MODEL HIBRID GENETIC-XGBOOST DAN PRINCIPAL COMPONENT ANALYSIS PADA SEGMENTASI DAN PERAMALAN PASAR." METHOMIKA Jurnal Manajemen Informatika dan Komputerisasi Akuntansi 5, no. 2 (October 31, 2021): 97–103. http://dx.doi.org/10.46880/jmika.vol5no2.pp97-103.

Abstract:
Extreme Gradient Boosting (XGBoost) is a popular boosting algorithm based on decision trees and one of the strongest performers in the boosting family, with excellent convergence. On the other hand, XGBoost is a heavily hyperparameterized model: determining the value of each parameter is difficult, the search can become trapped in local optima, and tuning each parameter manually takes a lot of time. In this study, a Genetic Algorithm (GA) is applied to find optimal values of the XGBoost hyperparameters for a market segmentation problem. The model is evaluated using the ROC curve. The ROC results for the SVM, Logistic Regression, and Genetic-XGBoost models are 0.89, 0.98, and 0.99, respectively. The results show that the Genetic-XGBoost model can be applied to market segmentation and forecasting.
11

Gu, Zhongyuan, Miaocong Cao, Chunguang Wang, Na Yu, and Hongyu Qing. "Research on Mining Maximum Subsidence Prediction Based on Genetic Algorithm Combined with XGBoost Model." Sustainability 14, no. 16 (August 22, 2022): 10421. http://dx.doi.org/10.3390/su141610421.

Abstract:
The extreme gradient boosting (XGBoost) ensemble learning algorithm excels in solving complex nonlinear relational problems. In order to accurately predict the surface subsidence caused by mining, this work introduces the genetic algorithm (GA) and XGBoost integrated algorithm model for mining subsidence prediction and uses the Python language to develop the GA-XGBoost combined model. The hyperparameter vector of XGBoost is optimized by a genetic algorithm to improve the prediction accuracy and reliability of the XGBoost model. Using some domestic mining subsidence data sets to conduct a model prediction evaluation, the results show that the R2 (coefficient of determination) of the prediction results of the GA-XGBoost model is 0.941, the RMSE (root mean square error) is 0.369, and the MAE (mean absolute error) is 0.308. Then, compared with classic ensemble learning models such as XGBoost, random deep forest, and gradient boost, the GA-XGBoost model has higher prediction accuracy and performance than a single machine learning model.
12

Lee, Jong-Hyun, and In-Soo Lee. "Hybrid Estimation Method for the State of Charge of Lithium Batteries Using a Temporal Convolutional Network and XGBoost." Batteries 9, no. 11 (November 5, 2023): 544. http://dx.doi.org/10.3390/batteries9110544.

Abstract:
Lithium batteries have recently attracted significant attention as highly promising energy storage devices within the secondary battery industry. However, it is important to note that they may pose safety risks, including the potential for explosions during use. Therefore, achieving stable and safe utilization of these batteries necessitates accurate state-of-charge (SOC) estimation. In this study, we propose a hybrid model combining temporal convolutional network (TCN) and eXtreme gradient boosting (XGBoost) to investigate the nonlinear and evolving characteristics of batteries. The primary goal is to enhance SOC estimation performance by leveraging TCN’s long-effective memory capabilities and XGBoost’s robust generalization abilities. We conducted experiments using datasets from NASA, Oxford, and a vehicle simulator to validate the model’s performance. Additionally, we compared the performance of our model with that of a multilayer neural network, long short-term memory, gated recurrent unit, XGBoost, and TCN. The experimental results confirm that our proposed TCN–XGBoost hybrid model outperforms the other models in SOC estimation across all datasets.
13

Zhang, Kun. "Transmission Line Fault Diagnosis Method Based on SDA-ISSA-XGBoost under Meteorological Factors." Journal of Physics: Conference Series 2666, no. 1 (December 1, 2023): 012006. http://dx.doi.org/10.1088/1742-6596/2666/1/012006.

Abstract:
Transmission lines are directly exposed to the natural environment and are prone to failure due to meteorological factors. A novel approach for diagnosing transmission line faults under various meteorological conditions has been introduced. This method, known as SDA-ISSA-XGBoost, combines the power of Stacked Denoising Autoencoder (SDA), an improved Sparrow Search Algorithm (ISSA) enhanced with chaotic mapping sequences, adaptive weights, improved iterative local search, and a random differential mutation strategy, and eXtreme Gradient Boosting (XGBoost). The process begins with SDA, which extracts essential features from the initial meteorological data. Subsequently, ISSA is applied to optimize the parameters of the XGBoost model. This results in the ISSA-XGBoost fault diagnosis model. The performance of this model is compared to PSO-XGBoost and SSA-XGBoost. The experimental findings demonstrate that the ISSA-XGBoost model achieves an impressive fault diagnosis accuracy of 94.39%, surpassing both PSO-XGBoost and SSA-XGBoost by 6.54% and 3.74%, respectively.
14

Lin, Xiaobing, Zhe Wu, Jianfa Chen, Lianfen Huang, and Zhiyuan Shi. "A Credit Scoring Model Based on Integrated Mixed Sampling and Ensemble Feature Selection: RBR_XGB." 網際網路技術學刊 23, no. 5 (September 2022): 1061–68. http://dx.doi.org/10.53106/160792642022092305014.

Abstract:
With the rapid development of the economy, financial institutions pay more and more attention to financial credit risk. The XGBoost algorithm is often used in credit scoring. However, XGBoost has three disadvantages when dealing with small, high-dimensional, imbalanced samples: (1) when trained on imbalanced data, its classification results are biased towards the majority class, which reduces model accuracy; (2) it is prone to overfitting on high-dimensional data, because the higher the data dimension, the sparser the samples; and (3) on small datasets it tends to suffer from data fragmentation, again reducing model accuracy. To address these issues, a credit scoring model based on integrated mixed sampling and ensemble feature selection (RBR_XGB) is proposed in this paper. To counter the model failure and overfitting problems of XGBoost on highly imbalanced small samples, the model first uses an improved hybrid sampling algorithm combining RUS and BSMOTE1 to balance and expand the dataset. For the feature redundancy problem, the RFECV_XGB algorithm is used to filter features and reduce interference features. Then, considering the different discriminative strengths of the individual models, the validation set is used to assign weights to them, and a weighted ensemble is used to further improve performance. The experimental results show that the classification performance of the RBR_XGB algorithm on high-dimensional, imbalanced, small data is higher than that of the traditional XGBoost algorithm, making it suitable for commercial use.
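Two ingredients named in this abstract, hybrid resampling and XGBoost-driven recursive feature elimination, can be sketched with imbalanced-learn and scikit-learn as below. The synthetic data, sampling ratios, and the omission of the weighted-ensemble stage are all simplifications of the RBR_XGB model.

```python
# Sketch: random under-sampling + Borderline-SMOTE, then RFECV with XGBoost.
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)

# Hybrid sampling: trim the majority class a little, then oversample the
# minority class near the decision boundary.
X_rus, y_rus = RandomUnderSampler(sampling_strategy=0.2, random_state=0).fit_resample(X, y)
X_bal, y_bal = BorderlineSMOTE(random_state=0).fit_resample(X_rus, y_rus)

# Cross-validated recursive feature elimination driven by an XGBoost estimator.
selector = RFECV(XGBClassifier(n_estimators=100, max_depth=3), step=2, cv=3, scoring="roc_auc")
selector.fit(X_bal, y_bal)
print("Features kept:", selector.n_features_)

# Final classifier on the reduced feature set.
clf = XGBClassifier(n_estimators=300, max_depth=4)
clf.fit(selector.transform(X_bal), y_bal)
```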
15

He, Wenwen, Hongli Le, and Pengcheng Du. "Stroke Prediction Model Based on XGBoost Algorithm." International Journal of Applied Sciences & Development 1 (December 13, 2022): 7–10. http://dx.doi.org/10.37394/232029.2022.1.2.

Abstract:
In this paper, randomly measured individual sample data are preprocessed: for example, outlier values are deleted and the characteristics of the samples are normalized to between 0 and 1. The correlation analysis approach is then used to determine and rank the relevance of stroke characteristics, and factors with poor correlation are discarded. The samples are randomly split into a 70% training set and a 30% testing set. Finally, the random forest model and the XGBoost algorithm, combined with cross-validation and a grid search, are implemented to learn the stroke characteristics. The accuracy on the testing set with the XGBoost algorithm is 0.9257, which is better than that of the random forest model at 0.8991. Thus, the XGBoost model is selected to predict stroke for ten people, with the conclusion that two of them have a stroke and eight do not.
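A minimal sketch of the grid-search-plus-cross-validation tuning mentioned above, applied to an XGBoost classifier; a bundled binary dataset and a small parameter grid stand in for the stroke data and the authors' search space.

```python
# Sketch: [0, 1] scaling, then GridSearchCV over a few XGBoost hyperparameters.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = MinMaxScaler().fit(X_tr)          # normalize characteristics to [0, 1]
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

param_grid = {
    "max_depth": [3, 4, 5],
    "learning_rate": [0.05, 0.1, 0.2],
    "n_estimators": [100, 200],
}
search = GridSearchCV(XGBClassifier(), param_grid, cv=5, scoring="accuracy")
search.fit(X_tr, y_tr)
print("Best params:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_te, y_te))
```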
16

Guo, RuYan, MinFang Peng, ZhenQi Cao, and RunFu Zhou. "Transformer graded fault diagnosis based on neighborhood rough set and XGBoost." E3S Web of Conferences 243 (2021): 01002. http://dx.doi.org/10.1051/e3sconf/202124301002.

Abstract:
Aiming at the uncertainty of fault-type reasoning based on fault data in transformer fault diagnosis models, this paper proposes a hierarchical diagnosis model based on a neighborhood rough set and XGBoost. The model uses an arctangent transformation to preprocess the DGA data, which reduces the distribution span of the data features and the complexity of model training. Using 5 characteristic gases and 16 gas ratios as the input parameters of the XGBoost models at all levels, reduction was performed on these 21 input feature attributes: features with a high contribution to fault classification were retained, and redundant features were removed to improve the accuracy and efficiency of model prediction. Taking advantage of XGBoost's strong ability to extract a small number of features, the output of the model is the superposition of leaf node scores for each type of fault; the fault type with the maximum score is the one the sample belongs to, and its value is also the probability value. The obtained probability was used as one of the evidence sources for D-S evidence theory information fusion to verify the reliability of the model. Experiments have proved that the XGBoost graded diagnosis model proposed in this article has the highest overall accuracy compared with the traditional model, reaching 93.01%; the accuracy of the XGBoost models at all levels exceeds 90%; the average accuracy is higher than that of the traditional model by more than 2.7% on average; and the average time consumed is only 0.0695 s. After D-S multi-source information fusion, the reliability of the prediction results of the proposed model is further improved.
17

Ogunleye, Adeola, and Qing-Guo Wang. "XGBoost Model for Chronic Kidney Disease Diagnosis." IEEE/ACM Transactions on Computational Biology and Bioinformatics 17, no. 6 (November 1, 2020): 2131–40. http://dx.doi.org/10.1109/tcbb.2019.2911071.

18

Yin, Yilan, Yanguang Sun, Feng Zhao, and Jinxiang Chen. "Improved XGBoost model based on genetic algorithm." International Journal of Computer Applications in Technology 62, no. 3 (2020): 240. http://dx.doi.org/10.1504/ijcat.2020.10028423.

19

Chen, Jinxiang, Feng Zhao, Yanguang Sun, and Yilan Yin. "Improved XGBoost model based on genetic algorithm." International Journal of Computer Applications in Technology 62, no. 3 (2020): 240. http://dx.doi.org/10.1504/ijcat.2020.106571.

20

Zhao, Haolei, Yixian Wang, Xian Li, Panpan Guo, and Hang Lin. "Prediction of Maximum Tunnel Uplift Caused by Overlying Excavation Using XGBoost Algorithm with Bayesian Optimization." Applied Sciences 13, no. 17 (August 28, 2023): 9726. http://dx.doi.org/10.3390/app13179726.

Abstract:
The uplifting behaviors of existing tunnels due to overlying excavations are complex and non-linear; they are influenced by multiple factors and are therefore difficult to predict accurately. To address this issue, an extreme gradient boosting (XGBoost) prediction model based on Bayesian optimization (BO), namely BO-XGBoost, was developed specifically for assessing tunnel uplift. The modified model incorporated various factors such as the engineering design, soil types, and site construction conditions as input parameters. The performance of the BO-XGBoost model was compared with other models such as support vector machines (SVMs), the classification and regression tree (CART) model, and the extreme gradient boosting (XGBoost) model. In preparation for the model, 170 datasets from a construction site were collected and divided into 70% for training and 30% for testing. The BO-XGBoost model demonstrated a superior predictive performance, providing the most accurate displacement predictions and exhibiting better generalization capabilities. Further analysis revealed that the accuracy of the BO-XGBoost model was primarily influenced by the site's construction factors. The interpretability of the BO-XGBoost model will provide valuable guidance for geotechnical practitioners in their decision-making processes.
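The abstract does not say which Bayesian-optimization tool was used; as one possible stand-in, the sketch below tunes an XGBoost regressor with scikit-optimize's BayesSearchCV on synthetic data. The search ranges and iteration count are illustrative.

```python
# Sketch: Bayesian hyperparameter search for an XGBoost regressor via skopt.
from skopt import BayesSearchCV
from skopt.space import Integer, Real
from sklearn.datasets import make_regression
from xgboost import XGBRegressor

X, y = make_regression(n_samples=170, n_features=10, noise=5.0, random_state=0)

search_space = {
    "max_depth": Integer(2, 8),
    "learning_rate": Real(0.01, 0.3, prior="log-uniform"),
    "n_estimators": Integer(100, 500),
    "subsample": Real(0.6, 1.0),
}
opt = BayesSearchCV(XGBRegressor(), search_space, n_iter=25, cv=5,
                    scoring="neg_root_mean_squared_error", random_state=0)
opt.fit(X, y)
print("Best hyperparameters:", opt.best_params_)
print("Best CV RMSE:", -opt.best_score_)
```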
21

Xu, Bing, Youcheng Tan, Weibang Sun, Tianxing Ma, Hengyu Liu, and Daguo Wang. "Study on the Prediction of the Uniaxial Compressive Strength of Rock Based on the SSA-XGBoost Model." Sustainability 15, no. 6 (March 15, 2023): 5201. http://dx.doi.org/10.3390/su15065201.

Abstract:
The uniaxial compressive strength of rock is one of the important parameters characterizing the properties of rock masses in geotechnical engineering. To quickly and accurately predict the uniaxial compressive strength of rock, a new SSA-XGBoost optimizer prediction model was produced to predict the uniaxial compressive strength of 290 rock samples. With four parameters, namely, porosity (n,%), Schmidt rebound number (Rn), longitudinal wave velocity (Vp, m/s), and point load strength (Is(50), MPa) as input variables and uniaxial compressive strength (UCS, MPa) as the output variables, a prediction model of uniaxial compressive strength was built based on the SSA-XGBoost model. To verify the effectiveness of the SSA-XGBoost model, empirical formulas, XGBoost, SVM, RF, BPNN, KNN, PLSR, and other models were also established and compared with the SSA-XGBoost model. All models were evaluated using the root mean square error (RMSE), correlation coefficient (R2), mean absolute error (MAE), and variance interpretation (VAF). The results calculated by the SSA-XGBoost model (R2 = 0.84, RMSE = 19.85, MAE = 14.79, and VAF = 81.36), are the best among all prediction models. Therefore, the SSA-XGBoost model is the best model to predict the uniaxial compressive strength of rock, for the dataset tested.
22

Feng, Dachun, Bing Zhou, Shahbaz Gul Hassan, Longqin Xu, Tonglai Liu, Liang Cao, Shuangyin Liu, and Jianjun Guo. "A Hybrid Model for Temperature Prediction in a Sheep House." Animals 12, no. 20 (October 17, 2022): 2806. http://dx.doi.org/10.3390/ani12202806.

Abstract:
A temperature in the sheep house that is too high or too low directly threatens the healthy growth of sheep, so prediction and early warning of temperature changes is an important measure to ensure healthy growth. To address the randomness and reliance on experience in parameter selection for the traditional single extreme gradient boosting (XGBoost) model, this paper proposes an optimization method based on Principal Component Analysis (PCA) and Particle Swarm Optimization (PSO), and uses the resulting PCA-PSO-XGBoost model to predict the temperature in the sheep house. First, PCA is used to screen the key factors influencing sheep-house temperature and to reduce the dimension of the model's input vector; PSO-XGBoost is then used to build the temperature prediction model, with the PSO algorithm selecting the main hyperparameters of XGBoost through a global, iterative search. In a simulation experiment using data from the Xinjiang Manas intensive sheep-breeding base, the PCA-PSO-XGBoost model achieved a root mean square error (RMSE) of 0.0433, a mean square error (MSE) of 0.0019, a coefficient of determination (R2) of 0.9995, and a mean absolute error (MAE) of 0.0065; RMSE, MSE, and MAE improved by 68%, 90%, and 94%, respectively, compared with the traditional XGBoost model. The experimental results show that the proposed model has higher accuracy and better stability, can effectively provide guidance for monitoring and regulating temperature changes in intensive housing, and can be extended in the future to the prediction of other environmental parameters in other animal houses, such as pig and cow houses.
23

Zheng, Jiayan, Tianchen Yao, Jianhong Yue, Minghui Wang, and Shuangchen Xia. "Compressive Strength Prediction of BFRC Based on a Novel Hybrid Machine Learning Model." Buildings 13, no. 8 (July 29, 2023): 1934. http://dx.doi.org/10.3390/buildings13081934.

Abstract:
Basalt fiber-reinforced concrete (BFRC) represents a form of high-performance concrete. In structural design, a 28-day resting period is required to achieve compressive strength. This study extended an extreme gradient boosting tree (XGBoost) hybrid model by incorporating genetic algorithm (GA) optimization, named GA-XGBoost, for the projection of compressive strength (CS) on BFRC. GA optimization may reduce many debugging efforts and provide optimal parameter combinations for machine learning (ML) algorithms. The XGBoost is a powerful integrated learning algorithm with efficient, accurate, and scalable features. First, we created and provided a common dataset using test data on BFRC strength from the literature. We segmented and scaled this dataset to enhance the robustness of the ML model. Second, to better predict and evaluate the CS of BFRC, we simultaneously used five other regression models: XGBoost, random forest (RF), gradient-boosted decision tree (GBDT) regressor, AdaBoost, and support vector regression (SVR). The analysis results of test sets indicated that the correlation coefficient and mean absolute error were 0.9483 and 2.0564, respectively, when using the GA-XGBoost model. The GA-XGBoost model demonstrated superior performance, while the AdaBoost model exhibited the poorest performance. In addition, we verified the accuracy and feasibility of the GA-XGBoost model through SHAP analysis. The findings indicated that the water–binder ratio (W/B), fine aggregate (FA), and water–cement ratio (W/C) in BFRC were the variables that had the greatest effect on CS, while silica fume (SF) had the least effect on CS. The results demonstrated that GA-XGBoost exhibits exceptional accuracy in predicting the CS of BFRC, which offers a valuable reference for the engineering domain.
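The SHAP verification step mentioned above can be sketched as follows: a TreeExplainer is applied to a fitted XGBoost regressor and features are ranked by mean absolute SHAP value. The feature names and data are placeholders for the BFRC mix variables, and the GA tuning stage is omitted.

```python
# Sketch: SHAP attribution for an XGBoost regressor on synthetic mix data.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
cols = ["W/B", "FA", "W/C", "fiber_content", "SF"]   # placeholder mix variables
X = pd.DataFrame(rng.uniform(0, 1, size=(300, len(cols))), columns=cols)
y = 80 - 40 * X["W/B"] - 10 * X["W/C"] + 5 * X["FA"] + rng.normal(0, 2, 300)

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=cols).sort_values(ascending=False)
print(importance)
# shap.summary_plot(shap_values, X)  # optional beeswarm plot
```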
24

Lin, Nan, Jiawei Fu, Ranzhe Jiang, Genjun Li, and Qian Yang. "Lithological Classification by Hyperspectral Images Based on a Two-Layer XGBoost Model, Combined with a Greedy Algorithm." Remote Sensing 15, no. 15 (July 28, 2023): 3764. http://dx.doi.org/10.3390/rs15153764.

Abstract:
Lithology classification is important in mineral resource exploration, engineering geological exploration, and disaster monitoring. Traditional laboratory methods for the qualitative analysis of rocks are limited by sampling conditions and analytical techniques, resulting in high costs, low efficiency, and the inability to quickly obtain large-scale geological information. Hyperspectral remote sensing technology can classify and identify lithology using the spectral characteristics of rock, and is characterized by fast detection, large coverage area, and environmental friendliness, which provide the application potential for lithological mapping at a large regional scale. In this study, ZY1-02D hyperspectral images were used as data sources to construct a new two-layer extreme gradient boosting (XGBoost) lithology classification model based on the XGBoost decision tree and an improved greedy search algorithm. A total of 153 spectral bands of the preprocessed hyperspectral images were input into the first layer of the XGBoost model. Based on the tree traversal structural characteristics of the leaf nodes in the XGBoost model, three built-in XGBoost importance indexes were split and combined. The improved greedy search algorithm was used to extract the spectral band variables, which were imported into the second layer of the XGBoost model, and the bat algorithm was used to optimize the modeling parameters of XGBoost. The extraction model of rock classification information was constructed, and the classification map of regional surface rock types was drawn. Field verification was performed for the two-layer XGBoost rock classification model, and its accuracy and reliability were evaluated based on four indexes, namely, accuracy, precision, recall, and F1 score. The results showed that the two-layer XGBoost model had a good lithological classification effect, robustness, and adaptability to small sample datasets. Compared with the traditional machine learning model, the two-layer XGBoost model shows superior performance. The accuracy, precision, recall, and F1 score of the verification set were 0.8343, 0.8406, 0.8350, and 0.8157, respectively. The variable extraction ability of the constructed two-layer XGBoost model was significantly improved. Compared with traditional feature selection methods, the GREED-GFC method, when applied to the two-layer XGBoost model, contributes to more stable rock classification performance and higher lithology prediction accuracy, and the smallest number of extracted features. The lithological distribution information identified by the model was in good agreement with the lithology information verified in the field.
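A much-simplified sketch of the two-layer idea described above: a first XGBoost model ranks spectral bands by gain importance, and a second model is trained on the retained bands. Simulated data replaces the ZY1-02D imagery, and the combined importance indexes, improved greedy search, and bat-algorithm tuning are not reproduced.

```python
# Sketch: first-layer XGBoost ranks "bands" by gain, second layer refits on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=800, n_features=153, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Layer 1: fit on all 153 simulated bands and rank them by gain importance.
layer1 = XGBClassifier(n_estimators=200, max_depth=4, importance_type="gain")
layer1.fit(X_tr, y_tr)
gain = layer1.feature_importances_
selected = np.argsort(gain)[::-1][:30]          # keep the 30 highest-gain bands

# Layer 2: refit on the selected bands only.
layer2 = XGBClassifier(n_estimators=300, max_depth=4)
layer2.fit(X_tr[:, selected], y_tr)
print("Accuracy with selected bands:", layer2.score(X_te[:, selected], y_te))
```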
25

Wu, Kehe, Yanyu Chai, Xiaoliang Zhang, and Xun Zhao. "Research on Power Price Forecasting Based on PSO-XGBoost." Electronics 11, no. 22 (November 16, 2022): 3763. http://dx.doi.org/10.3390/electronics11223763.

Abstract:
With the reform of the power system, the prediction of power market pricing has become one of the key problems that needs to be solved in time. Power price prediction plays an important role in maximizing the profits of the participants in the power market and making full use of power energy. In order to improve the prediction accuracy of the power price, this paper proposes a power price prediction method based on PSO optimization of the XGBoost model, which optimizes eight main parameters of the XGBoost model through particle swarm optimization to improve the prediction accuracy of the XGBoost model. Using the electricity price data of Australia from January to December 2019, the proposed model is compared with the XGBoost model. The experimental results show that PSO can effectively improve the performance of the model. In addition, the prediction results of PSO-XGBoost are compared with those of SVM, LSTM, ARIMA, RW and XGBoost, and the average relative error and root mean square error of different power price prediction models are calculated. The experimental results show that the prediction accuracy of the PSO-XGBoost model is higher and more in line with the actual trend of power price change.
26

Yuan, Jianming. "Predicting Death Risk of COVID-19 Patients Leveraging Machine Learning Algorithm." Applied and Computational Engineering 8, no. 1 (August 1, 2023): 186–90. http://dx.doi.org/10.54254/2755-2721/8/20230122.

Abstract:
The first instance of COVID-19 was found in Wuhan, China; the disease mainly damages the human body in the form of respiratory illness. In this study, an XGBoost prediction model was put forward, based on an analysis of age, pneumonia, diabetes, and other attributes in the dataset, to estimate COVID-19 patients' risk of death. Extensive preprocessing was carried out on the dataset, such as deleting null values. In addition, there is a strong correlation between sex, pneumonia, and death probability. In this study, XGBoost, CatBoost, logistic regression, and random forest models were built with machine learning methods to forecast the COVID-19 patients' chance of mortality. The findings revealed that XGBoost's prediction performance was the best, while the logistic regression model performed poorly on this reported dataset of COVID-19 patients when compared to the other approaches. From the feature importance map of XGBoost, it was found that age and pneumonia have a great influence on the prediction of death risk.
27

Ha, Jinbing, and Ziyi Zhou. "Subway Energy Consumption Prediction based on XGBoost Model." Highlights in Science, Engineering and Technology 70 (November 15, 2023): 548–52. http://dx.doi.org/10.54097/hset.v70i.13958.

Abstract:
In the process of urban rail transit operation and management, accurate prediction of subway energy consumption is beneficial for establishing a reasonable operational organization mode and evaluating energy efficiency. Due to the multitude of factors affecting train energy consumption, traditional mathematical regression methods struggle to guarantee predictive accuracy. Thus, an energy consumption prediction method based on XGBoost is proposed. To enhance model training efficiency and accuracy, the Lasso model is utilized for feature selection among the factors influencing subway energy consumption. Additionally, the K-means++ algorithm is employed for clustering subway energy consumption. Using the operational energy consumption data of Qingdao Subway Line 3 as an example for validation, the XGBoost algorithm is employed to predict subway energy consumption. The results are then compared with those of the SVR and LSTM algorithms using three evaluation metrics. It is found that the XGBoost algorithm provides predictions of subway energy consumption that are closer to the experimental values.
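The three ingredients of this abstract (Lasso feature selection, K-means++ clustering, XGBoost regression) can be sketched on synthetic data as below; the Qingdao Line 3 records and the exact modeling choices are not available here, so feature counts and cluster numbers are assumptions.

```python
# Sketch: Lasso screening, K-means++ clustering, XGBoost regression with the
# cluster label appended as an extra feature.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from xgboost import XGBRegressor

X, y = make_regression(n_samples=600, n_features=15, n_informative=6, noise=10, random_state=2)
X = StandardScaler().fit_transform(X)

# 1) Lasso feature selection: keep features with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X, y)
kept = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
X_sel = X[:, kept]

# 2) K-means clustering (k-means++ is the default init in scikit-learn).
clusters = KMeans(n_clusters=3, n_init=10, random_state=2).fit_predict(X_sel)
X_full = np.column_stack([X_sel, clusters])

# 3) XGBoost regression on the reduced feature set plus the cluster label.
X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, test_size=0.25, random_state=2)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1).fit(X_tr, y_tr)
print("Kept features:", kept, " R^2:", model.score(X_te, y_te))
```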
28

Wan, Zhi, Yading Xu, and Branko Šavija. "On the Use of Machine Learning Models for Prediction of Compressive Strength of Concrete: Influence of Dimensionality Reduction on the Model Performance." Materials 14, no. 4 (February 3, 2021): 713. http://dx.doi.org/10.3390/ma14040713.

Abstract:
Compressive strength is the most significant metric to evaluate the mechanical properties of concrete. Machine Learning (ML) methods have shown promising results for predicting compressive strength of concrete. However, at present, no in-depth studies have been devoted to the influence of dimensionality reduction on the performance of different ML models for this application. In this work, four representative ML models, i.e., Linear Regression (LR), Support Vector Regression (SVR), Extreme Gradient Boosting (XGBoost), and Artificial Neural Network (ANN), are trained and used to predict the compressive strength of concrete based on its mixture composition and curing age. For each ML model, three kinds of features are used as input: the eight original features, six Principal Component Analysis (PCA)-selected features, and six manually selected features. The performance as well as the training speed of those four ML models with three different kinds of features is assessed and compared. Based on the obtained results, it is possible to make a relatively accurate prediction of concrete compressive strength using SVR, XGBoost, and ANN with an R-square of over 0.9. When using different features, the highest R-square of the test set occurs in the XGBoost model with manually selected features as inputs (R-square = 0.9339). The prediction accuracy of the SVR model with manually selected features (R-square = 0.9080) or PCA-selected features (R-square = 0.9134) is better than the model with original features (R-square = 0.9003) without dramatic running time change, indicating that dimensionality reduction has a positive influence on SVR model. For XGBoost, the model with PCA-selected features shows poorer performance (R-square = 0.8787) than XGBoost model with original features or manually selected features. A possible reason for this is that the PCA-selected features are not as distinguishable as the manually selected features in this study. In addition, the running time of XGBoost model with PCA-selected features is longer than XGBoost model with original features or manually selected features. In other words, dimensionality reduction by PCA seems to have an adverse effect both on the performance and the running time of XGBoost model. Dimensionality reduction has an adverse effect on the performance of LR model and ANN model because the R-squares on test set of those two models with manually selected features or PCA-selected features are lower than models with original features. Although the running time of ANN is much longer than the other three ML models (less than 1s) in three scenarios, dimensionality reduction has an obviously positive influence on running time without losing much prediction accuracy for ANN model.
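The dimensionality-reduction comparison discussed above can be sketched by training the same XGBoost regressor on the original inputs and on a PCA-reduced version and comparing test R-squared values; a synthetic dataset stands in for the concrete-mixture data.

```python
# Sketch: XGBoost on original features vs. PCA-selected features.
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=8, n_informative=6, noise=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

def fit_and_score(X_train, X_test):
    model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
    model.fit(X_train, y_tr)
    return r2_score(y_te, model.predict(X_test))

# Original eight features.
r2_original = fit_and_score(X_tr, X_te)

# Six PCA components fitted on the training set only.
pca = PCA(n_components=6).fit(X_tr)
r2_pca = fit_and_score(pca.transform(X_tr), pca.transform(X_te))

print(f"R2 original features: {r2_original:.4f}  R2 PCA features: {r2_pca:.4f}")
```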
29

Ubaidillah, Rahmad, Muliadi Muliadi, Dodon Turianto Nugrahadi, M. Reza Faisal, and Rudy Herteno. "Implementasi XGBoost Pada Keseimbangan Liver Patient Dataset dengan SMOTE dan Hyperparameter Tuning Bayesian Search." JURNAL MEDIA INFORMATIKA BUDIDARMA 6, no. 3 (July 25, 2022): 1723. http://dx.doi.org/10.30865/mib.v6i3.4146.

Abstract:
Liver disease is a disorder of liver function caused by infection with viruses, bacteria or other toxic substances so that the liver cannot function properly. This liver disease needs to be diagnosed early using a classification algorithm. By using the Indian liver patient dataset, predictions can be made using a classification algorithm to determine whether or not patients have liver disease. However, this dataset has a problem where there is an imbalance of data between patients with liver disease and those without, so it can reduce the performance of the prediction model because it tends to produce non-specific predictions. In this study, classification uses the XGBoost method which is then added with SMOTE to overcome class imbalances in the dataset and/or combined with Bayesian search hyperparameter tuning so that the resulting model performance is better. From the research, the results obtained from the XGBoost model get an AUC value of 0.618, for the XGBoost model with Bayesian search the AUC value is 0.658, then for the XGBoost SMOTE model the AUC value is 0.716, then for the XGBoost SMOTE model with Bayesian search the AUC value is 0.767. From the comparison of the four models, XGBoost SMOTE with Bayesian search obtained the highest AUC results and has an AUC difference of 0.149 compared to the XGBoost model without SMOTE and Bayesian search.
30

Liu, Yuan, Wenyi Du, Yi Guo, Zhiqiang Tian, and Wei Shen. "Identification of high-risk factors for recurrence of colon cancer following complete mesocolic excision: An 8-year retrospective study." PLOS ONE 18, no. 8 (August 11, 2023): e0289621. http://dx.doi.org/10.1371/journal.pone.0289621.

Abstract:
Background Colon cancer recurrence is a common adverse outcome for patients after complete mesocolic excision (CME) and greatly affects the near-term and long-term prognosis of patients. This study aimed to develop a machine learning model that can identify high-risk factors before, during, and after surgery, and predict the occurrence of postoperative colon cancer recurrence. Methods The study included 1187 patients with colon cancer, including 110 patients who had recurrent colon cancer. The researchers collected 44 characteristic variables, including patient demographic characteristics, basic medical history, preoperative examination information, type of surgery, and intraoperative information. Four machine learning algorithms, namely extreme gradient boosting (XGBoost), random forest (RF), support vector machine (SVM), and k-nearest neighbor algorithm (KNN), were used to construct the model. The researchers evaluated the model using the k-fold cross-validation method, ROC curve, calibration curve, decision curve analysis (DCA), and external validation. Results Among the four prediction models, the XGBoost algorithm performed the best. The ROC curve results showed that the AUC value of XGBoost was 0.962 in the training set and 0.952 in the validation set, indicating high prediction accuracy. The XGBoost model was stable during internal validation using the k-fold cross-validation method. The calibration curve demonstrated high predictive ability of the XGBoost model. The DCA curve showed that patients who received interventional treatment had a higher benefit rate under the XGBoost model. The external validation set’s AUC value was 0.91, indicating good extrapolation of the XGBoost prediction model. Conclusion The XGBoost machine learning algorithm-based prediction model for colon cancer recurrence has high prediction accuracy and clinical utility.
31

Zhu, Mini, Gang Wang, Chaoping Li, Hongjun Wang, and Bin Zhang. "Artificial Intelligence Classification Model for Modern Chinese Poetry in Education." Sustainability 15, no. 6 (March 16, 2023): 5265. http://dx.doi.org/10.3390/su15065265.

Abstract:
Various modern Chinese poetry styles have influenced the development of new Chinese poetry; therefore, the classification of poetry style is very important for understanding these poems and promoting education regarding new Chinese poetry. For poetry learners, due to a lack of experience, it is difficult to accurately judge the style of poetry, which makes it difficult for learners to understand poetry. For poetry researchers, classification of poetry styles in modern poetry is mainly carried out by experts, and there are some disputes between them, which leads to the incorrect and subjective classification of modern poetry. To solve these problems in the classification of modern Chinese poetry, the eXtreme Gradient Boosting (XGBoost) algorithm is used in this paper to build an automatic classification model of modern Chinese poetry, which can automatically and objectively classify poetry. First, modern Chinese poetry is divided into words, and stopwords are removed. Then, Doc2Vec is used to obtain the vector of each poem. The classification model for modern Chinese poetry was iteratively trained using XGBoost, and each iteration promotes the optimization of the next generation of the model until the automatic classification model of modern Chinese poetry is obtained, which is named Modern Chinese Poetry based on XGBoost (XGBoost-MCP). Finally, the XGBoost-MCP model built in this paper was used in experiments on real datasets and compared with Support Vector Machine (SVM), Deep Neural Network (DNN), and Decision Tree (DT) models. The experimental results show that the XGBoost-MCP model performs above 90% in all data evaluations, is obviously superior to the other three algorithms, and has high accuracy and objectivity. Applying this to education can help learners and researchers better understand and study poetry.
32

Li, Dan, Delan Zhu, Tao Tao, and Jiwei Qu. "Power Generation Prediction for Photovoltaic System of Hose-Drawn Traveler Based on Machine Learning Models." Processes 12, no. 1 (December 22, 2023): 39. http://dx.doi.org/10.3390/pr12010039.

Abstract:
A photovoltaic (PV)-powered electric motor is used for hose-drawn traveler driving instead of a water turbine to achieve high transmission efficiency. PV power generation (PVPG) is affected by different meteorological conditions, resulting in different power generation of PV panels for a hose-drawn traveler. In the above situation, the hose-drawn traveler may experience deficit power generation. The reasonable determination of the PV panel capacity is crucial. Predicting the PVPG is a prerequisite for the reasonable determination of the PV panel capacity. Therefore, it is essential to develop a method for accurately predicting PVPG. Extreme gradient boosting (XGBoost) is currently an outstanding machine learning model for prediction performance, but its hyperparameters are difficult to set. Thus, the XGBoost model based on particle swarm optimization (PSO-XGBoost) is applied for PV power prediction in this study. The PSO algorithm is introduced to optimize hyperparameters in XGBoost model. The meteorological data are segmented into four seasons to develop tailored prediction models, ensuring accurate prediction of PVPG in four seasons for hose-drawn travelers. The input variables of the models include solar irradiance, time, and ambient temperature. The prediction accuracy and stability of the model is then assessed statistically. The predictive accuracy and stability of PV power prediction by the PSO-XGBoost model are higher compared to the XGBoost model. Finally, application of the PSO-XGBoost model is implemented based on meteorological data.
33

Song, Weihua, Xiaowei Han, and Jifei Qi. "Prediction of Gas Emission in the Working Face Based on LASSO-WOA-XGBoost." Atmosphere 14, no. 11 (October 30, 2023): 1628. http://dx.doi.org/10.3390/atmos14111628.

Abstract:
In order to improve the prediction accuracy of gas emission in the mining face, a method combining the least absolute shrinkage and selection operator (LASSO), the whale optimization algorithm (WOA), and extreme gradient boosting (XGBoost) was proposed, along with the LASSO-WOA-XGBoost gas emission prediction model. For the monitoring data of gas emission in the Qianjiaying mine, LASSO is used to perform feature selection on 13 factors that affect gas emission, and 9 factors that have a high impact on gas emission are screened out. The three main parameters of n_estimators, learning_rate, and max_depth in XGBoost are optimized through WOA, which solves the problem of difficult parameter adjustment due to the large number of parameters in the XGBoost algorithm and improves the prediction effect of the XGBoost algorithm. When comparing the PCA-BP, PCA-SVM, LASSO-XGBoost, and PCA-WOA-XGBoost prediction models, the results indicate that utilizing LASSO for feature selection is more effective in enhancing model prediction accuracy than employing principal component analysis (PCA) for dimensionality reduction. The mean absolute error of the LASSO-WOA-XGBoost model is 0.1775, and the root mean square error is 0.2697, calculated in the same way as for the other models. Compared with the four prediction models, the LASSO-WOA-XGBoost prediction model reduced the mean absolute error by 7.43%, 8.81%, 4.16%, and 9.92%, respectively, and the root mean square error by 0.24%, 1.13%, 5.81%, and 8.78%. It provides a new method for predicting the gas emission from the mining face in actual mine production.
34

Wang, Jiayi, and Shaohua Zhou. "CS-GA-XGBoost-Based Model for a Radio-Frequency Power Amplifier under Different Temperatures." Micromachines 14, no. 9 (August 27, 2023): 1673. http://dx.doi.org/10.3390/mi14091673.

Abstract:
Machine learning methods, such as support vector regression (SVR) and gradient boosting, have been introduced into the modeling of power amplifiers and achieved good results. Among various machine learning algorithms, XGBoost has been proven to obtain high-precision models faster with specific parameters. Hyperparameters have a significant impact on the model performance. A traditional grid search for hyperparameters is time-consuming and labor-intensive and may not find the optimal parameters. To solve the problem of parameter searching, improve modeling accuracy, and accelerate modeling speed, this paper proposes a PA modeling method based on CS-GA-XGBoost. The cuckoo search (CS)-genetic algorithm (GA) integrates GA’s crossover operator into CS, making full use of the strong global search ability of CS and the fast rate of convergence of GA so that the improved CS-GA can expand the size of the bird nest population and reduce the scope of the search, with a better optimization ability and faster rate of convergence. This paper validates the effectiveness of the proposed modeling method by using measured input and output data of 2.5-GHz-GaN class-E PA under different temperatures (−40 °C, 25 °C, and 125 °C) as examples. The experimental results show that compared to XGBoost, GA-XGBoost, and CS-XGBoost, the proposed CS-GA-XGBoost can improve the modeling accuracy by one order of magnitude or more and shorten the modeling time by one order of magnitude or more. In addition, compared with classic machine learning algorithms, including gradient boosting, random forest, and SVR, the proposed CS-GA-XGBoost can improve modeling accuracy by three orders of magnitude or more and shorten modeling time by two orders of magnitude, demonstrating the superiority of the algorithm in terms of modeling accuracy and speed. The CS-GA-XGBoost modeling method is expected to be introduced into the modeling of other devices/circuits in the radio-frequency/microwave field and achieve good results.
35

M.I., Omogbhemhe, and Momodu I.B.A. "Model for Predicting Bank Loan Default using XGBoost." International Journal of Computer Applications 183, no. 32 (October 16, 2021): 1–4. http://dx.doi.org/10.5120/ijca2021921705.

36

Zhang, Huimin, Renshuang Ding, Qi Zhang, Mingxing Fang, Guanghua Zhang, and Naiwen Yu. "An ARDS Severity Recognition Model based on XGBoost." Journal of Physics: Conference Series 2138, no. 1 (December 1, 2021): 012009. http://dx.doi.org/10.1088/1742-6596/2138/1/012009.

Abstract:
Given that disease scoring systems and invasive parameters are subjective and not available in real time when evaluating the development of acute respiratory distress syndrome (ARDS), this paper proposes an ARDS severity recognition model based on extreme gradient boosting (XGBoost) that incorporates noninvasive parameters. Firstly, patients' physiological parameters were extracted from the MIMIC-III database for statistical analysis, and outliers and unbalanced samples were handled with the interquartile range and the synthetic minority oversampling technique. Then, the Pearson correlation coefficient and random forest were used as a hybrid feature selection scheme to score the noninvasive parameters comprehensively and obtain the parameters essential for identifying the disease. Finally, XGBoost was combined with grid-search cross-validation to determine the best hyperparameters of the model and achieve accurate classification of disease severity. The experimental results show that the model's area under the curve (AUC) is as high as 0.98 and its accuracy is 0.90; the total score of blood oxygen saturation (SpO2) is 0.625, so SpO2 could be used as an essential parameter for evaluating the severity of ARDS. Compared with traditional methods, this model offers clear advantages in real-time performance and accuracy and could provide more accurate diagnosis and treatment suggestions for medical staff.
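A short sketch of the grid-search step named above, assuming synthetic data; the IQR outlier handling, SMOTE balancing, and hybrid feature scoring from the paper are omitted, and the parameter grid is illustrative rather than the authors' values.

```python
# Cross-validated grid search over XGBoost hyperparameters (illustrative grid,
# synthetic binary severity labels in place of the MIMIC-III data).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV AUC:", search.best_score_)
```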
37

Kang, Yunxiang, Minsheng Tan, Ding Lin, and Zhiguo Zhao. "Intrusion Detection Model Based on Autoencoder and XGBoost." Journal of Physics: Conference Series 2171, no. 1 (January 1, 2022): 012053. http://dx.doi.org/10.1088/1742-6596/2171/1/012053.

Abstract:
In recent years, machine learning algorithms have been extensively used in the intrusion detection field. At the same time, these algorithms still suffer from low accuracy due to data imbalance. To improve detection accuracy, an intrusion detection model based on an Autoencoder (AE) and XGBoost (IDAE-XG) is proposed, and the training and detection algorithms of IDAE-XG are given. IDAE-XG constructs the training set from preprocessed normal data; data preprocessing includes feature selection and feature grouping. In the detection stage, XGBoost is used to predict the results, which effectively improves prediction accuracy. The superiority of the proposed IDAE-XG is demonstrated empirically with extensive experiments on CSE-CIC-IDS2018. The experimental comparison shows that IDAE-XG performs better than the KitNet model in the test and achieves a considerable improvement in accuracy and recall.
38

Jiang, Hui, Zheng He, Gang Ye, and Huyin Zhang. "Network Intrusion Detection Based on PSO-Xgboost Model." IEEE Access 8 (2020): 58392–401. http://dx.doi.org/10.1109/access.2020.2982418.

39

Alim, Mirxat, Guo-Hua Ye, Peng Guan, De-Sheng Huang, Bao-Sen Zhou, and Wei Wu. "Comparison of ARIMA model and XGBoost model for prediction of human brucellosis in mainland China: a time-series study." BMJ Open 10, no. 12 (December 2020): e039676. http://dx.doi.org/10.1136/bmjopen-2020-039676.

Abstract:
Objectives: Human brucellosis is a public health problem endangering health and property in China. Predicting the trend and seasonality of human brucellosis is of great significance for its prevention. In this study, a comparison between the autoregressive integrated moving average (ARIMA) model and the eXtreme Gradient Boosting (XGBoost) model was conducted to determine which was more suitable for predicting the occurrence of brucellosis in mainland China. Design: Time-series study. Setting: Mainland China. Methods: Data on human brucellosis in mainland China were provided by the National Health and Family Planning Commission of China. The data were divided into a training set and a test set. The training set was composed of the monthly incidence of human brucellosis in mainland China from January 2008 to June 2018, and the test set was composed of the monthly incidence from July 2018 to June 2019. The mean absolute error (MAE), root mean square error (RMSE) and mean absolute percentage error (MAPE) were used to evaluate the effects of model fitting and prediction. Results: The number of human brucellosis patients in mainland China increased from 30 002 in 2008 to 40 328 in 2018. There was an increasing trend and an obvious seasonal distribution in the original time series. For the training set, the MAE, RMSE and MAPE of the ARIMA(0,1,1)×(0,1,1)12 model were 338.867, 450.223 and 10.323, respectively, and the MAE, RMSE and MAPE of the XGBoost model were 189.332, 262.458 and 4.475, respectively. For the test set, the MAE, RMSE and MAPE of the ARIMA(0,1,1)×(0,1,1)12 model were 529.406, 586.059 and 17.676, respectively, and the MAE, RMSE and MAPE of the XGBoost model were 249.307, 280.645 and 7.643, respectively. Conclusions: The performance of the XGBoost model was better than that of the ARIMA model. The XGBoost model is more suitable for predicting cases of human brucellosis in mainland China.
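A small sketch of how a monthly incidence series can be framed as a supervised problem for XGBoost, which is what a comparison like the one above requires; the series here is synthetic and the lag window, hyperparameters and holdout length are assumptions rather than the study's settings.

```python
# Lag-feature framing of a monthly series for XGBoost (synthetic data; the real
# study used monthly human brucellosis counts, Jan 2008 - Jun 2019).
import numpy as np
import pandas as pd
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

months = pd.date_range("2008-01", periods=138, freq="MS")
y = pd.Series(3000 + 500 * np.sin(2 * np.pi * months.month / 12)
              + np.random.default_rng(0).normal(0, 100, len(months)), index=months)

# Use the previous 12 monthly values as features (one seasonal cycle of lags).
frame = pd.concat({f"lag_{k}": y.shift(k) for k in range(1, 13)}, axis=1).dropna()
target = y.loc[frame.index]

train, test = frame.iloc[:-12], frame.iloc[-12:]        # last 12 months held out
model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(train, target.loc[train.index])

pred = model.predict(test)
true = target.loc[test.index]
print("MAE :", mean_absolute_error(true, pred))
print("RMSE:", mean_squared_error(true, pred) ** 0.5)
print("MAPE:", float(np.mean(np.abs((true - pred) / true))) * 100)
```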
40

Li, Xiangcheng, Jialong Wang, Zhirui Geng, Yang Jin, and Jiawei Xu. "Short-term Wind Power Prediction Method Based on Genetic Algorithm Optimized XGBoost Regression Model." Journal of Physics: Conference Series 2527, no. 1 (June 1, 2023): 012061. http://dx.doi.org/10.1088/1742-6596/2527/1/012061.

Abstract:
To address the accuracy and speed of short-term prediction of wind power output, this paper uses the eXtreme Gradient Boosting (XGBoost) regression model to predict wind power output. For the models commonly used at the present stage, such as Long Short-Term Memory (LSTM), random forest and the ordinary XGBoost model, the modeling time is long and the accuracy is insufficient. In this paper, a genetic algorithm (GA) is introduced to improve the accuracy and speed of prediction of the XGBoost regression model. Firstly, the learning rate of the XGBoost model is optimized by exploiting the strong search ability and flexibility of the genetic algorithm. Then, variable-weight combination prediction is carried out; the objective function is the mean squared error between the predicted and actual values on the training set, and the GA determines the model's final weights. Historical output data of a wind plant are used to verify the GA-based XGBoost regression model, and the predicted values are compared with the prediction results of LSTM and the random forest algorithm. Simulation and analysis of the example show that the XGBoost regression model optimized by the genetic algorithm clearly improves both the accuracy and the speed of short-term wind power output prediction.
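A hand-rolled toy genetic search over the XGBoost learning rate, the parameter the abstract says the GA tunes; the population size, mutation scale, generation count and synthetic data are all assumptions, not the authors' settings, and a real implementation would also evolve the combination weights.

```python
# Toy GA over learning_rate, scored by cross-validated negative MSE (assumed setup).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)
rng = np.random.default_rng(0)

def fitness(lr):
    # Negative mean squared error from 3-fold CV; higher is better.
    model = XGBRegressor(n_estimators=200, learning_rate=float(lr), max_depth=4)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.uniform(0.01, 0.3, size=8)                # initial learning rates
for generation in range(5):
    scores = np.array([fitness(lr) for lr in pop])
    parents = pop[np.argsort(scores)][-4:]          # keep the best half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = rng.choice(parents, size=2, replace=False)
        child = (a + b) / 2 + rng.normal(0, 0.02)   # crossover + mutation
        children.append(float(np.clip(child, 0.005, 0.5)))
    pop = np.concatenate([parents, children])

best_lr = pop[np.argmax([fitness(lr) for lr in pop])]
print("selected learning_rate:", round(float(best_lr), 4))
```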
41

Tang, Jinjun, Lanlan Zheng, Chunyang Han, Fang Liu, and Jianming Cai. "Traffic Incident Clearance Time Prediction and Influencing Factor Analysis Using Extreme Gradient Boosting Model." Journal of Advanced Transportation 2020 (June 9, 2020): 1–12. http://dx.doi.org/10.1155/2020/6401082.

Abstract:
Accurate prediction and reliable analysis of the significant factors of incident clearance time are two main objectives of a traffic incident management (TIM) system, as they can help relieve the traffic congestion caused by traffic incidents. This study applies the extreme gradient boosting machine algorithm (XGBoost) to predict incident clearance time on freeways and analyze the significant factors of clearance time. XGBoost integrates the strengths of statistical and machine learning methods, as it can flexibly handle nonlinear data in high-dimensional space and quantify the relative importance of the explanatory variables. The data collected from the Washington Incident Tracking System in 2011 are used in this research. To investigate the patterns hidden in the data, K-means is used to cluster the data into two clusters, and an XGBoost model is built for each cluster. Bayesian optimization is used to optimize the parameters of XGBoost, and the MAPE is taken as the predictive indicator to evaluate prediction performance. A comparative study confirms that XGBoost outperforms other models. In addition, response time, AADT (annual average daily traffic), incident type, and lane closure type are identified as the significant explanatory variables for clearance time.
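A sketch of the cluster-then-predict idea above under assumed conditions: K-means splits synthetic records into two clusters and a separate XGBoost regressor is fitted per cluster; the Bayesian optimization step and the actual Washington incident features are not reproduced.

```python
# K-means (k=2) followed by one XGBoost regressor per cluster, MAPE per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error
from xgboost import XGBRegressor

X, y = make_regression(n_samples=600, n_features=10, noise=10.0, random_state=1)
y = y - y.min() + 10.0                              # keep clearance times positive

labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)

for cluster in (0, 1):
    Xc, yc = X[labels == cluster], y[labels == cluster]
    X_tr, X_te, y_tr, y_te = train_test_split(Xc, yc, test_size=0.2, random_state=1)
    model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4)
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"cluster {cluster}: MAPE = {mape:.3f}")
```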
42

Noorunnahar, Mst, Arman Hossain Chowdhury, and Farhana Arefeen Mila. "A tree based eXtreme Gradient Boosting (XGBoost) machine learning model to forecast the annual rice production in Bangladesh." PLOS ONE 18, no. 3 (March 27, 2023): e0283452. http://dx.doi.org/10.1371/journal.pone.0283452.

Abstract:
In this study, we attempt to forecast annual rice production in Bangladesh (1961–2020) using both the Autoregressive Integrated Moving Average (ARIMA) and the eXtreme Gradient Boosting (XGBoost) methods and compare their respective performances. On the basis of the lowest corrected Akaike Information Criterion (AICc) value, a significant ARIMA(0, 1, 1) model with drift was chosen; the drift parameter shows that rice production has a positive upward trend. The XGBoost model for the time series data was developed by repeatedly adjusting the tuning parameters until the best result was obtained. Four prominent error measures, the mean absolute error (MAE), mean percentage error (MPE), root mean square error (RMSE), and mean absolute percentage error (MAPE), were used to assess the predictive performance of each model. We found that the error measures of the XGBoost model on the test set were lower than those of the ARIMA model; in particular, the test-set MAPE of the XGBoost model (5.38%) was lower than that of the ARIMA model (7.23%), indicating that XGBoost performs better than ARIMA at predicting annual rice production in Bangladesh. Based on this better performance, the study forecasted annual rice production for the next 10 years using the XGBoost model. According to our predictions, annual rice production in Bangladesh will vary from 57,850,318 tons in 2021 to 82,256,944 tons in 2030, indicating that the amount of rice produced annually in Bangladesh will increase in the years to come.
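To make the four error measures named above concrete, here is a minimal helper that computes them; the production figures passed in are illustrative placeholders, not the study's data.

```python
# MAE, MPE, RMSE and MAPE for a forecast against actual values.
import numpy as np

def error_report(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    err = actual - forecast
    return {
        "MAE":  np.mean(np.abs(err)),
        "MPE":  np.mean(err / actual) * 100,
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE": np.mean(np.abs(err / actual)) * 100,
    }

actual   = [52_000_000, 54_500_000, 55_800_000]     # illustrative production (tons)
forecast = [50_900_000, 55_200_000, 57_300_000]
print(error_report(actual, forecast))
```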
43

Jiang, Jinyang, Zhi Liu, Pengbo Wang, and Fan Yang. "Improved Crow Search Algorithm and XGBoost for Transformer Fault Diagnosis." Journal of Physics: Conference Series 2666, no. 1 (December 1, 2023): 012040. http://dx.doi.org/10.1088/1742-6596/2666/1/012040.

Abstract:
To enhance the accuracy of transformer fault diagnosis, this study proposes an enhanced transformer fault diagnosis model incorporating the Improved Crow Search Algorithm (ICSA) and XGBoost. The dissolved gas analysis in oil (DGA) technique is employed to extract 9-dimensional transformer fault features as model inputs, in conjunction with the codeless ratio method for training. The output layer utilizes a gradient-boosting-based additive decision tree model to obtain the fault diagnosis type. Furthermore, the Golden Sine Algorithm (GSA) is employed for the improvement, and the ICSA's performance is tested on typical benchmark functions, demonstrating faster convergence and a stronger global search capability. The results show that the overall diagnostic accuracy of the proposed model reaches 94.4056%, an improvement of 8.3916%, 6.2937%, 4.1958%, and 2.0979% over the original XGBoost, PSO-XGBoost, GWO-XGBoost, and CSA-XGBoost fault diagnosis models, respectively. These findings validate the effectiveness of the proposed method in enhancing the fault diagnosis performance for transformers.
44

Wang, Jun, Wei Rong, Zhuo Zhang, and Dong Mei. "Credit Debt Default Risk Assessment Based on the XGBoost Algorithm: An Empirical Study from China." Wireless Communications and Mobile Computing 2022 (March 19, 2022): 1–14. http://dx.doi.org/10.1155/2022/8005493.

Abstract:
The bond market is an important part of China's capital market. However, defaults have become frequent in the bond market in recent years, and consequently, the default risk of Chinese credit bonds has become increasingly prominent, making the assessment of default risk particularly important. In this paper, we utilize 31 indicators at the macroeconomic level and the corporate micro level to predict bond defaults, and we conduct principal component analysis to extract 10 principal components from them. We use the XGBoost algorithm to analyze the importance of variables and assess credit bond default risk with an XGBoost prediction model, evaluating its classification performance with indicators such as the area under the ROC curve (AUC), accuracy, precision, recall, and F1-score. Finally, the grid search algorithm and k-fold cross-validation are used to optimize the parameters of the XGBoost model and determine the final classification model. Whereas existing research has focused on the selection of bond default risk prediction indicators and the application of the XGBoost algorithm to default risk prediction, our results demonstrate that, after parameter optimization, the XGBoost model achieves significantly higher prediction accuracy than the original model, which is beneficial for improving the prediction effect in practical applications.
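A minimal sketch of the pipeline the abstract describes, assuming synthetic imbalanced data: PCA compresses 31 indicators to 10 components, then an XGBoost classifier is tuned with grid search and stratified k-fold cross-validation scored by AUC; the grid values are illustrative.

```python
# PCA (31 -> 10 components) + XGBoost, tuned by grid search with 5-fold CV.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

X, y = make_classification(n_samples=800, n_features=31, weights=[0.9, 0.1],
                           random_state=0)

pipe = Pipeline([
    ("pca", PCA(n_components=10)),
    ("xgb", XGBClassifier(eval_metric="logloss")),
])
grid = {
    "xgb__n_estimators": [200, 400],
    "xgb__max_depth": [3, 5],
    "xgb__learning_rate": [0.05, 0.1],
}
search = GridSearchCV(pipe, grid, scoring="roc_auc",
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
search.fit(X, y)
print("best AUC:", round(search.best_score_, 3), "| params:", search.best_params_)
```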
45

Le, Le Thi, Hoang Nguyen, Jian Zhou, Jie Dou, and Hossein Moayedi. "Estimating the Heating Load of Buildings for Smart City Planning Using a Novel Artificial Intelligence Technique PSO-XGBoost." Applied Sciences 9, no. 13 (July 4, 2019): 2714. http://dx.doi.org/10.3390/app9132714.

Abstract:
In this study, a novel technique, namely PSO-XGBoost, was proposed to support smart city planning in estimating and controlling the heating load (HL) of buildings. Accordingly, the extreme gradient boosting machine (XGBoost) was first developed to estimate HL; then, the particle swarm optimization (PSO) algorithm was applied to optimize the performance of the XGBoost model. The classical XGBoost model, support vector machine (SVM), random forest (RF), Gaussian process (GP), and classification and regression trees (CART) models were also developed to predict the HL of building systems and compared with the proposed PSO-XGBoost model; 837 building investigations were considered and analyzed with many influential factors, such as glazing area distribution (GAD), glazing area (GA), orientation (O), overall height (OH), roof area (RA), wall area (WA), surface area (SA), and relative compactness (RC). Mean absolute percentage error (MAPE), root-mean-squared error (RMSE), variance account for (VAF), mean absolute error (MAE), and the coefficient of determination (R2) were used as the statistical criteria for evaluating the performance of the above models; color intensity and a ranking method were also used to compare and evaluate the models. The results showed that the proposed PSO-XGBoost model was the most robust technique for estimating the HL of building systems, while the remaining models (i.e., XGBoost, SVM, RF, GP, and CART) yielded poorer performance on the RMSE, MAE, R2, VAF, and MAPE metrics. Another finding of this study is that OH, RA, WA, and SA were the most critical parameters for the accuracy of the proposed PSO-XGBoost model; they should be of particular interest in smart city planning and the optimization of smart cities.
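A compact comparison in the spirit of the evaluation above, under assumptions: a synthetic heating-load-like regression task stands in for the 837 building records, the PSO tuning step is not reproduced, and the variance-account-for (VAF) formula shown is the usual 1 − Var(residual)/Var(actual) definition.

```python
# XGBoost vs SVR, random forest and CART on synthetic data, reported with
# RMSE, MAE, R2 and VAF (illustrative parameters, no PSO tuning).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from xgboost import XGBRegressor

X, y = make_regression(n_samples=837, n_features=8, noise=3.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=4),
    "RF":      RandomForestRegressor(n_estimators=300, random_state=0),
    "SVR":     SVR(C=10.0),
    "CART":    DecisionTreeRegressor(max_depth=6, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    vaf = (1 - np.var(y_te - pred) / np.var(y_te)) * 100   # variance account for
    print(f"{name:8s} RMSE={mean_squared_error(y_te, pred) ** 0.5:7.2f} "
          f"MAE={mean_absolute_error(y_te, pred):7.2f} "
          f"R2={r2_score(y_te, pred):.3f} VAF={vaf:.1f}%")
```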
46

Luo, Xiong, Lijia Xu, Peng Huang, Yuchao Wang, Jiang Liu, Yan Hu, Peng Wang, and Zhiliang Kang. "Nondestructive Testing Model of Tea Polyphenols Based on Hyperspectral Technology Combined with Chemometric Methods." Agriculture 11, no. 7 (July 16, 2021): 673. http://dx.doi.org/10.3390/agriculture11070673.

Abstract:
Nondestructive detection of tea's internal quality is of great significance for the processing and storage of tea. In this study, hyperspectral imaging technology is adopted to quantitatively detect the content of tea polyphenols in Tibetan teas by analyzing the features of the tea spectrum over the wavelength range of 420 to 1010 nm. The samples are partitioned with the sample set partitioning based on joint x-y distances (SPXY) and Kennard-Stone (KS) algorithms, while six algorithms are used to preprocess the spectral data. Six further algorithms, Random Forest (RF), Gradient Boosting (GB), Adaptive Boosting (AdaBoost), Categorical Boosting (CatBoost), LightGBM, and XGBoost, are used to carry out feature extraction. Then, based on a stacking combination strategy, a new two-layer combined prediction model is constructed and compared with four individual regressor prediction models: RF Regressor (RFR), CatBoost Regressor (CatBoostR), LightGBM Regressor (LightGBMR), and XGBoost Regressor (XGBoostR). The experimental results show that the newly built stacking model predicts more accurately than the individual regressor prediction models. For the new model, the coefficients of determination Rc2 and Rp2 for the prediction of Tibetan tea polyphenols are 0.9709 and 0.9625, and the root mean square errors RMSEC and RMSEP are 0.2766 and 0.3852, respectively, which shows that the content of Tibetan tea polyphenols can be determined with precision.
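A sketch of the two-layer stacking strategy described above, with assumptions: only random forest and XGBoost are stacked here (CatBoost and LightGBM are omitted to limit dependencies), a ridge regressor serves as the second-layer learner, and the data are synthetic rather than hyperspectral features.

```python
# Two-layer stacking: RF + XGBoost base regressors, ridge meta-learner.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
from xgboost import XGBRegressor

X, y = make_regression(n_samples=300, n_features=20, noise=0.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("rf",  RandomForestRegressor(n_estimators=200, random_state=0)),
        ("xgb", XGBRegressor(n_estimators=200, learning_rate=0.05)),
    ],
    final_estimator=Ridge(),
    cv=5,
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print("R2  :", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```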
47

Admassu, Tsehay. "Evaluation of Local Interpretable Model-Agnostic Explanation and Shapley Additive Explanation for Chronic Heart Disease Detection." Proceedings of Engineering and Technology Innovation 23 (January 1, 2023): 48–59. http://dx.doi.org/10.46604/peti.2023.10101.

Abstract:
This study aims to investigate the effectiveness of the local interpretable model-agnostic explanation (LIME) and Shapley additive explanation (SHAP) approaches for chronic heart disease detection. The efficiency of LIME and SHAP is evaluated by analyzing the diagnostic results of the XGBoost model and the stability and quality of counterfactual explanations. Firstly, 1025 heart disease samples are collected from the University of California, Irvine repository. Then, the performance of LIME and SHAP is compared using the XGBoost model with various measures, such as consistency and proximity. Finally, the Python 3.7 programming language with the Jupyter Notebook integrated development environment is used for simulation. The simulation results show that the XGBoost model achieves 99.79% accuracy, and the counterfactual explanation of the XGBoost model describes the smallest changes in the feature values needed to change the diagnosis outcome to the predefined output.
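A minimal sketch of producing SHAP explanations for an XGBoost classifier, as evaluated in the study above; it assumes the shap package is installed, and the 1025-sample, 13-feature dataset is simulated rather than the UCI heart-disease data.

```python
# TreeExplainer SHAP values for an XGBoost binary classifier (synthetic data).
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1025, n_features=13, random_state=0)
model = XGBClassifier(eval_metric="logloss").fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])          # per-feature contributions
print(shap_values.shape)                            # expected: (5, 13)
```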
48

Yang, Tian. "Sales Prediction of Walmart Sales Based on OLS, Random Forest, and XGBoost Models." Highlights in Science, Engineering and Technology 49 (May 21, 2023): 244–49. http://dx.doi.org/10.54097/hset.v49i.8513.

Abstract:
Sales forecasting is the technique of estimating future sales levels for a good or service. Forecasting methods have evolved from early qualitative analysis to time series methods, regression analysis and econometric models, as well as the machine learning methods that have emerged in recent decades. This paper compares the performance of OLS, Random Forest and XGBoost machine learning models in predicting the sales of Walmart stores. According to the analysis, the XGBoost model has the best sales forecasting ability: for logarithmic sales, the R2 of the XGBoost model is as high as 0.984, while the MSE and MAE are only 0.065 and 0.124, respectively. The XGBoost model is therefore a sound option when making sales forecasts. These results compare different types of models, identify the best prediction model, and provide suggestions for future model selection.
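A sketch of the three-model comparison above on a log-transformed target, assuming synthetic placeholder data rather than the Walmart records; the reported R2, MSE and MAE follow the same metric choices as the abstract.

```python
# OLS vs random forest vs XGBoost on a log-scaled sales-like target.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=15, noise=2.0, random_state=0)
log_sales = np.log1p(y - y.min() + 1.0)             # positive, log-scaled target
X_tr, X_te, y_tr, y_te = train_test_split(X, log_sales, test_size=0.2,
                                          random_state=0)

for name, model in [("OLS", LinearRegression()),
                    ("Random Forest", RandomForestRegressor(random_state=0)),
                    ("XGBoost", XGBRegressor(n_estimators=300, learning_rate=0.05))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:13s} R2={r2_score(y_te, pred):.3f} "
          f"MSE={mean_squared_error(y_te, pred):.3f} "
          f"MAE={mean_absolute_error(y_te, pred):.3f}")
```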
49

Meng, Yunxiang, Qihong Duan, Kai Jiao, and Jiang Xue. "A screened predictive model for esophageal squamous cell carcinoma based on salivary flora data." Mathematical Biosciences and Engineering 20, no. 10 (2023): 18368–85. http://dx.doi.org/10.3934/mbe.2023816.

Abstract:
Esophageal squamous cell carcinoma (ESCC) is a malignant tumor of the digestive system arising in the esophageal squamous epithelium. Many studies have linked esophageal cancer (EC) to an imbalance of the oral microecology. In this work, different machine learning (ML) models, including Random Forest (RF), Gaussian mixture model (GMM), K-nearest neighbor (KNN), logistic regression (LR), support vector machine (SVM) and extreme gradient boosting (XGBoost) with Genetic Algorithm (GA) optimization, were developed to predict the relationship between salivary flora and ESCC by combining the relative abundance data of Bacteroides, Firmicutes, Proteobacteria, Fusobacteria and Actinobacteria in the saliva of patients with ESCC and healthy controls. The results showed that the XGBoost model without parameter optimization performed best on the entire dataset for ESCC diagnosis by cross-validation (Accuracy = 73.50%). Accuracy and the other evaluation indicators, including Precision, Recall, F1-score and the area under the receiver operating characteristic (ROC) curve (AUC), revealed that XGBoost optimized by the GA (GA-XGBoost) achieved the best outcome on the testing set (Accuracy = 89.88%, Precision = 89.43%, Recall = 90.75%, F1-score = 90.09%, AUC = 0.97). The predictive ability of GA-XGBoost was validated on phylum-level salivary microbiota data from ESCC patients and controls in an external cohort; the results obtained in this validation (Accuracy = 70.60%, Precision = 46.00%, Recall = 90.55%, F1-score = 61.01%) illustrate the reliability of the model's predictive performance. The feature importance rankings obtained by XGBoost indicate that Bacteroides and Actinobacteria are the two most important factors in predicting ESCC. Based on these results, GA-XGBoost can predict and diagnose ESCC according to the relative abundance of salivary flora, providing an effective tool for the non-invasive prediction of esophageal malignancies.
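A sketch of the diagnostic step described above under assumed conditions: an XGBoost classifier is trained on simulated relative abundances of the five phyla, the GA tuning is not reproduced, and the usual metrics plus the built-in feature-importance ranking are reported.

```python
# XGBoost on five relative-abundance features (simulated), with metrics and
# feature importance, mirroring the evaluation indicators named in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
phyla = ["Bacteroides", "Firmicutes", "Proteobacteria",
         "Fusobacteria", "Actinobacteria"]
X = rng.dirichlet(np.ones(5), size=300)             # relative abundances sum to 1
y = (X[:, 0] + X[:, 4] + rng.normal(0, 0.05, 300) > 0.45).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = XGBClassifier(eval_metric="logloss").fit(X_tr, y_tr)

pred, proba = model.predict(X_te), model.predict_proba(X_te)[:, 1]
print("Accuracy :", accuracy_score(y_te, pred))
print("Precision:", precision_score(y_te, pred))
print("Recall   :", recall_score(y_te, pred))
print("F1-score :", f1_score(y_te, pred))
print("AUC      :", roc_auc_score(y_te, proba))
print("importance:", dict(zip(phyla, model.feature_importances_.round(3))))
```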
50

Zong, Jing, Xin Xiong, Jianhua Zhou, Ying Ji, Diao Zhou, and Qi Zhang. "FCAN–XGBoost: A Novel Hybrid Model for EEG Emotion Recognition." Sensors 23, no. 12 (June 17, 2023): 5680. http://dx.doi.org/10.3390/s23125680.

Abstract:
In recent years, artificial intelligence (AI) technology has promoted the development of electroencephalogram (EEG) emotion recognition. However, existing methods often overlook the computational cost of EEG emotion recognition, and there is still room for improvement in the accuracy of EEG emotion recognition. In this study, we propose a novel EEG emotion recognition algorithm called FCAN–XGBoost, which is a fusion of two algorithms, FCAN and XGBoost. The FCAN module is a feature attention network (FANet) that we have proposed for the first time, which processes the differential entropy (DE) and power spectral density (PSD) features extracted from the four frequency bands of the EEG signal and performs feature fusion and deep feature extraction. Finally, the deep features are fed into the eXtreme Gradient Boosting (XGBoost) algorithm to classify the four emotions. We evaluated the proposed method on the DEAP and DREAMER datasets and achieved a four-category emotion recognition accuracy of 95.26% and 94.05%, respectively. Additionally, our proposed method reduces the computational cost of EEG emotion recognition by at least 75.45% for computation time and 67.51% for memory occupation. The performance of FCAN–XGBoost outperforms the state-of-the-art four-category model and reduces computational costs without losing classification performance compared with other models.