Academic literature on the topic "Optimization, Forecasting, Meta Learning, Model Selection"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Optimization, Forecasting, Meta Learning, Model Selection".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Optimization, Forecasting, Meta Learning, Model Selection"

1

Thi Kieu Tran, Trang, Taesam Lee, Ju-Young Shin, Jong-Suk Kim, and Mohamad Kamruzzaman. "Deep Learning-Based Maximum Temperature Forecasting Assisted with Meta-Learning for Hyperparameter Optimization". Atmosphere 11, no. 5 (May 10, 2020): 487. http://dx.doi.org/10.3390/atmos11050487.

Abstract
Time series forecasting of meteorological variables such as daily temperature has recently drawn considerable attention from researchers to address the limitations of traditional forecasting models. However, middle-range (e.g., 5–20 days) forecasting is an extremely challenging task, as it is difficult to obtain reliable results from a dynamical weather model. Moreover, developing and selecting an accurate time-series prediction model is challenging in itself, because it involves training various distinct models to find the best among them, and selecting an optimum topology for the chosen model is important too. The accurate forecasting of maximum temperature plays a vital role in human life as well as in many sectors such as agriculture and industry. An increase in temperature will worsen the highland urban heat, especially in summer, and have a significant influence on people's health. We applied meta-learning principles to optimize the deep learning network structure for hyperparameter optimization. In particular, a genetic algorithm (GA) for meta-learning was used to select the optimum architecture for the network. The dataset was used to train and test three different models, namely the artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM). Our results demonstrate that the hybrid model of an LSTM network and GA outperforms the other models for long lead-time forecasting. Specifically, LSTM forecasts are superior to RNN and ANN forecasts 15 days ahead in summer, with a root mean square error (RMSE) of 2.719 °C.
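
The core mechanism described here, a genetic algorithm searching over network hyperparameters, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: the search space, the GA settings, and especially the fitness function, which in the paper would be the validation RMSE of a trained LSTM rather than the toy score used here.

```python
import random

# Illustrative LSTM hyperparameter search space (not the paper's).
SPACE = {
    "units":  [16, 32, 64, 128],
    "layers": [1, 2, 3],
    "lr":     [1e-4, 1e-3, 1e-2],
    "window": [5, 10, 15, 20],
}

def fitness(cfg):
    # Stand-in for "train the LSTM with cfg and return validation RMSE";
    # lower is better.
    return abs(cfg["units"] - 64) / 64 + abs(cfg["lr"] - 1e-3) + 0.05 * cfg["layers"]

def random_cfg():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in cfg.items()}

def ga(pop_size=20, generations=15):
    pop = [random_cfg() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                  # best (lowest RMSE) first
        parents = pop[: pop_size // 2]         # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

print(ga())
```
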
2

Samuel, Omaji, Fahad A. Alzahrani, Raja Jalees Ul Hussen Khan, Hassan Farooq, Muhammad Shafiq, Muhammad Khalil Afzal, and Nadeem Javaid. "Towards Modified Entropy Mutual Information Feature Selection to Forecast Medium-Term Load Using a Deep Learning Model in Smart Homes". Entropy 22, no. 1 (January 4, 2020): 68. http://dx.doi.org/10.3390/e22010068.

Abstract
Over the last decades, load forecasting has been used by power companies to balance energy demand and supply. Among the several load forecasting methods, medium-term load forecasting is necessary for grid maintenance planning, the setting of electricity prices, and harmonizing energy sharing arrangements. Forecasting the month-ahead electrical loads provides the information required for the interchange of energy among power companies. For accurate load forecasting, this paper proposes a model for medium-term load forecasting that uses hourly electrical load and temperature data to predict month-ahead hourly electrical loads. For data preprocessing, modified entropy mutual information-based feature selection is used; it eliminates redundant and irrelevant features from the data. We employ the conditional restricted Boltzmann machine (CRBM) for the load forecasting. A meta-heuristic optimization algorithm, Jaya, is used to improve the CRBM's accuracy and convergence. In addition, the consumers' dynamic consumption behaviors are investigated using a discrete-time Markov chain, and an adaptive k-means algorithm is used to group their behaviors into clusters. We evaluated the proposed model using the GEFCom2012 US utility dataset. Simulation results confirm that the proposed model achieves better accuracy, faster convergence, and lower execution time compared to other existing models in the literature.
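
A mutual-information feature filter of the general kind used here fits in a few lines. This is a simplified relevance-redundancy criterion on synthetic data, not the paper's modified-entropy formulation:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))     # stand-in for hourly load/temperature features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

relevance = mutual_info_regression(X, y)     # MI of each feature with the target
order = np.argsort(relevance)[::-1]

selected = []
for j in order:
    # Greedy redundancy check: skip a feature whose MI with an already
    # selected feature exceeds its MI with the target.
    redundant = any(
        mutual_info_regression(X[:, [k]], X[:, j])[0] > relevance[j]
        for k in selected
    )
    if not redundant:
        selected.append(j)

print("selected feature indices:", selected)
```
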
3

Ahmad, Waqas, Nasir Ayub, Tariq Ali, Muhammad Irfan, Muhammad Awais, Muhammad Shiraz, and Adam Glowacz. "Towards Short Term Electricity Load Forecasting Using Improved Support Vector Machine and Extreme Learning Machine". Energies 13, no. 11 (June 5, 2020): 2907. http://dx.doi.org/10.3390/en13112907.

Abstract
Forecasting the electricity load provides its future trends, consumption patterns, and usage. There is no proper strategy to monitor energy consumption and generation, and there is high variation between them; many strategies are used to overcome this problem. The correct selection of parameter values for a classifier is still an issue. Therefore, an optimization algorithm is applied with deep learning and machine learning techniques to select optimized values for the classifier's hyperparameters. In this paper, a novel deep learning-based method is implemented for electricity load forecasting. A three-step model is implemented, including feature selection using a hybrid feature selector (XGBoost and decision tree), redundancy removal using a feature extraction technique (Recursive Feature Elimination), and classification/forecasting using an improved Support Vector Machine (SVM) and Extreme Learning Machine (ELM). The hyperparameters of the ELM are tuned with a meta-heuristic algorithm, i.e., the Genetic Algorithm (GA), and the hyperparameters of the SVM are tuned with the Grid Search Algorithm. The simulation results, shown in graphs and in tabular form, clearly show that the improved methods outperform State Of The Art (SOTA) methods in terms of accuracy and performance. The forecasting accuracy of the Extreme Learning Machine based Genetic Algorithm (ELM-GA) and Support Vector Machine based Grid Search (SVM-GS) is 96.3% and 93.25%, respectively. The accuracy of the improved techniques, i.e., ELM-GA and SVM-GS, is 10% and 7% higher, respectively, than the SOTA techniques.
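
Of the two tuning strategies named, grid search is the simpler to sketch. Below is a minimal scikit-learn version on synthetic data; the parameter grid and the time-series cross-validation split are assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                 # stand-in for selected load features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=300)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid,
                      cv=TimeSeriesSplit(n_splits=3),   # respects temporal order
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_, "CV RMSE:", -search.best_score_)
```
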
4

Ayub, Nasir, Muhammad Irfan, Muhammad Awais, Usman Ali, Tariq Ali, Mohammed Hamdi, Abdullah Alghamdi, and Fazal Muhammad. "Big Data Analytics for Short and Medium-Term Electricity Load Forecasting Using an AI Techniques Ensembler". Energies 13, no. 19 (October 5, 2020): 5193. http://dx.doi.org/10.3390/en13195193.

Abstract
Electrical load forecasting provides knowledge about the future consumption and generation of electricity. There is a high level of fluctuation between energy generation and consumption: sometimes the energy demand of the consumer becomes higher than the energy already generated, and vice versa. Electricity load forecasting provides a monitoring framework for future energy generation and consumption, and for maintaining a balance between them. In this paper, we propose a framework in which deep learning and supervised machine learning techniques are implemented for electricity-load forecasting. A three-step model is proposed, which includes feature selection, extraction, and classification. A hybrid of Random Forest (RF) and Extreme Gradient Boosting (XGB) is used to calculate feature importance; the averaged feature importance of the hybrid technique selects the most relevant, highest-importance features in the feature selection step. The Recursive Feature Elimination (RFE) method is used to eliminate irrelevant features in the feature extraction step. The load forecasting is performed with Support Vector Machines (SVM) and a hybrid of Gated Recurrent Units (GRU) and Convolutional Neural Networks (CNN). The meta-heuristic algorithms Grey Wolf Optimization (GWO) and Earth Worm Optimization (EWO) are applied to tune the hyper-parameters of the SVM and CNN-GRU, respectively. The accuracy of the enhanced techniques CNN-GRU-EWO and SVM-GWO is 96.33% and 90.67%, respectively, and they perform 7% and 3% better than the State-Of-The-Art (SOTA). A final comparison with SOTA techniques shows that the proposed techniques yield the lowest error rates and highest accuracy rates.
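
The hybrid importance-ranking step can be illustrated compactly. In this sketch, scikit-learn's GradientBoostingRegressor stands in for XGBoost to avoid an extra dependency, and the dataset and the number of retained features are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=10, n_informative=4, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
gb = GradientBoostingRegressor(random_state=0).fit(X, y)   # stand-in for XGB

# Average the two normalized importance vectors and keep the top features.
importance = (rf.feature_importances_ + gb.feature_importances_) / 2
top = np.argsort(importance)[::-1][:4]
print("selected features:", sorted(top.tolist()))
```
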
5

Li, Yiyan, Si Zhang, Rongxing Hu, and Ning Lu. "A meta-learning based distribution system load forecasting model selection framework". Applied Energy 294 (July 2021): 116991. http://dx.doi.org/10.1016/j.apenergy.2021.116991.

6

El-kenawy, El-Sayed M., Seyedali Mirjalili, Nima Khodadadi, Abdelaziz A. Abdelhamid, Marwa M. Eid, M. El-Said, and Abdelhameed Ibrahim. "Feature selection in wind speed forecasting systems based on meta-heuristic optimization". PLOS ONE 18, no. 2 (February 7, 2023): e0278491. http://dx.doi.org/10.1371/journal.pone.0278491.

Abstract
Technology for anticipating wind speed can improve the safety and stability of power networks with heavy wind penetration. Due to the unpredictability and instability of the wind, it is challenging to accurately forecast wind power and speed. Several approaches have been developed to improve this accuracy based on processing time-series data. This work proposes a method for predicting wind speed with high accuracy based on a novel weighted ensemble model. The weight values in the proposed model are optimized using an adaptive dynamic grey wolf-dipper throated optimization (ADGWDTO) algorithm. The original GWO algorithm is redesigned to emulate dynamic group-based cooperation to address the difficulty of establishing a balance between exploration and exploitation. The quick bowing movements and white breast that distinguish the dipper-throated bird's hunting method are employed to improve the proposed algorithm's exploration capability. The proposed ADGWDTO algorithm optimizes the hyperparameters of the multi-layer perceptron (MLP), K-nearest regressor (KNR), and Long Short-Term Memory (LSTM) regression models. A dataset from Kaggle entitled Global Energy Forecasting Competition 2012 is employed to assess the proposed algorithm. The findings confirm that the proposed ADGWDTO algorithm outperforms the literature's state-of-the-art wind speed forecasting algorithms. The proposed binary ADGWDTO algorithm achieved an average fitness of 0.9209 with a standard deviation of 0.7432 for feature selection, and the proposed weighted optimized ensemble model (Ensemble using ADGWDTO) achieved a root mean square error of 0.0035 compared to state-of-the-art algorithms. The proposed algorithm's stability and robustness are confirmed by statistical analysis with several tests, such as one-way analysis of variance (ANOVA) and Wilcoxon's rank-sum.
7

Yang, Yi, Wei Liu, Tingting Zeng, Linhan Guo, Yong Qin, and Xue Wang. "An Improved Stacking Model for Equipment Spare Parts Demand Forecasting Based on Scenario Analysis". Scientific Programming 2022 (June 14, 2022): 1–15. http://dx.doi.org/10.1155/2022/5415702.

Abstract
The purpose of spare parts management is to maximize the system's availability and minimize economic costs. The cost-availability trade-off leads to the problem of spare parts demand prediction: accurate and reasonable demand forecasting can realize the balance between cost and availability. This paper therefore focuses on spare parts management during the equipment's normal operation phase and tries to forecast the demand for spare parts in a specific inspection and replacement cycle. Firstly, the equipment operation and support scenarios are analyzed to obtain the supportability data related to spare parts requirements. Then, drawing on the idea of ensemble learning, a new feature selection method is designed, which can overcome the limitations of a single feature selection method. In addition, an improved stacking model is proposed to predict the demand for spare parts. In the traditional stacking model, there are two levels of learning, base-learning and meta-learning, in which the outputs of the base learners are taken as the input of the meta-learner. The proposed model, however, brings the initial features together with the output of the base learner layer as the input of the meta-learner layer. Experiments show that the performance of the improved stacking model is better than the base learners and the traditional stacking model on the same data set.
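
The modification described, feeding the meta-learner the original features alongside the base learners' outputs, corresponds to the passthrough option in scikit-learn's stacking implementation. A minimal sketch on synthetic data follows; the particular base learners and meta-learner are illustrative choices, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))        # stand-in for supportability features
y = np.abs(2 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.2, size=400))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge(),
    passthrough=True,  # meta-learner sees base outputs AND the initial features
)
stack.fit(X_tr, y_tr)
print("R^2 on held-out data:", stack.score(X_te, y_te))
```
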
8

Cawood, Pieter, and Terence Van Zyl. "Evaluating State-of-the-Art, Forecasting Ensembles and Meta-Learning Strategies for Model Fusion". Forecasting 4, no. 3 (August 18, 2022): 732–51. http://dx.doi.org/10.3390/forecast4030040.

Abstract
The techniques of hybridisation and ensemble learning are popular model fusion techniques for improving the predictive power of forecasting methods. With limited research that instigates combining these two promising approaches, this paper focuses on the utility of the Exponential Smoothing-Recurrent Neural Network (ES-RNN) in the pool of base learners for different ensembles. We compare against some state-of-the-art ensembling techniques and arithmetic model averaging as a benchmark. We experiment with the M4 forecasting dataset of 100,000 time-series, and the results show that the Feature-Based FORecast Model Averaging (FFORMA), on average, is the best technique for late data fusion with the ES-RNN. However, considering the M4’s Daily subset of data, stacking was the only successful ensemble at dealing with the case where all base learner performances were similar. Our experimental results indicate that we attain state-of-the-art forecasting results compared to Neural Basis Expansion Analysis (N-BEATS) as a benchmark. We conclude that model averaging is a more robust ensembling technique than model selection and stacking strategies. Further, the results show that gradient boosting is superior for implementing ensemble learning strategies.
9

Hafeez, Ghulam, Khurram Saleem Alimgeer, Zahid Wadud, Zeeshan Shafiq, Mohammad Usman Ali Khan, Imran Khan, Farrukh Aslam Khan, and Abdelouahid Derhab. "A Novel Accurate and Fast Converging Deep Learning-Based Model for Electrical Energy Consumption Forecasting in a Smart Grid". Energies 13, no. 9 (May 3, 2020): 2244. http://dx.doi.org/10.3390/en13092244.

Abstract
Energy consumption forecasting is of prime importance for the restructured environment of energy management in the electricity market. Accurate energy consumption forecasting is essential for efficient energy management in the smart grid (SG); however, the energy consumption pattern is non-linear with a high level of uncertainty and volatility. Forecasting such complex patterns requires accurate and fast forecasting models. In this paper, a novel hybrid electrical energy consumption forecasting model is proposed based on a deep learning model known as the factored conditional restricted Boltzmann machine (FCRBM). The deep learning-based FCRBM model uses a rectified linear unit (ReLU) activation function and a multivariate autoregressive technique for network training. The proposed model predicts future electrical energy consumption for efficient energy management in the SG. It is a novel hybrid comprising four modules: (i) a data processing and feature selection module, (ii) a deep learning-based FCRBM forecasting module, (iii) a genetic wind driven optimization (GWDO) algorithm-based optimization module, and (iv) a utilization module. The proposed hybrid model, called FS-FCRBM-GWDO, is tested and evaluated on real power grid data of the USA in terms of four performance metrics: mean absolute percentage deviation (MAPD), variance, correlation coefficient, and convergence rate. Simulation results validate that the proposed hybrid FS-FCRBM-GWDO model outperforms existing models, such as the accurate fast converging short-term load forecasting (AFC-STLF) model, the mutual information-modified enhanced differential evolution algorithm-artificial neural network (MI-mEDE-ANN) model, the feature selection-ANN (FS-ANN) model, and the Bi-level model, in terms of forecast accuracy and convergence rate.
10

Dokur, Emrah, Cihan Karakuzu, Uğur Yüzgeç, and Mehmet Kurban. "Using optimal choice of parameters for meta-extreme learning machine method in wind energy application". COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 40, no. 3 (February 8, 2021): 390–401. http://dx.doi.org/10.1108/compel-07-2020-0246.

Abstract
Purpose: This paper deals with the optimal choice of structural parameters for a novel extreme learning machine (ELM) architecture based on an ensemble of classic ELMs, called Meta-ELM, by using a forecasting process.
Design/methodology/approach: The modelling performance of the Meta-ELM architecture varies depending on the network parameters it contains, so the choice of Meta-ELM parameters is important for the accuracy of the models. For this reason, the optimal choice of Meta-ELM parameters is investigated on the problem of wind speed forecasting. Hourly wind-speed data obtained from the Bilecik and Bozcaada stations in Turkey are used. Different numbers of ELM groups (M) and nodes (Nh) are analysed to determine the best modelling performance of the Meta-ELM. The optimal Meta-ELM architecture's forecasting results are also compared with four different learning algorithms and a hybrid meta-heuristic approach. Finally, a linear model based on the correlation between the parameters is given in three dimensions (3D) and calculated.
Findings: The analysis shows better performance for Meta-ELM parameters of M = 15-20 and Nh = 5-10. Considering the performance metrics, the Meta-ELM model provides the best results in all regions, while the Levenberg-Marquardt feed-forward neural network and the adaptive neuro-fuzzy inference system with particle swarm optimization show competitive results for the forecasting process. In addition, the Meta-ELM provides much better results in terms of elapsed time.
Originality/value: The original contribution of the study is to investigate the determination of Meta-ELM parameters based on a forecasting process.
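
A classic ELM, the building block of the Meta-ELM, is compact enough to sketch directly: random hidden weights, then a least-squares solve for the output weights. The simple averaging of M ELMs below stands in for the paper's meta-level combiner, and the data are synthetic; only the M and Nh values echo the ranges the study found effective:

```python
import numpy as np

def fit_elm(X, y, n_hidden, rng):
    # Classic ELM: random hidden layer, least-squares output weights.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                       # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(300, 4))            # stand-in for lagged wind-speed inputs
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.05, size=300)

M, Nh = 15, 10    # within the ranges reported as effective (M = 15-20, Nh = 5-10)
models = [fit_elm(X, y, Nh, rng) for _ in range(M)]
pred = np.mean([predict_elm(X, W, b, beta) for W, b, beta in models], axis=0)
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```
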

Theses on the topic "Optimization, Forecasting, Meta Learning, Model Selection"

1

Rakotoarison, Herilalaina. "Some contributions to AutoML: hyper-parameter optimization and meta-learning". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG044.

Abstract
This thesis proposes three main contributions to advance the state of the art of AutoML approaches. They are divided into two research directions: optimization (first contribution) and meta-learning (second and third contributions). The first contribution is a hybrid optimization algorithm, dubbed Mosaic, leveraging Monte-Carlo Tree Search and Bayesian Optimization to address the selection of algorithms and the tuning of hyper-parameters, respectively. The empirical assessment of the proposed approach shows its merits compared to the Auto-sklearn and TPOT AutoML systems on OpenML 100. The second contribution introduces a novel neural network architecture, termed Dida, to learn a good representation of datasets (i.e., meta-features) from scratch while enforcing invariance w.r.t. feature and row permutations. Two proof-of-concept tasks (patch classification and performance prediction) are considered. The proposed approach yields superior empirical performance compared to Dataset2Vec and DSS on both tasks. The third contribution addresses the limitation of Dida in handling standard dataset benchmarks. The proposed approach, called Metabu, relies on hand-crafted meta-features. The novelty of Metabu is two-fold: i) defining an "oracle" topology of datasets based on top-performing hyper-parameters; ii) leveraging an Optimal Transport approach to align a mapping of the hand-crafted meta-features with the oracle topology. The empirical results suggest that the Metabu meta-features outperform the baseline hand-crafted meta-features on three different tasks (assessing meta-feature-based topology, recommending hyper-parameters w.r.t. topology, and warm-starting optimization algorithms).
2

Villanova, Laura. "Response surface optimization for high dimensional systems with multiple responses". Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3421551.

Abstract
This thesis is about the optimization of physical systems (or processes) characterized by a high number of input variables (e.g., operations, machines, methods, people, and materials) and multiple responses (output characteristics). These systems are of interest because they are common scenarios in real-world studies and they present many challenges for practitioners in a wide range of applicative fields (e.g., science, engineering). The first objective of the study was to develop a model-based approach to support practitioners in planning the experiments and optimizing the system responses. Of interest was the creation of a methodology capable of providing feedback to the practitioner while taking into account his/her point of view. The second objective was to identify a procedure to select the most promising model, to be combined with the model-based approach, on the basis of the features of the applicative problem of interest. To cope with the first objective, experimental design, modeling, and optimization techniques were combined in a sequential procedure that interacts with the practitioner at each stage. The developed approach has roots in nonparametric and semiparametric response surface methodology (NPRSM), design and analysis of computer experiments (DACE), multi-objective optimization, and swarm intelligence computation. It consists of augmenting an initial experimental design (set of experiments) by sequentially identifying additional design points (experiments) with expected improved performance. The identification of new experimental points is guided by a particle swarm optimization (PSO) algorithm that minimizes a distance-based function. In particular, the distance between the measured response values and a target is minimized. The target is composed of ideal values of the responses and is selected using a multivariate adaptive regression splines (MARS) model, which is updated as soon as new experiments are implemented and the corresponding response values are measured. The developed approach resulted in a sequential procedure named the Evolutionary Model-based Multiresponse Approach (EMMA). When tested on a set of benchmark functions, EMMA was shown to overcome the potential problem of premature convergence to a local optimum and to correctly identify the true global optimum. Furthermore, EMMA is distribution-free and allows the automatic selection of the target, in contrast to the trial-and-error procedures usually employed for this purpose. Finally, EMMA was applied to a real-world chemical problem devoted to the functionalization of a substrate for possible biomedical studies. With respect to the method typically employed by the scientists, improvements of the responses of up to 380% were detected. The proposed approach was thus shown to hold much promise for the optimization of multiresponse high dimensional systems. Moreover, EMMA turned out to be a valuable methodology for industrial research. Indeed, by means of a preliminary simulation study, it gave an initial estimate of the number of experiments and the time necessary to achieve a specific goal, thus providing an indication of the budget required for the research. To deal with the second objective of the research, a meta-learning approach for model selection was adopted. Interest in model selection strategies arose from questions such as 'Is MARS the best model we could have used?' and 'Given an applicative problem, how can we select the most promising modeling technique to be combined with EMMA?'.
Indeed, it is now generally accepted that no single model can outperform all other models over all possible regression problems. Furthermore, the model performance '... may depend on the detailed nature of the problem at hand in terms of the number of observations, the number of response variables, their correlation structure, signal-to-noise ratio, collinearity of the predictor variables, etc.' (Breiman & Friedman 1997). The meta-learning approach was adopted to select the most promising model on the basis of measurable characteristics of the investigated problem. The basic idea was to study a set of multiresponse regression models and evaluate their performance on a broad class of problems characterized by various degrees of complexity. By matching the problem characteristics and the models' performance, the aim was to discover the conditions under which a model outperforms others, as well as to acquire some rules to be used as guidance when faced with a new application. The procedures to simulate the datasets were developed, the metrics to measure the problem characteristics were identified, and the R code to evaluate the models' performance was generated. The foundations for a large computational study were therefore established. Implementation of such a study is part of ongoing research, and future work will aim to examine the obtained empirical rules from a theoretical perspective with a view to confirming their validity, as well as generating insights into each model's behaviour.
3

Gangi, Leonardo Di. "Optimization and machine learning in support of statistical modeling". Doctoral thesis, 2022. http://hdl.handle.net/2158/1258572.

Abstract
A crucial step in data analysis is to formulate the most appropriate model for reliable inference or prediction. Both optimization and machine learning assist the modeler in this task. Whenever inference is the focus, a linear regression model represents a suitable tool for an initial understanding of the reality that the model aims to describe. An automatic and objective procedure to select the predictors of the regression model is fundamental to achieve this target. On this matter, as a first contribution, we propose an algorithm, based on Mixed Integer Optimization (MIO), for the best subset selection problem in the Gaussian linear regression scenario. The algorithm, with simple modifications, is also suitable for the order selection problem in Gaussian ARMA models. The proposed approach has the advantage of considering both model selection and parameter estimation as a single optimization problem. The core of the algorithm is based on a two-step Gauss-Seidel decomposition scheme which favors the computational efficiency of the procedure. The performed experiments show that the algorithm is fast and reliable, although not guaranteed to deliver the optimal solution. As a second contribution, we consider the maximum likelihood estimation problem of causal and invertible Gaussian ARMA models of a given order (p,q). We highlight the convenience of fitting these models directly in the space of partial autocorrelations (autoregressive component) and in the space of partial moving average coefficients (moving average component) without having to exploit the additional Jones reparametrization. In our method, causality and invertibility constraints are handled by formulating the estimation problem as a bound constrained optimization problem. Our approach is compared to the classical estimation method based on the Jones reparametrization, which leads to an unconstrained formulation of the problem. The convenience of our approach is assessed by the results of several computational experiments, which reveal a significant reduction of fitting times and an improvement in terms of numerical stability. We also propose a regularization term in the model and show how this addition improves the out-of-sample quality of the fitted model. As a final contribution, the problem of forecasting univariate temporal data is considered. When the purpose of the model is prediction, combining forecasting models is a well-known successful strategy leading to an improvement of the accuracy of prediction. Usually, the knowledge of experts is needed to combine forecasting models in an appropriate way. However, especially in real-time applications, the need for automatic procedures, which replace the knowledge of experts, is evident. By learning from past forecasting episodes, a meta-learning model can be properly trained to learn the combination task. On this matter, we introduce two meta-learning systems which recommend a weighting schema for the combination of forecasting models based on time series features. We focus on sparse convex combinations. Zero-weighted forecasting models do not contribute to the computation of the final forecast and their fit can be avoided. Therefore, as the degree of sparsity increases, the computational time for producing final forecasts decreases. The methodology is tested on the M4 competition dataset. The obtained results highlight that it is possible to significantly reduce the number of models in the combination without affecting the quality of prediction.
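
The sparse convex combination idea in the final contribution can be sketched as a small constrained optimization. The loss, penalty, and thresholding below are illustrative constructions, not the thesis's method; on the simplex a plain L1 penalty is constant, so a concave penalty is used to push small weights toward zero:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
y = rng.normal(size=50)                          # validation-period actuals
F = np.stack([y + rng.normal(scale=s, size=50)   # forecasts of 4 candidate models
              for s in (0.2, 0.4, 0.8, 1.6)], axis=1)

lam = 0.05                                       # sparsity pressure (illustrative)

def loss(w):
    # Squared error of the combined forecast plus a concave sparsity penalty.
    return np.mean((F @ w - y) ** 2) + lam * np.sum(np.sqrt(w + 1e-12))

cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1}]   # weights sum to one
res = minimize(loss, x0=np.full(4, 0.25), bounds=[(0, 1)] * 4,
               constraints=cons, method="SLSQP")
w = np.where(res.x < 1e-3, 0.0, res.x)           # snap tiny weights to exact zero
print("combination weights:", np.round(w / w.sum(), 3))
```

Zero-weighted models then never need to be fitted at forecast time, which is where the computational saving comes from.
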

Book chapters on the topic "Optimization, Forecasting, Meta Learning, Model Selection"

1

Aburasain, R. Y., E. A. Edirisinghe, and M. Y. Zamim. "A Coarse-to-Fine Multi-class Object Detection in Drone Images Using Convolutional Neural Networks". In Digital Interaction and Machine Intelligence, 12–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11432-8_2.

Abstract
Multi-class object detection has evolved rapidly in the last few years, particularly with the rise of learning based on deep Convolutional Neural Networks (CNNs). However, the successful approaches rely on high-resolution ground-level images and extremely large volumes of data, as in the COCO and VOC datasets. On the other hand, the availability of drones has increased in the last few years, and hence several new applications have been established. One of these is understanding drone footage by analysing, detecting, and recognizing different objects in the covered area. In this study, a collection of large images captured by a drone flying at a fixed altitude in a desert area located within the United Arab Emirates (UAE) is utilised for training and evaluating the CNN networks investigated. Three state-of-the-art CNN architectures, namely SSD-500 with the VGGNet-16 meta-architecture, SSD-500 with the ResNet meta-architecture, and YOLO-V3 with Darknet-53, are optimally configured, re-trained, tested, and evaluated for the detection of three different classes of objects in the captured footage: palm trees, groups of animals/cattle, and animal sheds in farms. Our preliminary experiments revealed that YOLO-V3 outperformed SSD-500 with VGGNet-16 by a large margin and showed a considerable improvement compared to SSD-500 with ResNet. It was therefore selected for further investigation, aiming to propose an efficient coarse-to-fine object detection model for multi-class object detection in drone images. To this end, the impact of changing the activation function of the hidden units and the pooling type in the pooling layer is investigated in detail. In addition, the impact of tuning the learning rate and the selection of the most effective optimization method for general hyper-parameter tuning are also investigated. The results demonstrate that the multi-class object detector developed has a precision of 0.99, a recall of 0.94, and an F-score of 0.96, proving the efficiency of the network developed.
2

Deo, Ravinesh C., Sujan Ghimire, Nathan J. Downs, and Nawin Raj. "Optimization of Windspeed Prediction Using an Artificial Neural Network Compared With a Genetic Programming Model". In Research Anthology on Multi-Industry Uses of Genetic Programming and Algorithms, 116–47. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8048-6.ch007.

Abstract
The precise prediction of windspeed is essential in order to improve and optimize wind power prediction. However, due to the sporadic and inherent complexity of weather parameters, the prediction of windspeed data using different patterns is difficult. Machine learning (ML) is a powerful tool to deal with uncertainty and has been widely discussed and applied in renewable energy forecasting. In this chapter, the authors present and compare an artificial neural network (ANN) and a genetic programming (GP) model as tools to predict the windspeed of 15 locations in Queensland, Australia. After performing feature selection using neighborhood component analysis (NCA) on 11 different meteorological parameters, seven of the most important predictor variables were chosen for 85 Queensland locations, 60 of which were used for training the model, 10 for model validation, and 15 for model testing. For all 15 target sites, the testing performance of the ANN was significantly superior to the GP model.
3

Deo, Ravinesh C., Sujan Ghimire, Nathan J. Downs, and Nawin Raj. "Optimization of Windspeed Prediction Using an Artificial Neural Network Compared With a Genetic Programming Model". In Advances in Computational Intelligence and Robotics, 328–59. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-4766-2.ch015.

Abstract
The precise prediction of windspeed is essential in order to improve and optimize wind power prediction. However, due to the sporadic and inherent complexity of weather parameters, the prediction of windspeed data using different patterns is difficult. Machine learning (ML) is a powerful tool to deal with uncertainty and has been widely discussed and applied in renewable energy forecasting. In this chapter, the authors present and compare an artificial neural network (ANN) and a genetic programming (GP) model as tools to predict the windspeed of 15 locations in Queensland, Australia. After performing feature selection using neighborhood component analysis (NCA) on 11 different meteorological parameters, seven of the most important predictor variables were chosen for 85 Queensland locations, 60 of which were used for training the model, 10 for model validation, and 15 for model testing. For all 15 target sites, the testing performance of the ANN was significantly superior to the GP model.
4

Kumar, Akshi, Arunima Jaiswal, Shikhar Garg, Shobhit Verma, and Siddhant Kumar. "Sentiment Analysis Using Cuckoo Search for Optimized Feature Selection on Kaggle Tweets". In Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, 1203–18. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-6303-1.ch062.

Abstract
Selecting the optimal set of features to determine sentiment in online textual content is imperative for superior classification results. Optimal feature selection is a computationally hard task and fosters the need for devising novel techniques to improve classifier performance. In this work, the binary adaptation of cuckoo search (a nature-inspired, meta-heuristic algorithm), known as Binary Cuckoo Search, is proposed for optimum feature selection in sentiment analysis of textual online content. Baseline supervised learning techniques such as SVM were first implemented with the traditional tf-idf model and then with the novel feature optimization model. A benchmark Kaggle dataset, which includes a collection of tweets, is considered to report the results. The results are assessed on the basis of performance accuracy. Empirical analysis validates that the proposed implementation of binary cuckoo search for feature selection optimization in a sentiment analysis task outperforms the elementary supervised algorithms based on the conventional tf-idf score.
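
A heavily simplified stand-in for the search loop can convey the idea: candidate solutions are binary feature masks, new candidates are generated by perturbing existing ones, and a fraction of the worst nests is abandoned each iteration. The real Binary Cuckoo Search uses Lévy flights passed through a transfer function rather than the plain bit flips below, and the classifier and data here are placeholders rather than the tweet/tf-idf setup of the chapter:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_nests, n_iter, p_a = 10, 30, 0.25     # p_a: fraction of nests abandoned per step
nests = rng.random((n_nests, X.shape[1])) < 0.5
scores = np.array([fitness(m) for m in nests])

for _ in range(n_iter):
    for i in range(n_nests):
        new = nests[i].copy()           # "Levy flight" reduced to a few bit flips
        flips = rng.integers(0, X.shape[1], size=int(rng.integers(1, 4)))
        new[flips] = ~new[flips]
        s = fitness(new)
        if s > scores[i]:
            nests[i], scores[i] = new, s
    worst = np.argsort(scores)[: int(p_a * n_nests)]   # abandon and rebuild
    for i in worst:
        nests[i] = rng.random(X.shape[1]) < 0.5
        scores[i] = fitness(nests[i])

best = nests[np.argmax(scores)]
print("selected features:", np.flatnonzero(best), "CV accuracy:", scores.max())
```
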

Conference papers on the topic "Optimization, Forecasting, Meta Learning, Model Selection"

1

Kuck, Mirko, Sven F. Crone, and Michael Freitag. "Meta-learning with neural networks and landmarking for forecasting model selection: an empirical evaluation of different feature sets applied to industry data". In 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016. http://dx.doi.org/10.1109/ijcnn.2016.7727376.

2

Timonov, Alexey Vasilievich, Rinat Alfredovich Khabibullin, Nikolay Sergeevich Gurbatov, Arturas Rimo Shabonas, and Alexey Vladimirovich Zhuchkov. "Automated Geosteering Optimization Using Machine Learning". In Abu Dhabi International Petroleum Exhibition & Conference. SPE, 2021. http://dx.doi.org/10.2118/207364-ms.

Abstract
Geosteering is an important area, and its quality determines the efficiency of formation drilling by horizontal wells, which directly affects the project NPV. This paper presents an automated geosteering optimization platform based on live well data. The platform implements online corrections of the geological model and forecasts well performance from the target reservoir. The system prepares recommendations for the best reservoir production interval and the direction for horizontal well placement based on reservoir performance analytics. This paper describes the stages of developing a comprehensive system using machine-learning methods, which allows multivariate calculations to refine and predict the geological model. Based on these calculations, a search for the optimal location of a horizontal well to maximize production is carried out. The approach takes into account many factors (specific features of the geological structure, the history of field development, well interference, etc.) and can offer optimum horizontal well placement options without performing full-scale or sector hydrodynamic simulation. Machine learning methods (based on decision trees and neural networks) and target function optimization methods are used for geological model refinement and forecasting, as well as for the selection of the optimum interval for well placement. As a result of this research, we have developed a complex system including modules for data verification and preprocessing, automatic inter-well correlation, optimization, and target interval selection. The system was tested while drilling for hydrocarbons in the Western Siberian fields, where the developed approach showed its efficiency.
3

Silva, Lucas Barth, Roberto Zanetti Freire, and Osíris Canciglieri Junior. "Spot Energy Price Forecasting Using Wavelet Transform and Extreme Learning Machine". In Congresso Brasileiro de Inteligência Computacional. SBIC, 2021. http://dx.doi.org/10.21528/cbic2021-62.

Abstract
Given the social importance of energy, there is a concern to promote the sustainable development of the sector. Aiming at this evolution, from the 1990s onwards, a wave of liberalization in the sector began to emerge in various parts of the world. These measures promoted an increase in the dynamism of commercial transactions and the transformation of electricity into a commodity. Consequently, futures, short-term, and spot markets were created. In this context, and due to the volatility of energy prices, forecasting monetary values has become strategic for traders. This work applies a computational intelligence model using the Wavelet Transform on input values and the Extreme Learning Machine algorithm for training and prediction (W-ELM). The macro parameters were optimized using the Particle Swarm Optimization algorithm, and for the selection of the input variables, a model based on Mutual Information (MI) was used. Finally, the methodology was compared with traditional methods: the Autoregressive Integrated Moving Average (ARIMA) and General Autoregressive Conditional Heteroskedasticity (GARCH) models. Results showed that W-ELM performed better for forecasting 1 to 4 weeks ahead when compared to ARIMA. When the GARCH model results were considered, the proposed method performed worse only for one-step-ahead forecasting.
4

Liu, Guoxiang, Xiongjun Wu, Veronika Vasylkivska, Chung Yan Shih, and Grant Bromhal. "Operations Coupled Virtual Learning for Reservoir Evaluation and Performance Analysis". In SPE Eastern Regional Meeting. SPE, 2022. http://dx.doi.org/10.2118/211883-ms.

Abstract
The quick and accurate evaluation of reservoir behaviors and responses is essential to achieve successful field development and operations. Physics-informed artificial intelligence/machine learning (AI/ML), an emerging technology for field development, benefits from both physics-based principles and AI/ML's learning capabilities. The capacitance and resistance model (CRM), based on the material balance principle, can provide rapid insights for optimal operations. Its flexible time-window selection and testing capability are especially useful for operation planning and development. Advanced AI/ML models developed for a virtual learning environment (VLE) can be coupled to extend and enhance the capability for reservoir evolution evaluation. The objective of this study is to synergize the CRM with the VLE to provide a comprehensive toolset for field operations and reservoir management. The proposed approach organically integrates the CRM with the VLE: after completing a rapid reservoir study, the CRM first performs rapid forecasting of the well responses and inter-well connectivity for any given injection situation. The forecasted results from the CRM are then supplied as inputs to the VLE, which utilizes its ML models to predict the corresponding three-dimensional distributions of key reservoir parameters, such as detailed pressure transients and fluid movement for the entire field. This information, together with the field data streams, can be used for decision-making by providing a holistic view of field operations and reservoir management regarding injection and production enhancement in a real-time fashion. A simulated reservoir test case based on the SACROC CO2 flooding dataset from West Texas was used to demonstrate the concept and workflow. The test case has shown that the CRM can accurately capture the variations of the production rates and bottom-hole pressures with injection and production plan changes. The responses obtained from the CRM enable the VLE to correctly predict the three-dimensional distributions of pressure and fluid saturation. The joint force of the CRM and the VLE enables them to capture the effects of the injection and production changes in the field. Capable of tuning the injection plan and production design and optimizing the reservoir response, this integrated toolset can also assist field design with optimal well location selection/placement as an extended benefit. As demonstrated by the preliminary results above, a comprehensive and integrated toolset that couples physics with AI/ML can provide dynamic and real-time decision support for field operations and optimization, de-risked operation support, enhanced oil recovery, and CO2 storage/monitoring design. Successful development of such a toolset makes it possible to integrate what-if scenarios and multiple realizations into the workflow for static and dynamic uncertainty quantification. The toolset shows value and potential for emerging "SMART" field operations and reservoir management, with a three to four orders of magnitude speedup.
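
For reference, the producer-based CRM from the literature (e.g., CRMP) couples material balance with a per-producer exponential decay. A common form, which may differ in detail from the variant used in this platform, is:

```latex
% Producer-based CRM rate equation: the rate q_j of producer j decays with time
% constant \tau_j and is driven by allocated injection f_{ij} i_i and by changes
% in bottomhole pressure p_{wf,j} through the productivity index J_j.
q_j(t_k) = q_j(t_{k-1})\, e^{-\Delta t_k/\tau_j}
  + \left(1 - e^{-\Delta t_k/\tau_j}\right)
    \left( \sum_{i} f_{ij}\, i_i(t_k) - J_j \tau_j \,\frac{\Delta p_{wf,j}}{\Delta t_k} \right)
```
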
5

Abbasi, Jassem, Jiuyu Zhao, Sameer Ahmed, Jianchao Cai, Pål Østebø Andersen, and Liang Jiao. "Machine Learning Assisted Prediction of Permeability of Tight Sandstones From Mercury Injection Capillary Pressure Tests". In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0032.

Abstract
Capillary pressure is an important parameter in both petrophysical and geological studies. It is a function of different porous media properties, in particular the pore structure of the rock. Mercury Injection Capillary Pressure (MICP) analysis is a consistent methodology for determining different petrophysical properties, including porosity and pore throat distribution. The matrix permeability depends on the pore size distribution but is not directly measured in MICP tests. In this work, we consider distinct parameters derived from MICP tests for the prediction of permeability by following a machine learning based approach. Firstly, a large set of MICP test results (246 samples) related to tight sandstones is gathered, with a permeability range of 0.001 to 70 millidarcy. After quality checking the dataset, different theoretical permeability models are tested on it and the results are analyzed. Different features related to the pore throat characteristics of the rock are also analyzed, and the best characteristics are selected as input variables for the machine learning model. The Support Vector Regression (SVR) approach with the Radial Basis Function (RBF) kernel is proposed for predicting rock permeability from MICP tests. Particle Swarm Optimization (PSO) is applied to optimize the model meta-parameters in the validation process to avoid over- or underfitting. The model is trained on a random selection of 80% of the samples, while the remaining points are used for testing. The analysis of the correlation between rock permeability and capillary pressure parameters showed that the pore throat radii corresponding to the saturation range 0.4-0.8 and the median capillary pressure values obtained from the capillary pressure curves are suitable input features for the SVM model. The porosity and the Winland equation were also considered as input features due to their acceptable correlation with rock permeability. The results showed that the implemented SVM-PSO model can acceptably predict the experimentally measured permeability values, with an R2 of over 0.88 for the training and testing datasets. This work presents an analysis of the relationship between capillary pressure curve characteristics and permeability on a large MICP dataset, especially focused on tight sandstone rocks. The analysis provided new statistical and physics-based features with the highest correlations with rock permeability, which helped significantly improve the SVM-PSO prediction results.
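
The regression core of the workflow is easy to sketch with scikit-learn: an RBF-kernel SVR scored by cross-validation, with a simple random search standing in for the paper's PSO meta-parameter tuning. The features and permeability values below are synthetic placeholders for the MICP-derived inputs:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(6)
# Stand-ins for MICP features: porosity, median Pc, a pore-throat radius, Winland term.
X = rng.lognormal(size=(200, 4))
log_k = 0.8 * np.log(X[:, 0]) - 0.5 * np.log(X[:, 1]) + rng.normal(scale=0.2, size=200)

best, best_score = None, -np.inf
for _ in range(50):   # random search standing in for PSO over (C, gamma, epsilon)
    params = {"C": 10 ** rng.uniform(-1, 3),
              "gamma": 10 ** rng.uniform(-3, 1),
              "epsilon": 10 ** rng.uniform(-3, 0)}
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", **params))
    score = cross_val_score(model, X, log_k, cv=5, scoring="r2").mean()
    if score > best_score:
        best, best_score = params, score

print(best, round(best_score, 3))
```

Predicting log-permeability rather than permeability itself is a common choice given the orders-of-magnitude spread (0.001 to 70 mD here).
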
6

Park, Junheung, Kyoung-Yun Kim, and Raj Sohmshetty. "A Prediction Modeling Framework: Toward Integration of Noisy Manufacturing Data and Product Design". In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46236.

Abstract
In many design and manufacturing applications, data inconsistency or noise is common. These data can be used to create opportunities and/or support critical decisions in many applications, for example, welding quality prediction for material selection and quality monitoring. Typical approaches to dealing with these data issues are to remove or alter the affected records before constructing any model or conducting any analysis. However, these approaches are limited, especially when each data point carries important value for extracting additional information about the nature of the given problem. In the literature, with the presence of noise in data, bootstrap aggregating (bagging) has shown an improvement in prediction accuracy. In order to achieve such an improvement, a bagging model has to be carefully constructed. The base learning algorithm, the number of base learners, and the parameters of the base learning algorithms are crucial design parameters in that respect. Evolutionary algorithms such as the genetic algorithm and particle swarm optimization have shown promising results in determining good parameters for different learning algorithms, such as the multilayer perceptron neural network and support vector regression. However, the computational cost of an evolutionary computation algorithm is usually high, as it requires a large number of candidate solution evaluations. This requirement increases even more when bagging is involved rather than a single learning algorithm. To reduce this high computational cost, a meta-modeling approach is introduced into particle swarm optimization. The meta-modeling approach reduces the number of fitness function evaluations in the particle swarm optimization process, and therefore the overall computational cost can be reduced. In this paper, we propose a prediction modeling framework whose aim is to construct a bagging model that improves prediction accuracy on noisy data. The proposed framework is tested on an artificially generated noisy dataset. The quality of the final solutions obtained by the proposed framework is reasonable compared to particle swarm optimization without meta-modeling. In addition, using the proposed framework, the largest improvement in computational time is about 42 percent.
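
The bagging baseline at the heart of the framework looks like this in scikit-learn; the base learner, its size, and the noisy synthetic target are illustrative choices, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(300, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=300)  # noisy target

# Bagged MLPs: each base learner fits a bootstrap resample, and the averaged
# prediction damps noise that a single network would tend to overfit.
bag = BaggingRegressor(MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),
                       n_estimators=10, random_state=0)
bag.fit(X, y)
print("R^2:", bag.score(X, y))
```
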
7

Curina, Francesco, Ajith Asokan, Leonardo Bori, Ali Qushchi Talat, Vladimir Mitu, and Hadi Mustapha. "A Case Study on the Use of Machine Learning and Data Analytics to Improve Rig Operational Efficiency and Equipment Performance". In IADC/SPE Asia Pacific Drilling Technology Conference and Exhibition. SPE, 2022. http://dx.doi.org/10.2118/209888-ms.

Abstract
Ensuring an efficient workflow on a drilling rig requires optimizing the equipment output and extending its working life. It is essential first to identify equipment behavior and usage and evaluate possible efficiency variations. This makes it possible to predict upcoming usage trends and propose preventive actions, such as adjustments to equipment working parameters, to improve output and efficiency. In this regard, machine learning and data analytics provide a clear advantage. This paper showcases a case study that makes use of machine learning to detect rig inefficiencies and optimize operations. A platform was implemented to first collect the rig data and then process it before sending it to be analysed. The rig used in this case study was connected to a platform that makes use of Internet of Things (IoT) protocols. Data coming from the rig were standardized and filtered to remove noise, redundancy, and outliers. Feature selection was used to highlight, from the data pool, the most significant parameters for forecasting and optimization. The resulting parameters were then sent to the machine learning model for training and testing. The processed data were then fed to a system, developed in-house, to extract additional information regarding equipment efficiency; this system tracks variations in equipment efficiency. The study focuses on the performance of an HPU powering a hydraulic hoisting rig which was showing low efficiency. IoT technology was used to collect live data from the field. The gathered datasets were cleaned, standardized, and divided into coherent batches ready for analysis. Machine learning models were used to evaluate how the workload would change with tweaks to working parameters. The study then analyzed the rig tripping speed and how it was connected to HPU performance. In evaluating tripping speed, attention was also given to small operational changes which could lead to improved performance. When combined, changes to both operating parameters and standard procedures can lead to improved efficiency and reduced invisible lost time. Implementing the results allowed the rig to be operated at a higher efficiency, thereby increasing the life of the equipment while keeping the load within design conditions. This ultimately resulted in a reduction in operational time and equipment failures, and hence a major decrease in rig downtime.
