Academic literature on the topic 'Variable sparsity kernel learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Variable sparsity kernel learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Variable sparsity kernel learning"

1

Chen, Jingxiang, Chong Zhang, Michael R. Kosorok, and Yufeng Liu. "Double sparsity kernel learning with automatic variable selection and data extraction." Statistics and Its Interface 11, no. 3 (2018): 401–20. http://dx.doi.org/10.4310/sii.2018.v11.n3.a1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huang, Yuan, and Shuangge Ma. "Discussion on “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 421–22. http://dx.doi.org/10.4310/sii.2018.v11.n3.a2.

3

Liu, Meimei, and Guang Cheng. "Discussion on “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 423–24. http://dx.doi.org/10.4310/sii.2018.v11.n3.a3.

4

Zhang, Hao Helen. "Discussion on “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 425–28. http://dx.doi.org/10.4310/sii.2018.v11.n3.a4.

5

Chen, Jingxiang, Chong Zhang, Michael R. Kosorok, and Yufeng Liu. "Rejoinder of “Double sparsity kernel learning with automatic variable selection and data extraction”." Statistics and Its Interface 11, no. 3 (2018): 429–31. http://dx.doi.org/10.4310/sii.2018.v11.n3.a5.

6

Wang, Shuangyue, and Ziyan Luo. "Sparse Support Tensor Machine with Scaled Kernel Functions." Mathematics 11, no. 13 (2023): 2829. http://dx.doi.org/10.3390/math11132829.

Abstract:
As one of the supervised tensor learning methods, the support tensor machine (STM) for tensorial data classification is receiving increasing attention in machine learning and related applications, including remote sensing imaging, video processing, fault diagnosis, etc. Existing STM approaches lack consideration for support tensors in terms of data reduction. To address this deficiency, we built a novel sparse STM model to control the number of support tensors in the binary classification of tensorial data. The sparsity is imposed on the dual variables in the context of the feature space, whic
7

Pan, Chao, Cheng Shi, Honglang Mu, Jie Li, and Xinbo Gao. "EEG-Based Emotion Recognition Using Logistic Regression with Gaussian Kernel and Laplacian Prior and Investigation of Critical Frequency Bands." Applied Sciences 10, no. 5 (2020): 1619. http://dx.doi.org/10.3390/app10051619.

Abstract:
Emotion plays a nuclear part in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has developed a lot due to the application of Brain-Computer Interface (BCI) and its effectiveness compared to body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition is still an unexplored problem. This paper introduced Logistic Regression (LR) with Gaussian kernel and Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the EEG data separability in the transformed
8

Koltchinskii, Vladimir, and Ming Yuan. "Sparsity in multiple kernel learning." Annals of Statistics 38, no. 6 (2010): 3660–95. http://dx.doi.org/10.1214/10-aos825.

9

Jiang, Zhengxiong, Yingsong Li, Xinqi Huang, and Zhan Jin. "A Sparsity-Aware Variable Kernel Width Proportionate Affine Projection Algorithm for Identifying Sparse Systems." Symmetry 11, no. 10 (2019): 1218. http://dx.doi.org/10.3390/sym11101218.

10

Yuan, Ying, Weiming Lu, Fei Wu, and Yueting Zhuang. "Multiple kernel learning with NOn-conVex group spArsity." Journal of Visual Communication and Image Representation 25, no. 7 (2014): 1616–24. http://dx.doi.org/10.1016/j.jvcir.2014.08.001.


Dissertations / Theses on the topic "Variable sparsity kernel learning"

1

Kolar, Mladen. "Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/229.

Abstract:
Extracting knowledge and providing insights into complex mechanisms underlying noisy high-dimensional data sets is of utmost importance in many scientific domains. Statistical modeling has become ubiquitous in the analysis of high dimensional functional data in search of better understanding of cognition mechanisms, in the exploration of large-scale gene regulatory networks in hope of developing drugs for lethal diseases, and in prediction of volatility in stock market in hope of beating the market. Statistical analysis in these high-dimensional data sets is possible only if an estimation proc
2

Le, Van Luong. "Identification de systèmes dynamiques hybrides : géométrie, parcimonie et non-linéarités." Phd thesis, Université de Lorraine, 2013. http://tel.archives-ouvertes.fr/tel-00874283.

Abstract:
In automatic control, obtaining a model of the system is the cornerstone of procedures such as controller synthesis, fault detection, and prediction. This thesis deals with the identification of a class of complex systems: hybrid dynamical systems. These systems involve the interaction of continuous and discrete behaviors. The goal is to build a model from experimental input and output measurements. A new approach to the identification of linear hybrid systems based on the geometric properties of the systems
3

Hakala, Tim. "Settling-Time Improvements in Positioning Machines Subject to Nonlinear Friction Using Adaptive Impulse Control." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1061.

Abstract:
A new method of adaptive impulse control is developed to precisely and quickly control the position of machine components subject to friction. Friction dominates the forces affecting fine positioning dynamics. Friction can depend on payload, velocity, step size, path, initial position, temperature, and other variables. Control problems such as steady-state error and limit cycles often arise when applying conventional control techniques to the position control problem. Studies in the last few decades have shown that impulsive control can produce repeatable displacements as small as ten nanomete
4

Sankaran, Raman. "Structured Regularization Through Convex Relaxations Of Discrete Penalties." Thesis, 2018. https://etd.iisc.ac.in/handle/2005/5456.

Abstract:
Motivation. Empirical risk minimization (ERM) is a popular framework for learning predictive models from data, which has been used in various domains such as computer vision, text processing, bioinformatics, neuro-biology, and temporal point processes, to name a few. We consider the cases where one has a priori information regarding the model structure, the simplest one being the sparsity of the model. The desired sparsity structure can be imputed into ERM problems using a regularizer, which is typically a norm on the model vector. Popular choices of the regularizers include the ℓ1 norm (LASSO
5

Naudé, Johannes Jochemus. "Aircraft recognition using generalised variable-kernel similarity metric learning." Thesis, 2014. http://hdl.handle.net/10210/13113.

Abstract:
M.Ing.

Nearest neighbour classifiers are well suited for use in practical pattern recognition applications for a number of reasons, including ease of implementation, rapid training, justifiable decisions and low computational load. However their generalisation performance is perceived to be inferior to that of more complex methods such as neural networks or support vector machines. Closer inspection shows however that the generalisation performance actually varies widely depending on the dataset used. On certain problems they outperform all other known classifiers while on others they fail
6

Hwang, Sung Ju. "Discriminative object categorization with external semantic knowledge." 2013. http://hdl.handle.net/2152/21320.

Abstract:
Visual object category recognition is one of the most challenging problems in computer vision. Even assuming that we can obtain a near-perfect instance level representation with the advances in visual input devices and low-level vision techniques, object categorization still remains as a difficult problem because it requires drawing boundaries between instances in a continuous world, where the boundaries are solely defined by human conceptualization. Object categorization is essentially a perceptual process that takes place in a human-defined semantic space. In this semantic space, the catego

Book chapters on the topic "Variable sparsity kernel learning"

1

Koltchinskii, Vladimir, Dmitry Panchenko, and Savina Andonova. "Generalization Bounds for Voting Classifiers Based on Sparsity and Clustering." In Learning Theory and Kernel Machines. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45167-9_36.

2

Naudé, Johannes J., Michaël A. van Wyk, and Barend J. van Wyk. "Generalized Variable-Kernel Similarity Metric Learning." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-27868-9_86.

3

Torun, Mustafa U., Onur Yilmaz, and Ali N. Akansu. "Explicit Kernel and Sparsity of Eigen Subspace for the AR(1) Process." In Financial Signal Processing and Machine Learning. John Wiley & Sons, Ltd, 2016. http://dx.doi.org/10.1002/9781118745540.ch5.

4

Gregorová, Magda, Jason Ramapuram, Alexandros Kalousis, and Stéphane Marchand-Maillet. "Large-Scale Nonlinear Variable Selection via Kernel Random Features." In Machine Learning and Knowledge Discovery in Databases. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-10928-8_11.

5

Connolly, Andrew J., Jacob T. VanderPlas, Alexander Gray, Andrew J. Connolly, Jacob T. VanderPlas, and Alexander Gray. "Regression and Model Fitting." In Statistics, Data Mining, and Machine Learning in Astronomy. Princeton University Press, 2014. http://dx.doi.org/10.23943/princeton/9780691151687.003.0008.

Abstract:
Regression is a special case of the general model fitting and selection procedures discussed in chapters 4 and 5. It can be defined as the relation between a dependent variable, y, and a set of independent variables, x, that describes the expectation value of y given x: E[y|x]. The purpose of obtaining a “best-fit” model ranges from scientific interest in the values of model parameters (e.g., the properties of dark energy, or of a newly discovered planet) to the predictive power of the resulting model (e.g., predicting solar activity). This chapter starts with a general formulation for regression, lists various simplified cases, and then discusses methods that can be used to address them, such as regression for linear models, kernel regression, robust regression and nonlinear regression.
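The kernel regression this chapter mentions estimates E[y|x] as a locally weighted average of observed responses. Below is a minimal sketch of one common form, the Nadaraya–Watson estimator with a Gaussian kernel; the data points and bandwidth are invented for illustration and are not taken from the chapter:

```python
import math

def nadaraya_watson(x_train, y_train, x, bandwidth=0.5):
    """Estimate E[y|x] as a Gaussian-kernel weighted average of y_train."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2)) for xi in x_train]
    total = sum(weights)
    return sum(w * yi for w, yi in zip(weights, y_train)) / total

# Noisy samples of y = 2x; the estimate at x = 0.5 should land near 1.0.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [0.1, 0.45, 1.05, 1.55, 1.9]
estimate = nadaraya_watson(xs, ys, 0.5, bandwidth=0.2)
```

The bandwidth controls the bias-variance trade-off: a small bandwidth tracks local structure (and noise), a large one smooths toward the global mean.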
6

T., Subbulakshmi. "Combating Cyber Security Breaches in Digital World Using Misuse Detection Methods." In Advances in Digital Crime, Forensics, and Cyber Terrorism. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-5225-0193-0.ch006.

Abstract:
Intrusion Detection Systems (IDS) play a major role in the area of combating security breaches for information security. Current IDS are developed with machine learning techniques like Artificial Neural Networks, C4.5, KNN, Naïve Bayes classifiers, Genetic algorithms, Fuzzy logic, and SVMs. The objective of this paper is to apply Artificial Neural Networks and Support Vector Machines for intrusion detection. Artificial Neural Networks are applied along with faster training methods like variable learning rate and scaled conjugate gradient. Support Vector Machines use various kernel functions to improve the performance. From the kddcup'99 dataset 45,657 instances are taken and used in our experiment. The speed is compared for various training functions. The performance of various kernel functions is assessed. The detection rate of Support Vector Machines is found to be greater than that of Artificial Neural Networks, with fewer false positives and shorter detection time.
7

Wong, Andrew K. C., Yang Wang, and Gary C. L. Li. "Pattern Discovery as Event Association." In Machine Learning. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-60960-818-7.ch804.

Abstract:
A basic task of machine learning and data mining is to automatically uncover patterns that reflect regularities in a data set. When dealing with a large database, especially when domain knowledge is not available or very weak, this can be a challenging task. The purpose of pattern discovery is to find non-random relations among events from data sets. For example, the “exclusive OR” (XOR) problem concerns 3 binary variables, A, B and C = A ⊕ B, i.e. C is true when either A or B, but not both, is true. Suppose not knowing that it is the XOR problem, we would like to check whether or not the occurrence of the compound event [A=T, B=T, C=F] is just a random happening. If we could estimate its frequency of occurrences under the random assumption, then we know that it is not random if the observed frequency deviates significantly from that assumption. We refer to such a compound event as an event association pattern, or simply a pattern, if its frequency of occurrences significantly deviates from the default random assumption in the statistical sense. For instance, suppose that an XOR database contains 1000 samples and each primary event (e.g. [A=T]) occurs 500 times. The expected frequency of occurrences of the compound event [A=T, B=T, C=F] under the independence assumption is 0.5×0.5×0.5×1000 = 125. Suppose that its observed frequency is 250, we would like to see whether or not the difference between the observed and expected frequencies (i.e. 250 – 125) is significant enough to indicate that the compound event is not a random happening.

In statistics, to test the correlation between random variables, a contingency table with chi-squared statistic (Mills, 1955) is widely used. Instead of investigating variable correlations, pattern discovery shifts the traditional correlation analysis in statistics at the variable level to association analysis at the event level, offering an effective method to detect statistical association among events.

In the early 90’s, this approach was established for second order event associations (Chan & Wong, 1990). A higher order pattern discovery algorithm was devised in the mid 90’s for discrete-valued data sets (Wong & Yang, 1997). In our methods, patterns inherent in data are defined as statistically significant associations of two or more primary events of different attributes if they pass a statistical test for deviation significance based on residual analysis. The discovered high order patterns can then be used for classification (Wang & Wong, 2003). With continuous data, events are defined as Borel sets and the pattern discovery process is formulated as an optimization problem which recursively partitions the sample space for the best set of significant events (patterns) in the form of high dimension intervals from which probability density can be estimated by Gaussian kernel fit (Chau & Wong, 1999). Classification can then be achieved using Bayesian classifiers. For data with a mixture of discrete and continuous data (Wong & Yang, 2003), the latter is categorized based on a global optimization discretization algorithm (Liu, Wong & Yang, 2004). As demonstrated in numerous real-world and commercial applications (Yang, 2002), pattern discovery is an ideal tool to uncover subtle and useful patterns in a database.

In pattern discovery, three open problems are addressed. The first concerns learning where noise and uncertainty are present. In our method, noise is taken as inconsistent samples against statistically significant patterns. Missing attribute values are also considered as noise. Using a standard statistical hypothesis testing to confirm statistical patterns from the candidates, this method is a less ad hoc approach to discover patterns than most of its contemporaries. The second problem concerns the detection of polythetic patterns without relying on exhaustive search. Efficient systems for detecting monothetic patterns between two attributes exist (e.g. Chan & Wong, 1990). However, for detecting polythetic patterns, an exhaustive search is required (Han, 2001). In many problem domains, polythetic assessments of feature combinations (or higher order relationship detection) are imperative for robust learning. Our method resolves this problem by directly constructing polythetic concepts while screening out non-informative pattern candidates, using statistics-based heuristics in the discovery process. The third problem concerns the representation of the detected patterns. Traditionally, if-then rules and graphs, including networks and trees, are the most popular ones. However, they have shortcomings when dealing with multilevel and multiple order patterns due to the non-exhaustive and unpredictable hierarchical nature of the inherent patterns. We adopt attributed hypergraph (AHG) (Wang & Wong, 1996) as the representation of the detected patterns. It is a data structure general enough to encode information at many levels of abstraction, yet simple enough to quantify the information content of its organized structure. It is able to encode both the qualitative and the quantitative characteristics and relations inherent in the data set.
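The expected-versus-observed arithmetic in the abstract above amounts to a standardized-residual test. A short sketch using the abstract's own numbers; the 1.96 cutoff (5% significance level) is my added assumption, not stated in the chapter:

```python
import math

# XOR example from the abstract: 1000 samples, each primary event occurs 500 times.
n = 1000
p_a = p_b = p_c = 500 / n  # marginal probabilities of [A=T], [B=T], [C=F]

# Expected count of the compound event [A=T, B=T, C=F] under independence:
# 0.5 * 0.5 * 0.5 * 1000 = 125.
expected = p_a * p_b * p_c * n
observed = 250

# Standardized residual: deviation in units of standard deviations.
residual = (observed - expected) / math.sqrt(expected)

# |residual| > 1.96 (assumed 5% threshold) flags a non-random association pattern.
is_pattern = abs(residual) > 1.96
```

With the abstract's counts the residual is about 11.2, far past any conventional threshold, so [A=T, B=T, C=F] would be reported as a pattern.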

Conference papers on the topic "Variable sparsity kernel learning"

1

Dellacasagrande, Matteo, Davide Lengani, Pietro Paliotta, Daniele Petronio, Daniele Simoni, and Francesco Bertini. "Evaluation of Different Regression Models Tuned With Experimental Turbine Cascade Data." In ASME Turbo Expo 2022: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/gt2022-81357.

Abstract:
Abstract In the present work linear and non-linear regression functions have been tuned with an extensive database describing the unsteady aerodynamic efficiency of low-pressure-turbine cascades. The learning strategy has been first defined using a dataset published in a previous work concerning the loss coefficient measured in a large-scale cascade for a large variation of the Reynolds number, the reduced frequency, and the flow coefficient. Linear models have been educated accounting for the Occam’s razor parsimony criterion, condensing the effects due to the parameter variation in few predi
2

Yokoi, Sho, Daichi Mochihashi, Ryo Takahashi, Naoaki Okazaki, and Kentaro Inui. "Learning Co-Substructures by Kernel Dependence Maximization." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/465.

Abstract:
Modeling associations between items in a dataset is a problem that is frequently encountered in data and knowledge mining research. Most previous studies have simply applied a predefined fixed pattern for extracting the substructure of each item pair and then analyzed the associations between these substructures. Using such fixed patterns may not, however, capture the significant association. We, therefore, propose the novel machine learning task of extracting a strongly associated substructure pair (co-substructure) from each input item pair. We call this task dependent co-substructure extrac
3

Vahdat, Arash, Kevin Cannons, Greg Mori, Sangmin Oh, and Ilseo Kim. "Compositional Models for Video Event Detection: A Multiple Kernel Learning Latent Variable Approach." In 2013 IEEE International Conference on Computer Vision (ICCV). IEEE, 2013. http://dx.doi.org/10.1109/iccv.2013.463.

4

He, Jia, Changying Du, Changde Du, Fuzhen Zhuang, Qing He, and Guoping Long. "Nonlinear Maximum Margin Multi-View Learning with Adaptive Kernel." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/254.

Abstract:
Existing multi-view learning methods based on kernel function either require the user to select and tune a single predefined kernel or have to compute and store many Gram matrices to perform multiple kernel learning. Apart from the huge consumption of manpower, computation and memory resources, most of these models seek point estimation of their parameters, and are prone to overfitting to small training data. This paper presents an adaptive kernel nonlinear max-margin multi-view learning model under the Bayesian framework. Specifically, we regularize the posterior of an efficient multi-view la
5

Garcia-Vega, S., E. A. Leon-Gomez, and G. Castellanos-Dominguez. "Time Series Prediction for Kernel-based Adaptive Filters Using Variable Bandwidth, Adaptive Learning-rate, and Dimensionality Reduction." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019. http://dx.doi.org/10.1109/icassp.2019.8683117.

6

Sclavounos, Paul D., and Yu Ma. "Artificial Intelligence Machine Learning in Marine Hydrodynamics." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-77599.

Abstract:
Artificial Intelligence (AI) Support Vector Machine (SVM) learning algorithms have enjoyed rapid growth in recent years with applications in a wide range of disciplines often with impressive results. The present paper introduces this machine learning technology to the field of marine hydrodynamics for the study of complex potential and viscous flow problems. Examples considered include the forecasting of the seastate elevations and vessel responses using their past time records as “explanatory variables” or “features” and the development of a nonlinear model for the roll restoring, added momen
7

Hu, Chao, Gaurav Jain, Craig Schmidt, Carrie Strief, and Melani Sullivan. "Online Estimation of Lithium-Ion Battery Capacity Using Sparse Bayesian Learning." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46964.

Abstract:
Lithium-ion (Li-ion) rechargeable batteries are used as one of the major energy storage components for implantable medical devices. Reliability of Li-ion batteries used in these devices has been recognized as of high importance from a broad range of stakeholders, including medical device manufacturers, regulatory agencies, patients and physicians. To ensure a Li-ion battery operates reliably, it is important to develop health monitoring techniques that accurately estimate the capacity of the battery throughout its life-time. This paper presents a sparse Bayesian learning method that utilizes t
8

Liu, Yanchi, Tan Yan, and Haifeng Chen. "Exploiting Graph Regularized Multi-dimensional Hawkes Processes for Modeling Events with Spatio-temporal Characteristics." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/343.

Abstract:
Multi-dimensional Hawkes processes (MHP) has been widely used for modeling temporal events. However, when MHP was used for modeling events with spatio-temporal characteristics, the spatial information was often ignored despite its importance. In this paper, we introduce a framework to exploit MHP for modeling spatio-temporal events by considering both temporal and spatial information. Specifically, we design a graph regularization method to effectively integrate the prior spatial structure into MHP for learning influence matrix between different locations. Indeed, the prior spatial structure c
9

Cheng, Hongliang, Weilin Yi, and Luchen Ji. "Multi-Point Optimization Design of High Pressure-Ratio Centrifugal Impeller Based on Machine Learning." In ASME Turbo Expo 2020: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/gt2020-14576.

Abstract:
Abstract The traditional optimization method on turbomachinery has the problem as time-consuming and difficult to solve well multi-parameter optimization in a short time. So an accurate surrogate model that is used to estimate the functional relationship between the independent variable and objective value is important to accelerate computationally expensive CFD-based optimization. Many of them had been developed and proven their reliability, such as the Kriging model, Back Propagation Neural Network (BPNN), Artificial Neural Network (ANN) and Support Vector Regression (SVR), etc. Because SVR
10

Reda Ali, Ahmed, Makky Sandra Jaya, and Ernest A. Jones. "Machine Learning Strategies for Accurate Log Prediction in Reservoir Characterization: Self-Calibrating Versus Domain-Knowledge." In SPE/IATMI Asia Pacific Oil & Gas Conference and Exhibition. SPE, 2021. http://dx.doi.org/10.2118/205602-ms.

Abstract:
Abstract Petrophysical evaluation is a crucial task for reservoir characterization but it is often complicated, time-consuming and associated with uncertainties. Moreover, this job is subjective and ambiguous depending on the petrophysicist's experience. Utilizing the flourishing Artificial Intelligence (AI)/Machine Learning (ML) is a way to build an automating process with minimal human intervention, improving consistency and efficiency of well log prediction and interpretation. Nowadays, the argument is whether AI-ML should base on a statistically self-calibrating or knowledge-based predicti