Journal articles on the topic 'Bayesian optimization technique'




Consult the top 50 journal articles for your research on the topic 'Bayesian optimization technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

He, Xiang Dong, Jun Yan Huang, and Shu Tian Liu. "Bayesian Reliability-Based Optimization Design of Torsion Bar." Advanced Materials Research 538-541 (June 2012): 3085–88. http://dx.doi.org/10.4028/www.scientific.net/amr.538-541.3085.

Abstract:
Based on Bayesian statistics theory and reliability-based optimization design, this research presents a new reliability-based optimization design approach that handles the case of finite test samples. The article establishes a Bayesian reliability-based optimization mathematical model and proposes a Bayesian reliability-based optimization approach for a torsion bar. The method adopts a Bayesian inference technique to estimate reliability and gives a definition of Bayesian reliability. The results illustrate that the presented method is an efficient and practical reliability-based optimization approach for the torsion bar.
2

Lockhart, Brandon, Jinglin Peng, Weiyuan Wu, Jiannan Wang, and Eugene Wu. "Explaining inference queries with bayesian optimization." Proceedings of the VLDB Endowment 14, no. 11 (July 2021): 2576–85. http://dx.doi.org/10.14778/3476249.3476304.

Abstract:
Obtaining an explanation for an SQL query result can enrich the analysis experience, reveal data errors, and provide deeper insight into the data. Inference query explanation seeks to explain unexpected aggregate query results on inference data; such queries are challenging to explain because an explanation may need to be derived from the source, training, or inference data in an ML pipeline. In this paper, we model an objective function as a black-box function and propose BOExplain, a novel framework for explaining inference queries using Bayesian optimization (BO). An explanation is a predicate defining the input tuples that should be removed so that the query result of interest is significantly affected. BO, a technique for finding the global optimum of a black-box function, is used to find the best predicate. We develop two new techniques (individual contribution encoding and warm start) to handle categorical variables. We perform experiments showing that the predicates found by BOExplain have a higher degree of explanation compared to those found by the state-of-the-art query explanation engines. We also show that BOExplain is effective at deriving explanations for inference queries from source and training data on a variety of real-world datasets. BOExplain is open-sourced as a Python package at https://github.com/sfu-db/BOExplain.
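The core loop that BOExplain builds on can be sketched generically: a Gaussian-process surrogate is fitted to the evaluations made so far, and an expected-improvement acquisition function picks the next point to evaluate. The following is a minimal, illustrative sketch of Bayesian optimization on a one-dimensional toy objective, not BOExplain's actual code; all function names and settings are invented for this example.

```python
import math
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    """GP posterior mean and std at test points (zero prior mean)."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    mu = Ks.T @ np.linalg.solve(K, y_tr)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    """EI acquisition for maximization."""
    z = (mu - best) / sigma
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(f, lo, hi, n_init=3, n_iter=15, seed=0):
    """Maximize black-box f on [lo, hi] with a GP surrogate + EI."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 200)   # candidate points for the acquisition
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    i = int(np.argmax(y))
    return X[i], y[i]

# Maximize a toy black-box objective on [0, 5]; the true optimum is at x = 2.
x_best, y_best = bayes_opt(lambda x: -(x - 2.0) ** 2, 0.0, 5.0)
```

The same loop generalizes to BOExplain's setting by letting the black-box function measure how strongly a candidate predicate affects the query result.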
3

Coles, Darrell, and Andrew Curtis. "Efficient nonlinear Bayesian survey design using DN optimization." GEOPHYSICS 76, no. 2 (March 2011): Q1–Q8. http://dx.doi.org/10.1190/1.3552645.

Abstract:
A new method for fully nonlinear, Bayesian survey design renders the optimization of industrial-scale geoscientific surveys as a practical possibility. The method, DN optimization, designs surveys to maximally discriminate between different possible models. It is based on a generalization to nonlinear design problems of the D criterion (which is for linearized design problems). The main practical advantage of DN optimization is that it uses efficient algorithms developed originally for linearized design theory, resulting in lower computing and storage costs than for other nonlinear Bayesian design techniques. In a real example in which we optimized a seafloor microseismic sensor network to monitor a fractured petroleum reservoir, we compared DN optimization with two other networks: one proposed by an industrial contractor and one optimized using a linearized Bayesian design method. Our technique yielded a network with superior expected data quality in terms of reduced uncertainties on hypocenter locations.
4

Vendrov, Ivan, Tyler Lu, Qingqing Huang, and Craig Boutilier. "Gradient-Based Optimization for Bayesian Preference Elicitation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10292–301. http://dx.doi.org/10.1609/aaai.v34i06.6592.

Abstract:
Effective techniques for eliciting user preferences have taken on added importance as recommender systems (RSs) become increasingly interactive and conversational. A common and conceptually appealing Bayesian criterion for selecting queries is expected value of information (EVOI). Unfortunately, it is computationally prohibitive to construct queries with maximum EVOI in RSs with large item spaces. We tackle this issue by introducing a continuous formulation of EVOI as a differentiable network that can be optimized using gradient methods available in modern machine learning computational frameworks (e.g., TensorFlow, PyTorch). We exploit this to develop a novel Monte Carlo method for EVOI optimization, which is much more scalable for large item spaces than methods requiring explicit enumeration of items. While we emphasize the use of this approach for pairwise (or k-wise) comparisons of items, we also demonstrate how our method can be adapted to queries involving subsets of item attributes or “partial items,” which are often more cognitively manageable for users. Experiments show that our gradient-based EVOI technique achieves state-of-the-art performance across several domains while scaling to large item spaces.
5

Sun, Xingping, Chang Chen, Lu Wang, Hongwei Kang, Yong Shen, and Qingyi Chen. "Hybrid Optimization Algorithm for Bayesian Network Structure Learning." Information 10, no. 10 (September 24, 2019): 294. http://dx.doi.org/10.3390/info10100294.

Abstract:
Since the beginning of the 21st century, research on artificial intelligence has made great progress, and Bayesian networks have gradually become one of its hotspots and important achievements. Establishing an effective Bayesian network structure is the foundation and core of the learning and application of Bayesian networks. In Bayesian network structure learning, the traditional method of constructing the network structure from expert knowledge is gradually being replaced by methods that learn the structure from data. However, because of the large number of possible network structures, the search space is very large, and methods that learn the structure from training data usually suffer from low precision or high complexity. The learned structure therefore differs greatly from the real one, which strongly affects the reasoning and practical application of Bayesian networks. To solve this problem, a hybrid-optimization artificial bee colony algorithm is discretized and applied to structure learning, and a hybrid optimization technique for Bayesian network structure learning is proposed. Experimental simulation results show that the proposed hybrid optimization structure learning algorithm achieves better structures and better convergence.
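Score-based structure learners of the kind discussed here typically rank candidate networks with a decomposable score such as BIC. As a hedged illustration (not the paper's artificial-bee-colony algorithm), the following sketch scores two candidate structures on synthetic binary data; the `bic_score` helper and the toy structures are invented for this example.

```python
import math
import random
from collections import Counter

def bic_score(data, structure):
    """BIC of a discrete Bayesian network: maximum log-likelihood minus
    0.5 * (#free parameters per node) * log(N), summed over nodes.
    `structure` maps each node name to its list of parents."""
    n = len(data)
    score = 0.0
    for node, parents in structure.items():
        counts, parent_counts, states = Counter(), Counter(), set()
        for row in data:
            pa = tuple(row[p] for p in parents)
            counts[(pa, row[node])] += 1
            parent_counts[pa] += 1
            states.add(row[node])
        r = len(states)  # number of states of this node
        for (pa, _v), c in counts.items():
            score += c * math.log(c / parent_counts[pa])
        score -= 0.5 * (r - 1) * len(parent_counts) * math.log(n)
    return score

# Synthetic data in which B strongly depends on A.
random.seed(1)
data = []
for _ in range(2000):
    a = random.random() < 0.5
    b = (random.random() < 0.9) == a   # P(B = A) = 0.9
    data.append({"A": int(a), "B": int(b)})

s_true = bic_score(data, {"A": [], "B": ["A"]})   # correct edge A -> B
s_empty = bic_score(data, {"A": [], "B": []})     # no edges
```

A search algorithm (hill climbing, or the bee-colony scheme the abstract describes) then explores the space of structures, keeping the candidate with the highest score; here the true structure scores higher than the empty one.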
6

SATO, Wataru, Koma SATO, Nobuyuki ISOSHIMA, Yoko MAKINO, and Masashi SHIBAHARA. "Development of Optimizing Technique for Temperature Control Based on Bayesian Optimization." Proceedings of Mechanical Engineering Congress, Japan 2021 (2021): J122–10. http://dx.doi.org/10.1299/jsmemecj.2021.j122-10.

7

OKIKIOLA, F. M., O. S. ADEWALE, A. M. MUSTAPHA, A. M. IKOTUN, and O. L. LAWAL. "A FRAMEWORK FOR ONTOLOGY-BASED DIABETES DIAGNOSIS USING BAYESIAN OPTIMIZATION TECHNIQUE." Journal of Natural Sciences Engineering and Technology 17, no. 1 (November 6, 2019): 156–68. http://dx.doi.org/10.51406/jnset.v17i1.1906.

Abstract:
A Diabetes Management System (DMS) is a computer-based system which aids physicians in properly diagnosing diabetes mellitus in patients. The DMS is essential in making individuals who have diabetes aware of their state and type. Existing approaches have not been efficient at considering all the diabetes types or at making full prescriptions to diabetes patients. In this paper, a framework for an improved ontology-based Diabetes Management System with a Bayesian optimization technique is presented. It helps to manage the diagnosis of diabetes and the prescription of treatment and drugs to patients using ontology knowledge management. The framework was implemented using the Java programming language on the NetBeans IDE, Protégé 4.2, and MySQL. An extract of the ontology graph and the acyclic probability graph is shown. The results showed that the nature of the Bayesian network, which relies on statistical calculations based on equations, functions, and sample frequencies, led to more precise and reliable outcomes.
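The Bayesian-network reasoning such a framework relies on reduces, in the simplest case, to Bayes' rule over a discrete state variable. A minimal sketch with made-up prior and likelihood numbers (not the paper's ontology or conditional probability tables):

```python
# Hypothetical numbers for illustration only.
prior = {"type1": 0.05, "type2": 0.15, "none": 0.80}        # P(state)
p_symptom = {"type1": 0.9, "type2": 0.7, "none": 0.1}       # P(symptom | state)

# Bayes' rule: P(state | symptom) = P(symptom | state) P(state) / P(symptom)
joint = {s: prior[s] * p_symptom[s] for s in prior}
evidence = sum(joint.values())
posterior = {s: joint[s] / evidence for s in joint}
```

Observing the symptom shifts probability mass toward the diabetic states; a full network chains many such updates across linked variables.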
8

Hu, Yumei, Xuezhi Wang, Hua Lan, Zengfu Wang, Bill Moran, and Quan Pan. "An Iterative Nonlinear Filter Using Variational Bayesian Optimization." Sensors 18, no. 12 (December 1, 2018): 4222. http://dx.doi.org/10.3390/s18124222.

Abstract:
We propose an iterative nonlinear estimator based on the technique of variational Bayesian optimization. The posterior distribution of the underlying system state is approximated by a solvable variational distribution, approached iteratively using evidence lower bound optimization subject to a minimal weighted Kullback-Leibler divergence, where a penalty factor is introduced to adjust the step size of the iteration. Based on linearization, the iterative nonlinear filter is derived in closed form. The performance of the proposed algorithm is compared with several nonlinear filters in the literature using simulated target-tracking examples.
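For intuition, in the linear-Gaussian special case the kind of iterative filter described here reduces to the classical Kalman update. A minimal scalar sketch, with invented noise parameters (not the paper's variational iteration):

```python
import random

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a (near-)constant state observed in noise:
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p += q                  # predict: constant state, variance grows by q
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement residual
        p *= (1.0 - k)          # posterior variance
        estimates.append(x)
    return estimates

# Noisy observations of a constant true state.
random.seed(0)
truth = 1.5
zs = [truth + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
```

Nonlinear filters such as the one in the abstract replace this closed-form update with an iterated approximation of the posterior.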
9

Liu, Bin, and Chun Lin Ji. "Automated Metamaterial Design with Computer Model Emulation and Bayesian Optimization." Applied Mechanics and Materials 575 (June 2014): 201–5. http://dx.doi.org/10.4028/www.scientific.net/amm.575.201.

Abstract:
We present an automated computation system for large-scale design of metamaterials (MTMs). A computer model emulation (CME) technique is used to generate a forward mapping from the MTM particle's geometric dimensions to the corresponding electromagnetic (EM) response. The design problem then becomes a reverse-engineering process that aims to find optimal values of the geometric dimensions of the MTM particles. The core of the CME process is a statistical functional regression module using a Gaussian process mixture (GPM) model. The reverse-engineering process is implemented with a Bayesian optimization technique. Experimental results demonstrate that the proposed approach can facilitate rapid design of MTMs.
10

Lavielle, Marc. "2-D Bayesian deconvolution." GEOPHYSICS 56, no. 12 (December 1991): 2008–18. http://dx.doi.org/10.1190/1.1443013.

Abstract:
Inverse problems can be solved in different ways. One way is to define natural criteria of good recovery and build an objective function to be minimized. If, instead, we prefer a Bayesian approach, inversion can be formulated as an estimation problem where a priori information is introduced and the a posteriori distribution of the unobserved variables is maximized. When this distribution is a Gibbs distribution, these two methods are equivalent. Furthermore, global optimization of the objective function can be performed with a Monte Carlo technique, in spite of the presence of numerous local minima. Application to multitrace deconvolution is proposed. In traditional 1-D deconvolution, a set of uni‐dimensional processes models the seismic data, while a Markov random field is used for 2-D deconvolution. In fact, the introduction of a neighborhood system permits one to model the layer structure that exists in the earth and to obtain solutions that present lateral coherency. Moreover, optimization of an appropriated objective function by simulated annealing allows one to control the fit with the input data as well as the spatial distribution of the reflectors. Extension to 3-D deconvolution is straightforward.
11

Cui, Ming-Yu, and Yu Zhang. "Deep Learning Method for Evaporation Duct Inversion Based on GPS Signal." Atmosphere 13, no. 12 (December 12, 2022): 2091. http://dx.doi.org/10.3390/atmos13122091.

Abstract:
Accurate evaporation duct prediction is one of the critical technologies for realizing the over-the-horizon impact of marine communication, ship radar, and other systems. Using GPS signals to invert evaporation ducts provides more benefits in terms of method realization and ease. In order to invert the evaporation duct from GPS-received power data, a deep learning technique based on Bayesian optimization is proposed to increase the prediction accuracy of evaporation ducts. The evaporation duct propagation mechanism of the GPS signal is explored. The GPS-received power is estimated using the two-parameter evaporation duct model, and a better neural network structure is built using Bayesian optimization. The study results show that the Bayesian optimization model has a smaller root mean square error (RMSE) than the human empirical model, which allows for rapid and accurate inversion of duct parameters even in noisy interference.
12

Hu, Chen, Xiaoming Hu, and Yiguang Hong. "Distributed adaptive Kalman filter based on variational Bayesian technique." Control Theory and Technology 17, no. 1 (January 25, 2019): 37–47. http://dx.doi.org/10.1007/s11768-019-8183-9.

13

Delanghe, Rémi, Tom Van Steenkiste, Ivo Couckuyt, Dirk Deschrijver, and Tom Dhaene. "A Bayesian Optimisation Procedure for Estimating Optimal Trajectories in Electromagnetic Compliance Testing." Engineering Proceedings 3, no. 1 (October 30, 2020): 8. http://dx.doi.org/10.3390/iec2020-06972.

Abstract:
The need for accurate physical measurements is omnipresent in both scientific and engineering applications. Such measurements can be used to explore and characterize the behavior of a system over the parameters of interest. These procedures are often very costly and time-consuming, requiring many measurements or samples, so a suitable data collection strategy can reduce the cost of acquiring the required samples. One important consideration that often surfaces in physical experiments, such as near-field measurements for electromagnetic compliance testing, is the total path length between consecutively visited samples, as the time the measurement probe needs to travel along this path is often a limiting factor. A line-based sampling strategy optimizes the sample locations in order to reduce the overall path length while achieving the intended goal. Previous research on line-based sampling techniques focused solely on exploring the measurement space. None of these techniques considered the actual measurements themselves, even though these values hold the potential to identify interesting regions of the parameter space, such as an optimum, early in the sampling process. In this paper, we extend Bayesian optimization, a point-based optimization technique, to a line-based setting. The proposed algorithm is assessed using an artificial example and an electromagnetic compatibility use case. The results show that our line-based technique is able to find the optimum using a significantly shorter total path length than the point-based approach.
14

Elgeldawi, Enas, Awny Sayed, Ahmed R. Galal, and Alaa M. Zaki. "Hyperparameter Tuning for Machine Learning Algorithms Used for Arabic Sentiment Analysis." Informatics 8, no. 4 (November 17, 2021): 79. http://dx.doi.org/10.3390/informatics8040079.

Abstract:
Machine learning models are used today to solve problems within a broad span of disciplines. If the proper hyperparameter tuning of a machine learning classifier is performed, significantly higher accuracy can be obtained. In this paper, a comprehensive comparative analysis of various hyperparameter tuning techniques is performed; these are Grid Search, Random Search, Bayesian Optimization, Particle Swarm Optimization (PSO), and Genetic Algorithm (GA). They are used to optimize the accuracy of six machine learning algorithms, namely, Logistic Regression (LR), Ridge Classifier (RC), Support Vector Machine Classifier (SVC), Decision Tree (DT), Random Forest (RF), and Naive Bayes (NB) classifiers. To test the performance of each hyperparameter tuning technique, the machine learning models are used to solve an Arabic sentiment classification problem. Sentiment analysis is the process of detecting whether a text carries a positive, negative, or neutral sentiment. However, extracting such sentiment from a complex derivational morphology language such as Arabic has been always very challenging. The performance of all classifiers is tested using our constructed dataset both before and after the hyperparameter tuning process. A detailed analysis is described, along with the strengths and limitations of each hyperparameter tuning technique. The results show that the highest accuracy was given by SVC both before and after the hyperparameter tuning process, with a score of 95.6208 obtained when using Bayesian Optimization.
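Two of the baseline tuners compared in the paper are easy to sketch on a toy objective. The following is an illustrative comparison of grid search and random search; the `loss` function and parameter ranges are invented for this example (Bayesian optimization, PSO, and GA are not shown):

```python
import random

def loss(lr, reg):
    """Toy validation loss with its optimum at lr = 0.1, reg = 0.01."""
    return (lr - 0.1) ** 2 + (reg - 0.01) ** 2

# Grid search: exhaustively evaluate a fixed lattice of settings.
candidates = [(loss(lr, reg), lr, reg)
              for lr in (0.001, 0.01, 0.1, 1.0)
              for reg in (0.001, 0.01, 0.1)]
grid_best = min(candidates)          # (loss, lr, reg) with the lowest loss

# Random search: same evaluation budget, sampled log-uniformly.
random.seed(0)
rand_trials = [loss(10 ** random.uniform(-3, 0), 10 ** random.uniform(-3, -1))
               for _ in range(12)]
rand_best = min(rand_trials)
```

Grid search only wins here because the optimum happens to lie exactly on the lattice; in higher-dimensional spaces random search and model-based tuners such as Bayesian optimization typically use the same budget more effectively.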
15

Geeitha, S., and M. Thangamani. "Qualitative Analysis for Improving Prediction Accuracy in Parkinson's Disease Detection Using Hybrid Technique." Journal of Computational and Theoretical Nanoscience 16, no. 2 (February 1, 2019): 393–99. http://dx.doi.org/10.1166/jctn.2019.7738.

Abstract:
A PSO-based SVM method has been implemented for diagnosing Parkinson's disease. This hybrid method performs parameter optimization and helps to predict the gene-expression pattern of patients affected by Parkinson's disease. Applying a computational tool to the PD data set makes it possible to accurately predict the occurrence of the disease from the symptoms. In data classification, incomplete or missing data may arise during pre-processing in the probabilistic model; to overcome this, an Expectation Maximization (EM) algorithm is implemented. The proposed Particle Swarm Optimization (PSO)-based Support Vector Machine (SVM) technique is also compared with the Bayesian network model and outperforms it in prediction accuracy.
16

Nishimura, Haruki, and Mac Schwager. "SACBP: Belief space planning for continuous-time dynamical systems via stochastic sequential action control." International Journal of Robotics Research 40, no. 10-11 (August 13, 2021): 1167–95. http://dx.doi.org/10.1177/02783649211037697.

Abstract:
We propose a novel belief space planning technique for continuous dynamics by viewing the belief system as a hybrid dynamical system with time-driven switching. Our approach is based on the perturbation theory of differential equations and extends sequential action control to stochastic dynamics. The resulting algorithm, which we name SACBP, does not require discretization of spaces or time and synthesizes control signals in near real-time. SACBP is an anytime algorithm that can handle general parametric Bayesian filters under certain assumptions. We demonstrate the effectiveness of our approach in an active sensing scenario and a model-based Bayesian reinforcement learning problem. In these challenging problems, we show that the algorithm significantly outperforms other existing solution techniques including approximate dynamic programming and local trajectory optimization.
17

Gunawan, Subroto, and Panos Y. Papalambros. "A Bayesian Approach to Reliability-Based Optimization With Incomplete Information." Journal of Mechanical Design 128, no. 4 (January 25, 2006): 909–18. http://dx.doi.org/10.1115/1.2204969.

Abstract:
In engineering design, information regarding the uncertain variables or parameters is usually in the form of finite samples. Existing methods in optimal design under uncertainty cannot handle this form of incomplete information; they have to either discard some valuable information or postulate existence of additional information. In this article, we present a reliability-based optimization method that is applicable when information of the uncertain variables or parameters is in the form of both finite samples and probability distributions. The method adopts a Bayesian binomial inference technique to estimate reliability, and uses this estimate to maximize the confidence that the design will meet or exceed a target reliability. The method produces a set of Pareto trade-off designs instead of a single design, reflecting the levels of confidence about a design’s reliability given certain incomplete information. As a demonstration, we apply the method to design an optimal piston-ring/cylinder-liner assembly under surface roughness uncertainty.
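The Bayesian binomial inference step described here has a simple conjugate form: with a Beta prior and s successes out of n tests, the posterior over reliability is again a Beta distribution, and the "confidence" of meeting a target reliability is a posterior tail probability. A hedged sketch with invented test counts (not the paper's piston-ring/cylinder-liner data):

```python
import random

# Suppose 48 of 50 prototype tests succeeded. With a uniform Beta(1, 1)
# prior, the posterior over the reliability R is Beta(1 + 48, 1 + 2).
successes, failures = 48, 2
a, b = 1 + successes, 1 + failures

posterior_mean = a / (a + b)   # posterior point estimate of reliability

# Confidence that the design meets a target reliability, estimated by
# Monte Carlo sampling from the posterior.
random.seed(0)
target = 0.90
samples = [random.betavariate(a, b) for _ in range(100_000)]
confidence = sum(s > target for s in samples) / len(samples)
```

In the paper's setting, this confidence (rather than a single reliability number) is what the optimizer trades off against other design objectives.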
18

Humbird, K. D., and J. L. Peterson. "Transfer learning driven design optimization for inertial confinement fusion." Physics of Plasmas 29, no. 10 (October 2022): 102701. http://dx.doi.org/10.1063/5.0100364.

Abstract:
Transfer learning is a promising approach to create predictive models that incorporate simulation and experimental data into a common framework. In this technique, a neural network is first trained on a large database of simulations and then partially retrained on sparse sets of experimental data to adjust predictions to be more consistent with reality. Previously, this technique has been used to create predictive models of Omega [Humbird et al., IEEE Trans. Plasma Sci. 48, 61–70 (2019)] and NIF [Humbird et al., Phys. Plasmas 28, 042709 (2021); Kustowski et al., Mach. Learn. 3, 015035 (2022)] inertial confinement fusion (ICF) experiments that are more accurate than simulations alone. In this work, we conduct a transfer learning driven hypothetical ICF campaign in which the goal is to maximize experimental neutron yield via Bayesian optimization. The transfer learning model achieves yields within 5% of the maximum achievable yield in a modest-sized design space in fewer than 20 experiments. Furthermore, we demonstrate that this method is more efficient at optimizing designs than traditional model calibration techniques commonly employed in ICF design. Such an approach to ICF design could enable robust optimization of experimental performance under uncertainty.
19

He, Xiang Dong, Wei Qi, and Yan Jin. "Robust Reliability Design of Banjo Flange Based on Bayesian Theory." Advanced Materials Research 753-755 (August 2013): 1603–6. http://dx.doi.org/10.4028/www.scientific.net/amr.753-755.1603.

Abstract:
The perturbation method, Bayesian statistics theory, reliability optimization design, the reliability sensitivity technique, and robust design are employed to present a practical and effective approach for the robust reliability design of the Banjo flange under incomplete information. The theoretical formulae of robust reliability design for the Banjo flange under incomplete information are obtained, and the corresponding program can be used to compute the robust reliability design parameters of the Banjo flange accurately and quickly.
20

Peralta, Federico, Daniel Gutierrez Reina, Sergio Toral, Mario Arzamendia, and Derlis Gregor. "A Bayesian Optimization Approach for Multi-Function Estimation for Environmental Monitoring Using an Autonomous Surface Vehicle: Ypacarai Lake Case Study." Electronics 10, no. 8 (April 18, 2021): 963. http://dx.doi.org/10.3390/electronics10080963.

Abstract:
Bayesian optimization is a sequential method that can optimize a single and costly objective function based on a surrogate model. In this work, we propose a Bayesian optimization system dedicated to monitoring and estimating multiple water quality parameters simultaneously using a single autonomous surface vehicle. The proposed work combines different strategies and methods for this monitoring task, evaluating two approaches for acquisition function fusion: the coupled and the decoupled techniques. We also consider dynamic parametrization of the maximum measurement distance traveled by the ASV so that the monitoring system balances the total number of measurements and the total distance, which is related to the energy required. To evaluate the proposed approach, the Ypacarai Lake (Paraguay) serves as the test scenario, where multiple maps of water quality parameters, such as pH and dissolved oxygen, need to be obtained efficiently. The proposed system is compared with the predictive entropy search for multi-objective optimization with constraints (PESMOC) algorithm and the genetic algorithm (GA) path planning for the Ypacarai Lake scenario. The obtained results show that the proposed approach is 10.82% better than other optimization methods in terms of R2 score with noiseless measurements and up to 17.23% better when the data are noisy. Additionally, the proposed approach achieves a good average computational time for the whole mission when compared with other methods, 3% better than the GA technique and 46.5% better than the PESMOC approach.
21

Yamada, Tetsuyasu, Hisao Ayame, Shigeyuki Nagasaka, and Hiroo Hirose. "Method of System Identification for Air Conditioning Systems in Operation." International Journal of Emerging Technology and Advanced Engineering 12, no. 5 (May 1, 2022): 38–48. http://dx.doi.org/10.46338/ijetae0522_05.

Abstract:
This study aims to optimize the operation of an air conditioning (AC) system by tracking situational changes due to the outside temperature, the number of people and computers, and other factors. To that end, we studied the accurate estimation of the system parameters of an AC unit during operation. We modeled the AC system using a first-order plus dead time model, discretized it using an autoregressive with exogenous input (ARX) model, and developed a technique to estimate the system parameters using Bayesian optimization. Here, the system parameters are the values that determine the physical characteristics of the combined air conditioning system and room. We found that, in some cases, the estimated characteristics deteriorate after repeated estimation; by solving this problem, we were able to establish a practical system.
Keywords: ARX time-series model, Bayesian optimization method, first-order plus dead time, Gaussian process, PID control
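The first-order plus dead time model mentioned in the abstract discretizes to a one-step ARX recursion with an input delay. A minimal simulation sketch with invented gain, time constant, and dead time (not the paper's identified AC parameters):

```python
import math

def fopdt_step(K=2.0, tau=30.0, theta=10.0, Ts=1.0, n=300):
    """Discrete step response of a first-order-plus-dead-time model:
    y[k] = a*y[k-1] + b*u[k-1-d], an ARX(1) form with input delay d.
    K is the steady-state gain, tau the time constant, theta the dead time,
    and Ts the sampling period."""
    a = math.exp(-Ts / tau)
    b = K * (1.0 - a)
    d = round(theta / Ts)
    u = [1.0] * n                     # unit step input
    y = [0.0] * n
    for k in range(1, n):
        uk = u[k - 1 - d] if k - 1 - d >= 0 else 0.0
        y[k] = a * y[k - 1] + b * uk
    return y

y = fopdt_step()   # response stays at 0 during the dead time, then rises to K
```

In the paper's setting, Bayesian optimization searches over parameters like (K, tau, theta) so that this simulated response matches the measured AC behavior.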
22

Lorenz, Romy, Michelle Johal, Frederic Dick, Adam Hampshire, Robert Leech, and Fatemeh Geranmayeh. "A Bayesian optimization approach for rapidly mapping residual network function in stroke." Brain 144, no. 7 (March 16, 2021): 2120–34. http://dx.doi.org/10.1093/brain/awab109.

Abstract:
Post-stroke cognitive and linguistic impairments are debilitating conditions, with limited therapeutic options. Domain-general brain networks play an important role in stroke recovery, and characterizing their residual function with functional MRI has the potential to yield biomarkers capable of guiding patient-specific rehabilitation. However, this is challenging, as such detailed characterization requires testing patients on multitudes of cognitive tasks in the scanner, rendering experimental sessions unfeasibly lengthy. Thus, the current status quo in clinical neuroimaging research involves testing patients on a very limited number of tasks, in the hope that this will reveal a useful neuroimaging biomarker for the whole cohort. Given the great heterogeneity among stroke patients and the volume of possible tasks, this approach is unsustainable. Advancing task-based functional MRI biomarker discovery requires a paradigm shift in order to swiftly characterize residual network activity in individual patients using a diverse range of cognitive tasks. Here, we overcome this problem by leveraging neuroadaptive Bayesian optimization, an approach combining real-time functional MRI with machine learning. By intelligently searching across many tasks, this approach rapidly maps out patient-specific profiles of residual domain-general network function. We used this technique in a cross-sectional study with 11 left-hemispheric stroke patients with chronic aphasia (four female, age ± standard deviation: 59 ± 10.9 years) and 14 healthy, age-matched control subjects (eight female, age ± standard deviation: 55.6 ± 6.8 years). To assess the intra-subject reliability of the functional profiles obtained, we conducted two independent runs per subject, for which the algorithm was entirely reinitialized. Our results demonstrate that this technique is both feasible and robust, yielding reliable patient-specific functional profiles. Moreover, we show that group-level results are not representative of patient-specific results. Whereas controls have highly similar profiles, patients show idiosyncratic profiles of network abnormalities that are associated with behavioural performance. In summary, our study highlights the importance of moving beyond traditional 'one-size-fits-all' approaches where patients are treated as one group and single tasks are used. Our approach can be extended to diverse brain networks and combined with brain stimulation or other therapeutics, thereby opening new avenues for precision medicine targeting a diverse range of neurological and psychiatric conditions.
23

Jenkins, Porter, Hua Wei, J. Stockton Jenkins, and Zhenhui Li. "Bayesian Model-Based Offline Reinforcement Learning for Product Allocation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 11 (June 28, 2022): 12531–37. http://dx.doi.org/10.1609/aaai.v36i11.21523.

Abstract:
Product allocation in retail is the process of placing products throughout a store to connect consumers with relevant products. Discovering a good allocation strategy is challenging due to the scarcity of data and the high cost of experimentation in the physical world. Some work explores Reinforcement learning (RL) as a solution, but these approaches are often limited because of the sim2real problem. Learning policies from logged trajectories of a system is a key step forward for RL in physical systems. Recent work has shown that model-based offline RL can improve the effectiveness of offline policy estimation through uncertainty-penalized exploration. However, existing work assumes a continuous state space and access to a covariance matrix of the environment dynamics, which is not possible in the discrete case. To solve this problem, we propose a Bayesian model-based technique that naturally produces probabilistic estimates of the environment dynamics via the posterior predictive distribution, which we use for uncertainty-penalized exploration. We call our approach Posterior Penalized Offline Policy Optimization (PPOPO). We show that our world model better fits historical data due to informative priors, and that PPOPO outperforms other offline techniques in simulation and against real-world data.
APA, Harvard, Vancouver, ISO, and other styles
24

Rothman, Daniel H. "Nonlinear inversion, statistical mechanics, and residual statics estimation." GEOPHYSICS 50, no. 12 (December 1985): 2784–96. http://dx.doi.org/10.1190/1.1441899.

Full text
Abstract:
Nonlinear inverse problems are usually solved with linearized techniques that depend strongly on the accuracy of initial estimates of the model parameters. With linearization, objective functions can be minimized efficiently, but the risk of local rather than global optimization can be severe. I address the problem confronted in nonlinear inversion when no good initial guess of the model parameters can be made. The fully nonlinear approach presented is rooted in statistical mechanics. Although a large nonlinear problem might appear computationally intractable without linearization, reformulation of the same problem into smaller, interdependent parts can lead to tractable computation while preserving nonlinearities. I formulate inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics. Solutions are then obtained by maximizing the posterior probability of the model parameters. Optimization is performed with a Monte Carlo technique that was originally introduced to simulate the statistical mechanics of systems in equilibrium. The technique is applied to residual statics estimation when statics are unusually large and data are contaminated by noise. Poorly picked correlations (“cycle skips” or “leg jumps”) appear as local minima of the objective function, but global optimization is successfully performed. Further applications to deconvolution and velocity estimation are proposed.
APA, Harvard, Vancouver, ISO, and other styles
25

Atteia, Ghada, Amel A. Alhussan, and Nagwan Abdel Samee. "BO-ALLCNN: Bayesian-Based Optimized CNN for Acute Lymphoblastic Leukemia Detection in Microscopic Blood Smear Images." Sensors 22, no. 15 (July 24, 2022): 5520. http://dx.doi.org/10.3390/s22155520.

Full text
Abstract:
Acute lymphoblastic leukemia (ALL) is a deadly cancer characterized by aberrant accumulation of immature lymphocytes in the blood or bone marrow. Effective treatment of ALL is strongly associated with the early diagnosis of the disease. Current practice for initial ALL diagnosis is performed through manual evaluation of stained blood smear microscopy images, which is a time-consuming and error-prone process. Deep learning-based human-centric biomedical diagnosis has recently emerged as a powerful tool for assisting physicians in making medical decisions. Therefore, numerous computer-aided diagnostic systems have been developed to autonomously identify ALL in blood images. In this study, a new Bayesian-based optimized convolutional neural network (CNN) is introduced for the detection of ALL in microscopic smear images. To promote classification performance, the architecture of the proposed CNN and its hyperparameters are customized to input data through the Bayesian optimization approach. The Bayesian optimization technique adopts an informed iterative procedure to search the hyperparameter space for the optimal set of network hyperparameters that minimizes an objective error function. The proposed CNN is trained and validated using a hybrid dataset which is formed by integrating two public ALL datasets. Data augmentation has been adopted to further supplement the hybrid image set to boost classification performance. The Bayesian search-derived optimal CNN model recorded an improved performance of image-based ALL classification on test set. The findings of this study reveal the superiority of the proposed Bayesian-optimized CNN over other optimized deep learning ALL classification models.
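The "informed iterative procedure" of Bayesian optimization described above can be sketched generically: fit a probabilistic surrogate to the evaluations so far, then query the point that maximizes an acquisition function. The toy implementation below (a zero-mean Gaussian-process surrogate, expected-improvement acquisition maximized on a grid, and a 1-D quadratic objective are all simplifying assumptions, not the authors' CNN setup) illustrates the loop:

```python
import math
import random

def rbf(a, b, ls=1.0):
    return math.exp(-((a - b) ** 2) / (2 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, xq, noise=1e-6):
    """Zero-mean GP posterior mean and std at query point xq."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    k = [rbf(x, xq) for x in X]
    alpha = solve(K, y)
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = max(1e-12, rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)))
    return mu, math.sqrt(var)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return (mu - best) * cdf + sigma * pdf

def bayes_opt(f, lo, hi, n_init=3, n_iter=10, seed=0):
    """Maximize f on [lo, hi]: fit a GP, query the grid point with highest EI."""
    rng = random.Random(seed)
    X = [lo + (hi - lo) * rng.random() for _ in range(n_init)]
    y = [f(x) for x in X]
    grid = [lo + (hi - lo) * i / 200 for i in range(201)]
    for _ in range(n_iter):
        best = max(y)
        xq = max(grid, key=lambda g: expected_improvement(*gp_posterior(X, y, g), best))
        X.append(xq)
        y.append(f(xq))
    i = max(range(len(y)), key=lambda j: y[j])
    return X[i], y[i]

# Toy objective with its maximum at x = 2.
x_best, y_best = bayes_opt(lambda x: -(x - 2.0) ** 2, 0.0, 4.0)
```

In hyperparameter tuning, `f` would be validation accuracy as a function of hyperparameters and the search space would be multi-dimensional; the surrogate-plus-acquisition loop is the same.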
APA, Harvard, Vancouver, ISO, and other styles
26

Yin, Ruyang, Jiping Xing, Pengli Mo, Nan Zheng, and Zhiyuan Liu. "BO-B&B: A hybrid algorithm based on Bayesian optimization and branch-and-bound for discrete network design problems." Electronic Research Archive 30, no. 11 (2022): 3993–4014. http://dx.doi.org/10.3934/era.2022203.

Full text
Abstract:
A discrete network design problem (DNDP) is conventionally formulated as an analytical bi-level programming problem to acquire an optimal network design strategy for an existing traffic network. In recent years, multimodal network design problems have benefited from simulation-based models. The nonconvexity and implicity of bi-level DNDPs make it challenging to obtain an optimal solution, especially for simulation-related models. Bayesian optimization (BO) has been proven to be an effective method for optimizing the costly black-box functions of simulation-based continuous network design problems. However, there are only discrete inputs in DNDPs, which cannot be processed using standard BO algorithms. To address this issue, we develop a hybrid method (BO-B&B) that combines Bayesian optimization and a branch-and-bound algorithm to deal with discrete variables. The proposed algorithm exploits the advantages of the cutting-edge machine-learning parameter-tuning technique and the exact mathematical optimization method, thereby balancing efficiency and accuracy. Our experimental results show that the proposed method outperforms benchmarking discrete optimization heuristics for simulation-based DNDPs in terms of total computational time. Thus, BO-B&B can potentially aid decision makers in mapping practical network design schemes for large-scale networks.
APA, Harvard, Vancouver, ISO, and other styles
27

Bassman, Lindsay, Pankaj Rajak, Rajiv K. Kalia, Aiichiro Nakano, Fei Sha, Muratahan Aykol, Patrick Huck, et al. "Efficient Discovery of Optimal N-Layered TMDC Hetero-Structures." MRS Advances 3, no. 6-7 (2018): 397–402. http://dx.doi.org/10.1557/adv.2018.260.

Full text
Abstract:
Vertical hetero-structures made from stacked monolayers of transition metal dichalcogenides (TMDC) are promising candidates for next-generation optoelectronic and thermoelectric devices. Identification of optimal layered materials for these applications requires the calculation of several physical properties, including electronic band structure and thermal transport coefficients. However, exhaustive screening of the material structure space using ab initio calculations is currently outside the bounds of existing computational resources. Furthermore, the functional form of how the physical properties relate to the structure is unknown, making gradient-based optimization unsuitable. Here, we present a model based on the Bayesian optimization technique to optimize layered TMDC hetero-structures, performing a minimal number of structure calculations. We use the electronic band gap and thermoelectric figure of merit as representative physical properties for optimization. The electronic band structure calculations were performed within the Materials Project framework, while thermoelectric properties were computed with BoltzTraP. With high probability, the Bayesian optimization process is able to discover the optimal hetero-structure after evaluation of only ∼20% of all possible 3-layered structures. In addition, we have used a Gaussian regression model to predict not only the band gap but also the valence band maximum and conduction band minimum energies as a function of the momentum.
APA, Harvard, Vancouver, ISO, and other styles
28

Martino, Luca, Fernando Llorente, Ernesto Curbelo, Javier López-Santiago, and Joaquín Míguez. "Automatic Tempered Posterior Distributions for Bayesian Inversion Problems." Mathematics 9, no. 7 (April 6, 2021): 784. http://dx.doi.org/10.3390/math9070784.

Full text
Abstract:
We propose a novel adaptive importance sampling scheme for Bayesian inversion problems where the inference of the variables of interest and the power of the data noise are carried out using distinct (but interacting) methods. More specifically, we consider a Bayesian analysis for the variables of interest (i.e., the parameters of the model to invert), whereas we employ a maximum likelihood approach for the estimation of the noise power. The whole technique is implemented by means of an iterative procedure with alternating sampling and optimization steps. Moreover, the noise power is also used as a tempered parameter for the posterior distribution of the variables of interest. Therefore, a sequence of tempered posterior densities is generated, where the tempered parameter is automatically selected according to the current estimate of the noise power. A complete Bayesian study over the model parameters and the scale parameter can also be performed. Numerical experiments show the benefits of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
29

Ait Amou, Mohamed, Kewen Xia, Souha Kamhi, and Mohamed Mouhafid. "A Novel MRI Diagnosis Method for Brain Tumor Classification Based on CNN and Bayesian Optimization." Healthcare 10, no. 3 (March 8, 2022): 494. http://dx.doi.org/10.3390/healthcare10030494.

Full text
Abstract:
Brain tumor is one of the most aggressive diseases nowadays, resulting in a very short life span if it is diagnosed at an advanced stage. The treatment planning phase is thus essential for enhancing the quality of life for patients. The use of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumors is extremely widespread, but the manual interpretation of large amounts of images requires considerable effort and is prone to human errors. Hence, an automated method is necessary to identify the most common brain tumors. Convolutional Neural Network (CNN) architectures are successful in image classification due to their high layer count, which enables them to conceive the features effectively on their own. The tuning of CNN hyperparameters is critical in every dataset since it has a significant impact on the efficiency of the training model. Given the high dimensionality and complexity of the data, manual hyperparameter tuning would take an inordinate amount of time, with the possibility of failing to identify the optimal hyperparameters. In this paper, we proposed a Bayesian Optimization-based efficient hyperparameter optimization technique for CNN. This method was evaluated by classifying 3064 T-1-weighted CE-MRI images into three types of brain tumors (Glioma, Meningioma, and Pituitary). Based on Transfer Learning, the performance of five well-recognized deep pre-trained models is compared with that of the optimized CNN. After using Bayesian Optimization, our CNN was able to attain 98.70% validation accuracy at best without data augmentation or cropping lesion techniques, while VGG16, VGG19, ResNet50, InceptionV3, and DenseNet201 achieved 97.08%, 96.43%, 89.29%, 92.86%, and 94.81% validation accuracy, respectively. Moreover, the proposed model outperforms state-of-the-art methods on the CE-MRI dataset, demonstrating the feasibility of automating hyperparameter optimization.
APA, Harvard, Vancouver, ISO, and other styles
30

Madhuri, Sai, and Jitendranath Mungara. "Fusion of cuckoo search and hill climbing techniques based optimal forwarder selection and detect the intrusion." Indonesian Journal of Electrical Engineering and Computer Science 27, no. 1 (July 1, 2022): 328. http://dx.doi.org/10.11591/ijeecs.v27.i1.pp328-335.

Full text
Abstract:
The cuckoo search (CS) technique is applied to discover the optimal route from source to destination. The main objective of this work is to offer suitable solutions for improving optimal routing and communicating data via reliable sensor nodes. The CS optimization method alone is not capable of managing the diversity of solutions. To solve this issue, we hybridize the CS technique with the hill climbing (HC) technique to minimize the probability of early convergence. This approach introduces a fusion of CS and HC techniques (CSHC) for optimal forwarder selection and intrusion detection in wireless sensor networks (WSN). Here, a Bayesian thresholding method predicts the received signal strength and link reliability parameters for identifying intrusion in the network. The hill-climbing technique is able to attain the best solutions in a shorter period than other local search techniques. In CSHC, the optimal forwarder is selected by a fitness function computed from sensor node lifetime, sensor link reliability, and buffer availability. The experimental results suggest that CSHC improves throughput by 35% and reduces packet losses by 23.52% compared to the baseline approaches.
APA, Harvard, Vancouver, ISO, and other styles
31

Chrystyn, Henry. "Validation of the use of Bayesian Analysis in the Optimization of Gentamicin Therapy from the Commencement of Dosing." Drug Intelligence & Clinical Pharmacy 22, no. 1 (January 1988): 49–53. http://dx.doi.org/10.1177/106002808802200112.

Full text
Abstract:
A computer program based on the statistical technique of Bayesian analysis has been adapted to run on several microcomputers. The clinical application of this method for gentamicin has been validated in 13 patients with varying degrees of renal function by a comparison of the accuracy of this method to a predictive algorithm method and one using standard pharmacokinetic principles. Blood samples for serum gentamicin analysis were taken after the administration of an intravenous loading dose of gentamicin. The results produced by each method were used to predict the peak and trough values measured on day 3 of therapy. Of the three methods studied, Bayesian analysis, using a serum gentamicin concentration drawn four hours after the initial dose, was the least biased and the most precise method for predicting the observed levels. The mean prediction error of the Bayesian analysis method, using the four-hour sample, was −0.03 mg/L for the peak serum concentration and −0.07 mg/L for the trough level on day 3. Using this method the corresponding root mean squared prediction error was 0.60 mg/L and 0.36 mg/L for the peak and trough levels, respectively.
APA, Harvard, Vancouver, ISO, and other styles
32

Monego, Vinicius Schmidt, Juliana Aparecida Anochi, and Haroldo Fraga de Campos Velho. "South America Seasonal Precipitation Prediction by Gradient-Boosting Machine-Learning Approach." Atmosphere 13, no. 2 (January 31, 2022): 243. http://dx.doi.org/10.3390/atmos13020243.

Full text
Abstract:
Machine learning has experienced great success in many applications. Precipitation is a hard meteorological variable to predict, but it has a strong impact on society. Here, a machine-learning technique—a formulation of gradient-boosted trees—is applied to climate seasonal precipitation prediction over South America. The Optuna framework, based on Bayesian optimization, was employed to determine the optimal hyperparameters for the gradient-boosting scheme. Seasonal precipitation forecasts from the numerical atmospheric models used by the National Institute for Space Research (INPE, Brazil) as an operational procedure for weather/climate forecasting, from gradient boosting, and from deep-learning techniques are compared against observations, with the boosting scheme showing better performance in some cases.
APA, Harvard, Vancouver, ISO, and other styles
33

Sjöstrand, Henrik, and Georg Schnabel. "Monte Carlo integral adjustment of nuclear data libraries – experimental covariances and inconsistent data." EPJ Web of Conferences 211 (2019): 07007. http://dx.doi.org/10.1051/epjconf/201921107007.

Full text
Abstract:
Integral experiments can be used to adjust nuclear data libraries. Here a Bayesian Monte Carlo method based on assigning weights to the different random files is used. If the experiments are internally inconsistent or inconsistent with the nuclear data, it is shown that the adjustment procedure can lead to undesirable results. Therefore, a technique to treat inconsistent data is presented. The technique is based on the optimization of the marginal likelihood, which is approximated by a sample of model calculations. The sources of the inconsistencies are discussed, and the importance of considering correlations between the different experiments is emphasized. It is found that the technique can address inconsistencies in a desirable way.
APA, Harvard, Vancouver, ISO, and other styles
34

Correa, Elon S., Alex A. Freitas, and Colin G. Johnson. "Particle Swarm for Attribute Selection in Bayesian Classification: An Application to Protein Function Prediction." Journal of Artificial Evolution and Applications 2008 (March 18, 2008): 1–12. http://dx.doi.org/10.1155/2008/876746.

Full text
Abstract:
The discrete particle swarm optimization (DPSO) algorithm is an optimization technique which belongs to the fertile paradigm of Swarm Intelligence. Designed for the task of attribute selection, the DPSO deals with discrete variables in a straightforward manner. This work empowers the DPSO algorithm by extending it in two ways. First, it enables the DPSO to select attributes for a Bayesian network algorithm, which is more sophisticated than the Naive Bayes classifier previously used by the original DPSO algorithm. Second, it applies the DPSO to a set of challenging protein functional classification data, involving a large number of classes to be predicted. The work then compares the performance of the DPSO algorithm against the performance of a standard Binary PSO algorithm on the task of selecting attributes on those data sets. The criteria used for this comparison are (1) maximizing predictive accuracy and (2) finding the smallest subset of attributes.
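As background for the comparison above, the standard binary PSO update (a sigmoid of the velocity gives the probability that each bit is set) can be sketched as follows. This is a generic illustration, not the paper's DPSO variant; the separable toy fitness rewarding a known attribute subset is a hypothetical stand-in for a real wrapper objective such as classifier accuracy:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, n_iter=60,
               w=0.7, c1=1.5, c2=1.5, vmax=6.0, seed=0):
    """Sigmoid-velocity binary PSO maximizing `fitness` over bit strings."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    X = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    P = [x[:] for x in X]                     # personal best positions
    pf = [fitness(x) for x in X]              # personal best fitnesses
    gi = max(range(n_particles), key=lambda i: pf[i])
    g, gf = P[gi][:], pf[gi]                  # global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))   # velocity clamp
                X[i][d] = 1 if rng.random() < sig(V[i][d]) else 0
            f = fitness(X[i])
            if f > pf[i]:
                pf[i], P[i] = f, X[i][:]
                if f > gf:
                    gf, g = f, X[i][:]
    return g, gf

# Hypothetical attribute weights: the optimum selects the positive-weight bits.
weights = [3, -2, 1, -1, 2, -3, 1, -2]
best_bits, best_fit = binary_pso(
    lambda bits: sum(wt * b for wt, b in zip(weights, bits)), n_bits=8)
```

In attribute selection, each bit marks whether an attribute is included, and the fitness would be the predictive accuracy of a classifier trained on the selected subset.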
APA, Harvard, Vancouver, ISO, and other styles
35

Kefalas, Marios, Bas van Stein, Mitra Baratchi, Asteris Apostolidis, and Thomas Baeck. "End-to-End Pipeline for Uncertainty Quantification and Remaining Useful Life Estimation: An Application on Aircraft Engines." PHM Society European Conference 7, no. 1 (June 29, 2022): 245–60. http://dx.doi.org/10.36001/phme.2022.v7i1.3317.

Full text
Abstract:
Estimating the remaining useful life (RUL) of an asset lies at the heart of prognostics and health management (PHM) of many operations-critical industries such as aviation. Modern methods of RUL estimation adopt techniques from deep learning (DL). However, most of these contemporary techniques deliver only single-point estimates for the RUL without reporting on the confidence of the prediction. This practice usually provides overly confident predictions that can have severe consequences in operational disruptions or even safety. To address this issue, we propose a technique for uncertainty quantification (UQ) based on Bayesian deep learning (BDL). The hyperparameters of the framework are tuned using a novel bi-objective Bayesian optimization method with objectives the predictive performance and predictive uncertainty. The method also integrates the data pre-processing steps into the hyperparameter optimization (HPO) stage, models the RUL as a Weibull distribution, and returns the survival curves of the monitored assets to allow informed decision-making. We validate this method on the widely used C-MAPSS dataset against a single-objective HPO baseline that aggregates the two objectives through the harmonic mean (HM). We demonstrate the existence of trade-offs between the predictive performance and the predictive uncertainty and observe that the bi-objective HPO returns a larger number of hyperparameter configurations compared to the single-objective baseline. Furthermore, we see that with the proposed approach, it is possible to configure models for RUL estimation that exhibit better or comparable performance to the single-objective baseline when validated on the test sets.
APA, Harvard, Vancouver, ISO, and other styles
36

C. D, Anisha, and Arulanand N. "EMG BASED DIAGNOSIS OF MYOPATHY AND NEUROPATHY USING MACHINE LEARNING TECHNIQUES." International Journal of Engineering Technology and Management Sciences 4, no. 4 (July 28, 2020): 38–45. http://dx.doi.org/10.46647/ijetms.2020.v04i04.007.

Full text
Abstract:
Myopathy and Neuropathy are non-progressive and progressive neuromuscular disorders which weaken the muscles and nerves respectively. Electromyography (EMG) signals are bio-signals obtained from individual muscle cells. EMG-based diagnosis of neuromuscular disorders is a safe and reliable method, and integrating EMG signals with machine learning techniques improves diagnostic accuracy. The proposed system performs analysis on the clinical raw EMG dataset obtained from the publicly available PhysioNet database. The two-channel raw EMG dataset of healthy, myopathy, and neuropathy subjects is divided into samples, and Time Domain (TD) features are extracted from the samples of each subject. The extracted features are annotated with the class label representing the state of the individual, and the annotated features are split into training and testing sets in the standard ratio 70:30. A comparative classification analysis is performed on the complete annotated feature set and on the prominent feature set procured using the Pearson correlation technique. The features are scaled using the standard scaler technique, and the analysis is also implemented on the scaled annotated feature set and the scaled prominent feature set. The hyperparameter space of the classifiers is specified by trial and error. The hyperparameters of the classifiers are tuned using the Bayesian optimization technique, and the resulting optimal parameters are fed to the tuned classifier. The classification algorithms considered in the analysis are Random Forest and Multi-Layer Perceptron Neural Network (MLPNN). The performance of the classifiers on the test data is evaluated using the Accuracy, Confusion Matrix, F1 Score, Precision, and Recall metrics.
The evaluation results state that Random Forest performs better than MLPNN on non-scaled Time Domain (TD) features, providing an accuracy of 96%, while MLPNN outperforms Random Forest on scaled Time Domain (TD) features with an accuracy of 97%, which is higher than existing systems. The inference from the evaluation results is that Bayesian-optimization-tuned classifiers improve accuracy, providing a robust diagnostic model for neuromuscular disorder diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
37

Kolar, Davor, Dragutin Lisjak, Michał Pająk, and Mihael Gudlin. "Intelligent Fault Diagnosis of Rotary Machinery by Convolutional Neural Network with Automatic Hyper-Parameters Tuning Using Bayesian Optimization." Sensors 21, no. 7 (March 31, 2021): 2411. http://dx.doi.org/10.3390/s21072411.

Full text
Abstract:
Intelligent fault diagnosis refers to the application of machine learning theories to machine fault diagnosis. Although there is a large number of successful examples, there is a gap in the optimization of the hyper-parameters of the machine learning model, which ultimately has a major impact on the performance of the model: machine learning experts are required to configure a set of hyper-parameter values manually. This work presents a convolutional neural network based data-driven intelligent fault diagnosis technique for rotary machinery which uses a model with optimized hyper-parameters and network structure. The proposed technique feeds the raw three-axis accelerometer signal as high-definition 1-D data into deep learning layers with optimized hyper-parameters; the input consists of a wide 12,800 × 1 × 3 vibration signal matrix. The model learning phase includes Bayesian optimization of the hyper-parameters of the convolutional neural network. Finally, using a Convolutional Neural Network (CNN) model with optimized hyper-parameters, classification into one of 8 different machine states and 2 rotational speeds can be performed. This study accomplished the effective classification of different rotary machinery states at different rotational speeds using an optimized convolutional artificial neural network on raw three-axis accelerometer signal input. An overall classification accuracy of 99.94% on the evaluation set is obtained with the CNN model based on 19 layers. Additionally, more data were collected on the same machine with altered bearings to test the model for overfitting; a classification accuracy of 100% on the second evaluation set has been achieved, proving the potential of the proposed technique.
APA, Harvard, Vancouver, ISO, and other styles
38

Guizilini, Vitor, and Fabio Ramos. "Variational Hilbert regression for terrain modeling and trajectory optimization." International Journal of Robotics Research 38, no. 12-13 (April 22, 2019): 1375–87. http://dx.doi.org/10.1177/0278364919844586.

Full text
Abstract:
The ability to generate accurate terrain models is of key importance in a wide variety of robotics tasks, ranging from path planning and trajectory optimization to environment exploration and mining applications. This paper introduces a novel regression methodology for terrain modeling that can approximate arbitrarily complex functions based on a series of simple kernel calculations, using variational Bayesian inference. A sparse feature vector is used to efficiently project input points into a high-dimensional reproducing kernel Hilbert space, according to a set of inducing points automatically generated from clustering available data. Each inducing point maintains its own regression model in addition to individual kernel parameters, and the entire set is iteratively optimized as more data are collected in order to maximize a global variational lower bound. We also show how kernel and regression model parameters can be jointly learned, to achieve a better approximation of the underlying function. Experimental results show that the proposed methodology consistently outperforms current state-of-the-art techniques, while producing a continuous model with a fully probabilistic treatment of uncertainties, well-defined gradients, and highly scalable to large-scale datasets. As a practical application of the proposed terrain modeling technique, we explore the problem of trajectory optimization, deriving gradients that allow the efficient generation of continuous paths using standard optimization algorithms, minimizing a series of useful properties (i.e. distance traveled, changes in elevation, and terrain variance).
APA, Harvard, Vancouver, ISO, and other styles
39

Varmazyar, Maryam, Nicholas Haritos, Michael Kirley, and Tim Peterson. "A One Stage Damage Detection Technique Using Spectral Density Analysis and Parallel Genetic Algorithms." Key Engineering Materials 558 (June 2013): 1–11. http://dx.doi.org/10.4028/www.scientific.net/kem.558.1.

Full text
Abstract:
This paper describes a new global damage identification framework for the continuous/periodic monitoring of civil structures. In order to localize and estimate the severity of damage regions, a one-stage model-based Bayesian probabilistic damage detection approach is proposed. This method, which is based on the response power spectral density of the structure, enjoys the advantage of broadband frequency information and can be implemented on input-output as well as output-only damage identification studies. A parallel genetic algorithm is subsequently used to evolve the optimal model parameters introduced for different damage conditions. Given the complex search space and the need to perform multiple time-consuming objective function evaluations, a parallel meta-heuristic provides a robust optimization tool in this domain. It is shown that this approach is capable of detecting structural damage in both noisy and noise-free environments.
APA, Harvard, Vancouver, ISO, and other styles
40

Sengupta, P., and S. Chakraborty. "Model reduction technique for Bayesian model updating of structural parameters using simulated modal data." Proceedings of the 12th Structural Engineering Convention, SEC 2022: Themes 1-2 1, no. 1 (December 19, 2022): 1403–12. http://dx.doi.org/10.38208/acp.v1.670.

Full text
Abstract:
An attempt has been made to study the effectiveness of a model reduction technique for the Bayesian approach to model updating with incomplete modal data sets. The inverse problems in system identification require the solution of a family of plausible values of model parameters based on available data. Specifically, an iterative model reduction algorithm is proposed based on a non-linear optimization method to solve for the transformation parameter such that no prior choices of response parameters are required. The modal ordinates synthesized at the unmeasured degrees of freedom (DOF) from the reduced order model are used for a better estimate of the likelihood functions. The reduced-order model is subsequently implemented for updating of unknown structural parameters. The present study also synthesizes the mode shape ordinates at unmeasured DOF from the reduced order model. The efficiency of the proposed model reduction algorithm is further studied by adding noise of varying percentages to the measured modal data sets. The proposed methodology is illustrated numerically to update the stiffness parameters of an eight-story shear building model considering simulated datasets contaminated by Gaussian error as evidence. The capability of the proposed model reduction algorithm coupled with the Markov Chain Monte Carlo (MCMC) algorithm is compared with the case where only the MCMC algorithm is used, to investigate their effectiveness in updating model parameters. The numerical study focuses on the effect of a reduced number of measurements for various measurement configurations in estimating the variation of errors in determining the modal data. Subsequently, its effects in reducing the uncertainty of model updating parameters are investigated. The effectiveness of the proposed model reduction algorithm is tested for a number of modes equal to the number of master DOFs and for mode numbers gradually decreasing below the number of master DOFs.
APA, Harvard, Vancouver, ISO, and other styles
41

Jatoi, Munsif Ali, Nidal Kamel, Sayed Hyder Abbas Musavi, and José David López. "Bayesian Algorithm Based Localization of EEG Recorded Electromagnetic Brain Activity." Current Medical Imaging Formerly Current Medical Imaging Reviews 15, no. 2 (January 10, 2019): 184–93. http://dx.doi.org/10.2174/1573405613666170629112918.

Full text
Abstract:
Background: Electrical signals are generated inside the human brain during any mental or physical task. This causes activation of several sources inside the brain, which are localized using various optimization algorithms. Methods: Such activity is recorded through various neuroimaging techniques such as fMRI, EEG, and MEG. EEG-signal-based localization is termed EEG source localization. The source localization problem is defined by two complementary problems: the forward problem and the inverse problem. The forward problem involves modeling how the electromagnetic sources cause measurements in sensor space, while the inverse problem refers to the estimation of the sources (causes) from observed data (consequences). Usually, this inverse problem is ill-posed; in other words, there are many solutions to the inverse problem that explain the same data. This ill-posedness can be finessed by using prior information within a Bayesian framework. This research work discusses source reconstruction for EEG data using a Bayesian framework; in particular, MSP, LORETA, and MNE are compared. Results: The results are compared in terms of the variational free energy approximation to model evidence and in terms of variance accounted for in the sensor space. The results are obtained for real-time EEG data and synthetically generated EEG data at an SNR level of 10 dB. Conclusion: In brief, it was seen that MSP has the highest evidence and the lowest localization error when compared to classical models. Furthermore, the plausibility and consistency of the source reconstruction speak to the ability of the MSP technique to localize active brain sources.
APA, Harvard, Vancouver, ISO, and other styles
42

Feng, Zhouquan, Yang Lin, Wenzan Wang, Xugang Hua, and Zhengqing Chen. "Probabilistic Updating of Structural Models for Damage Assessment Using Approximate Bayesian Computation." Sensors 20, no. 11 (June 4, 2020): 3197. http://dx.doi.org/10.3390/s20113197.

Full text
Abstract:
A novel probabilistic approach for model updating based on approximate Bayesian computation with subset simulation (ABC-SubSim) is proposed for damage assessment of structures using modal data. The ABC-SubSim is a likelihood-free Bayesian approach in which the explicit expression of likelihood function is avoided and the posterior samples of model parameters are obtained using the technique of subset simulation. The novel contributions of this paper are on three fronts: one is the introduction of some new stopping criteria to find an appropriate tolerance level for the metric used in the ABC-SubSim; the second one is the employment of a hybrid optimization scheme to find finer optimal values for the model parameters; and the last one is the adoption of an iterative approach to determine the optimal weighting factors related to the residuals of modal frequency and mode shape in the metric. The effectiveness of this approach is demonstrated using three illustrative examples.
APA, Harvard, Vancouver, ISO, and other styles
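The ABC-SubSim approach above avoids an explicit likelihood by accepting parameter draws whose simulated data fall close to the observations under a chosen metric. A minimal generic sketch of the plain rejection-ABC idea (not the paper's subset-simulation variant; the Gaussian model, uniform prior, and tolerance below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" parameter and observed data (a stand-in for measured modal data).
theta_true = 2.5
observed = rng.normal(theta_true, 1.0, size=100)


def distance(simulated, observed):
    """Metric comparing summary statistics of simulated vs. observed data."""
    return abs(simulated.mean() - observed.mean())


def abc_rejection(observed, n_samples=500, tolerance=0.1):
    """Plain rejection ABC: keep prior draws whose simulated data
    fall within `tolerance` of the observations under the metric."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(0.0, 5.0)          # draw from the prior
        simulated = rng.normal(theta, 1.0, size=observed.size)
        if distance(simulated, observed) < tolerance:
            accepted.append(theta)
    return np.array(accepted)


posterior = abc_rejection(observed)
print(round(float(posterior.mean()), 2))
```

ABC-SubSim replaces the single fixed tolerance with a decreasing sequence of tolerance levels explored via subset simulation, which is far more sample-efficient than plain rejection.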
43

Liu, Fang, and Evercita C. Eugenio. "A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression." Statistical Methods in Medical Research 27, no. 4 (May 25, 2016): 1024–44. http://dx.doi.org/10.1177/0962280216650699.

Full text
Abstract:
Beta regression is an increasingly popular statistical technique in medical research for modeling outcomes that take values in (0, 1), such as proportions and patient-reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review of beta regression and zoib regression in their modeling, inferential, and computational aspects via likelihood-based and Bayesian approaches. We demonstrate via simulation studies the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacement with values close to zero/one; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is generally computationally faster than the MCMC algorithms used in Bayesian inference, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm, especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
APA, Harvard, Vancouver, ISO, and other styles
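The likelihood-based approach reviewed above can be sketched for plain beta regression (no zero/one inflation): with a logit link for the mean μ and precision φ, the outcome is Beta(μφ, (1−μ)φ). A minimal maximum-likelihood sketch on simulated data (the covariate, true coefficients, and optimizer choice are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(1)

# Simulated covariate and (0,1) outcome: logit(mu) = b0 + b1 * x, precision phi.
n = 400
x = rng.normal(size=n)
b0_true, b1_true, phi_true = 0.3, 0.8, 20.0
mu = special.expit(b0_true + b1_true * x)
y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)


def neg_log_lik(params):
    b0, b1, log_phi = params
    phi = np.exp(log_phi)                # log-parameterize to keep phi positive
    m = special.expit(b0 + b1 * x)       # logit link for the mean
    return -stats.beta.logpdf(y, m * phi, (1.0 - m) * phi).sum()


fit = optimize.minimize(neg_log_lik, x0=[0.0, 0.0, 1.0], method="BFGS")
b0_hat, b1_hat, phi_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(round(b0_hat, 2), round(b1_hat, 2), round(phi_hat, 1))
```

A Bayesian fit of the same model would place priors on (b0, b1, φ) and sample the posterior by MCMC, which is slower but avoids the convergence and starting-value issues noted in the abstract.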
44

Rezapour, Mahdi, and Khaled Ksaibati. "Application of Bayesian Hierarchical Negative Binomial Finite Mixture Model for Cost-Benefit Analysis of Barriers Optimization, Accounting for Severe Heterogeneity." Algorithms 13, no. 11 (November 10, 2020): 288. http://dx.doi.org/10.3390/a13110288.

Full text
Abstract:
The Wyoming Department of Transportation (WYDOT) initiated a project to optimize the heights of barriers that do not satisfy the barrier design criteria, while prioritizing them based on an ability to achieve higher monetary benefits. The equivalent property damage only (EPDO) was used in this study to account for both aspects of crash frequency and severity. Data of this type are known to have overdispersion, that is, a variance greater than the mean. Thus, a negative binomial model was implemented to address the over-dispersion issue of the dataset. Another challenge of the dataset used in this study was its heterogeneity, which resulted from various factors such as the data being aggregated across two highway systems and the presence of two barrier types across the whole state. Thus, it is not practical to assign a subjective hierarchy, such as highway system or barrier type, to address the issue of severe heterogeneity in the dataset. Under these conditions, a finite mixture model (FMM) was implemented to find the best distribution parameters to characterize the observations. With this technique, after the optimum number of mixtures was identified, those clusters were assigned to the various observations. However, previous studies mostly employed just the finite mixture model (FMM), with various distributions, to account for unobserved heterogeneity. The problem with the FMM approach is that it results in a loss of information: for instance, it would produce N equations, where each result would use only part of the whole dataset. On the other hand, some studies used a subjective hierarchy to account for the heterogeneity in the dataset, such as the effect of seasonality or highway system; however, those subjective hierarchies might not account for the optimum heterogeneity in the dataset.
Thus, we implement a new methodology, the Bayesian Hierarchical Finite Mixture (BHFMM), to employ the FMM without losing information while also accounting for the heterogeneity in the dataset by considering objective and unbiased hierarchies. As the Bayesian technique has the shortcoming of labeling the observations due to label switching, the FMM parameters were estimated by the maximum likelihood technique. Results of the identified model were converted to an equation for implementation of machine learning techniques. The heights were optimized to an optimal value and the EPDO was predicted based on the changes. The results of the cost-benefit analysis indicated that after spending about $4 million, the WYDOT would not only recover the expenses, but could also expect to save an additional $4 million through traffic barrier crash reduction.
APA, Harvard, Vancouver, ISO, and other styles
45

Geara, Christelle, Rafic Faddoul, Alaa Chateauneuf, and Wassim Raphaël. "A predator-prey optimization for structural health monitoring problems." MATEC Web of Conferences 281 (2019): 01004. http://dx.doi.org/10.1051/matecconf/201928101004.

Full text
Abstract:
Monitoring a structure using permanent sensors has been one of the most interesting topics, especially with the increasing number of aging structures. Such a technique requires the implementation of sensors on a structure to predict the condition states of the structural elements. However, due to the cost of sensors, one must judiciously install a few sensors at well-chosen locations in order to maximize the probability of detecting potential damage. In this paper, we propose a methodology based on a genetic algorithm of the predator-prey type with Bayesian updating of the structural parameters, to optimize the number and location of the sensors to be placed. This methodology takes into consideration all uncertainties related to the degradation of the elements, the mechanical model and the accuracy of the sensors. Starting with two initial populations representing the damages (prey) and the sensors (predators), the genetic algorithm evolves both populations in order to converge towards the optimal configuration of sensors, in terms of number and location. The proposed methodology is illustrated by a two-story concrete frame structure.
APA, Harvard, Vancouver, ISO, and other styles
46

Khan, Muhammad Attique, Naveera Sahar, Wazir Zada Khan, Majed Alhaisoni, Usman Tariq, Muhammad H. Zayyan, Ye Jin Kim, and Byoungchol Chang. "GestroNet: A Framework of Saliency Estimation and Optimal Deep Learning Features Based Gastrointestinal Diseases Detection and Classification." Diagnostics 12, no. 11 (November 7, 2022): 2718. http://dx.doi.org/10.3390/diagnostics12112718.

Full text
Abstract:
In the last few years, artificial intelligence has shown a lot of promise in the medical domain for the diagnosis and classification of human infections. Several computerized techniques based on artificial intelligence (AI) have been introduced in the literature for gastrointestinal (GIT) diseases such as ulcers, bleeding, polyps, and a few others. Manual diagnosis of these infections is time-consuming, expensive, and always requires an expert. As a result, computerized methods that can assist doctors as a second opinion in clinics are widely required. The key challenge of a computerized technique is accurate segmentation of infected regions, because each infected region varies in shape and location. Moreover, inaccurate segmentation degrades feature extraction, which in turn impacts classification accuracy. In this paper, we proposed an automated framework for GIT disease segmentation and classification based on deep saliency maps and Bayesian optimal deep learning feature selection. The proposed framework is made up of a few key steps, from preprocessing to classification. Original images are improved in the preprocessing step by employing a proposed contrast enhancement technique. In the following step, we proposed a deep saliency map for segmenting infected regions. The segmented regions are then used to fine-tune a pre-trained model, MobileNet-V2, using transfer learning. The fine-tuned model's hyperparameters were initialized using Bayesian optimization (BO). The average pooling layer is then used to extract features. However, several redundant features were discovered during the analysis phase and had to be removed. As a result, we proposed a hybrid whale optimization algorithm for selecting the best features. Finally, the selected features are classified using an extreme learning machine classifier. The experiment was carried out on three datasets: Kvasir 1, Kvasir 2, and CUI Wah.
The proposed framework achieved accuracies of 98.20%, 98.02%, and 99.61% on these three datasets, respectively. When compared to other methods, the proposed framework shows an improvement in accuracy.
APA, Harvard, Vancouver, ISO, and other styles
47

Chepiga, Timur, Petr Zhilyaev, Alexander Ryabov, Alexey P. Simonov, Oleg N. Dubinin, Denis G. Firsov, Yulia O. Kuzminova, and Stanislav A. Evlashin. "Process Parameter Selection for Production of Stainless Steel 316L Using Efficient Multi-Objective Bayesian Optimization Algorithm." Materials 16, no. 3 (January 25, 2023): 1050. http://dx.doi.org/10.3390/ma16031050.

Full text
Abstract:
Additive manufacturing is a modern technique for producing parts with complex geometry. However, the choice of printing parameters is a time-consuming and costly process. In this study, parameter optimization for the laser powder bed fusion process was investigated. Using state-of-the-art multi-objective Bayesian optimization, the set of the most promising process parameters (laser power, scanning speed, hatch distance, etc.), which would yield parts with the desired hardness and porosity, was established. The Gaussian process surrogate model was built on 57 empirical data points, and through efficient sampling in the design space, three points on the Pareto front were obtained in just over six iterations. The produced parts had a hardness ranging from 224–235 HV and a porosity in the range of 0.2–0.37%. The trained model recommended the following parameters for high-quality parts: 58 W, 257 mm/s, 45 µm, with a scan rotation angle of 131 degrees. The proposed methodology greatly reduces the number of experiments, thus saving time and resources. The candidate process parameters prescribed by the model were experimentally validated and tested.
APA, Harvard, Vancouver, ISO, and other styles
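The Bayesian optimization workflow described above, a Gaussian process surrogate fit to a small number of expensive evaluations and queried for the most promising next sample, can be illustrated in a simplified single-objective, one-dimensional form; the paper's multi-objective Pareto-front search is more involved. The toy objective, kernel, and evaluation budget below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(2)


def objective(x):
    """Stand-in for an expensive experiment (e.g., measured porosity)."""
    return np.sin(3.0 * x) + 0.1 * x**2


# A few initial design points on [0, 5], plus a dense candidate grid.
X = rng.uniform(0.0, 5.0, size=5).reshape(-1, 1)
y = objective(X).ravel()
grid = np.linspace(0.0, 5.0, 500).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-6)

for _ in range(15):
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    best = y.min()                                        # minimizing
    z = (best - mu) / sigma
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement
    x_next = grid[np.argmax(ei)].reshape(1, -1)           # most promising point
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(round(float(X[np.argmin(y)][0]), 2), round(float(y.min()), 3))
```

Twenty total evaluations suffice here because the surrogate concentrates sampling where improvement is likely, which is the same economy that lets the paper locate Pareto-optimal process parameters from only 57 empirical points.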
48

González García, Ignacio, and Alfonso Mateos Caballero. "A Multi-Objective Bayesian Approach with Dynamic Optimization (MOBADO). A Hybrid of Decision Theory and Machine Learning Applied to Customs Fraud Control in Spain." Mathematics 9, no. 13 (June 29, 2021): 1529. http://dx.doi.org/10.3390/math9131529.

Full text
Abstract:
This paper studies the economically significant problem of optimizing customs fraud control, a critical issue for many countries. The European Union (EU) alone handles 4693 tons of goods every minute (2018 figures). Even though 70% of goods are imported at zero tariff, the EU raised EUR 25.4 billion in 2018, and customs-related income transferred by member states to the EU accounts for nearly 13% of its overall budget. In this field, (a) the conflicting objectives are qualitative and cannot be reduced to a common measure (security and terrorism, health, drug market access control, taxes, etc.); (b) each submitted item has dozens of characteristics; (c) there are constraints; and (d) risk analysis systems have to make decisions in real time. Although the World Customs Organization has promoted the use of artificial intelligence to increase the precision of controls, the problem is very complex due to the data characteristics and the interpretability requirement established by customs officers. In this paper, we propose a new Bayesian-based hybrid approach combining machine learning and multi-objective linear programming (MOLP), called multi-objective Bayesian with dynamic optimization (MOBADO). We demonstrate that it is possible to more than double (a 237% increase) the precision of current inspection systems, freeing up almost 50% of human resources, and to outperform past results with respect to each of the above objectives. MOBADO is an optimization technique that could be combined with any artificial intelligence approach capable of optimizing the quality of multi-objective risk analysis in real time.
APA, Harvard, Vancouver, ISO, and other styles
49

Doan, Quoc Hoan, Duc-Kien Thai, and Ngoc Long Tran. "A hybrid model for predicting missile impact damages based on k-nearest neighbors and Bayesian optimization." Journal of Science and Technology in Civil Engineering (STCE) - NUCE 14, no. 3 (August 19, 2020): 1–14. http://dx.doi.org/10.31814/stce.nuce2020-14(3)-01.

Full text
Abstract:
Due to the increase in missile performance, the safety design requirements of military and industrial reinforced concrete (RC) structures (i.e., bunkers, nuclear power plants, etc.) also increase. Estimating damage levels in the design stage becomes a crucial task and requires greater accuracy. Thus, this study proposed a hybrid machine learning model based on k-nearest neighbors (KNN) and Bayesian optimization (BO), named BO-KNN, for predicting the local damage of reinforced concrete (RC) panels under missile impact loading. In the proposed BO-KNN, the hyperparameters of the KNN were optimized using BO, a well-established optimization algorithm. Accordingly, the KNN was trained on an experimental dataset consisting of 254 impact tests to predict four levels (or classes) of damage: perforation, scabbing, penetration, and no damage. Due to the imbalance in the number of tests in each damage class, an over-sampling technique called BorderlineSMOTE was employed as a balancing solution. The predictive capability of the proposed model was investigated by comparison with benchmark models including non-optimized KNN, multilayer perceptron (MLP), and decision tree (DT). Accuracy, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC) were utilized to evaluate the performance of these models. The implementation results showed that the proposed BO-KNN model outperformed the other benchmark models with an average class accuracy of 68.05%, F1-score = 0.641, and AUC = 85.8%. Thus, the proposed model can serve as a foundation for developing a tool for predicting the local damage of RC panels under missile impact in the future. Keywords: impact damage; k-nearest neighbors; Bayesian optimization; oversampling; imbalanced data; RC panel.
APA, Harvard, Vancouver, ISO, and other styles
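Hyperparameter tuning of a KNN classifier, as in the BO-KNN model above, amounts to maximizing cross-validated accuracy over the hyperparameter space. As a simplified stand-in for the paper's Bayesian optimizer, the sketch below scores each candidate neighbor count exhaustively on a synthetic dataset (the dataset generator, feature counts, and candidate range are illustrative assumptions, not the paper's impact-test data):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in sized like the paper's dataset: 254 samples, 4 classes.
X, y = make_classification(
    n_samples=254, n_features=8, n_informative=5,
    n_classes=4, random_state=0,
)

# Score each candidate k by 5-fold cross-validated accuracy; a BO tuner
# would instead propose candidates sequentially from a surrogate model.
scores = {
    k: cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
    for k in range(1, 21)
}
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

With one integer hyperparameter the exhaustive sweep is cheap; BO pays off when the search space also includes distance metric, weighting scheme, and other hyperparameters, where exhaustive evaluation becomes infeasible.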
50

Hussien, Ahmed M., Jonghoon Kim, Abdulaziz Alkuhayli, Mohammed Alharbi, Hany M. Hasanien, Marcos Tostado-Véliz, Rania A. Turky, and Francisco Jurado. "Adaptive PI Control Strategy for Optimal Microgrid Autonomous Operation." Sustainability 14, no. 22 (November 11, 2022): 14928. http://dx.doi.org/10.3390/su142214928.

Full text
Abstract:
The present research presents a new technique for the optimal operation of an isolated microgrid (MGD) based on an enhanced block-sparse adaptive Bayesian algorithm (EBSABA). To update the proportional-integral (PI) controller gains online, the suggested approach considers the impact of the actuating error signal as well as its magnitude. To reach a compromise among the various objectives, the Response Surface Methodology (RSMT) is combined with the sunflower optimization (SFO) and particle swarm optimization (PSO) algorithms. To demonstrate the success of the novel approach, a benchmark MGD is evaluated in three different incidents: (1) removing the MGD from the utility (islanding mode); (2) load variations under islanding mode; and (3) a three-phase fault under islanding mode. Extensive simulations are run to test the new technique using the PSCAD/EMTDC program. The validity of the proposed optimizer is demonstrated by comparing its results with those obtained using the least mean and square root of exponential (LMSRE) based adaptive control, SFO, and PSO methodologies. The study demonstrates the superiority of the proposed EBSABA over the LMSRE, SFO, and PSO approaches in the system's transient responses.
APA, Harvard, Vancouver, ISO, and other styles