Academic literature on the topic 'Network design problem; Bayesian optimization; Simulation-based optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Network design problem; Bayesian optimization; Simulation-based optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Network design problem; Bayesian optimization; Simulation-based optimization"

1

Yin, Ruyang, Jiping Xing, Pengli Mo, Nan Zheng, and Zhiyuan Liu. "BO-B&B: A hybrid algorithm based on Bayesian optimization and branch-and-bound for discrete network design problems." Electronic Research Archive 30, no. 11 (2022): 3993–4014. http://dx.doi.org/10.3934/era.2022203.

Abstract:
A discrete network design problem (DNDP) is conventionally formulated as an analytical bi-level programming problem to acquire an optimal network design strategy for an existing traffic network. In recent years, multimodal network design problems have benefited from simulation-based models. The nonconvexity and implicity of bi-level DNDPs make it challenging to obtain an optimal solution, especially for simulation-related models. Bayesian optimization (BO) has been proven to be an effective method for optimizing the costly black-box functions of simulation-based continuous network design problems. However, there are only discrete inputs in DNDPs, which cannot be processed using standard BO algorithms. To address this issue, we develop a hybrid method (BO-B&B) that combines Bayesian optimization and a branch-and-bound algorithm to deal with discrete variables. The proposed algorithm exploits the advantages of the cutting-edge machine-learning parameter-tuning technique and the exact mathematical optimization method, thereby balancing efficiency and accuracy. Our experimental results show that the proposed method outperforms benchmarking discrete optimization heuristics for simulation-based DNDPs in terms of total computational time. Thus, BO-B&B can potentially aid decision makers in mapping practical network design schemes for large-scale networks.
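The BO-B&B code itself is not reproduced in this entry, but the Bayesian-optimization half of the idea can be illustrated with a minimal loop: fit a Gaussian-process surrogate to a few expensive simulation runs, maximize an expected-improvement acquisition over candidate designs, and query the most promising point. The sketch below is a generic illustration under assumed toy objectives and names, not the authors' implementation.

```python
# Minimal Bayesian-optimization sketch (illustrative only, not the BO-B&B code):
# a Gaussian-process surrogate plus expected improvement over a candidate set.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expensive_simulation(x):
    """Stand-in for a costly simulation-based network design objective (minimized)."""
    return np.sin(3 * x[0]) + 0.5 * (x[1] - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))                  # initial design points
y = np.array([expensive_simulation(x) for x in X])  # expensive evaluations

for _ in range(20):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    candidates = rng.uniform(0, 1, size=(500, 2))   # random candidate designs
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)  # expected improvement

    x_next = candidates[np.argmax(ei)]              # query the most promising design
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_simulation(x_next))

print("best objective found:", y.min())
```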
2

Han, Lu, Xianjun Shi, and Yuyao Zhai. "Test optimization selection method based on NSGA-3 and improved Bayesian network model." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 39, no. 2 (April 2021): 414–22. http://dx.doi.org/10.1051/jnwpu/20213920414.

Abstract:
Most of the solutions to existing test selection problems are based on single-objective optimization algorithms and multi-signal models, which may lead to problems such as rough index calculation and large solution-set limitations. To solve these problems, a test optimization selection method based on the NSGA-3 algorithm and a Bayesian network model is proposed. Firstly, the paper describes the improved Bayesian network model, expounds the method of model establishment, and introduces the model's learning and processing abilities on uncertain information. According to the constraints and objective functions established by the design requirements, NSGA-3 is used to calculate the test optimization selection scheme based on the improved Bayesian network model. Taking a certain component of the missile airborne radar as an example, the fault detection rate and isolation rate are selected as constraints, and the false alarm rate, misdiagnosis rate, test cost, and test quantity are the optimization goals. The method of this paper is used for test optimization selection. It has been verified that this method can effectively solve the problem of multi-objective test selection and has guiding significance for testability design.
3

Zhang, Xinyong, and Liwei Sun. "Optimization of Optical Machine Structure by Backpropagation Neural Network Based on Particle Swarm Optimization and Bayesian Regularization Algorithms." Materials 14, no. 11 (June 1, 2021): 2998. http://dx.doi.org/10.3390/ma14112998.

Abstract:
Fitting the highly nonlinear functional relationship between input variables and output response is important and challenging in the optimization design process of optical machine structures. A backpropagation neural network method based on particle swarm optimization and Bayesian regularization algorithms (called BMPB) is proposed to solve this problem. A prediction model of the mass and first-order modal frequency of the supporting structure is developed using the supporting structure as an example. The first-order modal frequency is used as the constraint condition to optimize the lightweight design of the supporting structure's mass. Results show that the prediction model has more than 99% accuracy in predicting the mass and the first-order modal frequency of the supporting structure, and converges quickly in the supporting structure's mass-optimization process. The supporting structure results demonstrate the advantages of the proposed method in terms of high accuracy and efficiency. The study provides an effective method for the optimized design of optical machine structures.
4

Dong, Qiang, Ruiying Li, and Rui Kang. "System Resilience Evaluation and Optimization Considering Epistemic Uncertainty." Symmetry 14, no. 6 (June 8, 2022): 1182. http://dx.doi.org/10.3390/sym14061182.

Abstract:
Epistemic uncertainties, caused by data asymmetry and deficiencies, exist in resilience evaluation. Especially in the system design process, it is difficult to obtain enough data for system resilience evaluation and improvement. Mathematical methods, such as evidence theory and Bayesian theory, have been used in resilience evaluation for systems with epistemic uncertainty. However, these methods are based on subjective information and may lead to an interval expansion problem in the calculation. Therefore, the problem of how to quantify epistemic uncertainty in resilience evaluation is not well solved. In this paper, we propose a new resilience measure based on uncertainty theory, a branch of mathematics that is viewed as appropriate for modeling epistemic uncertainty. In our method, resilience is defined as an uncertain measure, namely the belief degree that a system's behavior after disruptions can achieve the predetermined goal. Then, a resilience evaluation method is provided based on the operation law in uncertainty theory. To design a resilient system, an uncertain programming model is given, and a genetic algorithm is applied to find an optimal design that develops a resilient system with minimal cost. Finally, road networks are used as a case study. The results show that our method can effectively reduce cost and ensure network resilience.
5

Song, Wenxue. "Building Construction Design Based on Particle Swarm Optimization Algorithm." Journal of Control Science and Engineering 2022 (June 29, 2022): 1–8. http://dx.doi.org/10.1155/2022/7139230.

Abstract:
In order to take a scientific risk control strategy to reduce the safety risk of construction projects, a construction safety risk decision-making method based on the particle swarm optimization algorithm was proposed. Through the analysis of prefabricated building construction safety risk factors, a combination of the Markov chain and Bayesian network methods was used to estimate the probability of risk factors. The relationship between the various risk factors was described by conditional probability, and a double-objective optimization model of safety risk loss and control investment was built. The corresponding algorithm was designed, and R language programming was used to solve the problem. The experimental results showed that, by taking an investment strategy with a high degree of control over the risk factors, when the constraint cost was RMB 200,000, the global optimal risk loss and the global optimal control cost were RMB 1,400,500 and RMB 19,600, respectively. When the constraint cost was 280,000 yuan, the global optimal risk loss and global optimal control cost were 1.046 million yuan and 278.5 million yuan, respectively. When the constraint cost was 320,000 yuan, the global optimal risk loss and global optimal control cost were 910,100 yuan and 317,300 yuan, respectively. It was concluded that the optimization model considering risk correlation, combined with a reasonable allocation strategy and the actual situation, plays a promoting role in improving prefabricated building construction safety risk decision-making.
6

Ozaki, Yoshihiko, Yuki Tanigaki, Shuhei Watanabe, Masahiro Nomura, and Masaki Onishi. "Multiobjective Tree-Structured Parzen Estimator." Journal of Artificial Intelligence Research 73 (April 8, 2022): 1209–50. http://dx.doi.org/10.1613/jair.1.13188.

Abstract:
Practitioners often encounter challenging real-world problems that involve a simultaneous optimization of multiple objectives in a complex search space. To address these problems, we propose a practical multiobjective Bayesian optimization algorithm. It is an extension of the widely used Tree-structured Parzen Estimator (TPE) algorithm, called the Multiobjective Tree-structured Parzen Estimator (MOTPE). We demonstrate through numerical results that MOTPE approximates the Pareto fronts of a variety of benchmark problems and a convolutional neural network design problem better than existing methods. Based on empirical results, we also investigate how the configuration of MOTPE affects its behavior and performance, as well as the effectiveness of asynchronous parallelization of the method.
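For a rough sense of how MOTPE is used in practice, the sketch below sets up a two-objective study in Optuna, whose multi-objective TPE sampler implements the algorithm described in this paper; the toy objectives are assumptions, and the exact sampler class differs between Optuna versions (older releases expose a dedicated MOTPESampler).

```python
# Illustrative multi-objective TPE run with Optuna (toy objectives, not from the paper).
# Recent Optuna versions route multi-objective studies through TPESampler, which
# implements the MOTPE algorithm; older versions expose MOTPESampler instead.
import optuna

def objectives(trial):
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 5.0)
    f1 = x ** 2 + y              # e.g. model error
    f2 = (x - 2) ** 2 + y ** 2   # e.g. model size / cost
    return f1, f2

study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.TPESampler(seed=0),
)
study.optimize(objectives, n_trials=100)

# Pareto-optimal trade-offs found so far
for t in study.best_trials:
    print(t.values, t.params)
```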
7

Yaloveha, Vladyslav, Andrii Podorozhniak, and Heorhii Kuchuk. "Convolutional neural network hyperparameter optimization applied to land cover classification." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 1 (February 23, 2022): 115–28. http://dx.doi.org/10.32620/reks.2022.1.09.

Abstract:
In recent times, machine learning algorithms have shown great performance in solving problems in different fields of study, including the analysis of remote sensing images, computer vision, natural language processing, medical issues, etc. A well-prepared input dataset can have a huge impact on the result metrics. However, a correctly selected hyperparameter combination, together with the neural network architecture, could greatly increase the final metrics. Therefore, the hyperparameter optimization problem becomes a key issue in a deep learning algorithm. The process of finding a suitable hyperparameter combination can be performed manually or automatically. Manual search is based on previous research and requires enormous human effort, but many automated hyperparameter optimization methods have been successfully applied in practice. The automated hyperparameter tuning techniques are divided into two groups: black-box optimization techniques (such as Grid Search and Random Search) and multi-fidelity optimization techniques (HyperBand, BOHB). The most recent and promising of these approaches is BOHB, which combines Bayesian optimization and bandit-based methods, outperforms classical approaches, and can run asynchronously within a given GPU resource and time budget, which plays a vital role in the hyperparameter optimization process. A previous study proposed a convolutional deep learning neural network for solving the land cover classification problem on the EuroSAT dataset. It was found that adding the spectral indexes NDVI, NDWI, and GNDVI to the RGB channels increased accuracy (from 64.72% to 84.19%) and F1 score (from 63.89% to 84.05%). However, the convolutional neural network architecture and hyperparameter combination were selected manually. This research optimizes the convolutional neural network architecture and finds suitable hyperparameter combinations for the land cover classification problem using multispectral images. The obtained results are expected to improve performance compared with the previous study under the given budget constraints.
8

Cook, Jared A., Ralph C. Smith, Jason M. Hite, Razvan Stefanescu, and John Mattingly. "Application and Evaluation of Surrogate Models for Radiation Source Search." Algorithms 12, no. 12 (December 12, 2019): 269. http://dx.doi.org/10.3390/a12120269.

Abstract:
Surrogate models are increasingly required for applications in which first-principles simulation models are prohibitively expensive to employ for uncertainty analysis, design, or control. They can also be used to approximate models whose discontinuous derivatives preclude the use of gradient-based optimization or data assimilation algorithms. We consider the problem of inferring the 2D location and intensity of a radiation source in an urban environment using a ray-tracing model based on Boltzmann transport theory. Whereas the code implementing this model is relatively efficient, extension to 3D Monte Carlo transport simulations precludes subsequent Bayesian inference to infer source locations, which typically requires thousands to millions of simulations. Additionally, the resulting likelihood exhibits discontinuous derivatives due to the presence of buildings. To address these issues, we discuss the construction of surrogate models for optimization, Bayesian inference, and uncertainty propagation. Specifically, we consider surrogate models based on Legendre polynomials, multivariate adaptive regression splines, radial basis functions, Gaussian processes, and neural networks. We detail strategies for computing training points and discuss the merits and deficits of each method.
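As a flavor of the surrogate construction compared in this paper, the following minimal sketch fits a radial-basis-function surrogate to a handful of runs of an assumed stand-in response using SciPy; the sample sizes and test function are illustrative, not taken from the paper.

```python
# Minimal radial-basis-function surrogate of an expensive 2D response (illustrative).
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_model(xy):
    """Stand-in for a costly transport/ray-tracing simulation."""
    return np.exp(-np.sum((xy - 0.4) ** 2, axis=1)) + 0.1 * xy[:, 0]

rng = np.random.default_rng(1)
X_train = rng.uniform(0, 1, size=(60, 2))   # training points (e.g. candidate source locations)
y_train = expensive_model(X_train)

surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

X_test = rng.uniform(0, 1, size=(500, 2))
err = np.abs(surrogate(X_test) - expensive_model(X_test))
print("mean absolute surrogate error:", err.mean())
```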
9

Liu, Zhiqiang, Hongzhou Zhang, Shengjin Wang, Weijun Hong, Jianhui Ma, and Yanfeng He. "Reliability Evaluation of Public Security Face Recognition System Based on Continuous Bayesian Network." Mathematical Problems in Engineering 2020 (May 25, 2020): 1–9. http://dx.doi.org/10.1155/2020/6287394.

Abstract:
For the sake of measuring the reliability of an actual face recognition system with continuous variables, after analyzing the system structure, common failures, influencing factors of reliability, and maintenance data of a public security face recognition system in use, we propose a reliability evaluation model based on a Continuous Bayesian Network. We design a Clique Tree Propagation algorithm to reason over and solve the model, which is realized in R programs, and as a result, the reliability coefficient of the actual system is obtained. Subsequently, we verify the Continuous Bayesian Network by comparing its evaluation results with those of a traditional Bayesian Network and the Ground Truth. According to these evaluation results, we identify some weaknesses of the system and propose optimization strategies by finding the right remedies and filling in the blanks. In this paper, we synthetically apply a variety of methods, such as qualitative analysis, quantitative analysis, theoretical analysis, and empirical analysis, to solve the unascertained causal reasoning problem. The evaluation method is reasonable and valid, the results are consistent with realities and objective, and the proposed strategies are operable and targeted. This work is of theoretical significance to research on reliability theory. It is also of practical significance to the improvement of the system's reliability and the ability of public order maintenance.
10

Altabey, Wael A., Mohammad Noori, Zhishen Wu, Mohamed A. Al-Moghazy, and Sallam A. Kouritem. "Studying Acoustic Behavior of BFRP Laminated Composite in Dual-Chamber Muffler Application Using Deep Learning Algorithm." Materials 15, no. 22 (November 15, 2022): 8071. http://dx.doi.org/10.3390/ma15228071.

Abstract:
Over the last two decades, several experimental and numerical studies have been performed to investigate the acoustic behavior of different muffler materials. However, these studies require large, time-consuming calculations, particularly if the muffler is made from advanced materials such as composites. Therefore, this work focused on developing, for an indirect dual-chamber muffler made from a basalt fiber reinforced polymer (BFRP) laminated composite, a monitoring system that uses a deep learning algorithm to predict the acoustic behavior of the muffler material, in order to save effort and time in muffler design optimization. Two types of deep neural network (DNN) architectures are developed in Python. The first DNN is a recurrent neural network with long short-term memory blocks (RNN-LSTM); the other is a convolutional neural network (CNN). First, a dual-chamber laminated composite muffler (DCLCM) model is developed in MATLAB to provide acoustic behavior datasets of mufflers, such as acoustic transmission loss (TL) and the power transmission coefficient (PTC). The model training parameters are optimized using Bayesian genetic algorithm (BGA) optimization. The acoustic results from the proposed method are compared with available experimental results in the literature, thus validating the accuracy and reliability of the proposed technique. The results indicate that the present approach is efficient and significantly reduces the time and effort needed to select the muffler material and optimal design, with both the CNN and RNN-LSTM models achieving accuracy above 90% on the test and validation datasets. This work will reinforce the muffler industry, whose designs may one day be equipped with deep-learning-based algorithms.

Dissertations / Theses on the topic "Network design problem; Bayesian optimization; Simulation-based optimization"

1

Wang, Xinghua. "Discrete Optimization and Agent-Based Simulation for Regional Evacuation Network Design Problem." Thesis, 2012. http://hdl.handle.net/1969.1/148251.

Abstract:
Natural disasters and extreme events are often characterized by their violence and unpredictability, resulting in consequences that in severe cases result in devastating physical and ecological damage as well as countless fatalities. In August 2005, Hurricane Katrina hit the Southern coast of the United States wielding serious weather and storm surges. The brunt of Katrina’s force was felt in Louisiana, where the hurricane has been estimated to total more than $108 billion in damage and over 1,800 casualties. Hurricane Rita followed Katrina in September 2005 and further contributed $12 billion in damage and 7 fatalities to the coastal communities of Louisiana and Texas. Prior to making landfall, residents of New Orleans received a voluntary, and then a mandatory, evacuation order in an attempt to encourage people to move themselves out of Hurricane Katrina’s predicted destructive path. Consistent with current practice in nearly all states, this evacuation order did not include or convey any information to individuals regarding route selection, shelter availability and assignment, or evacuation timing. This practice leaves the general population free to determine their own routes, destinations and evacuation times independently. Such freedom often results in inefficient and chaotic utilization of the roadways within an evacuation region, quickly creating bottlenecks along evacuation routes that can slow individual egress and lead to significant and potentially dangerous exposure of the evacuees to the impending storm. One way to assist the over-burdened and over-exposed population during extreme event evacuation is to provide an evacuation strategy that gives specific information on individual route selection, evacuation timing and shelter destination assignment derived from effective, strategic pre-planning. For this purpose, we present a mixed integer linear program to devise effective and controlled evacuation networks to be utilized during extreme event egress. To solve our proposed model, we develop a solution methodology based on Benders Decomposition and test its performance through an experimental design using the Central Texas region as our case study area. We show that our solution methods are efficient for large-scale instances of realistic size and that our methods surpass the size and computational limitations currently imposed by more traditional approaches such as branch-and-cut. To further test our model under conditions of uncertain individual choice/behavior, we create an agent-based simulation capable of modeling varying levels of evacuee compliance to the suggested optimal routes and varying degrees of communication between evacuees and between evacuees and the evacuation authority. By providing evacuees with information on when to evacuate, where to evacuate and how to get to their prescribed destination, we are able to observe significant cost and time increases for our case study evacuation scenarios while reducing the potential exposure of evacuees to the hurricane through more efficient network usage. We provide discussion on scenario performance and show the trade-offs and benefits of alternative batch-time evacuation strategies using global and individual effectiveness measures. Through these experiments and the developed methodology, we are able to further motivate the need for a more coordinated and informative approach to extreme event evacuation.
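The thesis' Benders-decomposed evacuation model is far richer, but the flavor of the underlying network formulation can be conveyed with a toy shelter-assignment linear program in PuLP; the zones, shelters, travel times, and capacities below are invented illustrative data, not the Central Texas case study.

```python
# Toy evacuation assignment LP in PuLP (illustrative data; not the thesis model).
# Assign evacuees from zones to shelters to minimize total travel time
# subject to shelter capacities.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

zones = {"Z1": 800, "Z2": 500, "Z3": 300}            # evacuees per zone
shelters = {"S1": 700, "S2": 900}                    # shelter capacities
travel_time = {                                      # minutes from zone to shelter
    ("Z1", "S1"): 30, ("Z1", "S2"): 55,
    ("Z2", "S1"): 45, ("Z2", "S2"): 25,
    ("Z3", "S1"): 60, ("Z3", "S2"): 35,
}

prob = LpProblem("evacuation_assignment", LpMinimize)
x = LpVariable.dicts("flow", list(travel_time), lowBound=0)

prob += lpSum(travel_time[k] * x[k] for k in travel_time)   # total travel time
for z, demand in zones.items():                             # everyone evacuates
    prob += lpSum(x[(z, s)] for s in shelters) == demand
for s, cap in shelters.items():                             # capacity limits
    prob += lpSum(x[(z, s)] for z in zones) <= cap

prob.solve()
print(LpStatus[prob.status], "total travel time:", value(prob.objective))
for k in travel_time:
    print(k, x[k].value())
```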

Book chapters on the topic "Network design problem; Bayesian optimization; Simulation-based optimization"

1

Li, Xiangyang. "Inference Degradation in Information Fusion." In Methodological Advancements in Intelligent Information Technologies, 92–109. IGI Global, 2010. http://dx.doi.org/10.4018/978-1-60566-970-0.ch006.

Abstract:
Dynamic and active information fusion processes select the best sensor based on expected utility calculation in order to integrate the acquired evidence both accurately and in a timely manner. However, inference degradation happens when the same or similar sensors are selected repeatedly over time, if the selection strategy is not well designed to consider the history of sensor engagement. This phenomenon decreases fusion accuracy and efficiency, in direct conflict with the objective of information integration with multiple sensors. This chapter provides a mathematical scrutiny of this problem in the myopic planning popularly utilized in active information fusion. In the evaluation, it first introduces the common active information fusion context using security surveillance applications. It then examines a generic dynamic Bayesian network model for a mental state recognition task and analyzes experimental results for the inference degradation. It also discusses candidate solutions with some preliminary results. The inference degradation problem is not limited to the discussed task and may emerge in variants of sensor planning strategies, even with more global optimization approaches. This study provides common guidelines for information integration applications for information awareness and intelligent decision-making.
2

Gong, Yansheng, and Wenfeng Jing. "Research on 1D-CNN Detection Methods of High-Speed Railway Catenary Dropper Faults Based on Acceleration Sensors." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220071.

Abstract:
Droppers are key components of high-speed railway overhead catenary systems, which are exposed to the external environment and are prone to breakage faults due to the impact of wind force and the pantograph on moving trains day after day. How to identify dropper breakage or relaxation faults from the signals of acceleration sensors installed on the carrier cable and contact wire is a challenging problem. In this study, the experimental section of the Lanzhou-Xinjiang high-speed railway was simulated on the basis of a pantograph-catenary dynamic simulation model, in which the overhead catenary system was subjected to the force of pulsating wind alone or pulsating wind and the pantograph at the same time. In the experiment, we collected 10 channel signals from five acceleration sensors when two droppers were normal or broken. We established a 1D-CNN model with four categories and then determined the hyperparameters of the deep network structure and the important parameters of the network optimization scheme through a Bayesian optimization algorithm. Furthermore, we selected the lowest sensor number to identify dropper fracture faults through a large number of experiments based on mechanics principles. The experimental results show that the proposed methods have a higher identification accuracy rate, recall rate, and robustness than traditional artificial feature extraction approaches. Therefore, the proposed detection methods provide an effective way to identify high-speed railway catenary dropper faults on the basis of acceleration sensors.
3

Reineri, Massimo, Claudio Casetti, Carla-Fabiana Chiasserini, Marco Fiore, Oscar Trullols-Cruces, and Jose M. Barcelo-Ordinas. "RSU Deployment for Content Dissemination and Downloading in Intelligent Transportation Systems." In IT Policy and Ethics, 1798–821. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2919-6.ch079.

Abstract:
The focus of this chapter is twofold: information dissemination from infrastructure nodes deployed along the roads, the so-called Road-Side Units (RSUs), to passing-by vehicles, and content downloading by vehicular users through nearby RSUs. In particular, in order to ensure good performance for both content dissemination and downloading, the presented study addresses the problem of RSU deployment and reviews previous work that has dealt with such an issue. The RSU deployment problem is then formulated as an optimization problem, where the number of vehicles that come in contact with any RSU is maximized, possibly considering a minimum contact time to be guaranteed. Since such optimization problems turn out to be NP-hard, heuristics are proposed to efficiently approximate the optimal solution. The RSU deployment obtained through such heuristics is then used to investigate the performance of content dissemination and downloading through ns2 simulations. Simulation tests are carried out under various real-world vehicular environments, including a realistic mobility model, and considering that the IEEE 802.11p standard is used at the physical and medium access control layers. The performance obtained in realistic conditions is discussed with respect to the results obtained under the same RSU deployment, but in ideal conditions and protocol message exchange. Based on the obtained results, some useful hints on the network system design are provided.
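Because the chapter's RSU placement problem is NP-hard and solved heuristically, a minimal greedy max-coverage heuristic gives the general flavor: repeatedly pick the candidate site whose RSU would cover the most not-yet-covered vehicle contacts. The contact sets below are an invented toy example, not the chapter's heuristics or data.

```python
# Toy greedy heuristic for RSU placement (illustrative; not the chapter's heuristics).
# Each candidate site covers a set of vehicle contacts; pick sites greedily to
# maximize newly covered contacts under a budget of k RSUs.

def greedy_rsu_placement(coverage, k):
    covered, chosen = set(), []
    for _ in range(k):
        best_site = max(
            (s for s in coverage if s not in chosen),
            key=lambda s: len(coverage[s] - covered),
            default=None,
        )
        if best_site is None or not (coverage[best_site] - covered):
            break                                   # nothing new can be covered
        chosen.append(best_site)
        covered |= coverage[best_site]
    return chosen, covered

# contacts (vehicle ids) observable from each candidate intersection -- made-up data
coverage = {
    "A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {5, 6, 7, 8}, "D": {1, 8, 9},
}
sites, covered = greedy_rsu_placement(coverage, k=2)
print("chosen RSU sites:", sites, "vehicles covered:", len(covered))
```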

Conference papers on the topic "Network design problem; Bayesian optimization; Simulation-based optimization"

1

Sharpe, Conner, Clinton Morris, Benjamin Goldsberry, Carolyn Conner Seepersad, and Michael R. Haberman. "Bayesian Network Structure Optimization for Improved Design Space Mapping for Design Exploration With Materials Design Applications." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67643.

Abstract:
Modern design problems present both opportunities and challenges, including multifunctionality, high dimensionality, highly nonlinear multimodal responses, and multiple levels or scales. These factors are particularly important in materials design problems and make it difficult for traditional optimization algorithms to search the space effectively, and designer intuition is often insufficient in problems of this complexity. Efficient machine learning algorithms can map complex design spaces to help designers quickly identify promising regions of the design space. In particular, Bayesian network classifiers (BNCs) have been demonstrated as effective tools for top-down design of complex multilevel problems. The most common instantiations of BNCs assume that all design variables are independent. This assumption reduces computational cost, but can limit accuracy especially in engineering problems with interacting factors. The ability to learn representative network structures from data could provide accurate maps of the design space with limited computational expense. Population-based stochastic optimization techniques such as genetic algorithms (GAs) are ideal for optimizing networks because they accommodate discrete, combinatorial, and multimodal problems. Our approach utilizes GAs to identify optimal networks based on limited training sets so that future test points can be classified as accurately and efficiently as possible. This method is first tested on a common machine learning data set, and then demonstrated on a sample design problem of a composite material subjected to a planar sound wave.
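The baseline this paper improves on, a Bayesian network classifier with all design variables assumed independent, is essentially a naive Bayes classifier. The sketch below maps a toy two-variable design space into promising and unpromising regions with scikit-learn; the response function and labeling threshold are assumptions, and the paper's GA-learned network structure is not reproduced.

```python
# Naive Bayes design-space classifier (the independence-assumption baseline;
# the paper's GA-optimized network structure is not reproduced here).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))                     # sampled design variables
performance = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])   # stand-in response
labels = (performance > 0.3).astype(int)                  # 1 = meets the design target

clf = GaussianNB().fit(X, labels)

# Probability that unseen candidate designs fall in the promising region
candidates = rng.uniform(-1, 1, size=(5, 2))
print(clf.predict_proba(candidates)[:, 1])
```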
2

You, Junyu, William Ampomah, and Qian Sun. "Optimization of Water-Alternating-CO2 Injection Field Operations Using a Machine-Learning-Assisted Workflow." In SPE Reservoir Simulation Conference. SPE, 2021. http://dx.doi.org/10.2118/203913-ms.

Abstract:
This paper will present a robust workflow to address multi-objective optimization (MOO) of CO2-EOR-sequestration projects with a large number of operational control parameters. Farnsworth Unit (FWU) field, a mature oil reservoir undergoing CO2 alternating water injection (CO2-WAG) enhanced oil recovery (EOR), will be used as a field case to validate the proposed optimization protocol. The expected outcome of this work would be a repository of Pareto-optimal solutions of multiple objective functions, including oil recovery, carbon storage volume, and project economics. FWU's numerical model is employed to demonstrate the proposed optimization workflow. Since using MOO requires computationally intensive procedures, machine-learning-based proxies are introduced to substitute for the high-fidelity model, thus reducing the total computation overhead. Support vector machine regression combined with the Gaussian kernel (Gaussian-SVR) is utilized to construct proxies. An iterative self-adjusting process prepares the training knowledgebase to develop robust proxies and minimizes computational time. The proxies' hyperparameters will be optimally designed using Bayesian optimization to achieve better generalization performance. Trained proxies will be coupled with a Multi-objective Particle Swarm Optimization (MOPSO) protocol to construct the Pareto-front solution repository. The outcome of this workflow will be a repository containing Pareto-optimal solutions of the multiple objectives considered in the CO2-WAG project. The proposed optimization workflow will be compared with another established methodology employing a multi-layer neural network to validate its feasibility in handling MOO with a large number of parameters to control. Optimization parameters used include operational variables that might be used to control the CO2-WAG process, such as the duration of the water/gas injection period, producer bottomhole pressure (BHP) control, and the water injection rate of each well included in the numerical model. It is proven that the workflow coupling Gaussian-SVR proxies and the iterative self-adjusting protocol is more computationally efficient. The MOO process is made more rapid by shrinking the size of the required training knowledgebase while maintaining the high accuracy of the optimized results. The outcomes of the optimization study show promising results in successfully establishing the solution repository considering multiple objective functions. Results are also verified by validating the Pareto fronts against simulation results obtained with the optimized control parameters. The outcome of this work could provide field operators an opportunity to design a CO2-WAG project using as many inputs as possible from the reservoir models. The proposed work introduces a novel concept that couples Gaussian-SVR proxies with a self-adjusting protocol to increase the computational efficiency of the proposed workflow and to guarantee the high accuracy of the obtained optimized results. More importantly, the workflow can optimize a large number of control parameters used in a complex CO2-WAG process, which greatly extends its utility in solving large-scale multi-objective optimization problems in various projects with similar desired outcomes.
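A stripped-down version of the proxy idea reads roughly as follows: fit Gaussian-kernel (RBF) support vector regressors to a small set of simulator runs, one per objective, so that a multi-objective optimizer can query the cheap proxies instead of the reservoir simulator. The data, objectives, and parameter names below are illustrative assumptions.

```python
# Gaussian-kernel SVR proxies for two simulation objectives (illustrative only).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
controls = rng.uniform(0, 1, size=(80, 4))      # e.g. WAG cycle lengths, BHP, injection rates

# Stand-ins for the expensive simulator outputs
oil_recovery = controls @ np.array([0.5, 0.3, 0.1, 0.1]) + 0.05 * rng.standard_normal(80)
co2_stored = 1.0 - 0.4 * controls[:, 0] + 0.05 * rng.standard_normal(80)

proxy_oil = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(controls, oil_recovery)
proxy_co2 = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)).fit(controls, co2_stored)

# A multi-objective optimizer (e.g. MOPSO) would now query the proxies, not the simulator
candidate = rng.uniform(0, 1, size=(1, 4))
print(proxy_oil.predict(candidate), proxy_co2.predict(candidate))
```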
3

Safarkhani, Salar, Ilias Bilionis, and Jitesh H. Panchal. "Understanding the Effect of Task Complexity and Problem-Solving Skills on the Design Performance of Agents in Systems Engineering." In ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/detc2018-85941.

Abstract:
Systems engineering processes coordinate the efforts of many individuals to design a complex system. However, the goals of the involved individuals do not necessarily align with the system-level goals. Everyone, including managers, systems engineers, subsystem engineers, component designers, and contractors, is self-interested. It is not currently understood how this discrepancy between organizational and personal goals affects the outcome of complex systems engineering processes. To answer this question, we need a systems engineering theory that accounts for human behavior. Such a theory can be ideally expressed as a dynamic hierarchical network game of incomplete information. The nodes of this network represent individual agents and the edges the transfer of information and incentives. All agents decide independently on how much effort they should devote to a delegated task by maximizing their expected utility; the expectation is over their beliefs about the actions of all other individuals and the moves of nature. An essential component of such a model is the quality function, defined as the map between an agent’s effort and the quality of their job outcome. In the economics literature, the quality function is assumed to be a linear function of effort with additive Gaussian noise. This simplistic assumption ignores two critical factors relevant to systems engineering: (1) the complexity of the design task, and (2) the problem-solving skills of the agent. Systems engineers establish their beliefs about these two factors through years of job experience. In this paper, we encode these beliefs in clear mathematical statements about the form of the quality function. Our approach proceeds in two steps: (1) we construct a generative stochastic model of the delegated task, and (2) we develop a reduced order representation suitable for use in a more extensive game-theoretic model of a systems engineering process. Focusing on the early design stages of a systems engineering process, we model the design task as a function maximization problem and, thus, we associate the systems engineer’s beliefs about the complexity of the task with their beliefs about the complexity of the function being maximized. Furthermore, we associate an agent’s problem solving-skills with the strategy they use to solve the underlying function maximization problem. We identify two agent types: “naïve” (follows a random search strategy) and “skillful” (follows a Bayesian global optimization strategy). Through an extensive simulation study, we show that the assumption of the linear quality function is only valid for small effort levels. In general, the quality function is an increasing, concave function with derivative and curvature that depend on the problem complexity and agent’s skills.
4

Tao, Siyu, Anton van Beek, Daniel W. Apley, and Wei Chen. "Bayesian Optimization for Simulation-Based Design of Multi-Model Systems." In ASME 2020 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/detc2020-22651.

Abstract:
We address the problem of simulation-based design using multiple interconnected expensive simulation models, each modeling a different subsystem. Our goal is to find the globally optimal design with minimal model evaluation costs. To our knowledge, the best existing approach is to treat the whole system as a single expensive model and apply an existing Bayesian optimization (BO) algorithm. This approach is likely inefficient due to the need to evaluate all the component models in each iteration. We propose a multi-model BO approach that dynamically and selectively evaluates one component model per iteration, based on linked emulators for uncertainty quantification and the system knowledge gradient (KG) as the acquisition function. Building on this, we resolve problems with constraints and feedback couplings that often occur in real complex engineering design by penalizing the objective emulator and reformulating the original problem into a decoupled one. The superior efficiency of our approach is demonstrated through solving an analytical problem and a multidisciplinary design problem of electronic packaging optimization.
5

Pan, Guangyuan, Qili Chen, Liping Fu, Ming Yu, and Matthew Muresan. "Uncertainty estimation on road safety analysis using Bayesian deep neural networks." In 6th International Conference on Road and Rail Infrastructure. University of Zagreb Faculty of Civil Engineering, 2021. http://dx.doi.org/10.5592/co/cetra.2020.1218.

Abstract:
Deep neural networks have been successfully used in many different areas of traffic engineering, such as crash prediction, intelligent signal optimization and real-time road surface condition monitoring. The benefits of deep neural networks are often uniquely suited to solve certain problems and can offer improvements in performance when compared to traditional methods. In collision prediction, uncertainty estimation is a critical area that can benefit from their application, and accurate information on the reliability of a model’s predictions can increase public confidence in those models. Applications of deep neural networks to this problem that consider these effects have not been studied previously. This paper develops a Bayesian deep neural network for crash prediction and examines the reliability of the model based on three key methods: layer-wise greedy unsupervised learning, Bayesian regularization and adapted marginalization. An uncertainty equation for the model is also proposed for this domain for the first time. To test the performance, eight years of car collision data collected from Highway 401, Canada, is used, and three experiments are designed.
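The paper's own uncertainty machinery (layer-wise unsupervised pretraining, Bayesian regularization, and adapted marginalization) is not reproduced here; the sketch below shows a different but common way to obtain predictive uncertainty from a deep network, Monte Carlo dropout in PyTorch, on assumed toy data.

```python
# Predictive uncertainty via Monte Carlo dropout in PyTorch (a generic technique,
# not the paper's Bayesian-regularization approach). Toy data and architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.rand(256, 5)                                        # e.g. traffic/road features
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(256, 1)    # stand-in collision counts

model = nn.Sequential(
    nn.Linear(5, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

model.train()                                   # keep dropout active at prediction time
x_new = torch.rand(1, 5)
with torch.no_grad():
    samples = torch.stack([model(x_new) for _ in range(100)])
print("mean:", samples.mean().item(), "std (uncertainty):", samples.std().item())
```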
6

Jin, Yuan, Shan Li, and Olivier Jung. "Prediction of Flow Properties on Turbine Vane Airfoil Surface From 3D Geometry With Convolutional Neural Network." In ASME Turbo Expo 2019: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/gt2019-90811.

Abstract:
Nowadays, Computational Fluid Dynamics (CFD) simulations play an increasingly important role in turbine airfoil design. This high-fidelity approach is capable of providing accurate information on flow fields. Meanwhile, the calculation accuracy is always gained at the expense of numerical cost. This gap limits opportunities for design space exploration. To address this problem, surrogate models (also known as metamodels) are introduced to approximate high-fidelity CFD models. However, traditional surrogate models, such as Kriging or Radial Basis Functions, construct a response surface on a design space with limited dimensions. This prevents users from predicting the flow fields directly from the geometry and performing interactive design of the airfoil. In the present work, we propose a Convolutional Neural Network (CNN) based surrogate model to predict flow properties on the turbine vane airfoil surface from a 3D airfoil profile defined by a point cloud. The proposed CNN architecture adopts a symmetric expanding path similar to the so-called U-Net. The geometries in the training and testing datasets are generated by varying the parameters defined by the Free-Form Deformation approach. The corresponding flow fields are obtained through high-fidelity CFD simulations performed in a finite volume context. Furthermore, a Gaussian-process-based Bayesian optimization technique is utilized to tune the hyperparameters of the network automatically. In this work, we trained the CNN-based surrogate model with static pressure and temperature on the mean section of the turbine vane airfoil surface. The trained model is able to predict, in a reliable and efficient way, the corresponding property directly from the 3D geometry, which allows engineers to agilely adjust their airfoil design.
7

Cho, Hyunkyoo, K. K. Choi, and David Lamb. "Confidence-Based Method for Reliability-Based Design Optimization." In ASME 2014 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/detc2014-34644.

Abstract:
An accurate input probabilistic model is necessary to obtain a trustworthy result in reliability analysis and reliability-based design optimization (RBDO). However, an accurate input probabilistic model is not always available. Very often, only insufficient input data are available in practical engineering problems. When only limited input data are provided, uncertainty is induced in the input probabilistic model, and this uncertainty propagates to the reliability output, which is defined as the probability of failure. Then, the confidence level of the reliability output will decrease. To resolve this problem, the reliability output is considered to have a probability distribution in this paper. The probability of the reliability output is obtained as a combination of consecutive conditional probabilities of input distribution type and parameters using a Bayesian approach. The conditional probabilities are obtained under certain assumptions, and the Monte Carlo simulation (MCS) method is used to calculate the probability of the reliability output. Using the probability of the reliability output as a constraint, a confidence-based RBDO (C-RBDO) problem is formulated. In the new probabilistic constraint of the C-RBDO formulation, two threshold values, the target reliability output and the target confidence level, are used. For an effective C-RBDO process, the design sensitivity of the new probabilistic constraint is derived. The C-RBDO is performed for a mathematical problem with different numbers of input data, and the results show that C-RBDO optimum designs incorporate appropriate conservativeness according to the given input data.
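A minimal numerical illustration of the two ingredients the paper combines: estimate the probability of failure by Monte Carlo simulation, then express the confidence in that reliability output with a Bayesian (conjugate Beta) posterior. The limit-state function and sample sizes are assumptions, and the paper's nested conditional-probability treatment of distribution-type uncertainty is not reproduced.

```python
# Monte Carlo probability of failure plus a Bayesian credible interval on it
# (illustrative; not the paper's full C-RBDO formulation).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(4)
n = 100_000
x1 = rng.normal(5.0, 1.0, n)                   # assumed input distributions
x2 = rng.normal(3.0, 0.5, n)

g = x1 - x2 - 1.0                              # limit state: failure when g <= 0
failures = int(np.sum(g <= 0))
pf_hat = failures / n
print("estimated probability of failure:", pf_hat)

# Beta(1,1) prior on pf -> Beta(failures+1, n-failures+1) posterior
lo, hi = beta.ppf([0.025, 0.975], failures + 1, n - failures + 1)
print("95% credible interval:", (lo, hi))
```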
8

Wu, Qitian, Lei Jiang, Xiaofeng Gao, Xiaochun Yang, and Guihai Chen. "Feature Evolution Based Multi-Task Learning for Collaborative Filtering with Social Trust." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/538.

Abstract:
Social recommendation could address the data sparsity and cold-start problems for collaborative filtering by leveraging user trust relationships as auxiliary information for recommendation. However, most existing methods tend to consider the trust relationship as preference similarity in a static way and model the representations for user preference and social trust via a common feature space. In this paper, we propose TrustEV and take the view of multi-task learning to unite collaborative filtering for recommendation and network embedding for user trust. We design a special feature evolution unit that enables the embedding vectors for two tasks to exchange their features in a probabilistic manner, and further harness a meta-controller to globally explore proper settings for the feature evolution units. The training process contains two nested loops, where in the outer loop, we optimize the meta-controller by Bayesian optimization, and in the inner loop, we train the feedforward model with given feature evolution units. Experiment results show that TrustEV could make better use of social information and greatly improve recommendation MAE over state-of-the-art approaches.
9

Ding, Y., and T. J. Nye. "Collaborative Agent Based Optimization of Draw Die Design." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-67839.

Abstract:
In this paper we consider the problem of automatically determining optimal drawbead sizes and blankholder forces when designing draw dies for stamped parts. A network of software agents, each implementing a different numerical optimization technique, was used in combination with metal forming simulation software to optimize process variables. Three test cases were used of varying complexity from a rectangular cup to the NUMISHEET’99 automobile front door panel simulation benchmark. It was found that the performance of each agent (and optimization technique) depended strongly on the complexity of the problem. More interestingly, for a given amount of computational effort, a network of collaborating agents using different optimization techniques always outperformed agents using a single technique in terms of both the best solution found and in the variance of the collection of best solutions.
10

Romero, David A., Cristina H. Amon, and Susan Finger. "A Study of Covariance Functions for Multi-Response Metamodeling for Simulation-Based Design and Optimization." In ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/detc2008-50061.

Abstract:
The optimal design of complex systems in engineering requires the availability of mathematical models of the system's behavior as a function of a set of design variables; such models allow the designer to find the best solution to the design problem. However, system models (e.g., CFD analysis, physical prototypes) are usually time-consuming and expensive to evaluate, and thus unsuited for systematic use during design. Approximate models, or metamodels, of system behavior based on a limited set of data allow significant savings by reducing the resources devoted to modeling during the design process. In our work in engineering design based on multiple performance criteria, we propose the use of Multi-response Bayesian Surrogate Models (MRBSM) to model several aspects of system behavior jointly, instead of modeling each individually. By doing so, it is expected that the observed correlation among the response variables can be used to achieve better models with smaller data sets. In this work, we study the approximation capabilities of several covariance functions needed for multi-response metamodeling with MRBSM, performing a simulation study in which we compare MRBSM based on different covariance functions against metamodels built individually for each response. Our preliminary results indicate that MRBSM outperforms individual metamodels in 46% to 67% of the test cases, though the relative performance of the studied covariance functions is highly dependent on the sampling scheme used and the actual correlation among the observed response values.
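A single-response analogue of the covariance-function comparison studied here can be sketched with scikit-learn: fit Gaussian-process metamodels with different kernels to the same small data set and compare their fitted log marginal likelihoods and test errors. The toy response is an assumption, and the paper's multi-response (joint) covariance structures are beyond this sketch.

```python
# Comparing covariance functions for a (single-response) Gaussian-process metamodel.
# Illustrative only; the paper's multi-response covariance structures are not shown.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, RationalQuadratic

def response(X):
    return np.sin(5 * X[:, 0]) + X[:, 1] ** 2       # stand-in system behavior

rng = np.random.default_rng(5)
X_train = rng.uniform(0, 1, size=(30, 2))
y_train = response(X_train)
X_test = rng.uniform(0, 1, size=(300, 2))
y_test = response(X_test)

for kernel in (RBF(), Matern(nu=1.5), Matern(nu=2.5), RationalQuadratic()):
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)
    rmse = np.sqrt(np.mean((gp.predict(X_test) - y_test) ** 2))
    print(f"{kernel.__class__.__name__:18s} "
          f"log-marginal-likelihood={gp.log_marginal_likelihood_value_:8.2f}  rmse={rmse:.3f}")
```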