Academic literature on the topic 'GENERIC DECISION META-MODEL'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'GENERIC DECISION META-MODEL.'


Journal articles on the topic "GENERIC DECISION META-MODEL"

1. Zlatev, Zlatko, Galina Veres, and Zoheir Sabeur. "Agile Data Fusion and Knowledge Base Architecture for Critical Decision Support." International Journal of Decision Support System Technology 5, no. 2 (April 2013): 1–20. http://dx.doi.org/10.4018/jdsst.2013040101.

Abstract:
This paper describes the architecture and deployment of a software platform for information fusion, knowledge hosting and critical decision support. The work has been carried out under the TRIDEC project (www.tridec-online.eu), focusing on geo-information fusion and collaborative decision making. Four technologies underpin the architecture: 1) A message oriented middleware, for distributed communications; 2) A leveraged hybrid storage solution, for efficient storage of heterogeneous datasets and semantic knowledge; 3) A generic data fusion container, for dynamic algorithms control; and 4) A single conceptual model and schema, as systems’ semantic meta-model. Deployment for industrial drilling operations is described. Agility is manifested with the ability to integrate data sources from a proprietary domain, dynamically discover new datasets and configure and task fusion algorithms to operate on them, aided by efficient information storage. The platform empowers decision support by enabling dynamic discovery of information and control of the fusion process across geo-distributed locations.
2. Lallouet, Arnaud, and Andrei Legtchenko. "Building Consistencies for Partially Defined Constraints with Decision Trees and Neural Networks." International Journal on Artificial Intelligence Tools 16, no. 04 (August 2007): 683–706. http://dx.doi.org/10.1142/s0218213007003503.

Abstract:
Partially Defined Constraints can be used to model the incomplete knowledge of a concept or a relation. Instead of only computing with the known part of the constraint, we propose to complete its definition by using Machine Learning techniques. Since constraints are actively used during solving for pruning domains, building a classifier for instances is not enough: we need a solver able to reduce variable domains. Our technique is composed of two steps: first we learn a classifier for each constraint projection, and then we transform the classifiers into a propagator. The first contribution is a generic meta-technique for classifier improvement showing performance comparable to boosting. The second lies in the ability to use the learned concept in constraint-based decision or optimization problems. We present results using Decision Trees and Artificial Neural Networks for constraint learning and propagation. This opens a new way of integrating Machine Learning in Decision Support Systems.
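The abstract's central idea, turning a learned classifier into a propagator that reduces variable domains, can be illustrated with a minimal sketch. The brute-force support check and the toy predicate below are illustrative assumptions, not the authors' algorithm:

```python
from itertools import product

def propagate(domains, predicate):
    """Use a learned predicate (standing in for a partially defined
    constraint) as a propagator: repeatedly remove from each variable's
    domain any value that appears in no satisfying assignment."""
    changed = True
    while changed:
        changed = False
        for i in range(len(domains)):
            # Values of variable i that have at least one support
            supported = {a[i] for a in product(*domains) if predicate(a)}
            if supported != domains[i]:
                domains[i] = supported
                changed = True
    return domains

# Toy "learned" concept: x + y == z, with z already fixed to 5
domains = [{0, 1, 2, 3}, {0, 1, 2, 3}, {5}]
print(propagate(domains, lambda a: a[0] + a[1] == a[2]))
```

A real propagator derived from a classifier would avoid the exhaustive enumeration, but the pruning contract is the same: each call may only shrink domains, never grow them.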
3. Lázaro, Elena, David Makowski, Joaquín Martínez-Minaya, and Antonio Vicent. "Comparison of Frequentist and Bayesian Meta-Analysis Models for Assessing the Efficacy of Decision Support Systems in Reducing Fungal Disease Incidence." Agronomy 10, no. 4 (April 13, 2020): 560. http://dx.doi.org/10.3390/agronomy10040560.

Abstract:
Diseases of fruit and foliage caused by fungi and oomycetes are generally controlled by the application of fungicides. The use of decision support systems (DSSs) may help optimize fungicide programs by scheduling applications on the basis of the risk of disease outbreak. Case-by-case evaluations have demonstrated the performance of DSSs for disease control, but an overall assessment of the efficacy of DSSs is lacking. A literature review was conducted to synthesize the results of 67 experiments assessing DSSs. Disease incidence data were obtained from published peer-reviewed field trials comparing untreated controls, calendar-based and DSS-based fungicide programs. Two generic meta-analysis models, a "fixed-effects" and a "random-effects" model within the framework of generalized linear models, were evaluated to assess the efficacy of DSSs in reducing incidence. All models were fit using both frequentist and Bayesian estimation procedures and the results compared. The model including random effects showed better performance in terms of AIC or DIC and goodness of fit. In general, the frequentist and Bayesian approaches produced similar results. Odds ratio and incidence ratio values showed that calendar-based and DSS-based fungicide programs considerably reduced disease incidence compared to the untreated control. Moreover, calendar-based and DSS-based programs provided similar reductions in disease incidence, further supporting the efficacy of DSSs.
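The "fixed-effects" vs. "random-effects" distinction evaluated in this study can be illustrated with a generic inverse-variance sketch. This is a frequentist method-of-moments (DerSimonian-Laird style) version, and the effect sizes below are invented for illustration, not the paper's data:

```python
import math

def fixed_effect(estimates, variances):
    """Generic inverse-variance fixed-effect pooling."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return pooled, se

def random_effects(estimates, variances):
    """Random-effects pooling: estimate the between-study variance
    tau^2 from Cochran's Q and add it to each study's variance."""
    k = len(estimates)
    w = [1.0 / v for v in variances]
    pooled_fe, _ = fixed_effect(estimates, variances)
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # method-of-moments estimate
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical log odds ratios and their variances from three trials
y = [-0.8, -0.3, -1.1]
v = [0.04, 0.09, 0.16]
print(fixed_effect(y, v))
print(random_effects(y, v))
```

Under heterogeneity the random-effects standard error is wider than the fixed-effect one, which is why model choice matters for the conclusions drawn.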
4. Younis, Eman M. G., Someya Mohsen Zaki, Eiman Kanjo, and Essam H. Houssein. "Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion." Sensors 22, no. 15 (July 27, 2022): 5611. http://dx.doi.org/10.3390/s22155611.

Abstract:
Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways, such as through text, speech, body gestures, or physiological body responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, and human-robot interaction. The availability of advanced technologies such as mobile devices, sensors, and data analytics tools makes it possible to collect data from various sources, enabling researchers to predict human emotions accurately. Most current research collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; and (3) assessing ensemble learning methods, comparing their performance in creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study "in the wild" with physiological and mobile sensors, collecting the data set from participants walking around the Minia University campus. Various ensemble learning models (bagging, boosting, and stacking) were used, combining K-nearest neighbor (KNN), decision tree (DT), random forest (RF), and support vector machine (SVM) as base learners and DT as a meta-classifier. The results showed that the stacking ensemble technique gave the best accuracy, 98.2%, compared with the other ensemble learning variants; bagging and boosting gave 96.4% and 96.6%, respectively.
5. Franz, Kamila, Jerzy Romanowski, Karin Johst, and Volker Grimm. "Porównawcza ocena programów analizy żywotności populacji (PVA) w rankingu scenariuszy przekształceń krajobrazu = A comparative assessment of PVA software packages applied to rank the landscape management scenarios." Przegląd Geograficzny 93, no. 3 (2021): 365–85. http://dx.doi.org/10.7163/przg.2021.3.3.

Abstract:
Because of the scale and speed of species extinctions, conservationists require methods that facilitate decision making. Therefore, a wide range of habitat and population viability analysis (PVA) software has been developed. Given the diversity of available programs, it is currently challenging to decide which program is the most appropriate for a particular problem and what has to be considered when interpreting and comparing results from different approaches. Previous comparisons of PVA software addressed more generic questions such as data requirements, assumptions and predictive accuracy. In contrast, we focus on a more applied problem that is still unresolved: how do simple habitat models and PVA software packages affect the ranking of alternative management scenarios? We addressed this problem by comparing different packages (LARCH, META-X, VORTEX and RAMAS GIS). As a test case, we studied the impact of alternative landscape development scenarios (river regulation, grassland restoration, reforestation and renaturalisation) for the Vistula valley, Poland, on the natterjack toad (Bufo calamita). In this context we also aimed to assess whether the use of at least two different PVA packages can enable users to better understand the differences in model predictions, which would imply a greater awareness and critical use of the packages. Our model selection represents different approaches to population viability analysis, including habitat, local population and stochastic patch occupancy models. The models can be evaluated in regard to the complexity of parameters and to the way the landscape is handled. We used RAMAS GIS to create a habitat model (RAMASh) and a detailed spatially explicit stochastic metapopulation model (RAMASp), which combined served as a complete "virtual" dataset for parameterisation of the other programs. As an example of a stochastic patch occupancy model, we selected the META-X software.
For a more independent comparison we added VORTEX, another package that includes explicit population dynamics, similar to RAMAS. Additionally, we included the habitat model LARCH because this type of model is often used by policy makers. We compared the metapopulation structure produced by RAMASh and LARCH. Scenario ranking according to the predicted carrying capacity was exactly the same in both programs, because the quantitative results for each scenario were almost identical. However, the metapopulation structure showed large differences between the programs, especially in the number of small populations. The analyses of the results of the different PVA programs (RAMASp, VORTEX and META-X) showed that absolute values of viability measures partly differed among these programs. Slight differences in population growth rate in RAMASp and VORTEX were amplified by stochasticity and resulted in visibly lower values of final abundance in VORTEX than in RAMASp. The absolute values of intrinsic mean time to extinction also showed some discrepancies between VORTEX and META-X. These results are in agreement with findings of previous PVA comparisons, emphasizing that absolute values of viability measures produced by any single model should be treated with caution. Nevertheless, despite these differences, the rankings of the scenarios were the same in all three programs, although the order of the scenarios differed from that given by the habitat models. In addition, these rankings were robust to the choice of viability measure. Taken together, these results emphasize that scenario ranking delivered by PVA software is robust and thus very useful for conservation management. Furthermore, we recommend using at least two PVA software packages in parallel, as this forces users to scrutinize the simplifying assumptions of the underlying models and of the viability metrics used.
6. Blanco Mejia, Sonia, Mark Messina, Siying S. Li, Effie Viguiliouk, Laura Chiavaroli, Tauseef A. Khan, Korbua Srichaikul, et al. "A Meta-Analysis of 46 Studies Identified by the FDA Demonstrates that Soy Protein Decreases Circulating LDL and Total Cholesterol Concentrations in Adults." Journal of Nutrition 149, no. 6 (April 22, 2019): 968–81. http://dx.doi.org/10.1093/jn/nxz020.

Abstract:
Background: Certain plant foods (nuts and soy protein) and food components (viscous fibers and plant sterols) have been permitted by the FDA to carry a heart health claim based on their cholesterol-lowering ability. The FDA is currently considering revoking the heart health claim for soy protein due to a perceived lack of consistent LDL cholesterol reduction in randomized controlled trials.
Objective: We performed a meta-analysis of the 46 controlled trials on which the FDA will base its decision to revoke the heart health claim for soy protein.
Methods: We included the 46 trials on adult men and women, with baseline circulating LDL cholesterol concentrations ranging from 110 to 201 mg/dL, as identified by the FDA, that studied the effects of soy protein on LDL cholesterol and total cholesterol (TC) compared with non-soy protein. Two independent reviewers extracted relevant data. Data were pooled by the generic inverse variance method with a random effects model and expressed as mean differences with 95% CI. Heterogeneity was assessed and quantified.
Results: Of the 46 trials identified by the FDA, 43 provided data for meta-analyses. Of these, 41 provided data for LDL cholesterol, and all 43 provided data for TC. Soy protein at a median dose of 25 g/d during a median follow-up of 6 wk decreased LDL cholesterol by 4.76 mg/dL (95% CI: −6.71, −2.80 mg/dL, P < 0.0001; I2 = 55%, P < 0.0001) and decreased TC by 6.41 mg/dL (95% CI: −9.30, −3.52 mg/dL, P < 0.0001; I2 = 74%, P < 0.0001) compared with non-soy protein controls. There was no dose–response effect or evidence of publication bias for either outcome. Inspection of the individual trial estimates indicated most trials (∼75%) showed a reduction in LDL cholesterol (range: −0.77 to −58.60 mg/dL), although only a minority of these were individually statistically significant.
Conclusions: Soy protein significantly reduced LDL cholesterol by approximately 3–4% in adults. Our data support the advice given to the general public internationally to increase plant protein intake. This trial was registered at clinicaltrials.gov as NCT03468127.
7. Rezaei, Mahdi, Mohsen Akbarpour Shirazi, and Behrooz Karimi. "IoT-based framework for performance measurement." Industrial Management & Data Systems 117, no. 4 (May 8, 2017): 688–712. http://dx.doi.org/10.1108/imds-08-2016-0331.

Abstract:
Purpose: The purpose of this paper is to develop an Internet of Things (IoT)-based framework for supply chain (SC) performance measurement and real-time decision alignment. The aim of the proposed model is to optimize performance indicators based on integrated supply chain operations reference metrics.
Design/methodology/approach: The SC multi-dimensional structure is modeled by multi-objective optimization methods. The operational model thoroughly considers important SC features such as multiple echelons, several suppliers, several manufacturers and several products over multiple periods. A multi-objective mathematical programming model is then developed to yield operational decisions with Pareto-efficient performance values and solved using a well-known meta-heuristic algorithm, the non-dominated sorting genetic algorithm II. Afterward, the Technique for Order of Preference by Similarity to Ideal Solution method is used to determine the best operational solution based on the strategic decision maker's preferences.
Findings: This paper proposes a dynamic integrated solution for three main problems: strategic decisions at the high level, operational decisions at the low level, and alignment of these two decision levels.
Originality/value: The authors propose a human intelligence-based process for high-level decisions and machine intelligence-based decision support systems for low-level decisions using a novel approach. High-level and low-level decisions are aligned by a machine intelligence model as well. The presented framework is based on change detection, event-driven planning and real-time decision alignment.
8. Mohamed, Marwa F., Mohamed Meselhy Eltoukhy, Khalil Al Ruqeishi, and Ahmad Salah. "An Adapted Multi-Objective Genetic Algorithm for Healthcare Supplier Selection Decision." Mathematics 11, no. 6 (March 22, 2023): 1537. http://dx.doi.org/10.3390/math11061537.

Abstract:
With the advancement of information technology and economic globalization, the problem of supplier selection is gaining in importance. Supplier selection decisions have a quick and noteworthy impact on healthcare profitability and the total cost of medical equipment. Thus, there is an urgent need for decision support systems that address the optimal healthcare supplier selection problem, which only a limited number of studies have addressed, either mathematically or using meta-heuristic methods. The focus of this work is to advance the meta-heuristic methods by considering more objectives than those used previously. In this context, the optimal supplier selection problem for healthcare equipment was formulated as a mathematical model that exposes the objectives and constraints involved in searching for the optimal suppliers. With the help of this model, the problem is realized as a multi-objective problem with three minimization objectives: (1) transportation cost; (2) delivery time; and (3) the number of damaged items. The proposed system includes realistic constraints such as device quality, usability, and service quality, and takes into account capacity limits for each supplier. The well-known non-dominated sorting genetic algorithm (NSGA)-III is then adapted to choose the optimal suppliers. The results of the adapted NSGA-III were compared with several heuristic algorithms and two meta-heuristic algorithms (particle swarm optimization and NSGA-II). The obtained results show that the adapted NSGA-III outperformed the comparison methods.
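The non-dominated sorting at the heart of NSGA-II/NSGA-III, which this work adapts for supplier selection, can be sketched in a few lines. The supplier scores below are invented, and minimization is assumed for all three objectives (cost, delivery time, damaged items):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into successive Pareto fronts,
    the core ranking step of NSGA-style algorithms."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical suppliers scored on (cost, delivery time, damaged items)
suppliers = [(3, 2, 1), (2, 3, 2), (4, 4, 4), (1, 5, 1), (3, 2, 2)]
print(non_dominated_sort(suppliers))  # → [[0, 1, 3], [4], [2]]
```

Production NSGA implementations add crowding-distance or reference-point niching on top of this ranking; the sketch shows only the dominance layer.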
9. Bansal, Ankita, and Sourabh Jajoria. "Cross-Project Change Prediction Using Meta-Heuristic Techniques." International Journal of Applied Metaheuristic Computing 10, no. 1 (January 2019): 43–61. http://dx.doi.org/10.4018/ijamc.2019010103.

Abstract:
Changes in software systems are inevitable. Identification of change-prone modules can help developers to focus efforts and resources on them. In this article, the authors conduct various intra-project and cross-project change predictions. The authors use distributional characteristics of the dataset to generate rules which can be used for successful change prediction. The authors analyze the effectiveness of meta-heuristic decision trees in generating rules for successful cross-project change prediction. The employed meta-heuristic algorithms are hybrid decision tree genetic algorithms and oblique decision trees with evolutionary learning. The authors compare the performance of these meta-heuristic algorithms with the C4.5 decision tree model. The authors observe that the accuracy of the C4.5 decision tree is 73.33%, whereas the accuracies of the hybrid decision tree genetic algorithm and the oblique decision tree are 75.00% and 75.56%, respectively. These values indicate that distributional characteristics are helpful in identifying a suitable training set for cross-project change prediction.
10. Sakalli, Umit Sami, and Irfan Atabas. "Ant Colony Optimization and Genetic Algorithm for Fuzzy Stochastic Production-Distribution Planning." Applied Sciences 8, no. 11 (October 24, 2018): 2042. http://dx.doi.org/10.3390/app8112042.

Abstract:
In this paper, a tactical Production-Distribution Planning (PDP) problem has been handled in a fuzzy and stochastic environment for supply chain systems (SCS) with four echelons (suppliers, plants, warehouses, retailers), multiple products, multiple transport paths, and multiple time periods. The mathematical model of fuzzy stochastic PDP is an NP-hard problem for large SCS because of the binary variables that determine the transportation paths between echelons, and it cannot be solved by optimization packages. In this study, therefore, two new meta-heuristic algorithms have been developed for solving fuzzy stochastic PDP: Ant Colony Optimization (ACO) and Genetic Algorithm (GA). The proposed meta-heuristic algorithms are designed for route optimization in PDP and integrated with the GAMS optimization package in order to solve the remaining mathematical model, which determines the other decisions in the SCS, such as procurement and production decisions. The solution procedure in the literature has been extended by aggregating the proposed meta-heuristic algorithms. The ACO and GA algorithms have been run on randomly generated test problems. The results showed that both ACO and GA are capable of solving the NP-hard PDP for a large SCS; however, GA produced better solutions than ACO.
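As a rough illustration of the kind of meta-heuristic used here, a minimal genetic algorithm over binary path-selection variables might look like the sketch below. The cost function, encoding, and all parameters are placeholders for illustration, not the paper's PDP model:

```python
import random

def genetic_algorithm(cost, n_bits, pop_size=20, generations=50, p_mut=0.05, seed=1):
    """Minimal GA sketch: tournament selection, one-point crossover,
    bit-flip mutation, and elitism. `cost` maps a bit string (e.g. a
    choice of transportation paths) to a value to be minimized."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=cost)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=cost)
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost)           # keep the best ever seen
    return best

# Hypothetical cost: Hamming distance to a known-good route selection
target = [1, 0, 1, 1, 0, 0, 1, 0]
cost = lambda bits: sum(b != t for b, t in zip(bits, target))
print(genetic_algorithm(cost, n_bits=8))
```

In the paper's setting the fitness evaluation would come from the remaining GAMS-solved model rather than a closed-form function; the skeleton of selection, crossover, and mutation is unchanged.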

Dissertations / Theses on the topic "GENERIC DECISION META-MODEL"

1. Prakash, Deepika. "Eliciting Information Requirements for Data Warehouses." Thesis, 2016. http://dspace.dtu.ac.in:8080/jspui/handle/repository/14844.

Abstract:
Data Warehouse support in terms of Requirements Engineering models and techniques has been extensively provided for the operational level of decision making. However, it is increasingly being recognized that other forms of decision making exist in an organization; these are strategic in nature and 'above' operational decision making. This thesis addresses the issue of providing support to both strategic and operational decision making in the same DW system. The solution starts by defining two broad categories of decisions for which support is needed: policy enforcement rule (PER) formulation decisions and operational decisions. Both kinds of decisions are structured based on a generic decision meta-model developed here. The process starts by developing two Data Warehouses, one for policy enforcement rules and the other for operational decisions. In order to identify the information needed to support decision making, a set of generic techniques for eliciting information is proposed; this information is stored in the DW. Again, the structure of information for the two DWs is based on a generic information meta-model developed here. The two DWs are integrated upstream, in the requirements engineering phase, using an integration life cycle proposed in this thesis. It is argued that integration is needed because of the inconsistency and loss of business control that can otherwise occur, owing to common information and differing refresh times between the two Data Warehouses. Further, three tools were developed to provide computer support for arriving at information for (a) PER, (b) operational decisions, and (c) integrating information. The process was validated using AYUSH policies.

Book chapters on the topic "GENERIC DECISION META-MODEL"

1. Arjomandi Rad, Mohammad, Dag Raudberget, and Roland Stolt. "Data-Driven and Real-Time Prediction Models for Highly Iterative Product Development Processes." In Advances in Transdisciplinary Engineering. IOS Press, 2022. http://dx.doi.org/10.3233/atde220676.

Abstract:
Some high-level technical products are associated with transdisciplinary, simulation-driven design processes. Their design therefore involves many stakeholders and is prone to frequent changes, leading to a highly iterative process with a long lead time. Despite decades of statistical approximation and metamodeling techniques for prediction models, companies are still striving to achieve fully automated real-time predictions in early design phases. The literature study shows gaps in existing methods, such as not being fully real-time or suffering from high dimensionality. This paper presents a generic model of the development process for such products, motivated through a series of semi-structured interviews with an automotive sub-supplier company. The proposed process model points to digital verification in every design loop as the bottleneck, which is confirmed by the interviewees. As an alternative solution, a method for data-driven, real-time prediction models is presented to enable designers to foresee the consequences of their decisions in the design phase. To evaluate the method, two examples of such real-time meta-modeling techniques, developed in an ongoing research project, are discussed. The proposed examples confirm that the framework can reduce the lead time spent on digital verification and therefore accelerate the design process for such products.
2. Bansal, Ankita, and Sourabh Jajoria. "Cross-Project Change Prediction Using Meta-Heuristic Techniques." In Research Anthology on Multi-Industry Uses of Genetic Programming and Algorithms, 279–99. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8048-6.ch015.

Abstract:
Changes in software systems are inevitable. Identification of change-prone modules can help developers to focus efforts and resources on them. In this article, the authors conduct various intra-project and cross-project change predictions. The authors use distributional characteristics of the dataset to generate rules which can be used for successful change prediction. The authors analyze the effectiveness of meta-heuristic decision trees in generating rules for successful cross-project change prediction. The employed meta-heuristic algorithms are hybrid decision tree genetic algorithms and oblique decision trees with evolutionary learning. The authors compare the performance of these meta-heuristic algorithms with the C4.5 decision tree model. The authors observe that the accuracy of the C4.5 decision tree is 73.33%, whereas the accuracies of the hybrid decision tree genetic algorithm and the oblique decision tree are 75.00% and 75.56%, respectively. These values indicate that distributional characteristics are helpful in identifying a suitable training set for cross-project change prediction.
3. Mengersen, Kerrie, Christopher H. Schmid, Michael D. Jennions, and Jessica Gurevitch. "Statistical Models and Approaches to Inference." In Handbook of Meta-analysis in Ecology and Evolution. Princeton University Press, 2013. http://dx.doi.org/10.23943/princeton/9780691137285.003.0008.

Abstract:
This chapter provides an introduction and overview of the three statistical components of the meta-analysis: (1) the statistical model that describes how the study-specific estimates of interest will be combined; (2) the key statistical approaches for meta-analysis; and (3) the corresponding estimates, inferences, and decisions that arise from a meta-analysis. First, it describes common statistical models used in ecological meta-analyses and the relationships between these models, showing how they are all variations of the same general structure. It then discusses the three main approaches to analysis and inference, again with the aim of providing a general understanding of these methods. Finally, it briefly considers a number of statistical considerations which arise in meta-analysis. In order to illustrate the concepts described, the chapter considers the Lepidoptera mating example described in Appendix 8.1. This is a meta-analysis of 25 studies of the association between male mating history and female fecundity in Lepidoptera.
4. Sümer, Ömer, Fabio Hellmann, Alexander Hustinx, Tzung-Chien Hsieh, Elisabeth André, and Peter Krawitz. "Few-Shot Meta-Learning for Recognizing Facial Phenotypes of Genetic Disorders." In Caring is Sharing – Exploiting the Value in Data for Health and Innovation. IOS Press, 2023. http://dx.doi.org/10.3233/shti230312.

Abstract:
Computer vision has useful applications in precision medicine and recognizing facial phenotypes of genetic disorders is one of them. Many genetic disorders are known to affect faces’ visual appearance and geometry. Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible. Previous work has addressed the problem as a classification problem; however, the sparse label distribution, having few labeled samples, and huge class imbalances across categories make representation learning and generalization harder. In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition. Furthermore, we created simple baselines of few-shot meta-learning methods to improve our base feature descriptor. Our quantitative results on GestaltMatcher Database (GMDB) show that our CNN baseline surpasses previous works, including GestaltMatcher, and few-shot meta-learning strategies improve retrieval performance in frequent and rare classes.

Conference papers on the topic "GENERIC DECISION META-MODEL"

1. Hardy, Joerg H. "Ethical Algorithms in Human-Robot-Interaction. A Proposal." In 4th International Conference on Machine Learning and Soft Computing. Academy and Industry Research Collaboration Center (AIRCC), 2023. http://dx.doi.org/10.5121/csit.2023.130214.

Abstract:
Autonomous robots will need to form relationships with humans that are built on reliability and (social) trust. The source of reliability and trust in human relationships is (human) ethical competence, which includes the capability of moral decision-making. As autonomous robots cannot act with the ethical competence of human agents, a kind of human-like ethical competence has to be implemented into autonomous robots (AI-systems of various kinds) by way of ethical algorithms. In this paper I suggest a model of the general logical form of (human) meta-ethical arguments that can be used as a pattern for the programming of ethical algorithms for autonomous robots.
2. Park, Junheung, Kyoung-Yun Kim, and Raj Sohmshetty. "A Prediction Modeling Framework: Toward Integration of Noisy Manufacturing Data and Product Design." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46236.

Abstract:
In many design and manufacturing applications, data inconsistency or noise is common. These data can be used to create opportunities and/or support critical decisions in many applications, for example, welding quality prediction for material selection and quality monitoring. Typical approaches to dealing with these data issues are to remove or alter the affected data before constructing any model or conducting any analysis. However, these approaches are limited, especially when each data point carries important value for extracting additional information about the nature of the given problem. In the literature, bootstrap aggregating (bagging) has been shown to improve prediction accuracy in the presence of noise. To achieve such an improvement, a bagging model has to be carefully constructed: the base learning algorithm, the number of base learners, and the parameters of the base learners are crucial design choices. Evolutionary algorithms such as the genetic algorithm and particle swarm optimization have shown promising results in determining good parameters for learning algorithms such as multilayer perceptron neural networks and support vector regression. However, the computational cost of an evolutionary algorithm is usually high, as it requires a large number of candidate solution evaluations; this requirement increases further when bagging is involved rather than a single learning algorithm. To reduce this high computational cost, a meta-modeling approach is introduced into particle swarm optimization, reducing the number of fitness function evaluations and therefore the overall computational cost. In this paper, we propose a prediction modeling framework whose aim is to construct a bagging model that improves prediction accuracy on noisy data. The proposed framework is tested on an artificially generated noisy dataset. The quality of the final solutions obtained by the proposed framework is reasonable compared to particle swarm optimization without meta-modeling, and the largest improvement in computational time is about 42 percent.
APA, Harvard, Vancouver, ISO, and other styles
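Setting the surrogate-assisted search aside, the bootstrap aggregating idea described in this abstract can be sketched in a few lines. The example below is a hypothetical illustration, not code from the paper: an ordinary least-squares line stands in as the base learner, and predictions from learners fitted on bootstrap resamples of noisy data are averaged.

```python
import random
import statistics

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (the base learner)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def bagging_predict(xs, ys, x_new, n_models=25, seed=0):
    """Bootstrap-aggregated prediction: fit one base learner per
    bootstrap resample, then average the individual predictions."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # sample with replacement
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a * x_new + b)
    return statistics.mean(preds)

# Artificially noisy data around the true relation y = 2x + 1
rng = random.Random(1)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + rng.gauss(0, 0.5) for x in xs]
prediction = bagging_predict(xs, ys, 3.0)  # close to the true value 7.0
```

Averaging over resamples damps the effect of individual noisy points, which is the property the abstract exploits; the paper's framework additionally tunes the base learners with surrogate-assisted particle swarm optimization.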
3

Sehili, Youcef, Khaled Loubar, Lyes Tarabet, Mahfoudh Cerdoun, and Clément Lacroix. "Meta-Model Optimization of Dual-Fuel Engine Performance and Emissions Using Emulsified Diesel with Varying Water Percentages and Injection Timing." In 16th International Conference on Engines & Vehicles. Warrendale, PA: SAE International, 2023. http://dx.doi.org/10.4271/2023-24-0032.

Full text
Abstract:
As emission restrictions become more stringent and conventional fuel supplies become more limited, dual-fuel engines are emerging as a promising solution that offers both environmental and economic benefits. However, the performance of these engines is often hampered by knocking, which can degrade their overall operation, and by increased NOx emissions at high load. This work investigates pilot-injection properties, combining emulsified diesel with varying water percentages and injection timing to reduce both knock intensity and the NOx emission rate. Specifically, a dual-fuel operating case at full load with high enrichment of the primary fuel (natural gas) with hydrogen is considered in order to create conditions of severe knocking and high NOx emission rates. The online optimization principle is used to create the meta-model with the Radial Basis Function (RBF) technique, while the Non-Dominated Sorting Genetic Algorithm (NSGA-II) searches for the optimum in parallel, handling two objective functions, the minimization of knock intensity and NOx emissions and the maximization of engine thermal efficiency, based on two decision variables: the volume percentage of water in the emulsified diesel (0-30%) and the injection timing of this pilot fuel (5-30° CA BTDC). Candidate cases are evaluated with a CFD model (Converge) validated against experimental results. The results indicate that the water content of the diesel and the injection timing have a significant influence on knock intensity (a decrease of 74%) and on the rate of pollutant emissions (a decrease of 61%).
The Pareto front summarizes the non-dominated cases with respect to the objective functions and indicates that increasing the water percentage and delaying the pilot injection decrease both knock intensity and NOx emissions but penalize the thermal efficiency of the engine. Choosing among the optima is therefore crucial to achieving a compromise between the objective functions.
APA, Harvard, Vancouver, ISO, and other styles
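The non-dominated sorting at the heart of NSGA-II, which produces the Pareto front this abstract refers to, can be illustrated with a minimal sketch. The objective values below are hypothetical, not taken from the paper; thermal efficiency is negated so that all three objectives (knock intensity, NOx, negated efficiency) are minimized.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the first front in NSGA-II's sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical evaluations as (knock intensity, NOx, -thermal efficiency);
# efficiency is negated so that every objective is a minimization.
cases = [(0.8, 0.6, -0.38), (0.5, 0.9, -0.35), (0.9, 0.7, -0.36), (0.4, 0.5, -0.30)]
front = pareto_front(cases)  # (0.9, 0.7, -0.36) is dominated by (0.8, 0.6, -0.38)
```

Every point on the front represents a distinct trade-off, which is why, as the abstract notes, a final compromise among the optima still has to be chosen by the decision maker.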
