Journal articles on the topic 'Expert systems (Computer science) Validation'

Consult the top 50 journal articles for your research on the topic 'Expert systems (Computer science) Validation.'

1

Harrison, Patrick R., and P. Ann Ratcliffe. "Towards standards for the validation of expert systems." Expert Systems with Applications 2, no. 4 (January 1991): 251–58. http://dx.doi.org/10.1016/0957-4174(91)90033-b.

2

Chang, C. L., J. B. Combs, and R. A. Stachowitz. "A report on the Expert Systems Validation Associate (EVA)." Expert Systems with Applications 1, no. 3 (January 1990): 217–30. http://dx.doi.org/10.1016/0957-4174(90)90003-d.

3

Sinabell, Irina, and Elske Ammenwerth. "Agile, Easily Applicable, and Useful eHealth Usability Evaluations: Systematic Review and Expert-Validation." Applied Clinical Informatics 13, no. 01 (January 2022): 67–79. http://dx.doi.org/10.1055/s-0041-1740919.

Abstract:
Background: Electronic health (eHealth) usability evaluations of rapidly developed eHealth systems are difficult to accomplish because traditional usability evaluation methods require substantial time in preparation and implementation. This illustrates the growing need for fast, flexible, and cost-effective methods to evaluate the usability of eHealth systems. To address this demand, the present study systematically identified and expert-validated rapidly deployable eHealth usability evaluation methods. Objective: Identification and prioritization of eHealth usability evaluation methods suitable for agile, easily applicable, and useful eHealth usability evaluations. Methods: The study design comprised a systematic iterative approach in which expert knowledge was contrasted with findings from literature. Forty-three eHealth usability evaluation methods were systematically identified and assessed regarding their ease of applicability and usefulness through semi-structured expert interviews with 10 European usability experts and systematic literature research. The most appropriate eHealth usability evaluation methods were selected stepwise based on the experts' judgements of their ease of applicability and usefulness. Results: Of the 43 eHealth usability evaluation methods identified as suitable for agile, easily applicable, and useful eHealth usability evaluations, 10 were recommended by the experts based on their usefulness for rapid eHealth usability evaluations. The three most frequently recommended eHealth usability evaluation methods were Remote User Testing, Expert Review, and the Rapid Iterative Test and Evaluation Method. Eleven usability evaluation methods, such as Retrospective Testing, were not recommended for use in rapid eHealth usability evaluations. Conclusion: We conducted a systematic review and expert validation to identify rapidly deployable eHealth usability evaluation methods. The comprehensive and evidence-based prioritization of eHealth usability evaluation methods supports faster usability evaluations, and so contributes to the ease of use of emerging eHealth systems.
4

Coenen, Frans, Barry Eaglestone, and Mick Ridley. "Verification, validation, and integrity issues in expert and database systems: Two perspectives." International Journal of Intelligent Systems 16, no. 3 (2001): 425–47. http://dx.doi.org/10.1002/1098-111x(200103)16:3<425::aid-int1016>3.0.co;2-c.

5

Chambers, T. L., and A. R. Parkinson. "Knowledge Representation and Conversion for Hybrid Expert Systems." Journal of Mechanical Design 120, no. 3 (September 1, 1998): 468–74. http://dx.doi.org/10.1115/1.2829175.

Abstract:
Many different knowledge representations, such as rules and frames, have been proposed for use with engineering expert systems. Every knowledge representation has certain inherent strengths and weaknesses. A knowledge engineer can exploit the advantages, and avoid the pitfalls, of different common knowledge representations if the knowledge can be mapped from one representation to another as needed. This paper derives the mappings between rules, logic diagrams, decision tables and decision trees using the calculus of truth-functional logic. The mappings for frames have also been derived by Chambers and Parkinson (1995). The logical mappings between these representations are illustrated through a simple example, the limitations of the technique are discussed, and the utility of the technique for the rapid-prototyping and validation of engineering expert systems is introduced. The technique is then applied to three engineering applications, showing great improvements in the resulting knowledge base.
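
One direction of the mapping this abstract describes, from a decision table to production rules, can be illustrated with a short sketch. This is only a toy illustration, not the authors' truth-functional derivation; the two-condition table, the condition names, and the table_to_rules helper are all hypothetical.

```python
# Illustrative sketch: convert a small (hypothetical) decision table into IF-THEN rules.
from itertools import product

conditions = ["load_exceeds_limit", "material_is_brittle"]   # condition stubs (invented)
actions = {                                                   # one action per table row
    (True, True):  "reject_design",
    (True, False): "add_safety_factor",
    (False, True): "review_material_choice",
    (False, False): "accept_design",
}

def table_to_rules(conditions, actions):
    """Enumerate every combination of condition values and emit an IF-THEN rule string."""
    rules = []
    for row in product([True, False], repeat=len(conditions)):
        antecedent = " AND ".join(
            cond if value else f"NOT {cond}" for cond, value in zip(conditions, row)
        )
        rules.append(f"IF {antecedent} THEN {actions[row]}")
    return rules

for rule in table_to_rules(conditions, actions):
    print(rule)
```

Enumerating every combination of condition values keeps the rule set logically complete, which is the property that makes such conversions useful when rapid-prototyping and validating a knowledge base.
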
6

Irawan, Roma, Subro Albi, Umar, Arsil, Anton Komaini, and Didik Rilastiyo Budi. "Improving Instrument Test Passing and Controlling based Digital Futsal Athletes: Quantitative Study." Webology 19, no. 1 (January 20, 2022): 916–26. http://dx.doi.org/10.14704/web/v19i1/web19063.

Abstract:
The problem addressed in this research is that no passing-and-control test instrument has been designed to suit the situation and conditions of futsal athletes who follow digital developments in West Sumatra, particularly at Padang State University. The purpose of this study was therefore to build a prototype of a digital-based passing-and-control test instrument for futsal athletes. This is development research with a model design adapted from Borg & Gall. The research subjects were futsal athletes from the Faculty of Sport Science, Padang State University, and six experts: two test, evaluation, and measurement experts, one futsal expert, and three IT experts. The instrument was developed by testing futsal athletes of the Faculty of Sport Science, Universitas Negeri Padang, Indonesia, using a small-group test of 30 people and a large-group test of 177 futsal athletes in the city of Padang. Expert validity was assessed with a questionnaire instrument, and reliability was examined with the test-retest method analyzed using the correlational r formula. The development process went through the following stages: identifying potential problems, data collection, product design, design validation, design revision, product testing, product revision, usage trial, and final product revision. The expert validation test with the questionnaire yielded 90% validity in the "Very Good / Decent" category; the validity level judged by the large group was 0.996 ("very high") and by the small group 0.997 ("very high"), with test-retest reliability of 0.997 for the small group and 0.996 for the large group (both in the "High" category); the practicality value was 91% and the effectiveness value was 88%. It can therefore be concluded that the digital-based passing-and-control test instrument can be used as a measuring tool for passing and control in futsal athletes.
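
As a rough illustration of the two statistics reported above, expert validity expressed as a percentage of the maximum questionnaire score and test-retest reliability computed with the correlational r formula, a minimal sketch follows. All score arrays are invented placeholders, not the study's data.

```python
# Minimal sketch: percent expert validity and test-retest reliability (Pearson r).
import numpy as np
from scipy.stats import pearsonr

expert_scores = np.array([45, 43, 47, 44])   # hypothetical questionnaire totals per expert
max_score = 50
validity_pct = 100 * expert_scores.mean() / max_score

test_1 = np.array([12.4, 10.8, 14.1, 11.5, 13.0])   # first administration (invented)
test_2 = np.array([12.1, 11.0, 13.8, 11.9, 12.7])   # retest of the same athletes
reliability_r, _ = pearsonr(test_1, test_2)

print(f"expert validity: {validity_pct:.0f}%  test-retest r: {reliability_r:.3f}")
```
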
7

Gu, Jifa, and Xijin Tang. "Meta-Synthesis System Approach to Knowledge Science." International Journal of Information Technology & Decision Making 6, no. 3 (September 2007): 559–72. http://dx.doi.org/10.1142/s0219622007002629.

Abstract:
The meta-synthesis system approach (MSA) has been proposed by Chinese systems scientists since the late 1980s to tackle problems of open complex giant systems (OCGS). Its essential idea can be summarized as moving from confident qualitative hypotheses to vigorous quantitative validation. To apply this approach, the synthesis of human expert opinions and emergent knowing, the powerful computing capacity of machines, and the available knowledge and cases are emphasized from the perspective of systems engineering practice. MSA practice may then bring new understanding, knowledge, and even paradigms about messy and unknown issues, which are under exploration in knowledge science research. This paper addresses MSA in relation to knowledge science. After a brief introduction to the meta-synthesis approach, a workflow of MSA during the problem-solving process is presented, leading to a meta-synthetic view of knowledge science, especially knowledge creation. A test demonstrating MSA on a macroeconomic problem is then briefly introduced; it shows a new paradigm for macroeconomic problem solving, a kind of knowledge creation that differs from conventional macroeconomic problem solving.
8

Suen, Ching Y., Peter D. Grogono, Raijan Shinghal, and François Coallier. "Verifying, validating, and measuring the performance of expert systems." Expert Systems with Applications 1, no. 2 (January 1990): 93–102. http://dx.doi.org/10.1016/0957-4174(90)90019-q.

9

Chatzinikolaou, A., and C. Angeli. "Modelling for an expert system and a parameter validation method." Expert Systems 19, no. 5 (November 2002): 285–94. http://dx.doi.org/10.1111/1468-0394.00215.

10

Ghosh, Pronab, Asif Karim, Syeda Tanjila Atik, Saima Afrin, and Mohd Saifuzzaman. "Expert cancer model using supervised algorithms with a LASSO selection approach." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 3 (June 1, 2021): 2631. http://dx.doi.org/10.11591/ijece.v11i3.pp2631-2639.

Abstract:
One of the most critical issues affecting mortality rates in the medical field today is breast cancer. Every year, a large number of men and women face cancer-related deaths due to the lack of early diagnosis systems and proper treatment. To tackle the issue, various data mining approaches have been analyzed to build an effective model that helps identify the different stages of deadly cancers. The study proposes an early cancer disease model based on five supervised algorithms: logistic regression (LR), decision tree (DT), random forest (RF), support vector machine (SVM), and K-nearest neighbor (KNN). After appropriate preprocessing of the dataset, the least absolute shrinkage and selection operator (LASSO) was used for feature selection (FS) with a 10-fold cross-validation (CV) approach. Employing LASSO with 10-fold cross-validation is a novel step introduced in this research. Afterwards, different performance evaluation metrics were measured to show accurate predictions based on the proposed algorithms. The results indicate that the top accuracy, approximately 99.41%, was obtained by the RF classifier with the integration of LASSO. Finally, a comprehensive comparison with several recent works was carried out on the Wisconsin breast cancer (diagnostic) dataset (WBCD) containing all features.
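
A hedged sketch of the kind of pipeline the abstract outlines, LASSO-style feature selection followed by a supervised classifier under 10-fold cross-validation on the WBCD data, is shown below. It uses an L1-penalized logistic regression as a stand-in for the LASSO selection step and a random forest as the classifier; the hyperparameters are illustrative assumptions rather than the authors' settings.

```python
# Sketch: L1-based feature selection + random forest, scored with 10-fold CV on WBCD.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# The L1 penalty drives irrelevant feature weights to zero; SelectFromModel keeps the rest.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5, max_iter=5000)
)
model = make_pipeline(
    StandardScaler(),
    selector,
    RandomForestClassifier(n_estimators=200, random_state=0),
)

scores = cross_val_score(model, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```
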
11

Shen, Tao, Chan Gao, Yukari Nagai, and Wei Ou. "Deriving Design Knowledge Graph for Complex Sociotechnical Systems Using the AIA Design Thinking." Mobile Information Systems 2021 (November 12, 2021): 1–11. http://dx.doi.org/10.1155/2021/6416061.

Abstract:
The AIA design thinking has been validated in complex design tasks, which includes three overlapping design thinking fields and uses the knowledge field theory as a theoretical mechanism of knowledge flow among design thinking fields. Meanwhile, the design of complex sociotechnical systems highly relies on multidisciplinary knowledge and design methods. Despite the emergence of knowledge management techniques (ontology, expert system, text mining, etc.), designers continue to store knowledge in unstructured ways. To facilitate the integration of knowledge graph and design thinking, we introduce an integrated approach to structure design knowledge graph with the AIA design thinking, which organizes existing design knowledge through Agent (concept)-Interaction (relation)-Adaptation (concept) framework. The approach uses an optimized convolutional neural network to accomplish two tasks: building concept graph from text and stimulating design thinking information processing for complex sociotechnical system tasks. Based on our knowledge graph, the validation experiment demonstrates the advantages of promoting the designer’s extension of idea space and idea quality.
12

Casal-Guisande, Manuel, Alberto Comesaña-Campos, Alejandro Pereira, José-Benito Bouza-Rodríguez, and Jorge Cerqueiro-Pequeño. "A Decision-Making Methodology Based on Expert Systems Applied to Machining Tools Condition Monitoring." Mathematics 10, no. 3 (February 6, 2022): 520. http://dx.doi.org/10.3390/math10030520.

Abstract:
The workers operating and supervising machining tools are often in charge of monitoring a high number of parameters of the machining process, and they usually make use of, among others, cutting sound signals, for following-up and assessing that process. The interpretation of those signals is closely related to the operational conditions of the machine and to the work environment itself, because such signals are sensitive to changes in the process’ input parameters. Additionally, they could be considered as a valid indicator for detecting working conditions that either negatively affect the tools’ lifespan, or might even put the machine operators themselves at risk. In light of those circumstances, this work deals with the proposal and conceptual development of a new methodology for monitoring the work conditions of machining tools, based on expert systems that incorporate a reinforcement strategy into their knowledge base. By means of the combination of sound-processing techniques, together with the use of fuzzy-logic inference engines and hierarchization methods based on vague fuzzy numbers, it will be possible to determine existing undesirable behaviors in the machining tools, thus reducing errors, accidents and harmful failures, with consequent savings in time and costs. Aiming to show the potential for the use of this methodology, a concept test has been developed, implemented in the form of a short case study. The results obtained, even if they require more extensive validation, suggest that the methodology would allow for improving the performance and operation of machining tools, as well as the ergonomic conditions of the workplace.
13

Wu, Chih-Hung, and Shie-Jue Lee. "Enhanced high-level Petri nets with multiple colors for knowledge verification/validation of rule-based expert systems." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 27, no. 5 (October 1997): 760–73. http://dx.doi.org/10.1109/3477.623230.

14

Thomas, M., R. W. Fitzpatrick, and G. S. Heinson. "An expert system to predict intricate saline - sodic subsoil patterns in upland South Australia." Soil Research 47, no. 6 (2009): 602. http://dx.doi.org/10.1071/sr08244.

Abstract:
Digital soil mapping (DSM) offers apparent benefits over more labour-intensive and costly traditional soil survey. Large cartographic scale (e.g. 1 : 10 000 scale) soil maps are rare in Australia, especially in agricultural areas where they are needed to support detailed land evaluation and targeted land management decisions. We describe a DSM expert system using environmental correlation that applies a priori knowledge from a key area (128 ha) soil–landscape with a regionally repeating toposequence to predict the distribution of saline–sodic subsoil patterns in the surrounding upland farming region (2275 ha) in South Australia. Our predictive framework comprises interrelated and iterative steps, including: (i) consolidating a priori knowledge of the key area soil–landscape; (ii) refining existing mentally held and graphic soil–landscape models; (iii) selecting suitable environmental covariates compatible with geographic information systems (GIS) by interrogation via 3D visualisation using a GIS; (iv) transforming the existing soil–landscape models to a computer model; (v) applying the computer model to the environmental variables using the expert system; (vi) performing the predictive mapping; and (vii) validation. The environmental covariates selected include: digital terrain attributes of slope gradient, topographic wetness index and plan curvature, and airborne gamma-radiometric K%. We apply selected soil profile physiochemical data from a prior soil survey to validate mapping. Results showed that we correctly predicted the saline–sodic subsoils in 10 of 11 reference profiles in the region.
15

Ismail, Nurmaisarah, Sazilah Salam, Siti Nurul Mahfuzah Mohamad, Bambang Pudjoatmodjo, Norazlina Shafie, Rashidah Lip, Mohd Adili Norasikin, and Faaizah Shahbodin. "Mobile game model for monitoring Malaysian food calories intake using image recognition." Bulletin of Electrical Engineering and Informatics 12, no. 3 (June 1, 2023): 1839–48. http://dx.doi.org/10.11591/eei.v12i3.4916.

Abstract:
Two important problems related to food consumption have been reported in Malaysia: Malaysia ranks sixth in Asia for the highest adult obesity rate, and the United Nations reported that Malaysians consume an average of 2,910 calories per day. An imbalanced diet and a high intake of calorie-dense food are problems that need attention in order to reduce obesity. These problems affect national economies by lowering productivity, increasing disability, raising health care expenses, and shortening life spans. Although food calorie tracking applications are available, existing apps are less engaging and fail to recognize Malaysian food because their databases are not versatile. This can be addressed using game technologies. Hence, this study proposes a mobile game model as a solution to the underlying problems. There are 4 phases in the method: expert validation, initial model, expert verification, and final model. The proposed parameters were validated by dietitians and nutritionists, and the model was verified by game experts. A low-fidelity prototype was developed based on the proposed model to assist the expert verification process, and the model was finalized based on the experts' feedback. The proposed game model addresses the limited recognition of Malaysian food and monitors food calorie intake in an engaging way.
16

Reis, Thoralf, Tim Funke, Sebastian Bruchhaus, Florian Freund, Marco X. Bornschlegl, and Matthias L. Hemmje. "Supporting Meteorologists in Data Analysis through Knowledge-Based Recommendations." Big Data and Cognitive Computing 6, no. 4 (September 28, 2022): 103. http://dx.doi.org/10.3390/bdcc6040103.

Abstract:
Climate change means coping directly or indirectly with extreme weather conditions for everybody. Therefore, analyzing meteorological data to create precise models is gaining more importance and might become inevitable. Meteorologists have extensive domain knowledge about meteorological data yet lack practical data analysis skills. This paper presents a method to bridge this gap by empowering the data knowledge carriers to analyze the data. The proposed system utilizes symbolic AI, a knowledge base created by experts, and a recommendation expert system to offer suiting data analysis methods or data pre-processing to meteorologists. This paper systematically analyzes the target user group of meteorologists and practical use cases to arrive at a conceptual and technical system design implemented in the CAMeRI prototype. The concepts in this paper are aligned with the AI2VIS4BigData Reference Model and comprise a novel first-order logic knowledge base that represents analysis methods and related pre-processings. The prototype implementation was qualitatively and quantitatively evaluated. This evaluation included recommendation validation for real-world data, a cognitive walkthrough, and measuring computation timings of the different system components.
17

Karimi, Tooraj, Mohammad Reza Sadeghi Moghadam, and Amirhosein Mardani. "Designing an inference engine for assessment of researchers’ maturity using rough set theory." Kybernetes 47, no. 7 (August 6, 2018): 1435–55. http://dx.doi.org/10.1108/k-07-2017-0245.

Abstract:
Purpose This paper aims to design an expert system that gets data from researchers and determines their maturity level. This system can be used for determining researchers’ support programs as well as a tool for researchers in research-based organizations. Design/methodology/approach This study focuses on designing the inference engine as a component of an expert system. To do so, rough set theory is used to design rule models. Various complete, discretizing and reduction algorithms are used in this paper, and different models were run. Findings The proposed inference engine has the validity of 99.8 per cent, and the most important attributes to determine the maturity level of researchers in this model are “commitment to research” and “attention to research plan timeline”. Research limitations/implications To accurately determine researchers’ maturity model, solely referring to documents and self-reports may reduce the validation. More validation could be reached through using assessment centers for determining capabilities of samples and observations in each maturity level. Originality/value The assessment system for the professional maturity of researchers is an appropriate tool for funders to support researchers. This system helps the funders to rank, validate and direct researchers. Furthermore, it is a valid criterion for researchers to evaluate and improve their abilities. There is not any expert system to assess the researches in literature, and all models, frameworks and software are conceptual or self-assessment.
18

Fernandes Costa, Tássio, Álvaro Sobrinho, Lenardo Chaves e Silva, Leandro Dias da Silva, and Angelo Perkusich. "Coloured Petri Nets-Based Modeling and Validation of Insulin Infusion Pump Systems." Applied Sciences 12, no. 3 (January 29, 2022): 1475. http://dx.doi.org/10.3390/app12031475.

Abstract:
Safety and effectiveness are crucial quality attributes for insulin infusion pump systems. Therefore, regulatory agencies require the quality evaluation and approval of such systems before the market to decrease the risk of harm, motivating the usage of a formal Model-Based Approach (MBA) to improve quality. Nevertheless, using a formal MBA increases costs and development time because it requires expert knowledge and thorough analyses of behaviors. We aim to assist the quality evaluation of such systems in a cost-effective and time-efficient manner, providing re-usable project artifacts by applying our proposed approach (named MBA with CPN—MBA/CPN). We defined a Coloured Petri nets MBA and a case study on a commercial insulin infusion pump system to verify and validate a reference model (as a component of MBA/CPN), describing quality assessment scenarios. We also conducted an empirical evaluation to verify the productivity and reusability of modelers when using the reference model. Such a model is relevant to reason about behaviors and quality evaluation of such concurrent and complex systems. During the empirical evaluation, using the reference model, 66.7% of the 12 interviewed modelers stated no effort, while 8.3% stated low effort, 16.7% medium effort, and 8.3% considerable effort. Based on the modelers’ knowledge, we implemented a web-based application to assist them in re-using our proposed approach, enabling simulation-based training. Although a reduced number of modelers experimented with our approach, such an evaluation provided insights to improve the MBA/CPN. Given the empirical evaluation and the case study results, MBA/CPN showed to be relevant to assess the quality of insulin infusion pump systems.
19

Verbeek, Rob, and Sietse Overbeek. "A Critical Heuristics Approach for Approximating Fairness in Method Engineering." International Journal of Information Technologies and Systems Approach 15, no. 1 (January 2022): 1–17. http://dx.doi.org/10.4018/ijitsa.289995.

Abstract:
Information system (IS) development projects often fail because of unclear wishes and needs of concerned parties, or because the developed IS or the used system development method (SDM) is not fully supported by concerned parties. In this study it is investigated how stakeholders that are concerned with the SDM are identified and involved in the engineering of such methods. The Critical Systems Heuristics (CSH) method can be used to identify stakeholders in method engineering, along with their concerns. CSH is meta-modelled and reviewed in twelve interviews with practitioners in software development, system engineering, and consultancy in order to evaluate its applicability in an organizational context. Subsequent modifications made to the contemporary CSH method are validated in an expert validation session. The resulting evolved CSH method enables method engineers to take into account their challenges and contexts, and the method can be instantiated for organizations that engineer methods for internal or external use.
20

Teimourpour, Babak, Vahid Eslami, Maghsoud Mohammadi, and Milad Padidarfard. "A Conceptual Model for the Creation of a Process-Oriented Knowledge Map (POK-Map) and Implementation in an Electric Power Distribution Company." Interdisciplinary Journal of Information, Knowledge, and Management 11 (2016): 001–16. http://dx.doi.org/10.28945/2340.

Abstract:
Helping a company organize and capture the knowledge used by its employees and business processes is a daunting task. In this work we examine several proposed methodologies and synthesize them into a new methodology that we demonstrate through a case study of an electric power distribution company. This is a practical research study. First, the research approach for creating the knowledge map is process-oriented and the processes are considered as the main elements of the model. This research was done in four stages: literature review, model editing, model validation and case study. The Delphi method was used for the research model validation. Some of the important outputs of this research were mapping knowledge flows, determining the level of knowledge assets, expert-area knowledge map, preparing knowledge meta-model, and updating the knowledge map according to the company’s processes. Besides identifying, auditing and visualizing tacit and explicit knowledge, this knowledge mapping enables us to analyze the knowledge areas’ situation and subsequently help us to improve the processes and overall performance. So, a process map does knowledge mapping in a clear and accurate frame. Once the knowledge is used in processes, it creates value.
21

O'Leary, D. "A probability of fuzzy events approach to validating expert systems in a multiple agent environment." Expert Systems with Applications 7, no. 2 (April 1994): 169–74. http://dx.doi.org/10.1016/0957-4174(94)90035-3.

22

Graessler, Iris, and Julian Hentze. "The new V-Model of VDI 2206 and its validation." at - Automatisierungstechnik 68, no. 5 (May 27, 2020): 312–24. http://dx.doi.org/10.1515/auto-2020-0015.

Abstract:
Since 2016, a new version of the VDI (German Association of Engineers) Guideline 2206 has been developed by the Technical Committee VDI GMA 4.10 "Interdisciplinary Product Creation". This article presents the revision results of the VDI Guideline 2206:2004 "Design methodology for mechatronic systems". The core content of the guideline is an updated and enhanced V-Model for Mechatronic and Cyber-Physical Systems. The inherent concern logic of the V-Model represents the logical sequence of tasks. Its key advantage lies in staying independent from the chosen form of project organization. This way, the V-Model can be applied in classically managed projects as well as in agile projects. In addition, the article describes how the revision was performed and which potentials were tapped. Based on the identified potentials, the new V-Model is derived, explained and illustrated. New contents such as the introduction of checkpoints and the integration of requirements engineering are explained in detail. Furthermore, the pursued scientific procedure and the results of the International Validation Workshop with 25 experts from science and industry are presented.
23

Goh, Liang, and Nikola Kasabov. "An Integrated Feature Selection and Classification Method to Select Minimum Number of Variables on the Case Study of Gene Expression Data." Journal of Bioinformatics and Computational Biology 3, no. 5 (October 2005): 1107–36. http://dx.doi.org/10.1142/s0219720005001533.

Abstract:
This paper introduces a novel generic approach for classification problems with the objective of achieving maximum classification accuracy with a minimum number of selected features. The method is illustrated with several case studies of gene expression data. Our approach integrates filter and wrapper gene selection methods with an added objective of selecting a small set of non-redundant genes that are most relevant for classification, with the provision of bins for genes to be swapped in the search for their biological relevance. It is capable of selecting relatively few marker genes while giving comparable or better leave-one-out cross-validation accuracy when compared with gene ranking selection approaches. Additionally, gene profiles can be extracted from the evolving connectionist system, which provides a set of rules that can be further developed into expert systems. The approach uses an integration of Pearson correlation coefficient and signal-to-noise ratio methods with an adaptive evolving classifier applied through the leave-one-out method for validation. Datasets of gene expression from four case studies are used to illustrate the method. The results show the proposed approach leads to an improved feature selection process in terms of reducing the number of variables required and an increase in classification accuracy.
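
The feature-ranking and validation steps mentioned above can be sketched as follows. The snippet ranks synthetic features by a Golub-style signal-to-noise ratio, keeps the top ten, and scores them with leave-one-out cross-validation using a k-nearest-neighbour classifier as a stand-in for the authors' evolving connectionist classifier; the data, threshold, and classifier choice are all invented.

```python
# Sketch: signal-to-noise feature ranking followed by leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))          # 40 samples x 500 synthetic "genes"
y = np.array([0] * 20 + [1] * 20)
X[y == 1, :10] += 1.5                   # plant a weak signal in the first 10 features

def snr(X, y):
    """Signal-to-noise ratio per feature: |mu0 - mu1| / (sd0 + sd1)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    s0, s1 = X[y == 0].std(0), X[y == 1].std(0)
    return np.abs(m0 - m1) / (s0 + s1 + 1e-12)

top_k = np.argsort(snr(X, y))[::-1][:10]   # keep the 10 highest-ranked features
# Note: for an unbiased estimate the ranking should be redone inside each CV fold;
# it is done once here only to keep the sketch short.
scores = cross_val_score(KNeighborsClassifier(3), X[:, top_k], y, cv=LeaveOneOut())
print(f"LOOCV accuracy on {len(top_k)} selected features: {scores.mean():.2f}")
```
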
24

Ricardo, Angel R., Israel F. Benítez, Guillermo González, and José R. Nuñez. "Multi-agent system for steel manufacturing process." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 3 (June 1, 2022): 2441. http://dx.doi.org/10.11591/ijece.v12i3.pp2441-2453.

Abstract:
This work was carried out in the company ACINOX Las Tunas, Cuba, to design an integrated automation architecture based on intelligent agents for control, monitoring, and decision-making in the production process, guaranteeing an improvement in the planning and management of the process in the steelwork plant. The great differences in technologies and systems among steel mills, together with the multiple restrictions, methods, and techniques involved in a wide and strongly interlinked dynamic, make it infeasible to generalize automation systems. In our research, we use international research results and the experience of the plant technologists to create three levels of distributed intelligent architecture: business, production planning-control, and steel manufacturing. Each level integrates and balances particular and general interests for efficient decision-making, combining hierarchy and heterarchy in this steelwork plant. This is reflected in a reduction of at least 99% of the time used for decision-making compared with the current system, which can lead to a decrease in refractory costs, energy consumption, and production cost. The effectiveness of the solution is demonstrated through scenario validation and expert evaluation.
25

Fritz, Melanie, and Gerhard Schiefer. "Market Monitoring in Dynamic Supply Networks and Chains: an Internet-Based Support System for the Agri-Food Sector." Journal on Chain and Network Science 2, no. 2 (December 1, 2002): 93–100. http://dx.doi.org/10.3920/jcns2002.x021.

Abstract:
Enterprises in dynamic supply networks and chains with changing business partnerships are involved in complex dependency structures and interrelationships. Consequently, awareness of developments in the competitive and market environment is of paramount importance for economic success. The research presented in this paper deals with the conception of Internet-based support systems which supply actors in dynamic supply networks and chains with individualised information about their competitive and market environment. The paper builds on the situation in the agri-food sector but the discussions of the theoretical background and the principal system design generally apply to dynamic network scenarios. The paper presents a conceptual approach for a market monitoring system, which integrates results of research from a variety of scientific fields, utilises focused rule-based expert knowledge, and implements research-supported automation potentials. The design allows for information quality, efficiency and economic feasibility. First empirical results from experimental studies support the concept. The paper concludes with the discussion of a research outline for further validation initiatives in experimental and sector environments.
26

Vivares, Jorge A., William Sarache, and Jorge E. Hurtado. "A maturity assessment model for manufacturing systems." Journal of Manufacturing Technology Management 29, no. 5 (August 13, 2018): 746–67. http://dx.doi.org/10.1108/jmtm-07-2017-0142.

Abstract:
Purpose: Assessment of manufacturing systems provides a baseline for manufacturing strategy (MS) formulation. The purpose of this paper is to develop and propose a maturity assessment model for manufacturing systems (MAMMS). The MAMMS provides a maturity index, in order to establish manufacturing system performance on five possible levels: preinfantile, infantile, industry average, adult, and world class manufacturing. Design/methodology/approach: Three main steps were taken: MAMMS design; maturity-level assessment in two companies; and MAMMS validation. Based on an action-research process, several research tools, such as surveys, expert panels, and immersion in two manufacturing companies, were used. Findings: By integrating 79 variables into a maturity index, the maturity level for two manufacturing companies was obtained. Considering three main components (competitive priorities, manufacturing levers, and manufacturing’s strategic role), the analyzed companies showed a performance at the average industry level. According to participants, the MAMMS is a valuable tool to support decision making in MS. Practical implications: Empirical evidence supports the relevance of the proposed MAMMS and its practical usefulness. In particular, the maturity index identifies strengths and weaknesses in the manufacturing system, providing a baseline from which to deploy MS. Originality/value: The literature review shows a lack of contributions regarding maturity models, particularly the non-existence of maturity assessment models for manufacturing systems. The proposed MAMMS is a valuable tool to support decision making in MS. Also, this paper contributes to understanding the action-research paradigm, for further research in operations management.
27

Hidayat, Deden Sumirat, and Dana Indra Sensuse. "Knowledge Management Model for Smart Campus in Indonesia." Data 7, no. 1 (January 10, 2022): 7. http://dx.doi.org/10.3390/data7010007.

Abstract:
The application of smart campuses (SC), especially at higher education institutions (HEI) in Indonesia, is very diverse, and does not yet have standards. As a result, SC practice is spread across various areas in an unstructured and uneven manner. KM is one of the critical components of SC. However, the use of KM to support SC is less clearly discussed. Most implementations and assumptions still consider the latest IT application as the SC component. As such, this study aims to identify the components of the KM model for SC. This study used a systematic literature review (SLR) technique with PRISMA procedures, an analytical hierarchy process, and expert interviews. SLR is used to identify the components of the conceptual model, and AHP is used for model priority component analysis. Interviews were used for validation and model development. The results show that KM, IoT, and big data have the highest trends. Governance, people, and smart education have the highest trends. IT is the highest priority component. The KM model for SC has five main layers grouped in phases of the system cycle. This cycle describes the organization’s intellectual ability to adapt in achieving SC indicators. The knowledge cycle at HEIs focuses on education, research, and community service.
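
For the AHP step mentioned in the abstract, priority weights are typically derived from a pairwise comparison matrix via its principal eigenvector and then checked with a consistency ratio. The sketch below shows that calculation on an invented 3x3 judgement matrix; it is not the study's actual data or component set.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix, plus a consistency check.
import numpy as np

A = np.array([            # hypothetical pairwise judgements over three components
    [1,   3,   5],
    [1/3, 1,   2],
    [1/5, 1/2, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalised priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
cr = ci / 0.58                             # Saaty's random index for n = 3 is about 0.58
print("priorities:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```
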
28

Elliott, Philip H. "Predictive Microbiology and HACCP." Journal of Food Protection 59, no. 13 (December 1, 1996): 48–53. http://dx.doi.org/10.4315/0362-028x-59.13.48.

Abstract:
ABSTRACT While both predictive microbiology and hazard analysis critical control point (HACCP) programs are still in the developmental stages as food-safety tools, predictive models are available that are potentially useful in the development and maintenance of HACCP systems. When conducting a HACCP study, models can be used to assess the risk (probability) and determine the consequence of a microbiological hazard in food. The risk of a hazard is reduced and controlled within the HACCP framework by assigning critical control points (CCPs) to the food process. By using predictive models, ranges and combinations of process parameters can be established as critical limits for CCPs. This has the advantage of providing more processing options while maintaining a degree of safety equivalent to that of a single set of critical limits. Validation testing of individual CCPs can be reduced if the CCP models were developed with a similar food type. Microbiological as well as mechanical and human reliability models may be used to establish sets of rules for rule-based expert computer systems in an effort to automate the development of HACCP plans and evaluate the status of process deviations. Models can also be used in combination with sensors and microprocessors for real-time process control. Since HACCP is a risk-reduction tool, then predictive microbiological models are tools used to aid in the decision-making processes of risk assessment and in describing process parameters necessary to achieve an acceptable level of risk.
29

Petrov, Konstantin, Igor Kobzev, Oleksandr Orlov, Victor Kosenko, Alisa Kosenko, and Yana Vanina. "Devising a method for identifying the model of multi-criteria expert estimation of alternatives." Eastern-European Journal of Enterprise Technologies 4, no. 3(112) (August 31, 2021): 56–65. http://dx.doi.org/10.15587/1729-4061.2021.238020.

Abstract:
An approach to constructing mathematical models of individual multicriterial estimation was proposed based on information about the ordering relations established by the expert for a set of alternatives. Structural identification of the estimation model using the additive utility function of alternatives was performed within axiomatics of the multi-attribute utility theory (MAUT). A method of parametric identification of the model based on the ideas of the theory of comparative identification has been developed. To determine the model parameters, it was proposed to use the midpoint method that has resulted in the possibility of obtaining a uniform stable solution of the problem. It was shown that in this case, the problem of parametric identification of the estimation model can be reduced to a standard linear programming problem. The scalar multicriterial estimates of alternatives obtained on the basis of the synthesized mathematical model make it possible to compare them among themselves according to the degree of efficiency and, thus, choose "the best" or rank them. A significant advantage of the proposed approach is the ability to use only non-numerical information about the decisions already made by experts to solve the problem of identifying the model parameters. This enables partial reduction of the degree of expert’s subjective influence on the outcome of decision-making and reduces the cost of the expert estimation process. A method of verification of the estimation model based on the principles of cross-validation has been developed. The results of computer modeling were presented. They confirmed the effectiveness of using the proposed method of parametric model identification to solve problems related to automation of the process of intelligent decision making.
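
The parametric identification idea described above, recovering the weights of an additive utility function from an expert's ordering of alternatives by solving a linear program, can be sketched as below. For simplicity the sketch maximises the worst-case margin between consecutively ranked alternatives rather than reproducing the authors' exact midpoint formulation; the criteria matrix and the ordering are invented.

```python
# Sketch: fit additive utility weights consistent with an expert ordering via an LP.
import numpy as np
from scipy.optimize import linprog

# Criteria values of four hypothetical alternatives (rows) on three criteria (columns).
X = np.array([
    [0.9, 0.2, 0.7],
    [0.6, 0.8, 0.4],
    [0.3, 0.9, 0.9],
    [0.1, 0.4, 0.2],
])
pairs = [(0, 1), (1, 2), (2, 3)]   # expert ordering: 0 preferred to 1, 1 to 2, 2 to 3

n = X.shape[1]
# Decision vector: [w_1 .. w_n, delta]; maximise the margin delta, i.e. minimise -delta.
c = np.zeros(n + 1)
c[-1] = -1.0
# For each preferred pair (a, b): w.(x_a - x_b) >= delta, i.e. w.(x_b - x_a) + delta <= 0.
A_ub = np.array([np.append(X[b] - X[a], 1.0) for a, b in pairs])
b_ub = np.zeros(len(pairs))
A_eq = np.array([np.append(np.ones(n), 0.0)])   # weights sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * (n + 1)                  # non-negative weights and margin

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, delta = res.x[:n], res.x[-1]
print("recovered weights:", np.round(w, 3), " margin:", round(delta, 3))
```
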
30

Ray, Soumya, Kamta Nath Mishra, and Sandip Dutta. "Sensitive Data Identification and Security Assurance in Cloud and IoT based Networks." International Journal of Computer Network and Information Security 14, no. 5 (October 8, 2022): 11–27. http://dx.doi.org/10.5815/ijcnis.2022.05.02.

Abstract:
Sensitive data identification is a vital strategy in any distributed system. However, if the system is used inappropriately, sensitive data security can be at risk; sensitive data identification and its security validation are therefore mandatory. The paper primarily focuses on novel sensitive data recognition methodologies. A sensitivity score over the attributes distinguishes sensitive from non-sensitive attributes, and a domain expert plays an important role in this process. The design of the security assurance algorithm and its corresponding decision tables makes the system more robust and reliable. The results are validated with the help of graphical representations, which support the authenticity of the research work. In summary, sensitive data identification and security assurance in the proposed system are automated and work optimally in a cloud-based environment.
31

Zekri, Firas, Afef Samet Ellouze, and Rafik Bouaziz. "A Fuzzy-Based Customisation of Healthcare Knowledge to Support Clinical Domestic Decisions for Chronically Ill Patients." Journal of Information & Knowledge Management 19, no. 04 (November 27, 2020): 2050029. http://dx.doi.org/10.1142/s021964922050029x.

Abstract:
The development of customised healthcare systems is becoming an important issue in the healthcare industry due to the rapid increase in the number of chronically ill patients. These systems aim to deliver effective care to patients having chronic diseases through customised services. However, knowledge bases need also to be customised since systems are confronted with huge amount of personalised and imprecise medical knowledge. Therefore, we propose in this paper a new system to customise medical knowledge according to progressive disease phases and pathological cases. A rule management process first customises rules according to the specificities of every disease phase, and then matches a private knowledge base with each enrolled patient. This base contains only the patient’s customised knowledge. After reasoning, another customisation process is carried out by the component, Result Manager, which ensures the validation of the system outcomes by the pathological case experts, before being recommended. This will better ensure the recommendation of the generated results to the non-professional users. In addition, Result Manager offers fuzzy semantic queries to the experts. In conclusion, our new decision support system makes medical aid decisions not only addressed to physicians, but also to chronically ill patients and persons regarded as caregivers.
32

Matveev, Yuri, Anton Matveev, Olga Frolova, Elena Lyakso, and Nersisson Ruban. "Automatic Speech Emotion Recognition of Younger School Age Children." Mathematics 10, no. 14 (July 6, 2022): 2373. http://dx.doi.org/10.3390/math10142373.

Abstract:
This paper introduces the extended description of a database that contains emotional speech in the Russian language of younger school age (8–12-year-old) children and describes the results of validation of the database based on classical machine learning algorithms, such as Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP). The validation is performed using standard procedures and scenarios of the validation similar to other well-known databases of children’s emotional acting speech. Performance evaluation of automatic multiclass recognition on four emotion classes “Neutral (Calm)—Joy—Sadness—Anger” shows the superiority of SVM performance and also MLP performance over the results of perceptual tests. Moreover, the results of automatic recognition on the test dataset which was used in the perceptual test are even better. These results prove that emotions in the database can be reliably recognized both by experts and automatically using classical machine learning algorithms such as SVM and MLP, which can be used as baselines for comparing emotion recognition systems based on more sophisticated modern machine learning methods and deep neural networks. The results also confirm that this database can be a valuable resource for researchers studying affective reactions in speech communication during child-computer interactions in the Russian language and can be used to develop various edutainment, health care, etc. applications.
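
A minimal sketch of the baseline validation scheme described above, classical SVM and MLP classifiers compared under cross-validation on four emotion classes, follows. The random feature matrix stands in for real acoustic descriptors, so the printed numbers are meaningless; only the evaluation scaffold is the point.

```python
# Sketch: compare SVM and MLP baselines with cross-validation on four emotion classes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 64))        # 400 utterances x 64 placeholder acoustic features
y = rng.integers(0, 4, size=400)      # neutral / joy / sadness / anger

for name, clf in [("SVM", SVC(kernel="rbf", C=10)),
                  ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```
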
33

Semenkov, Kirill, Vitaly Promyslov, Alexey Poletykin, and Nadir Mengazetdinov. "Validation of Complex Control Systems with Heterogeneous Digital Models in Industry 4.0 Framework." Machines 9, no. 3 (March 14, 2021): 62. http://dx.doi.org/10.3390/machines9030062.

Abstract:
The precise evaluation of the system design and characteristics is a challenge for experts and engineers. This paper considers the problem of the development and application of a digital twin to assess cyberphysical systems. We analyze the details of digital twin applications at different lifecycle stages. The work reviews and summarizes properties of models concerning the digital and physical components of a cyberphysical system (CPS). The other issue of a CPS is increasing cybersecurity threat for objects, so special attention is paid to the heterogeneous digital twin usage scenarios to improve CPS cybersecurity. The article also details the heterogeneous digital twin’s implementation for a real upper-level control system of a nuclear power plant. The presented heterogeneous digital twin combines virtual machines with real equipment, namely hardware-in-the-loop (HiL) components. The achievements and drawbacks of the implemented model, including single timescale maintaining challenges, are discussed.
34

Tayefeh Mahmoudi, Maryam, Kambiz Badie, and Mahmood Kharrat. "Text organization via projection from researcher‐space onto text‐space." Kybernetes 37, no. 8 (September 17, 2008): 1151–64. http://dx.doi.org/10.1108/03684920810884955.

Abstract:
Purpose: The purpose of this paper is to propose a new approach for organizing texts for researchers based on projection from researcher space (consisting of reasoning ability and research ability) onto text space (consisting of text features). Design/methodology/approach: The projection from researcher space onto text space is performed on the grounds of the differences between the sets of essential text features which are consistent, respectively, with the reasoning ability (as the researcher's existing ability) and the research ability (as the researcher's desired ability), using the concept of a dependency graph. Findings: Through the projection from researcher space onto text space, one can expect to find an effective text which can help a person with a certain reasoning ability acquire a certain research ability in the related domain. Research limitations/implications: Acquisition of reasoning ability(ies) may call for a comprehensive questionnaire or test protocol whose validation by the expert may not necessarily be convenient. Appropriate questionnaires/test protocols, as well as knowledge-based models for the text features at different layers (to assure derivation of more reliable features), are suggested as future work. Practical implications: A salient benefit of the proposed approach is its flexibility in responding to a wide range of users with different models. It thus can be used as an efficient tool for online e-learning and e-research purposes in a cyber-learning environment. Originality/value: The originality of the paper mostly lies in the concept of projection from researcher space onto text space as an approach for provision of appropriate text features, and the application of a dependency graph as a potential means for identifying those text features whose prerequisites exist in the research abilities of the researcher.
35

Saneleuterio-Temporal, Elia. "Los estereotipos de género en las producciones audiovisuales: diseño y validación de la tabla de análisis EG_5x4." Pixel-Bit, Revista de Medios y Educación, no. 64 (2022): 27–54. http://dx.doi.org/10.12795/pixelbit.90777.

Abstract:
The cultural products of non-formal education, and in particular those broadcast on television and streaming platforms, including non-fiction and children's animation, present characters that are stereotyped in their gender construction. According to previous research, identifying and countering these stereotypes helps eradicate sexist ideas linked to gender-based violence. The objective was therefore to design and validate an analysis table for the sexual stereotypes present in any audiovisual product. The process followed a three-phase protocol (Garrido et al., 2015): 1) the published literature was reviewed, traits related to gender stereotypes were extracted, the items were drafted and redistributed, in pairs, into 5 categories: corporal, attitudinal, social, affective-sexual, and audiovisual; 2) the draft underwent an expert validation process by 27 female and 8 male judges, whose quantitative contributions yielded a construct validity index of .71; 3) their qualitative suggestions were used to adjust content and clarity. The result is the design of the EG_5x4 table for the analysis of gender stereotypes in films, series, and other audiovisual products, which is presented, discussed, and compared with existing instruments for use in further research. Keywords: Audio-visual Materials; Coeducation; Gender Stereotypes; Measuring Instruments; Nonformal Education.
36

Mahlool, Dhurgham Hassan, and Mohamed Hamzah Abed. "Distributed brain tumor diagnosis using a federated learning environment." Bulletin of Electrical Engineering and Informatics 11, no. 6 (December 1, 2022): 3313–21. http://dx.doi.org/10.11591/eei.v11i6.4131.

Abstract:
In the last few years, major developments have occurred in medical techniques using artificial intelligence tools, especially in the field of diagnosis. One of the essential tasks is brain tumor (BT) detection and diagnosis. This kind of disease requires an expert physician to decide on treatment or a surgical operation based on magnetic resonance imaging (MRI) images; therefore, researchers focus on analyzing and understanding such medical images to help the specialist make a decision. In this work, a new environment has been investigated based on a deep learning method and a distributed federated learning (FL) algorithm. The proposed model has been evaluated with cross-validation techniques using two standard datasets, BT-small-2c and BT-large-3c. The achieved classification accuracies were 0.82 and 0.96, respectively. The proposed classification model provides an active and effective system for assessing BT classification with high reliability and accurate clinical findings.
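
The distributed federated learning setup described above is commonly realised with federated averaging: each site trains on its own data and only model parameters are aggregated centrally. The sketch below shows that loop with a tiny NumPy logistic-regression "model" and random data; it is an assumption-laden illustration, not the authors' deep learning implementation.

```python
# Sketch: federated averaging (FedAvg-style) over three simulated clients.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few steps of logistic-regression gradient descent on one client's local data."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(30, 8)), rng.integers(0, 2, 30)) for _ in range(3)]

w_global = np.zeros(8)
for _ in range(10):                                   # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients])
    w_global = np.average(local_ws, axis=0, weights=sizes)   # size-weighted averaging
print("global weights after training:", np.round(w_global, 3))
```
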
37

Kim, Gyeongmin, Minseok Kim, and Jaechoon Jo. "Enhancing Code Similarity with Augmented Data Filtering and Ensemble Strategies." JOIV : International Journal on Informatics Visualization 6, no. 3 (September 30, 2022): 676. http://dx.doi.org/10.30630/joiv.6.3.1259.

Abstract:
Although COVID-19 has severely affected the global economy, information technology (IT) employees managed to perform most of their work from home. Telecommuting and remote work have promoted a demand for IT services in various market sectors, including retail, entertainment, education, and healthcare. Consequently, computer and information experts are also in demand. However, producing IT experts is difficult during a pandemic owing to limitations such as the reduced enrollment of international students. Therefore, research into increasing software productivity is essential; this study proposes a code similarity determination model that utilizes augmented data filtering and ensemble strategies, an automated approach that addresses the current situation, a worldwide shortage of software experts. Pre-trained language models dramatically improve performance in various downstream natural language processing (NLP) tasks. Unlike general-purpose pre-trained language models (PLMs), CodeBERT and GraphCodeBERT are PLMs that have learned both natural and programming languages. Hence, they are suitable as code similarity determination models. The data filtering process consists of three steps: (1) deduplication of data, (2) deletion of intersections, and (3) an exhaustive search. The Best Matching 25 (BM25) and length-normalized BM25 (BM25L) algorithms were used to construct positive and negative pairs. The performance of the model was evaluated using the 5-fold cross-validation ensemble technique. Experiments demonstrate the effectiveness of the proposed method quantitatively. Moreover, we expect this method to be optimal for increasing software productivity in various NLP tasks.
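
The BM25/BM25L pairing step described above can be sketched with the third-party rank_bm25 package, which provides BM25Okapi and BM25L scorers. The toy snippets and the choice of the top-ranked candidate as a positive pair are illustrative assumptions; in the study these candidates would feed a CodeBERT/GraphCodeBERT similarity classifier evaluated with a 5-fold cross-validation ensemble.

```python
# Sketch: rank candidate code snippets against a query with BM25 and BM25L.
from rank_bm25 import BM25Okapi, BM25L

corpus = [
    "def add(a, b): return a + b",
    "def sub(a, b): return a - b",
    "for i in range(10): print(i)",
    "def add_numbers(x, y): return x + y",
]
tokenized = [snippet.split() for snippet in corpus]
query = "def add(a, b): return a + b".split()

for scorer in (BM25Okapi(tokenized), BM25L(tokenized)):
    scores = scorer.get_scores(query)
    ranked = sorted(range(len(corpus)), key=lambda i: -scores[i])
    print(type(scorer).__name__, "top candidate:", corpus[ranked[0]])
```
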
38

Ornoy, Eitan, and Shai Cohen. "Tool for a real-time automatic assessment of vocal proficiency." Journal of Music, Technology and Education 14, no. 1 (April 1, 2022): 69–91. http://dx.doi.org/10.1386/jmte_00034_1.

Abstract:
Over the years, a growing number of researchers have been developing models that would automatically generate assessments of music performances. Yet the number and usage of automatic singing evaluation systems is still rather rudimentary, addressing, for the most part, a limited amount of performance features and lacking verification. This study reports on a newly designed automatic singing assessment tool based on a score-based model and its validation. Short music segments (N = 2640) were gathered via recordings made by music education students (N = 55) of a specially inscribed vocal music excerpt. Recorded data evaluation was generated by a specially devised automatic tool as well as by three human experts, addressing pitch intonation (examined for its overall display, single note accuracy and interval manifestation), dynamics transmission and vocal resonation quality. Findings indicated a higher rating given by the experts in regard to pitch intonation and vocal resonation. However, a similitude was found for the dynamics transmission scoring, and a correlation was found for pitch intonation and the dynamics transmission scoring level: in both performance parameters, the higher the experts’ gradings were, the higher the gradings provided by the automatic tool. Results attest to the automatic tools’ qualification as an aid for human judgement of singing proficiency. The tool could assist investigations in various musical domains, such as music pedagogy, music performance or music perception research.
APA, Harvard, Vancouver, ISO, and other styles
39

Hosseini, Ehsan, and Mohammad Hossein Rezvani. "E-customer loyalty in gamified trusted store platforms: a case study analysis in Iran." Bulletin of Electrical Engineering and Informatics 10, no. 5 (October 1, 2021): 2899–909. http://dx.doi.org/10.11591/eei.v10i5.3165.

Full text
Abstract:
Customer satisfaction, trust, and loyalty are the three most fundamental elements of e-marketing. Previous researchers have noted that satisfaction is a key factor in commanding loyalty. However, the relationship between satisfaction, trust, and loyalty is strongly dependent on the type of platform provided by digital stores. On the other hand, gamification in e-businesses has grown rapidly in recent years. In this context, it is necessary to explore the effects of gamification on e-customer satisfaction and loyalty. In this paper, it is argued that customer satisfaction alone cannot inspire loyalty. Simply speaking, customers’ satisfaction with gamified services can lead to developing trust and, in turn, loyalty. This research also presents a thorough review of the effect of store-related motivational factors, such as gamification, on trust. These factors include moderator and mediator variables. The hypotheses of this study are considered in the context of one of the largest online retail stores in Iran, which enjoys a large market share in the Middle East. To this end, the Lawshe content validity ratio is utilized and expert opinions are applied to the proposed model. Evaluation results, obtained through SmartPLS, established the robustness of our modeling in terms of reliability analysis, significance level analysis, discriminant validity analysis, coefficient of determination, model fitting, and cross-validation.
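For reference, the Lawshe content validity ratio mentioned above has a simple closed form; the sketch below assumes the standard definition CVR = (n_e - N/2) / (N/2) rather than any study-specific variant:

def lawshe_cvr(n_essential, n_experts):
    """Lawshe content validity ratio: CVR = (n_e - N/2) / (N/2),
    where n_e of N panel experts rate an item as 'essential'."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# e.g. 8 of 10 experts rating an item essential gives CVR = 0.6;
# the item is retained if its CVR exceeds the critical value for the panel size.
print(lawshe_cvr(8, 10))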
APA, Harvard, Vancouver, ISO, and other styles
40

Zhou, Yilu, and Yuan Xue. "ACRank: a multi-evidence text-mining model for alliance discovery from news articles." Information Technology & People 33, no. 5 (June 22, 2020): 1357–80. http://dx.doi.org/10.1108/itp-06-2018-0272.

Full text
Abstract:
Purpose – Strategic alliances among organizations are some of the central drivers of innovation and economic growth. However, the discovery of alliances has relied on pure manual search and has limited scope. This paper proposes a text-mining framework, ACRank, that automatically extracts alliances from news articles. ACRank aims to provide human analysts with a higher coverage of strategic alliances compared to existing databases, yet maintain a reasonable extraction precision. It has the potential to discover alliances involving less well-known companies, a situation often neglected by commercial databases. Design/methodology/approach – The proposed framework is a systematic process of alliance extraction and validation using natural language processing techniques and alliance domain knowledge. The process integrates news article search, entity extraction, and syntactic and semantic linguistic parsing techniques. In particular, the Alliance Discovery Template (ADT) component identifies a number of linguistic templates expanded from expert domain knowledge and extracts potential alliances at sentence level. Alliance Confidence Ranking (ACRank) further validates each unique alliance based on multiple features at document level. The framework is designed to deal with extremely skewed, noisy data from news articles. Findings – Evaluation of ACRank on a gold standard data set of IBM alliances (2006–2008) showed that sentence-level ADT-based extraction achieved 78.1% recall and 44.7% precision and eliminated over 99% of the noise in news articles. ACRank further improved precision to 97% for the top 20% of extracted alliance instances. Further comparison with the Thomson Reuters SDC database showed that SDC covered less than 20% of total alliances, while ACRank covered 67%. When applying ACRank to Dow 30 company news articles, ACRank is estimated to achieve a recall between 0.48 and 0.95, and only 15% of the alliances appeared in SDC. Originality/value – The research framework proposed in this paper indicates a promising direction for building a comprehensive alliance database using automatic approaches. It adds value to academic studies and business analyses that require in-depth knowledge of strategic alliances. It also encourages other innovative studies that use text mining and data analytics to study business relations.
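The reported precision and recall figures follow the usual definitions; a minimal sketch (names illustrative, not from the paper) for scoring extracted alliance pairs against a gold standard:

def precision_recall(extracted, gold):
    """Precision and recall of extracted alliance pairs against a gold-standard set."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                          # true positives
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# e.g. sentence-level extraction in the study reached 44.7% precision and 78.1% recall
print(precision_recall({("IBM", "SAP"), ("IBM", "Acme")}, {("IBM", "SAP"), ("IBM", "Cisco")}))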
APA, Harvard, Vancouver, ISO, and other styles
41

Subramaniam, Sulochana, Jamil Kanfoud, and Tat-Hean Gan. "Zero-Defect Manufacturing and Automated Defect Detection Using Time of Flight Diffraction (TOFD) Images." Machines 10, no. 10 (September 21, 2022): 839. http://dx.doi.org/10.3390/machines10100839.

Full text
Abstract:
Ultrasonic time-of-flight diffraction (TOFD) is a non-destructive testing (NDT) technique for weld inspection that has gained popularity in the industry, due to its ability to detect, position, and size defects based on the time difference of the echo signal. Although the TOFD technique provides high-speed data, ultrasonic data interpretation is typically a manual and time-consuming process, thereby necessitating a trained expert. The main aim of this work is to develop a fully automated defect detection and data interpretation approach that enables predictive maintenance using signal and image processing. Through this research, the characterization of weld defects was achieved by identifying the region of interest from A-scan signals, followed by segmentation. The experimental results were compared with samples of known defect size for validation; it was found that this novel method is capable of automatically measuring the defect size with considerable accuracy. It is anticipated that using such a system will significantly improve inspection speed, safety, and cost-effectiveness.
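The sizing step in TOFD rests on a textbook geometric relation between echo arrival time and defect-tip depth; the sketch below uses that relation (ignoring wedge delays and lateral-wave offsets) and is only an illustration, not the authors' processing pipeline:

import math

def tofd_tip_depth(arrival_time_s, velocity_m_s, half_probe_separation_m):
    """Depth of a diffracting defect tip below the scan surface.

    Uses d = sqrt((c*t/2)**2 - S**2), where t is the diffracted-echo arrival
    time, c the longitudinal wave velocity and 2*S the probe centre separation.
    """
    half_path = velocity_m_s * arrival_time_s / 2.0
    return math.sqrt(max(half_path ** 2 - half_probe_separation_m ** 2, 0.0))

# e.g. a 20 microsecond echo at 5920 m/s with probe centres 80 mm apart
print(tofd_tip_depth(20e-6, 5920.0, 0.040))   # roughly 0.044 m below the surface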
APA, Harvard, Vancouver, ISO, and other styles
42

Ramanauskaitė, Simona, Neringa Urbonaitė, Šarūnas Grigaliūnas, Saulius Preidys, Vaidotas Trinkūnas, and Algimantas Venčkauskas. "Educational Organization’s Security Level Estimation Model." Applied Sciences 11, no. 17 (August 31, 2021): 8061. http://dx.doi.org/10.3390/app11178061.

Full text
Abstract:
During the pandemic, distance learning became a necessity. Most schools and universities were forced to use e-learning tools. The fast transition to distance learning increased the digitalization of the educational system and contributed to a rise in the number of security incidents, as there was no time to estimate the change in security level caused by incorporating new e-learning systems. Notably, preparation for distance learning was accompanied by several limitations: lack of time, lack of resources to manage information technologies and systems, and lack of knowledge on information security management and security level modeling. In this paper, we propose a security level estimation model for educational organizations. The model takes distance learning specifics into account and allows quantitative estimation of an organization’s security level. It is based on 49 criteria values, structured into an AHP (Analytic Hierarchy Process) tree and aggregated into a final security level metric by incorporating criteria importance coefficients based on expert opinion. The proposed criteria tree and the obtained expert opinions lead to an educational organization security level evaluation model that yields a single quantitative metric. It can be used to model different situations and find the better alternative in terms of security level, without relying on external security experts. Use case analysis results and their similarity to security experts’ evaluations are presented in this paper as validation of the proposed model. The results confirm that the model matches the experts’ information security level ranking and can therefore be used for simpler security modeling in educational organizations.
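A minimal Python sketch of the two AHP-style steps described above: deriving criteria weights from an expert pairwise-comparison matrix (row geometric-mean approximation) and aggregating normalised criteria values into a single security level metric. The matrix and criteria values are illustrative, not the study's 49 criteria or its expert judgements.

import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights with the row geometric-mean method.
    pairwise[i][j] says how much more important criterion i is than j (Saaty 1-9 scale)."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

def security_level(values, weights):
    """Weighted sum of normalised criteria values (0..1) into one quantitative metric."""
    return sum(v * w for v, w in zip(values, weights))

pairwise = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]   # three illustrative criteria
w = ahp_weights(pairwise)
print(w, security_level([0.8, 0.6, 0.4], w))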
APA, Harvard, Vancouver, ISO, and other styles
43

Büyüközkan, Gülçin, and Ali Görener. "Evaluation of product development partners using an integrated AHP-VIKOR model." Kybernetes 44, no. 2 (February 2, 2015): 220–37. http://dx.doi.org/10.1108/k-01-2014-0019.

Full text
Abstract:
Purpose – Today, customers are generally perceived to be demanding higher-quality and better-performing products, in shorter and more predictable development cycle-times and at a lower cost. These market pressures drive firms to collaborate with possible partners in product development (PD) processes. However, the selection of a suitable partner for an effective PD is not an easy decision and is associated with complexity. The purpose of this paper is to propose an integrated multi-criteria decision-making (MCDM) approach to effectively evaluate PD partners. Design/methodology/approach – The proposed evaluation procedure consists of several steps. First, based on a literature review and expert validation, the strategic main and sub-criteria of the PD partner selection process that companies consider the most important are identified. After constructing the evaluation criteria hierarchy, the criteria weights are calculated by applying the Analytic Hierarchy Process (AHP) method. The VIKOR (compromise ranking) method is used to obtain the final partner ranking results. A case study is given to demonstrate the potential of the methodology. In the last part of the study, a sensitivity analysis is performed to determine the influence of criteria weights on the decision-making process. Findings – The PD partner evaluation model contains three main criteria, namely, partner, collaboration and PD-oriented criteria, with 13 sub-criteria. Market position, competency of the partner, compatibility, technical expertise and complementarity are found to be the most significant evaluation criteria for the ABC case company. Results of the sensitivity analysis from different cases demonstrate that the integrated AHP-VIKOR model is quite sensitive to the weights assigned to the evaluation criteria. This finding underlines the importance of forming a capable, qualified group of experts for the decision-making procedure. The results of the empirical study show that the proposed evaluation framework is practical for solving partner selection problems. Originality/value – Partner selection is critical to the success of a collaborative PD process. The main contribution of this paper is the definition and development of an effective evaluation framework to guide managers in selecting suitable PD partners. To our knowledge, no study in the literature combines AHP and VIKOR in this way for the PD partner selection problem. This study can help researchers better understand the PD partner selection problem theoretically, and can help organizations design more satisfactory PD partner evaluation systems.
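To make the ranking step concrete, the sketch below is a bare-bones VIKOR computation for benefit criteria, using weights such as those an AHP step might produce; it omits the acceptable-advantage and stability checks of the full method and is only an illustration, not the paper's implementation:

def vikor_q(decision_matrix, weights, v=0.5):
    """Compromise index Q per alternative (lower is better), benefit criteria only."""
    m = len(weights)
    best = [max(row[j] for row in decision_matrix) for j in range(m)]
    worst = [min(row[j] for row in decision_matrix) for j in range(m)]
    S, R = [], []
    for row in decision_matrix:
        d = [weights[j] * (best[j] - row[j]) / ((best[j] - worst[j]) or 1) for j in range(m)]
        S.append(sum(d))   # group utility
        R.append(max(d))   # individual regret
    s_star, s_minus, r_star, r_minus = min(S), max(S), min(R), max(R)
    return [v * (S[i] - s_star) / ((s_minus - s_star) or 1)
            + (1 - v) * (R[i] - r_star) / ((r_minus - r_star) or 1)
            for i in range(len(decision_matrix))]

scores = [[0.9, 0.6, 0.7], [0.7, 0.8, 0.5], [0.8, 0.7, 0.9]]   # partners x criteria (illustrative)
print(vikor_q(scores, [0.5, 0.3, 0.2]))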
APA, Harvard, Vancouver, ISO, and other styles
44

Ranjbarfard, Mina, and Zeynab Hatami. "Critical Success Factors for Implementing Business Intelligence Projects (A BI Implementation Methodology Perspective)." Interdisciplinary Journal of Information, Knowledge, and Management 15 (2020): 175–202. http://dx.doi.org/10.28945/4607.

Full text
Abstract:
Aim/Purpose: The purpose of this paper is to identify Critical Success Factors (CSFs) for Business Intelligence (BI) implementation projects by studying the existing BI project implementation methodologies and to compare these methodologies based on the identified CSFs. Background: The implementation of BI projects has become one of the most important technological and organizational innovations in modern organizations. A BI project implementation methodology provides a framework for demonstrating knowledge, ideas and structural techniques. It is defined as a set of instructions and rules for implementing BI projects. Identifying the CSFs of BI implementation projects can help the project team concentrate on priority issues and the resources needed. Methodology: First, a literature review was conducted to find the existing BI project implementation methodologies. Second, the content of the 13 BI project implementation methodologies was analyzed using the thematic analysis method. Third, to examine the validity of the 20 identified CSFs, two questionnaires were distributed among BI experts. The data gathered from the first questionnaire were analyzed using the content validity ratio (CVR), and 11 of the 20 CSFs were accepted as a result. The data from the second questionnaire were analyzed using the fuzzy Delphi method (a small sketch of this step follows the abstract), and the results matched those of the CVR analysis. Finally, the 13 identified BI project implementation methodologies were compared based on the 11 validated CSFs. Contribution: This paper contributes to the current theory and practice by identifying a complete list of CSFs for BI project implementation; comparison of existing BI project implementation methodologies; determining the completeness degree of existing BI project implementation methodologies and introducing more complete ones; and finding the new CSF “Expert assessment of business readiness for successful implementation of BI project” that was not expressed in previous studies. Findings: The CSFs that should be considered in a BI project implementation include: “Obvious BI strategy and vision”, “Business requirements definition”, “Business readiness assessment”, “BI performance assessment”, “Establishing BI alignment with business goals”, “Management support”, “IT support for BI”, “Creating data resources and source data quality”, “Installation and integration BI programs”, “BI system testing”, and “BI system support and maintenance”. Also, all 13 BI project implementation methodologies can be divided into four groups based on their completeness degree. Recommendations for Practitioners: The results can be used to plan BI project implementation and help improve the way BI projects are implemented in organizations. They can be used to reduce the failure rate of BI implementation projects. Furthermore, the 11 identified CSFs can give a better understanding of the BI project implementation methodologies. Recommendation for Researchers: The results of this research help researchers and practitioners in the field of business intelligence better understand the methodologies and approaches available for the implementation and deployment of BI systems and thus use them. Some methodologies are more complete than others. Therefore, organizations that intend to implement BI can select among these methodologies according to their goals. Thus, the findings of the study can help reduce the failure rate of BI implementation projects.
Future Research: Future researchers may add other BI project implementation methodologies and repeat this research. Also, they can divide the CSFs into three categories: those required before, during, and after BI project implementation. Moreover, researchers can rank the BI project implementation CSFs. In addition, Critical Failure Factors (CFFs) need to be explored by studying failed BI project implementations. The identified CSFs probably affect each other, so studying the relationships between them can be a topic for future research.
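As a sketch of the second validation round mentioned in the abstract, one common fuzzy Delphi aggregation with triangular fuzzy numbers is shown below; the (min, mean, max) aggregation rule and the 0.7 acceptance threshold are typical textbook choices, not necessarily those used in the study:

def fuzzy_delphi_accept(ratings, threshold=0.7):
    """Accept or reject a candidate CSF from expert ratings given as
    triangular fuzzy numbers (l, m, u) on a 0..1 importance scale."""
    ls, ms, us = zip(*ratings)
    aggregate = (min(ls), sum(ms) / len(ms), max(us))   # pessimistic l, mean m, optimistic u
    defuzzified = sum(aggregate) / 3                    # simple centroid of the triangle
    return defuzzified >= threshold, round(defuzzified, 3)

print(fuzzy_delphi_accept([(0.5, 0.7, 0.9), (0.7, 0.9, 1.0), (0.5, 0.75, 1.0)]))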
APA, Harvard, Vancouver, ISO, and other styles
45

Echavarren, Alfonso Urquiza. "A Dynamic Approach to Introduce Competency Frameworks." International Journal of Human Capital and Information Technology Professionals 2, no. 1 (January 2011): 18–32. http://dx.doi.org/10.4018/jhcitp.2011010102.

Full text
Abstract:
Although a wide consensus exists about the potential business benefits derived from Competency-based HR management practices, reality shows that, in practice, Competency Management deployment cases are scarce and difficult to implement. This HR business-related problem directly affects the IT software industry, both in HRMS application development and in consultancy-related services. Market indicators reflect an imbalance between potential organizational benefits and actual application deployment. In this context, defining useful, business-oriented Competency Frameworks has become an important challenge for many organizations willing to progress through continuous HRM improvement processes. This paper addresses the major issues underlying this Competency Management imbalance. A new business-oriented approach proposing an alternative, scope-extended methodology is outlined in this publication, after field validation and wide acceptance from experts in functional HR management and IT systems professionals from various large organizations. Therefore, the findings resulting from this research work have both theoretical and practical implications, helping IT management define efficient HRMS Competency-based applications and deployment strategies.
APA, Harvard, Vancouver, ISO, and other styles
46

Martín-Moncunill, David, Miguel-Ángel Sicilia-Urban, Elena García-Barriocanal, and Salvador Sánchez-Alonso. "Evaluating the degree of domain specificity of terms in large terminologies." Online Information Review 39, no. 3 (June 8, 2015): 326–45. http://dx.doi.org/10.1108/oir-02-2015-0052.

Full text
Abstract:
Purpose – Large terminologies usually contain a mix of terms that are either generic or domain specific, which makes the use of the terminology itself a difficult task that may limit the positive effects of these systems. The purpose of this paper is to systematically evaluate the degree of domain specificity of the AGROVOC controlled vocabulary terms, as a representative of a large terminology in the agricultural domain, and to discuss the generic/specific boundaries across its hierarchy. Design/methodology/approach – A user-oriented study with domain experts in conjunction with quantitative and systematic analysis. First, an in-depth analysis of AGROVOC was carried out to make a proper selection of terms for the experiment. Then domain experts were asked to classify the terms according to their domain specificity. An evaluation was conducted to analyse the domain experts’ results. Finally, the resulting data set was automatically compared with the terms in SUMO, an upper ontology, and MILO, a mid-level ontology, to analyse the overlap. Findings – Results show the existence of a high number of generic terms. The motivation for several of the unclear cases is also discussed. The automatic evaluation showed that there is no direct way to assess the specificity degree of a term by using the SUMO and MILO ontologies; however, it provided additional validation of the results gathered from the domain experts. Research limitations/implications – The “domain-analysis” concept has long been discussed and it can be addressed from different perspectives. A summary of these perspectives and an explanation of the approach followed in this experiment are included in the background section. Originality/value – The authors propose an approach to identify the domain specificity of terms in large domain-specific terminologies and a criterion to measure the overall domain specificity of a knowledge organisation system, based on domain-expert analysis. The authors also provide a first insight into using automated measures to determine the degree to which a given term can be considered domain specific. The resulting data set from the domain experts’ evaluation can be reused as a gold standard for further research on these automatic measures.
APA, Harvard, Vancouver, ISO, and other styles
47

Mohammed, Ahmed Burhan, Ahmad Abdullah Mohammed Al-Mafrji, Moumena Salah Yassen, and Ahmad H. Sabry. "Developing plastic recycling classifier by deep learning and directed acyclic graph residual network." Eastern-European Journal of Enterprise Technologies 2, no. 10 (116) (April 30, 2022): 42–49. http://dx.doi.org/10.15587/1729-4061.2022.254285.

Full text
Abstract:
Recycling is one of the most important approaches to safeguarding the environment, since it aims to reduce waste in landfills while conserving natural resources. Using deep learning networks, such wastes can be automatically classified on the belts of a waste sorting plant. However, a basic set of connected layers may not be adequate to give satisfactory accuracy for such multi-output classification tasks. To optimize gradient flow and enable deeper training of a multi-label classifier network, this study suggests a residual-based deep convolutional neural network. For network training, ten classes have been explored. A Directed Acyclic Graph (DAG) network is a structure whose hidden layers can receive inputs from, and send outputs to, multiple other layers. The DAG network's residual-based architecture features shortcut connections that bypass some layers of the network, allowing parameter gradients to travel freely through the network for deeper training. The methodology includes: 1) preparing the data and creating an augmented image datastore; 2) defining the main serially connected branches of the network architecture; 3) defining the residual interconnections that bypass the main branch layers; 4) defining the layers; and finally 5) creating a residual-based deeper layer graph. The concept is to split the multiclass classification problem into smaller binary tasks, where every classifier acts as an expert by concentrating on discriminating between only two labels, improving total accuracy. The results achieve a training error of 2.861% and a validation error of 9.76%. The training results of this classifier are evaluated by reporting the training error, the validation error, and the confusion matrix of the validation data.
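The residual shortcut the abstract describes can be sketched generically; the PyTorch block below illustrates a connection bypassing the main branch, not the authors' MATLAB DAG network or its exact layer graph:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: the input skips past two conv layers and is added
    back before the final activation, which keeps gradients flowing in deep classifiers."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)   # shortcut connection bypassing the main branch

x = torch.randn(1, 16, 64, 64)
print(ResidualBlock(16)(x).shape)    # same spatial shape, deeper features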
APA, Harvard, Vancouver, ISO, and other styles
48

Harrison, S. R. "Validation of agricultural expert systems." Agricultural Systems 35, no. 3 (January 1991): 265–85. http://dx.doi.org/10.1016/0308-521x(91)90159-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Sekeroglu, Boran, Yoney Kirsal Ever, Kamil Dimililer, and Fadi Al-Turjman. "Comparative Evaluation and Comprehensive Analysis of Machine Learning Models for Regression Problems." Data Intelligence 4, no. 3 (2022): 620–52. http://dx.doi.org/10.1162/dint_a_00155.

Full text
Abstract:
Artificial intelligence and machine learning applications are of significant importance in almost every field of human life, whether to solve problems or to support human experts. However, determining the machine learning model that achieves a superior result for a particular problem within the wide range of real-life application areas is still a challenging task for researchers. The success of a model can be affected by several factors, such as dataset characteristics, training strategy and model responses. Therefore, a comprehensive analysis is required to determine model ability and the efficiency of the considered strategies. This study implemented ten benchmark machine learning models on seventeen varied datasets. Experiments were performed using four different training strategies: 60:40, 70:30, and 80:20 hold-out splits and five-fold cross-validation. We used three evaluation metrics to evaluate the experimental results: mean squared error, mean absolute error, and coefficient of determination (R2 score). The considered models are analyzed, and each model's advantages, disadvantages, and data dependencies are indicated. Across the large number of experiments performed, the deep Long Short-Term Memory (LSTM) neural network outperformed the other considered models, namely, decision tree, linear regression, support vector regression with linear and radial basis function kernels, random forest, gradient boosting, extreme gradient boosting, shallow neural network, and deep neural network. It has also been shown that cross-validation has a tremendous impact on the results of the experiments and should be considered for model evaluation in regression studies where data mining or selection is not performed.
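The evaluation protocol described here (hold-out splits versus five-fold cross-validation, scored with MSE, MAE and R2) maps directly onto standard scikit-learn calls; the snippet below is a generic illustration on synthetic data, not the authors' seventeen datasets or ten models:

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

X, y = make_regression(n_samples=500, n_features=10, noise=0.2, random_state=0)

# 80:20 hold-out strategy
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(mean_squared_error(y_te, pred), mean_absolute_error(y_te, pred), r2_score(y_te, pred))

# five-fold cross-validation on the same estimator, scored by R2
print(cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=5, scoring="r2").mean())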
APA, Harvard, Vancouver, ISO, and other styles
50

Ruin, Thomas, Eric Levrat, Benoît Iung, and Antoine Despujols. "Complex maintenance programs quantification (CMPQ) to better control production systems." Journal of Manufacturing Technology Management 25, no. 4 (April 29, 2014): 491–509. http://dx.doi.org/10.1108/jmtm-04-2013-0042.

Full text
Abstract:
Purpose – The purpose of this paper is to develop a methodology for supporting complex maintenance programs quantification (CMPQ) for industrial systems. The methodology is based on a generic formalization of static and behavioral expert knowledge, both on the target system and on the maintenance system. The formalization is carried out first by means of Systems Modeling Language (SysML) diagrams to model knowledge concepts, and second by the transformation of these concepts into the AltaRica Data Flow (ADF) language for developing stochastic simulation. Design/methodology/approach – An industrial case study (the ARE system) proposed by the Électricité de France (EDF) company is used initially to present a real problem statement on CMPQ. It highlights the key scientific issues considered as the basis for methodology development. The main issues relate to static and dynamic knowledge formalization, justifying the choice of the SysML and ADF languages. The added value of the methodology is finally shown on the same case study, which serves as a benchmark. Findings – This paper demonstrates the suitability of using the SysML language for modelling CMPQ knowledge and then the ADF language for building an executable model implementing the simulation needed to assess key performance indicators of CMPQ. ADF is based on formal mode automata. Mapping rules are developed to ensure correspondence between the concepts of these two languages. Research limitations/implications – Additional industrial validations of the methodology should be performed to fully evaluate its benefits. Practical implications – This work was made possible thanks to a partnership with the EDF company (a French energy supplier). The results are therefore directly usable at practical industrial levels. Originality/value – The proposed CMPQ methodology is fully generic, offering a library of atomic ADF components (COTS) that can be instantiated to develop an executable model for each specific application. This favors reusability and makes model development easier, above all for users unfamiliar with the language.
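The quantification step ultimately amounts to stochastic simulation of the maintained system; as a stand-in for the AltaRica Data Flow models used in the paper, the toy Monte Carlo below estimates the availability of a single repairable component with exponential failure and repair times (all names and parameter values are illustrative):

import random

def availability(mttf, mttr, horizon, n_runs=10_000):
    """Crude Monte Carlo estimate of mean availability over a mission horizon."""
    total_up = 0.0
    for _ in range(n_runs):
        t = up = 0.0
        while t < horizon:
            ttf = random.expovariate(1 / mttf)    # time to next failure
            up += min(ttf, horizon - t)
            t += ttf
            if t >= horizon:
                break
            t += random.expovariate(1 / mttr)     # repair duration (downtime)
        total_up += up
    return total_up / (n_runs * horizon)

print(availability(mttf=1000.0, mttr=24.0, horizon=8760.0))   # close to MTTF / (MTTF + MTTR)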
APA, Harvard, Vancouver, ISO, and other styles