Journal articles on the topic "Automaton inference"

Below are the top 50 journal articles on the topic "Automaton inference".

1

Richetin, M., and M. Naranjo. "Inference of Automata by dialectic learning". Robotica 3, no. 3 (September 1985): 159–63. http://dx.doi.org/10.1017/s0263574700009085.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An algorithm for the inference of the external behaviour model of an automaton is given. It uses a sequential learning procedure based on induction-contradiction-correction concepts. The induction is a generalization of relationships between automaton state properties, and the correction consists in an increasingly accurate discrimination of the automaton state properties. These properties are defined from the contradictory input/output sequences which are discovered after the observed contradictions between successive predictions and observations.
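The induction-contradiction-correction loop described above can be sketched as a toy learner that predicts a hidden machine's outputs from a bounded input history and, on each observed contradiction, lengthens the history it uses to discriminate states. A minimal sketch under hypothetical names (`hidden_machine` is an invented stand-in target, not the paper's model):

```python
def hidden_machine(inputs):
    """Invented target: output 1 iff the current input equals the previous one."""
    out, prev = [], None
    for x in inputs:
        out.append(1 if x == prev else 0)
        prev = x
    return out

def learn(inputs, outputs):
    """Induce a prediction table over the last k inputs; on a contradiction,
    correct by discriminating states more finely (increase k) and re-induce."""
    k = 1
    while True:
        table, hist, consistent = {}, tuple(), True
        for x, y in zip(inputs, outputs):
            hist = (hist + (x,))[-k:]
            if table.setdefault(hist, y) != y:   # contradiction observed
                consistent = False
                break
        if consistent:
            return k, table
        k += 1
```

On the toy target, a history of length 1 is contradicted immediately and the learner settles on length 2, mirroring the "more and more accurate discrimination" step.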
2

HÖGBERG, JOHANNA. "A randomised inference algorithm for regular tree languages". Natural Language Engineering 17, no. 2 (21 March 2011): 203–19. http://dx.doi.org/10.1017/s1351324911000064.

Abstract:
We present a randomised inference algorithm for regular tree languages. The algorithm takes as input two disjoint finite nonempty sets of trees 𝒫 and 𝒩 and outputs a nondeterministic finite tree automaton that accepts every tree in 𝒫 and rejects every tree in 𝒩. The output automaton typically represents a nontrivial generalisation of the examples given in 𝒫 and 𝒩. To obtain compact output automata, we use a heuristic similar to bisimulation minimisation. The algorithm has time complexity O(n𝒩 · n𝒫²), where n𝒩 and n𝒫 are the sizes of 𝒩 and 𝒫, respectively. Experiments are conducted on a prototype implementation, and the empirical results appear to support the theoretical results.
3

Wieczorek, Wojciech, Tomasz Jastrzab, and Olgierd Unold. "Answer Set Programming for Regular Inference". Applied Sciences 10, no. 21 (30 October 2020): 7700. http://dx.doi.org/10.3390/app10217700.

Abstract:
We propose an approach to non-deterministic finite automaton (NFA) inductive synthesis that is based on answer set programming (ASP) solvers. To that end, we explain how an NFA and its response to input samples can be encoded as rules in a logic program. We then ask an ASP solver to find an answer set for the program, which we use to extract the automaton of the required size. We conduct a series of experiments on some benchmark sets, using the implementation of our approach. The results show that our method outperforms, in terms of CPU time, a SAT approach and other exact algorithms on all benchmarks.
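The paper hands the fixed-size NFA search to an ASP solver; as an illustration of the underlying search problem only (not the authors' encoding), a brute-force sketch that enumerates tiny transition relations until one is consistent with the sample:

```python
from itertools import product

def accepts(delta, finals, word):
    """Run a nondeterministic automaton (start state 0) on a word."""
    states = {0}
    for sym in word:
        states = {q for s in states for q in delta.get((s, sym), ())}
    return bool(states & finals)

def find_min_nfa(pos, neg, alphabet, max_states=2):
    """Enumerate transition relations of increasing size until one
    accepts every word in pos and rejects every word in neg."""
    for n in range(1, max_states + 1):
        pairs = [(s, a) for s in range(n) for a in alphabet]
        for masks in product(range(2 ** n), repeat=len(pairs)):
            delta = {p: frozenset(q for q in range(n) if m >> q & 1)
                     for p, m in zip(pairs, masks)}
            for fmask in range(1, 2 ** n):
                finals = {q for q in range(n) if fmask >> q & 1}
                if all(accepts(delta, finals, w) for w in pos) and \
                   not any(accepts(delta, finals, w) for w in neg):
                    return n, delta, finals
    return None
```

`find_min_nfa` returns the smallest state count for which a consistent NFA exists, mirroring the "required size" constraint handed to the solver; a real solver explores this space symbolically rather than by enumeration.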
4

Grachev, Petr, Sergey Muravyov, Andrey Filchenkov, and Anatoly Shalyto. "Automata generation based on recurrent neural networks and automated cauterization selection". Information and Control Systems, no. 1 (19 February 2020): 34–43. http://dx.doi.org/10.31799/1684-8853-2020-1-34-43.

Abstract:
Introduction: The regular inference problem is to synthesize deterministic finite-state automata from a list of words which are examples and counterexamples of some unknown regular language. This problem is one of the main problems in the theory of formal languages and related fields. One of the most successful solutions to this problem is training a recurrent neural network on word classification and clustering the vectors in the space of RNN inner weights. However, it is not guaranteed that a consistent automaton can be constructed based on the clustering results. More complex models require more memory, training time and training samples. Purpose: Creating a brand new grammar inference algorithm which would use modern machine learning methods. Methods: A recurrent neural network with an error function proposed by the authors was used for classification. For clustering, the method of joint selection and tuning of hyperparameters was used. Results: Ten different datasets were used for testing the models, corresponding to ten different regular grammars and ten automata. According to the test results, the developed model successfully synthesizes automata with no more than five input characters and states. For four grammars, out of the seven successfully inferred ones, the constructed automaton was minimal. For three datasets, an automaton could not be built, either because of an insufficient number of clusters in the proposed partition, or because of the inability to build a consistent automaton for this partition. Discussion: Applying an algorithm that searches for maximum likelihood between the clusters of vectors and the corresponding states, in order to resolve structural conflicts, may expand the scope of the model.
5

Topper, Noah, George Atia, Ashutosh Trivedi, and Alvaro Velasquez. "Active Grammatical Inference for Non-Markovian Planning". Proceedings of the International Conference on Automated Planning and Scheduling 32 (13 June 2022): 647–51. http://dx.doi.org/10.1609/icaps.v32i1.19853.

Abstract:
Planning in finite stochastic environments is canonically posed as a Markov decision process where the transition and reward structures are explicitly known. Reinforcement learning (RL) lifts the explicitness assumption by working with sampling models instead. Further, with the advent of reward machines, we can relax the Markovian assumption on the reward. Angluin's active grammatical inference algorithm L* has found novel application in explicating reward machines for non-Markovian RL. We propose maintaining the assumption of explicit transition dynamics, but with an implicit non-Markovian reward signal, which must be inferred from experiments. We call this setting non-Markovian planning, as opposed to non-Markovian RL. The proposed approach leverages L* to explicate an automaton structure for the underlying planning objective. We exploit the environment model to learn an automaton faster and integrate it with value iteration to accelerate the planning. We compare against recent non-Markovian RL solutions which leverage grammatical inference, and establish complexity results that illustrate the difference in runtime between grammatical inference in planning and RL settings.
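Angluin's L*, which the paper builds on, can be compressed to a few lines when the equivalence query is replaced by a brute-force check over short words. A hedged sketch only (the consistency-handling and counterexample analysis of full L* are omitted; names and the word-length bound are hypothetical):

```python
from itertools import product

def lstar(member, alphabet, max_len=5):
    """Toy L*: membership oracle plus a brute-force equivalence check."""
    S, E = [""], [""]                       # access strings, distinguishing suffixes
    row = lambda s: tuple(member(s + e) for e in E)
    while True:
        changed = True                      # close the observation table
        while changed:
            changed = False
            rows = {row(s) for s in S}
            for s in list(S):
                for a in alphabet:
                    if row(s + a) not in rows:
                        S.append(s + a)
                        rows.add(row(s + a))
                        changed = True
        rep = {row(s): s for s in S}        # one representative per hypothesis state
        def accepts(w):
            cur = ""
            for c in w:
                cur = rep[row(cur + c)]
            return member(cur)
        cex = None                          # brute-force equivalence query
        for n in range(max_len + 1):
            for tup in product(alphabet, repeat=n):
                w = "".join(tup)
                if accepts(w) != member(w):
                    cex = w
                    break
            if cex is not None:
                break
        if cex is None:
            return rep, accepts
        for i in range(len(cex) + 1):       # add every prefix of the counterexample
            if cex[:i] not in S:
                S.append(cex[:i])
```

In the paper's setting the membership oracle is replaced by reward-signal experiments and the equivalence check by the explicit environment model.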
6

Di, Chong, Fangqi Li, Shenghong Li, and Jianwei Tian. "Bayesian inference based learning automaton scheme in Q-model environments". Applied Intelligence 51, no. 10 (10 March 2021): 7453–68. http://dx.doi.org/10.1007/s10489-021-02230-8.

7

CHTOUROU, MOHAMED, MAHER BEN JEMAA, and RAOUF KETATA. "A learning-automaton-based method for fuzzy inference system identification". International Journal of Systems Science 28, no. 9 (July 1997): 889–96. http://dx.doi.org/10.1080/00207729708929451.

8

Senthil Kumar, K., and D. Malathi. "Context Free Grammar Identification from Positive Samples". International Journal of Engineering & Technology 7, no. 3.12 (20 July 2018): 1096. http://dx.doi.org/10.14419/ijet.v7i3.12.17768.

Abstract:
In grammatical inference one aims to find an underlying grammar or automaton which explains the target language in some way. Context-free grammar, which represents type-2 grammar in the Chomsky hierarchy, has many applications in formal language theory, pattern recognition, speech recognition, machine learning, compiler design, genetic engineering, etc. Identification of an unknown context-free grammar of the target language from positive examples is an extensive area in grammatical inference/grammar induction. In this paper we propose a novel method which finds the equivalent Chomsky normal form.
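Conversion to Chomsky normal form, which the method above targets, consists essentially of lifting terminals into fresh nonterminals and binarizing long right-hand sides. A toy sketch under simplifying assumptions (no epsilon or unit productions; terminals are single lowercase letters; the fresh-name pool is hypothetical):

```python
def to_cnf(grammar):
    """Toy CNF conversion: grammar maps a nonterminal to a list of
    right-hand sides, each a tuple of symbols."""
    new = {}
    fresh = iter("XYZUVW")                   # small hypothetical pool of fresh names
    term_map = {}
    def lift(sym):
        if sym.islower():                    # terminal: introduce T_sym -> sym
            if sym not in term_map:
                t = "T_" + sym
                new.setdefault(t, []).append((sym,))
                term_map[sym] = t
            return term_map[sym]
        return sym
    for lhs, rhss in grammar.items():
        for rhs in rhss:
            if len(rhs) == 1:                # single terminal: already CNF-shaped
                new.setdefault(lhs, []).append(rhs)
                continue
            syms = [lift(s) for s in rhs]
            while len(syms) > 2:             # binarize long right-hand sides
                n = next(fresh)
                new.setdefault(n, []).append((syms[0], syms[1]))
                syms = [n] + syms[2:]
            new.setdefault(lhs, []).append(tuple(syms))
    return new
```

For example, S → aSb becomes S → X T_b with X → T_a S, T_a → a, T_b → b.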
9

Kosala, Raymond, Hendrik Blockeel, Maurice Bruynooghe, and Jan Van den Bussche. "Information extraction from structured documents using k-testable tree automaton inference". Data & Knowledge Engineering 58, no. 2 (August 2006): 129–58. http://dx.doi.org/10.1016/j.datak.2005.05.002.

10

Tîrnăucă, Cristina. "A Survey of State Merging Strategies for DFA Identification in the Limit". Triangle, no. 8 (29 June 2018): 121. http://dx.doi.org/10.17345/triangle8.121-136.

Abstract:
Identification of deterministic finite automata (DFAs) has an extensive history, both in passive learning and in active learning. Intractability results by Gold [5] and Angluin [1] show that finding the smallest automaton consistent with a set of accepted and rejected strings is NP-complete. Nevertheless, a lot of work has been done on learning DFAs from examples within specific heuristics, starting with Trakhtenbrot and Barzdin's algorithm [15], rediscovered and applied to the discipline of grammatical inference by Gold [5]. Many other algorithms have been developed, the convergence of most of which is based on characteristic sets: RPNI (Regular Positive and Negative Inference) by J. Oncina and P. García [11, 12], Traxbar by K. Lang [8], EDSM (Evidence Driven State Merging), Windowed EDSM and Blue-Fringe EDSM by K. Lang, B. Pearlmutter and R. Price [9], SAGE (Self-Adaptive Greedy Estimate) by H. Juillé [7], etc. This paper provides a comprehensive study of the most important state merging strategies developed so far.
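All of the state-merging strategies surveyed above start from the same object: the prefix tree acceptor (PTA) built from the positive sample, whose states are then merged in some order while checking the negative sample. A minimal sketch of the PTA construction only (the merging policy is what distinguishes RPNI, EDSM, and the others):

```python
def prefix_tree_acceptor(positives):
    """Build the PTA: one state per distinct prefix of the positive sample.
    Returns (delta, finals) with state 0 as the root (empty prefix)."""
    delta, finals = {}, set()
    next_state = 1
    for word in positives:
        q = 0
        for sym in word:
            if (q, sym) not in delta:
                delta[(q, sym)] = next_state
                next_state += 1
            q = delta[(q, sym)]
        finals.add(q)                # the state reached by a positive word accepts
    return delta, finals
```

A state-merging learner then repeatedly collapses pairs of PTA states and keeps a merge only if the resulting automaton still rejects every negative example.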
11

Jastrzab, Tomasz, Zbigniew J. Czech, and Wojciech Wieczorek. "Parallel Algorithms for Minimal Nondeterministic Finite Automata Inference". Fundamenta Informaticae 178, no. 3 (15 January 2021): 203–27. http://dx.doi.org/10.3233/fi-2021-2004.

Abstract:
The goal of this paper is to develop parallel algorithms that, given a learning sample, identify a regular language by means of a nondeterministic finite automaton (NFA). A sample is a pair of finite sets containing positive and negative examples. Given a sample, a minimal NFA that represents the target regular language is sought. We define the task of finding an NFA which accepts all positive examples and rejects all negative ones as a constraint satisfaction problem, and then propose parallel algorithms to solve the problem. The results of comprehensive computational experiments on a variety of inference tasks are reported. The question of minimizing an NFA consistent with a learning sample is computationally hard.
12

Kubota, Naoyuki, Yusuke Nojima, Fumio Kojima, Toshio Fukuda, and Susumu Shibata. "Path Planning and Control for a Flexible Transfer System". Journal of Robotics and Mechatronics 12, no. 2 (20 April 2000): 103–9. http://dx.doi.org/10.20965/jrm.2000.p0103.

Abstract:
We studied intelligent control of a self-organizing manufacturing system (SOMS) composed of modules that self-organize based on time-series information from other modules and the environment. Modules create output through interaction with other modules. We discuss intelligent control and path planning in a manufacturing line of conveyor units and machining centers. A genetic algorithm is applied to conveyor pallet path planning in global decision making, and a learning automaton is applied to local conveyor decision making. We use simplified fuzzy inference to control the pallet-providing interval, verifying its feasibility by simulation.
13

Haneef, Farah, and Muddassar A. Sindhu. "DLIQ: A Deterministic Finite Automaton Learning Algorithm through Inverse Queries". Information Technology and Control 51, no. 4 (12 December 2022): 611–24. http://dx.doi.org/10.5755/j01.itc.51.4.31394.

Abstract:
Automaton learning has attained renewed interest in many areas of software engineering, including formal verification, software testing and model inference. An automaton learning algorithm typically learns the regular language of a DFA with the help of queries. These queries are posed by the learner (learning algorithm) to a Minimally Adequate Teacher (MAT). The MAT can generally answer two types of queries asked by the learning algorithm: membership queries and equivalence queries. Learning algorithms can be categorized into two broad categories, incremental and complete learning algorithms, and likewise can be designed for 1-bit or k-bit learning. Existing automaton learning algorithms have polynomial (at least cubic) time complexity in the presence of a MAT, and therefore sometimes fail to learn large complex software systems. In this research work, we reduce the complexity of Deterministic Finite Automaton (DFA) learning from cubic to quadratic. For this, we introduce an efficient complete DFA learning algorithm through Inverse Queries (DLIQ), based on the concept of inverse queries introduced by John Hopcroft for state minimization of a DFA. The DLIQ algorithm takes O(|Ps||F|+|Σ|N) time in the presence of a MAT which is also equipped to answer inverse queries. We give a theoretical analysis of the proposed algorithm along with a proof of correctness and termination of the DLIQ algorithm. We also compare the performance of DLIQ with the ID algorithm by implementing an evaluation framework. Our results show that DLIQ is more efficient than the ID algorithm in terms of time complexity.
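An inverse query asks, in the spirit of Hopcroft's minimization, which states map into a given state under a given symbol. A minimal sketch of the inverse transition table such a teacher could answer from (the DLIQ bookkeeping itself is not reproduced here):

```python
def inverse_transitions(delta, states, alphabet):
    """inv[(q, a)] = set of states p with delta[(p, a)] == q, i.e. the
    predecessors of q under symbol a in a DFA."""
    inv = {(q, a): set() for q in states for a in alphabet}
    for (p, a), q in delta.items():
        inv[(q, a)].add(p)
    return inv
```

Hopcroft's algorithm walks this table backwards from a splitter set; an inverse-query teacher exposes the same predecessor information to the learner.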
14

Meddah, Ishak, and Belkadi Khaled. "Discovering Patterns using Process Mining". International Journal of Rough Sets and Data Analysis 3, no. 4 (October 2016): 21–31. http://dx.doi.org/10.4018/ijrsda.2016100102.

Abstract:
Process mining provides an important bridge between data mining and business process analysis; its techniques allow for extracting information from event logs. In general, there are two steps in process mining: correlation definition or discovery, and then process inference or composition. Firstly, the authors' work consists in mining small patterns from the log traces of two applications, SKYPE and VIBER; those patterns are the representation of the execution traces of a business process. In this step, the authors use existing techniques. The patterns are represented by finite state automata or their regular expressions; the final model is the combination of only two types of small patterns, which are represented by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel, and then combine those small patterns using the composition rules. The approach has two parts: the first is the mining step, in which they discover patterns from execution traces, and the second is the combination of these small patterns. The pattern mining and the composition are illustrated by existing automaton techniques. The execution traces are the different actions effected by users in SKYPE and VIBER. The results are general and precise, and the approach minimizes the execution time and the loss of information.
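The two pattern families named above, (ab)* and (ab*c)*, can be checked against execution traces directly as regular expressions; a small sketch (the trace strings are hypothetical placeholders for the logged actions):

```python
import re

# the two pattern families the paper combines, as regular expressions
PATTERNS = {"(ab)*": re.compile(r"(ab)*"),
            "(ab*c)*": re.compile(r"(ab*c)*")}

def classify(trace):
    """Names of the pattern families that a whole execution trace matches."""
    return [name for name, rx in PATTERNS.items() if rx.fullmatch(trace)]
```

The empty trace matches both families, so composition rules are needed to decide which building block a segment belongs to.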
15

Meddah, Ishak H. A., Nour Elhouda Remil, and Hadja Nebia Meddah. "Novel Approach for Mining Patterns". International Journal of Applied Evolutionary Computation 12, no. 1 (January 2021): 27–42. http://dx.doi.org/10.4018/ijaec.2021010103.

Abstract:
Process mining techniques allow for extracting information from event logs. In general, there are two steps in process mining: correlation definition or discovery, and then process inference or composition. Firstly, the work consists in mining small patterns from log traces; those patterns are the representation of the execution traces from a log file of a business process. In this step, the authors use existing techniques. The patterns are represented by finite state automata or their regular expressions. The final model is the combination of only two types of small patterns, which are represented by the regular expressions. Secondly, they compute these patterns in parallel and then combine those small patterns using the MapReduce framework. The approach has two parts: the first is the map step, in which they mine patterns from execution traces; the second is the combination of these small patterns as a reduce step. The results are promising; they show that the approach is scalable, general, and precise. It minimizes the execution time by using the MapReduce framework.
16

Iyer, Padmavathi, and Amirreza Masoumzadeh. "Learning Relationship-Based Access Control Policies from Black-Box Systems". ACM Transactions on Privacy and Security 25, no. 3 (31 August 2022): 1–36. http://dx.doi.org/10.1145/3517121.

Abstract:
Access control policies are crucial in securing data in information systems. Unfortunately, such policies are often poorly documented, and gaps between their specification and implementation prevent the system users, and even its developers, from understanding the overall enforced policy of a system. To tackle this problem, we propose a first-of-its-kind systematic approach for learning the enforced authorizations from a target system by interacting with and observing it as a black box. The black-box view of the target system provides the advantage of learning its overall access control policy without dealing with its internal design complexities. Furthermore, compared to the previous literature on policy mining and policy inference, we avoid exhaustive exploration of the authorization space by minimizing our observations. We focus on learning relationship-based access control (ReBAC) policies, and show how we can construct a deterministic finite automaton (DFA) to formally characterize such an enforced policy. We theoretically analyze our proposed learning approach by studying its termination, correctness, and complexity. Furthermore, we conduct extensive experimental analysis based on realistic application scenarios to establish its cost, quality of learning, and scalability in practice.
17

Miguel, Juan Cristian Daniel, Andrés Chimuris Gimenez, Nicolás Garrido, Matias Bassi, Gabriela Velazquez, and Marisa Panizzi. "State of the art on the conceptual modeling of serious games through a systematic mapping of the literature". Journal of Computer Science and Technology 22, no. 2 (17 October 2022): e13. http://dx.doi.org/10.24215/16666038.22.e13.

Abstract:
Serious games are games whose objective is to stimulate learning or the acquisition of knowledge or a skill. Currently, there is a trend in the market towards the generation of this type of game. Given the importance of conceptualizing the domain of a problem and its solution, this paper presents the results of a Systematic Mapping Study (SMS) of the literature, with the purpose of identifying the state of the art and discovering the existing contributions regarding the conceptual modeling of serious games. A search was carried out in the Scopus, IEEE Xplore and ACM digital libraries from January 2010 to June 2021. Of a total of 558 articles identified, 31 primary studies were analyzed. It was evidenced that the use of UML prevails for the modeling of serious games, mainly for class and activity diagrams, together with other languages such as UP4EG, DSML, Deterministic Finite Automaton (DFA), Discrete Event System Specification (DEVS) and Fuzzy Inference Systems (FIS). Thirty percent of the primary studies propose a framework and another 30% propose a development methodology. Most of these frameworks do not specify how to perform conceptual modeling.
18

Álvez, Javier, Montserrat Hermo, Paqui Lucio, and German Rigau. "Automatic white-box testing of first-order logic ontologies". Journal of Logic and Computation 29, no. 5 (26 February 2019): 723–51. http://dx.doi.org/10.1093/logcom/exz001.

Abstract:
Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies is generating considerable research on the use of automated reasoning techniques and tools that help in ontology engineering. One of the main aims is to refine and improve axiomatizations so that automated reasoning tools can efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences, but can also hinder the inference of expected information, either by increasing its computational cost or by preventing the inference altogether. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic (FOL) ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since (i) the automated generation of tests is guided only by the syntax of axioms and (ii) the evaluation of tests is performed by automated theorem provers (ATPs). Our proposal enables the detection of defects and serves to certify the grade of suitability, for reasoning purposes, of every axiom. We formally define the set of tests that are (automatically) generated from any axiom and prove that every test is logically related to redundancies in the axiom from which the test has been generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various FOL ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found and how each proof, given by an ATP, provides useful hints on the nature of each defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
19

Akbayrak, Semih, Ivan Bocharov, and Bert de Vries. "Extended Variational Message Passing for Automated Approximate Bayesian Inference". Entropy 23, no. 7 (26 June 2021): 815. http://dx.doi.org/10.3390/e23070815.

Abstract:
Variational Message Passing (VMP) provides an automatable and efficient algorithmic framework for approximating Bayesian inference in factorized probabilistic models that consist of conjugate exponential family distributions. The automation of Bayesian inference tasks is very important since many data processing problems can be formulated as inference tasks on a generative probabilistic model. However, accurate generative models may also contain deterministic and possibly nonlinear variable mappings and non-conjugate factor pairs that complicate the automatic execution of the VMP algorithm. In this paper, we show that executing VMP in complex models relies on the ability to compute the expectations of the statistics of hidden variables. We extend the applicability of VMP by approximating the required expectation quantities in appropriate cases by importance sampling and Laplace approximation. As a result, the proposed Extended VMP (EVMP) approach supports automated efficient inference for a very wide range of probabilistic model specifications. We implemented EVMP in the Julia language in the probabilistic programming package ForneyLab.jl and show by a number of examples that EVMP renders an almost universal inference engine for factorized probabilistic models.
20

Meddah, Ishak H. A., and Nour El Houda REMIL. "Parallel and Distributed Pattern Mining". International Journal of Rough Sets and Data Analysis 6, no. 3 (July 2019): 1–17. http://dx.doi.org/10.4018/ijrsda.2019070101.

Abstract:
The treatment of large data is difficult, and the arrival of the MapReduce framework appears to be a solution to this problem. This framework can be used to analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in a cloud or a large set of machines. Process mining provides an important bridge between data mining and business process analysis. Its techniques allow for extracting information from event logs. Generally, there are two steps in process mining: correlation definition or discovery, and inference or composition. First of all, the work mines small patterns from log traces. Those patterns are the representation of the execution traces from a log file of a business process. In this step, the authors use existing techniques. The patterns are represented by finite state automata or their regular expressions, and the final model is the combination of only two types of different patterns, which are represented by the regular expressions (ab)* and (ab*c)*. Second, they compute these patterns in parallel, and then combine those small patterns using the Hadoop framework. The approach has two steps: the first is the map step, through which they mine patterns from execution traces; the second is the combination of these small patterns as a reduce step. The results show that the approach is scalable, general and precise. It minimizes the execution time by using the Hadoop framework.
21

Bar-Haim, Roy, Ido Dagan, and Jonathan Berant. "Knowledge-Based Textual Inference via Parse-Tree Transformations". Journal of Artificial Intelligence Research 54 (9 September 2015): 1–57. http://dx.doi.org/10.1613/jair.4584.

Abstract:
Textual inference is an important component in many applications for understanding natural language. Classical approaches to textual inference rely on logical representations for meaning, which may be regarded as "external" to the natural language itself. However, practical applications usually adopt shallower lexical or lexical-syntactic representations, which correspond closely to language structure. In many cases, such approaches lack a principled meaning representation and inference framework. We describe an inference formalism that operates directly on language-based structures, particularly syntactic parse trees. New trees are generated by applying inference rules, which provide a unified representation for varying types of inferences. We use manual and automatic methods to generate these rules, which cover generic linguistic structures as well as specific lexical-based inferences. We also present a novel packed data-structure and a corresponding inference algorithm that allows efficient implementation of this formalism. We proved the correctness of the new algorithm and established its efficiency analytically and empirically. The utility of our approach was illustrated on two tasks: unsupervised relation extraction from a large corpus, and the Recognizing Textual Entailment (RTE) benchmarks.
22

Meddah, Ishak H. A., and Khaled Belkadi. "Parallel Distributed Patterns Mining Using Hadoop MapReduce Framework". International Journal of Grid and High Performance Computing 9, no. 2 (April 2017): 70–85. http://dx.doi.org/10.4018/ijghpc.2017040105.

Abstract:
The treatment of large data is proving more difficult along different axes, but the arrival of the MapReduce framework offers a solution to this problem. With it we can analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in a cloud or a large set of machines, while process mining provides an important bridge between data mining and business process analysis. Process mining techniques allow for extracting information from event logs. In general, there are two steps in process mining: correlation definition or discovery, and process inference or composition. Firstly, the authors' work consists in mining small patterns from log traces. Those patterns are the representation of the execution traces from a log file of a business process. In this step, they use existing techniques. The patterns are represented by finite state automata or their regular expressions. The final model is the combination of only two types of small patterns, which are represented by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel, and then combine those small patterns using the MapReduce framework. The approach has two parts: the first is the map step, in which they mine patterns from execution traces; the second is the combination of these small patterns as a reduce step. The authors' results are promising in that they show that their approach is scalable, general, and precise. It minimizes the execution time by using the MapReduce framework.
23

Chauhan, Uttam, Shrusti Shah, Dharati Shiroya, Dipti Solanki, Zeel Patel, Jitendra Bhatia, Sudeep Tanwar, Ravi Sharma, Verdes Marina, and Maria Simona Raboaca. "Modeling Topics in DFA-Based Lemmatized Gujarati Text". Sensors 23, no. 5 (1 March 2023): 2708. http://dx.doi.org/10.3390/s23052708.

Abstract:
Topic modeling is a statistics-based machine learning technique that follows unsupervised learning to map a high-dimensional corpus to a low-dimensional topical subspace. A topic model's topic is expected to be interpretable as a concept, i.e., to correspond to human understanding of a topic occurring in texts. While discovering corpus themes, inference constantly uses the vocabulary, whose size impacts topic quality. Practically all topic models rely on co-occurrence signals between various terms in the corpus, since words that frequently appear in the same sentence are likely to share a latent topic. In languages with extensive inflectional morphology, the abundance of distinct tokens weakens the topics; lemmatization is often used to preempt this problem. Gujarati is one of the morphologically rich languages, as a word may have several inflectional forms. This paper proposes a deterministic finite automaton (DFA) based lemmatization technique for the Gujarati language that transforms lemmas into their root words. The set of topics is then inferred from this lemmatized corpus of Gujarati text. We employ statistical divergence measurements to identify semantically less coherent (overly general) topics. The results show that the lemmatized Gujarati corpus learns more interpretable and meaningful topics than unlemmatized text. Finally, the results show that lemmatization decreases the vocabulary size by 16% and improves the semantic coherence for all three measurements (Log Conditional Probability, Pointwise Mutual Information, and Normalized Pointwise Mutual Information) from −9.39 to −7.49, −6.79 to −5.18, and −0.23 to −0.17, respectively.
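A DFA-based lemmatizer of the kind described can be sketched as a trie over reversed suffixes that strips the longest matching inflectional ending. The sketch below uses invented English-style suffixes purely as a stand-in; the paper's Gujarati suffix inventory is not reproduced:

```python
def build_suffix_dfa(suffixes):
    """Trie over reversed suffixes, acting as a DFA whose accepting nodes
    record the length of a strippable ending (suffix list is hypothetical)."""
    trie = {}
    for suf in suffixes:
        node = trie
        for ch in reversed(suf):
            node = node.setdefault(ch, {})
        node["#"] = len(suf)              # accepting marker: suffix length
    return trie

def lemmatize(word, trie, min_stem=2):
    """Strip the longest matching suffix, keeping at least min_stem characters."""
    node, best = trie, 0
    for ch in reversed(word):
        if ch not in node:
            break
        node = node[ch]
        best = node.get("#", best)        # remember the longest match so far
    return word[:-best] if best and len(word) - best >= min_stem else word
```

Running the word backwards through the automaton finds the longest ending in one pass, which is why a DFA (rather than per-suffix string comparison) keeps lemmatization linear in word length.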
24

Jadkar, Vinayak, Mayur Khandate, Vedant Gampawar, Pritesh Bhutada, and Prof. Leena Deshpande. "Robotic Process Automation for Stock Selection Process and Price Prediction Model using Machine Learning Techniques". International Journal on Recent and Innovation Trends in Computing and Communication 10, no. 7 (31 July 2022): 50–57. http://dx.doi.org/10.17762/ijritcc.v10i7.5569.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Over the last few years, participation in financial markets has increased tremendously, and more robotic process automation (RPA) jobs have emerged; the scope and demand in both domains are clearly growing. In the stock market, the main goal is predicting stock prices or direction and making profits, whereas in RPA, tasks performed on a regular basis are converted into automated or semi-automated form. In this paper we combine both: we develop a price prediction model using machine learning techniques and automate the stock selection process through technical screeners depending on user requirements. Stacked LSTM and Bi-directional LSTM techniques are used for prediction, and the RPA tool Automation Anywhere is used for the automation part. Evaluation metrics and graph plots are compared across models, and the advantages and disadvantages of systems with and without RPA practices are discussed. Price prediction plots are analyzed for stocks of different sectors with the highest market capitalization, and the results, analysis, and inferences are stated.
25

Ge, Hui, Keyan Gao, Shaoqiong Li, Wei Wang, Qiang Chen, Xialv Lin, Ziyi Huan, Xuemei Su, and Xu Yang. "An Automatic Approach Designed for Inference of the Underlying Cause-of-Death of Citizens". International Journal of Environmental Research and Public Health 18, no. 5 (March 2, 2021): 2414. http://dx.doi.org/10.3390/ijerph18052414.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
It is very important to have a comprehensive understanding of the health status of a country’s population, which helps to develop corresponding public health policies. Correct inference of the underlying cause-of-death for citizens is essential to achieving such an understanding. Traditionally, this relies mainly on manual methods based on medical staff’s experience, which require substantial resources and are not very efficient. In this work, we present our efforts to construct an automatic method to infer the underlying causes-of-death for citizens. A sink algorithm is introduced that performs this inference automatically. The results show that our sink algorithm generates reasonable output and outperforms other state-of-the-art algorithms. We believe it could greatly enhance the efficiency of correctly inferring the underlying causes-of-death for citizens.
26

Meddah, Ishak H. A., Khaled Belkadi, and Mohamed Amine Boudia. "Parallel Mining Small Patterns from Business Process Traces". International Journal of Software Science and Computational Intelligence 8, no. 1 (January 2016): 32–45. http://dx.doi.org/10.4018/ijssci.2016010103.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Hadoop MapReduce was developed to handle big data and parallel processing; with this framework the authors analyze and process large volumes of data. It is based on distributing the work in two main steps, map and reduce, across a cluster of machines. The authors apply the MapReduce framework to problems in process mining, which provides a bridge between data mining and business process analysis by mining information from process traces. Process mining involves two steps: correlation definition and process inference. The work first mines patterns, i.e., the workflow of the process, from execution traces; these patterns represent the work or history of each part of the process. The small patterns are represented by finite state automata or their regular expressions, (ab)* and (ab*c)*; only two patterns are used to simplify the process, and the general presentation of the process is the combination of the small mined patterns. Second, the patterns are computed and combined using the Hadoop MapReduce framework: in the Map step, small patterns (models) are mined from the business process, and in the Reduce step the models are combined. The authors use the business processes of two applications, SKYPE and VIBER. The overall results show that the parallel distributed process using the Hadoop MapReduce framework is scalable and minimizes execution time.
27

Zhang, Sheng, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. "Ordinal Common-sense Inference". Transactions of the Association for Computational Linguistics 5 (December 2017): 379–95. http://dx.doi.org/10.1162/tacl_a_00068.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly. We propose an evaluation of automated common-sense inference based on an extension of recognizing textual entailment: predicting ordinal human responses on the subjective likelihood of an inference holding in a given context. We describe a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task. We train a neural sequence-to-sequence model on this dataset, which we use to score and generate possible inferences. Further, we annotate subsets of previously established datasets via our ordinal annotation protocol in order to then analyze the distinctions between these and what we have constructed.
28

Rahozin, D. V. "Models of concurrent program running in resource constrained environment". PROBLEMS IN PROGRAMMING, no. 2-3 (September 2020): 149–56. http://dx.doi.org/10.15407/pp2020.02-03.149.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The paper considers concurrent program modeling using resource-constrained automata. Several software samples are considered: real-time operating systems, video processing including object recognition, neural network inference, and common linear-system solving methods for modeling physical processes. Source code annotation and automatic extraction of program resource constraints with the help of profiling software are considered; this enables modeling of concurrent software behavior with minimal user assistance.
29

Labayen, Mikel, Laura Medina, Fernando Eizaguirre, José Flich, and Naiara Aginako. "HPC Platform for Railway Safety-Critical Functionalities Based on Artificial Intelligence". Applied Sciences 13, no. 15 (August 7, 2023): 9017. http://dx.doi.org/10.3390/app13159017.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The automation of railroad operations is a rapidly growing industry. In 2023, a new European standard for the automated Grade of Automation (GoA) 2 over European Train Control System (ETCS) driving is anticipated. Meanwhile, railway stakeholders are already planning their research initiatives for driverless and unattended autonomous driving systems. As a result, the industry is particularly active in research regarding perception technologies based on Computer Vision (CV) and Artificial Intelligence (AI), with outstanding results at the application level. However, executing high-performance and safety-critical applications on embedded systems and in real-time is a challenge. There are not many commercially available solutions, since High-Performance Computing (HPC) platforms are typically seen as being beyond the business of safety-critical systems. This work proposes a novel safety-critical and high-performance computing platform for CV- and AI-enhanced technology execution used for automatic accurate stopping and safe passenger transfer railway functionalities. The resulting computing platform is compatible with the majority of widely-used AI inference methodologies, AI model architectures, and AI model formats thanks to its design, which enables process separation, redundant execution, and HW acceleration in a transparent manner. The proposed technology increases the portability of railway applications into embedded systems, isolates crucial operations, and effectively and securely maintains system resources.
30

Alves, Luís Q., Raquel Ruivo, Miguel M. Fonseca, Mónica Lopes-Marques, Pedro Ribeiro, and L. Filipe C. Castro. "PseudoChecker: an integrated online platform for gene inactivation inference". Nucleic Acids Research 48, W1 (May 25, 2020): W321–W331. http://dx.doi.org/10.1093/nar/gkaa408.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Abstract The rapid expansion of high-quality genome assemblies, exemplified by ongoing initiatives such as the Genome-10K and i5k, demands novel automated methods to approach comparative genomics. Of these, the study of inactivating mutations in the coding region of genes, or pseudogenization, as a source of evolutionary novelty is mostly overlooked. Thus, to address such evolutionary/genomic events, a systematic, accurate and computationally automated approach is required. Here, we present PseudoChecker, the first integrated online platform for gene inactivation inference. Unlike the few existing methods, our comparative genomics-based approach displays full automation, a built-in graphical user interface and a novel index, PseudoIndex, for an empirical evaluation of the gene coding status. As a multi-platform online service, PseudoChecker simplifies access and usability, allowing a fast identification of disruptive mutations. An analysis of 30 genes previously reported to be eroded in mammals, and 30 viable genes from the same lineages, demonstrated that PseudoChecker was able to correctly infer 97% of loss events and 95% of functional genes, confirming its reliability. PseudoChecker is freely available, without login required, at http://pseudochecker.ciimar.up.pt.
31

Androshchuk, Alexander, Serhii Yevseiev, Victor Melenchuk, Olga Lemeshko, and Vladimir Lemeshko. "IMPROVEMENT OF PROJECT RISK ASSESSMENT METHODS OF IMPLEMENTATION OF AUTOMATED INFORMATION COMPONENTS OF NON-COMMERCIAL ORGANIZATIONAL AND TECHNICAL SYSTEMS". EUREKA: Physics and Engineering 1 (January 31, 2020): 48–55. http://dx.doi.org/10.21303/2461-4262.2020.001131.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
The results of a study using the methodological apparatus of fuzzy logic theory and automation tools for analyzing input data for risk assessment of projects implementing automated information components of organizational and technical systems are presented. Based on a model of logistics projects for motor transport units, the method for assessing the risks of projects implementing automated information components of non-commercial organizational and technical systems has been improved. To do this, the authors analyze the peculiarities of implementing ERP projects as commercial ones and investigate the specifics of the activities of state institutions, where successfully completed tasks, not economic indicators, form the basis of the assessment. It is shown that a system of risk-assessment indicators can be formulated for the reduced effectiveness of projects implementing automated information systems in non-commercial organizational and technical systems. A meaningful interpretation of the fuzzy approach is given for formalizing the risk-assessment process for projects of automated information systems of public institutions. A fuzzy inference tree is constructed based on a study of the indicator descriptions and expert assessments of the implementation risks of such an automated information system project. The improved method differs from known ones in its use of hierarchical fuzzy inference, which makes it possible to quantify risks, reduce the time needed to evaluate project risks, and improve the quality of decisions. An increase in the number of input variables increases the complexity (the number of rules) of a fuzzy inference system; constructing a hierarchical system of fuzzy inference and knowledge bases can reduce this complexity.
The development of a software module based on the method's algorithm, as part of corporate automated information systems of non-commercial organizational and technical systems, will reduce the time required for risk assessment of projects implementing automated information systems.
32

Guo, Caiping, Linhua Zhang, and Jiahui Peng. "A Novel Fuzzy Logic Guided Method for Automatic gEUD-based Inverse Treatment Planning". International Journal of Circuits, Systems and Signal Processing 15 (June 7, 2021): 525–32. http://dx.doi.org/10.46300/9106.2021.15.58.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Generalized equivalent uniform dose (gEUD) based hybrid objective functions are widely used in intensity-modulated radiotherapy (IMRT). To improve efficiency, a novel fuzzy-logic-guided inverse planning method was developed for automatic parameter optimization in gEUD-based radiotherapy optimization. Simple inference rules were formulated according to the knowledge of the treatment planner; they automatically and iteratively guide parameter modification according to the percentage deviation between the current dose and the prescribed dose. Weighting factors and prescribed doses were automatically adjusted by the developed fuzzy inference system (FIS). The performance of the FIS was tested on ten prostate cancer cases. Experimental results indicate that the proposed automatic method can yield plans comparable to or better than the manual method. The fuzzy-logic-guided automatic inverse planning method for parameter optimization can significantly improve the efficiency of manual parameter adjustment and contributes to the development of fully automated planning.
33

Wang, Chenlan, Chongjie Zhang, and X. Jessie Yang. "Automation reliability and trust: A Bayesian inference approach". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 202–6. http://dx.doi.org/10.1177/1541931218621048.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Research shows that over repeated interactions with automation, human operators are able to learn how reliable the automation is and update their trust in automation. The goal of the present study is to investigate if this learning and inference process approximately follow the principle of Bayesian probabilistic inference. First, we applied Bayesian inference to estimate human operators’ perceived system reliability and found high correlations between the Bayesian estimates and the perceived reliability for the majority of the participants. We then correlated the Bayesian estimates with human operators’ reported trust and found moderate correlations for a large portion of the participants. Our results suggest that human operators’ learning and inference process for automation reliability can be approximated by Bayesian inference.
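The learning process described above can be illustrated with a standard Beta-Bernoulli update, a generic sketch of Bayesian reliability estimation rather than the study's exact model:

```python
# Beta-Bernoulli sketch of an operator updating a perceived-reliability
# estimate over repeated interactions with automation (generic
# illustration, not the study's exact model).

def update_reliability(outcomes, alpha=1.0, beta=1.0):
    """Return the posterior mean reliability after each observed outcome.

    outcomes: iterable of 1 (automation succeeded) / 0 (automation failed).
    alpha, beta: parameters of the Beta prior (1, 1 = uniform prior).
    """
    estimates = []
    for ok in outcomes:
        alpha += ok          # count successes
        beta += 1 - ok       # count failures
        estimates.append(alpha / (alpha + beta))
    return estimates

# Automation that succeeded on 8 of 10 trials.
trace = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
est = update_reliability(trace)
print(round(est[-1], 3))  # posterior mean (1+8)/(2+10) = 0.75
```

Correlating such Bayesian estimates with reported trust over time is the kind of analysis the study performs.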
34

Alatawi, Haifa. "Empirical evidence on scalar implicature processing at the behavioural and neural levels". International Review of Pragmatics 11, no. 1 (January 3, 2019): 1–21. http://dx.doi.org/10.1163/18773109-201810011.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Abstract The Default hypothesis on implicature processing suggests that a rapid, automatic mechanism is used to process utterances such as “some of his family are attending the wedding” to infer that “not all of them are attending”, an inference subject to cancellation if additional contextual information is provided (e.g. “actually, they are all attending”). In contrast, the Relevance hypothesis suggests that only context-dependent inferences are computed and this process is cognitively effortful. This article reviews findings on behavioural and neural processing of scalar implicatures to clarify the cognitive effort involved.
35

SUTCLIFFE, GEOFF. "SEMANTIC DERIVATION VERIFICATION: TECHNIQUES AND IMPLEMENTATION". International Journal on Artificial Intelligence Tools 15, no. 06 (December 2006): 1053–70. http://dx.doi.org/10.1142/s0218213006003119.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Automated Theorem Proving (ATP) systems are complex pieces of software, and thus may have bugs that make them unsound. In order to guard against unsoundness, the derivations output by an ATP system may be semantically verified by trusted ATP systems that check the required semantic properties of each inference step. Such verification needs to be augmented by structural verification that checks that inferences have been used correctly in the context of the overall derivation. This paper describes techniques for semantic verification of derivations, and reports on their implementation and testing in the GDV verifier.
36

Berzins, V. "Lightweight inference for automation efficiency". Science of Computer Programming 42, no. 1 (January 2002): 61–74. http://dx.doi.org/10.1016/s0167-6423(01)00031-4.

Full text
APA, Harvard, Vancouver, ISO and other styles
37

Bublikov, A. V., N. S. Pryadko, and Yu A. Papaika. "System of fuzzy automatic control of coal massif cutting by a shearer drum". Technical mechanics 2021, no. 3 (October 12, 2021): 99–110. http://dx.doi.org/10.15407/itm2021.03.099.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Up to now, automatic control of the shearer speed has been performed to keep the actual speed at an operator-specified level or to keep the actual power at a stable level without overheating or overturning. However, the problem of control of coal seam cutting by the upper drum of a shearer in the case of a variable angle of drum – coal seam contact has yet to be studied. The aim of this work is to develop a method for synthesizing a system of fuzzy automatic control of coal massif cutting by a shearer drum based on an information criterion for the power efficiency of coal cutting with cutters. In this work, based on an information criterion for the power efficiency of coal cutting with cutters, a fuzzy inference algorithm is constructed for a system of automatic control of coal massif cutting by a shearer drum. In doing so, the parameters of the output linguistic variable term membership functions of the system and fuzzy operations are determined according to the recommendations of the classical Mamdani fuzzy inference algorithm using substantiated fuzzy production rules. The fuzzy inference algorithm constructed in this work is tested for efficiency based on the fraction of effective control actions generated by the fuzzy automatic control system. Using simulation, the efficiency of drum rotation speed control with the use of the proposed fuzzy inference algorithm is compared with that with the use of an uncontrolled shearer cutting drive. The study of the generation of control actions involving the upper shearer drum rotation speed showed that effective control actions were generated in the overwhelming majority of cases (about 93%). The proposed method forms a theoretical basis for the solution of the important scientific and practical problem of upper shearer drum rotation speed control automation with the aim to reduce specific power consumption and the amount of chips.
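A Mamdani-style fuzzy inference step of the kind described above can be sketched generically; the triangular membership functions and the load-to-speed rule base below are illustrative assumptions, not the paper's actual controller:

```python
# Minimal Mamdani-style fuzzy inference sketch (illustrative rule base,
# not the paper's shearer controller): map a normalized cutting load to
# a normalized drum-speed command via fuzzify -> rules -> centroid.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(load):
    """Map a normalized cutting load in [0, 1] to a drum-speed command in [0, 1]."""
    # Fuzzify the input.
    low    = tri(load, -0.5, 0.0, 0.5)
    medium = tri(load,  0.0, 0.5, 1.0)
    high   = tri(load,  0.5, 1.0, 1.5)
    # Rules: low load -> fast drum; medium -> medium; high load -> slow drum.
    # Clip each output set by its rule strength, take the max, and
    # defuzzify by the discrete centroid over a grid of output values.
    num = den = 0.0
    for x in (i / 100.0 for i in range(101)):
        mu = max(min(low,    tri(x,  0.5, 1.0, 1.5)),   # fast
                 min(medium, tri(x,  0.0, 0.5, 1.0)),   # medium
                 min(high,   tri(x, -0.5, 0.0, 0.5)))   # slow
        num += mu * x
        den += mu
    return num / den if den else 0.0

print(round(infer(0.2), 2))  # light load -> relatively fast drum speed
```

The paper's system adds an information criterion for cutting efficiency on top of such a Mamdani pipeline; the fuzzify/aggregate/defuzzify skeleton is the same.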
38

Brady, Henry E. "The Challenge of Big Data and Data Science". Annual Review of Political Science 22, no. 1 (May 11, 2019): 297–323. http://dx.doi.org/10.1146/annurev-polisci-090216-023229.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Big data and data science are transforming the world in ways that spawn new concerns for social scientists, such as the impacts of the internet on citizens and the media, the repercussions of smart cities, the possibilities of cyber-warfare and cyber-terrorism, the implications of precision medicine, and the consequences of artificial intelligence and automation. Along with these changes in society, powerful new data science methods support research using administrative, internet, textual, and sensor-audio-video data. Burgeoning data and innovative methods facilitate answering previously hard-to-tackle questions about society by offering new ways to form concepts from data, to do descriptive inference, to make causal inferences, and to generate predictions. They also pose challenges as social scientists must grasp the meaning of concepts and predictions generated by convoluted algorithms, weigh the relative value of prediction versus causal inference, and cope with ethical challenges as their methods, such as algorithms for mobilizing voters or determining bail, are adopted by policy makers.
39

Mao, W., and J. Gratch. "Modeling Social Causality and Responsibility Judgment in Multi-Agent Interactions". Journal of Artificial Intelligence Research 44 (May 30, 2012): 223–73. http://dx.doi.org/10.1613/jair.3526.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Social causality is the inference an entity makes about the social behavior of other entities and self. Besides physical cause and effect, social causality involves reasoning about epistemic states of agents and coercive circumstances. Based on such inference, responsibility judgment is the process whereby one singles out individuals to assign responsibility, credit or blame for multi-agent activities. Social causality and responsibility judgment are a key aspect of social intelligence, and a model for them facilitates the design and development of a variety of multi-agent interactive systems. Based on psychological attribution theory, this paper presents a domain-independent computational model to automate social inference and judgment process according to an agent’s causal knowledge and observations of interaction. We conduct experimental studies to empirically validate the computational model. The experimental results show that our model predicts human judgments of social attributions and makes inferences consistent with what most people do in their judgments. Therefore, the proposed model can be generically incorporated into an intelligent system to augment its social and cognitive functionality.
40

Ge, Hangli, Xiaohui Peng, and Noboru Koshizuka. "Applying Knowledge Inference on Event-Conjunction for Automatic Control in Smart Building". Applied Sciences 11, no. 3 (January 20, 2021): 935. http://dx.doi.org/10.3390/app11030935.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Smart building, one of the emerging IoT-based applications, is where energy efficiency, human comfort, automation, and security can be managed even better. However, at the current stage, a unified and practical framework for knowledge inference inside the smart building is still lacking. In this paper, we present a practical proposal for knowledge extraction on event-conjunction for automatic control in smart buildings. The proposal consists of a unified API design, an ontology model, and an inference engine for knowledge extraction. Two types of models, finite state machines (FSMs) and Bayesian networks (BNs), are used for capturing state transitions and for sensor data fusion. In particular, to solve the problem that the set of observed time intervals between two correlated events was too small to approximate for estimation, we utilized the Markov Chain Monte Carlo (MCMC) sampling method to optimize the sampling of time intervals. The proposal has been put into use in a real smart building environment, where a 78-day collection of light states and elevator states was conducted for evaluation. Several events were inferred in the evaluation, such as room occupancy, elevator movement, and the conjunction of both. The inference of users' elevator waiting times revealed the potential and effectiveness of automatic control of the elevator.
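The FSM component of such a proposal can be sketched as a small state machine over elevator events; the states and events below are hypothetical illustrations, not the paper's actual model:

```python
# Tiny finite-state-machine sketch for tracking elevator state from an
# event stream (hypothetical states and events, for illustration of the
# FSM component only).

TRANSITIONS = {
    ("idle",     "call_button"): "moving",
    ("moving",   "door_open"):   "boarding",
    ("boarding", "door_close"):  "moving",
    ("moving",   "arrived"):     "idle",
}

def run(events, state="idle"):
    """Replay an event stream through the FSM; unknown events leave the state unchanged."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
    return state

print(run(["call_button", "door_open", "door_close", "arrived"]))  # "idle"
```

In the paper, time intervals between such state transitions (and between correlated events such as room occupancy) are what the Bayesian-network and MCMC machinery reasons over.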
41

Hassin, Ran R., Henk Aarts, and Melissa J. Ferguson. "Automatic goal inferences". Journal of Experimental Social Psychology 41, no. 2 (March 2005): 129–40. http://dx.doi.org/10.1016/j.jesp.2004.06.008.

Full text
APA, Harvard, Vancouver, ISO and other styles
42

González-Vanegas, Wilson, Andrés Álvarez-Meza, José Hernández-Muriel, and Álvaro Orozco-Gutiérrez. "AKL-ABC: An Automatic Approximate Bayesian Computation Approach Based on Kernel Learning". Entropy 21, no. 10 (September 24, 2019): 932. http://dx.doi.org/10.3390/e21100932.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Bayesian statistical inference under unknown or hard-to-assess likelihood functions is a very challenging task. Approximate Bayesian computation (ABC) techniques have emerged as a widely used set of likelihood-free methods. A vast number of ABC-based approaches have appeared in the literature; however, they all depend heavily on free-parameter selection, demanding expensive tuning procedures. In this paper, we introduce an automatic kernel-learning-based ABC approach, termed AKL-ABC, to automatically compute posterior estimations from weighting-based inference. To reach this goal, we propose a kernel learning stage that codes similarities between simulation and parameter spaces using centered kernel alignment (CKA), automated via an information-theoretic learning approach. Besides, a local neighborhood selection (LNS) algorithm is used to highlight local dependencies over simulations, relying on graph theory. Results attained on synthetic and real-world datasets show our approach is quite competitive compared to other non-automatic state-of-the-art ABC techniques.
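The likelihood-free setting behind ABC can be illustrated with the basic rejection sampler that approaches like AKL-ABC refine; this is a generic sketch, not the paper's kernel-learning method:

```python
import random

# Basic rejection-ABC sketch: sample parameters from the prior, run the
# forward simulator, and keep draws whose simulated summary statistic is
# close to the observed one. (Generic illustration of the likelihood-free
# setting, not the paper's AKL-ABC method.)

def simulate(theta, n, rng):
    """Forward model: number of heads in n coin flips with unknown bias theta."""
    return sum(rng.random() < theta for _ in range(n))

def rejection_abc(observed, n, n_samples=20000, eps=1, seed=0):
    """Accept prior draws whose simulated count is within eps of the data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        theta = rng.random()  # uniform prior on [0, 1]
        if abs(simulate(theta, n, rng) - observed) <= eps:
            accepted.append(theta)
    return accepted

# Observed: 70 heads out of 100 flips; the accepted draws approximate the
# posterior, which concentrates near theta = 0.7.
post = rejection_abc(observed=70, n=100)
print(round(sum(post) / len(post), 2))
```

AKL-ABC replaces the hand-picked distance and tolerance here with a learned kernel similarity, which is exactly the free-parameter tuning problem the abstract describes.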
43

Zhang, Yong, Guangjun He, and Guangjian Li. "Automatic Electrical System Fault Diagnosis Using a Fuzzy Inference System and Wavelet Transform". Processes 11, no. 8 (July 25, 2023): 2231. http://dx.doi.org/10.3390/pr11082231.

Full text
APA, Harvard, Vancouver, ISO and other styles
Abstract (summary):
Electrical systems consist of varied components that are used for power distribution, supply, and transfer. During transmission, component failures occur as a result of signal interruptions and peak utilization. Therefore, fault diagnosis should be performed to prevent fluctuations in the power distribution. This article proposes a fluctuation-reducing fault diagnosis method (FRFDM) for use in power distribution networks. The designed method employs fuzzy linear inferences to identify fluctuations in electrical signals that occur due to peak load demand and signal interruptions. The fuzzy process identifies the fluctuations in electrical signals that occur during distribution intervals. The linear relationship between two peak wavelets throughout the intervals are verified across successive distribution phases. In this paper, non-recurrent validation for these fluctuations is considered based on the limits found between the power drop and failure. This modification is used for preventing surge-based faults due to external signals. The inference process hinders the distribution of new devices and re-assigns them based on availability and the peak load experienced. Therefore, the device from which the inference outputs are taken is non-linear, and the frequently employed wavelet transforms are recommended for replacement or diagnosis. This method improves the fault detection process and ensures minimal distribution failures.
44

Robillard, Martin P., Eric Bodden, David Kawrykow, Mira Mezini, and Tristan Ratchford. "Automated API Property Inference Techniques". IEEE Transactions on Software Engineering 39, no. 5 (May 2013): 613–37. http://dx.doi.org/10.1109/tse.2012.63.

Full text
APA, Harvard, Vancouver, ISO and other styles
45

Unkel, Christopher, and Monica S. Lam. "Automatic inference of stationary fields". ACM SIGPLAN Notices 43, no. 1 (January 14, 2008): 183–95. http://dx.doi.org/10.1145/1328897.1328463.

Full text
APA, Harvard, Vancouver, ISO and other styles
46

Kuperstein, Michael, Martin Vechev, and Eran Yahav. "Automatic inference of memory fences". ACM SIGACT News 43, no. 2 (June 11, 2012): 108–23. http://dx.doi.org/10.1145/2261417.2261438.

Full text
APA, Harvard, Vancouver, ISO and other styles
47

Bermudez, Ignacio, Alok Tongaonkar, Marios Iliofotou, Marco Mellia, and Maurizio M. Munafò. "Towards automatic protocol field inference". Computer Communications 84 (June 2016): 40–51. http://dx.doi.org/10.1016/j.comcom.2016.02.015.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Hsu, S., Harpreet S. Sawhney, and R. Kumar. "Automated mosaics via topology inference". IEEE Computer Graphics and Applications 22, no. 2 (March 2002): 44–54. http://dx.doi.org/10.1109/38.988746.

Full text
APA, Harvard, Vancouver, ISO and other styles
49

Miller, Michael, and Donald Perlis. "Automated inference in active logics". Journal of Applied Non-Classical Logics 6, no. 1 (January 1996): 9–27. http://dx.doi.org/10.1080/11663081.1996.10510864.

Full text
APA, Harvard, Vancouver, ISO and other styles
50

Baldwin, J. F. "Automated fuzzy and probabilistic inference". Fuzzy Sets and Systems 18, no. 3 (April 1986): 219–35. http://dx.doi.org/10.1016/0165-0114(86)90003-5.

Full text
APA, Harvard, Vancouver, ISO and other styles

Go to the bibliography