Selected scientific literature on the topic "Automaton inference"

Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources relevant to the topic "Automaton inference".

You can also download the full text of the scientific publication as a .pdf file and read its abstract online, when one is available in the metadata.

Journal articles on the topic "Automaton inference":

1

Richetin, M., e M. Naranjo. "Inference of Automata by dialectic learning". Robotica 3, n. 3 (settembre 1985): 159–63. http://dx.doi.org/10.1017/s0263574700009085.

Full text
Abstract:
An algorithm for the inference of the external behaviour model of an automaton is given. It uses a sequential learning procedure based on induction-contradiction-correction concepts. The induction is a generalization of relationships between automaton state properties, and the correction consists in an increasingly accurate discrimination of the automaton state properties. These properties are defined from the contradictory input/output sequences which are discovered after observed contradictions between successive predictions and observations.
2

HÖGBERG, JOHANNA. "A randomised inference algorithm for regular tree languages". Natural Language Engineering 17, n. 2 (21 marzo 2011): 203–19. http://dx.doi.org/10.1017/s1351324911000064.

Full text
Abstract:
We present a randomised inference algorithm for regular tree languages. The algorithm takes as input two disjoint finite nonempty sets of trees 𝒫 and 𝒩 and outputs a nondeterministic finite tree automaton that accepts every tree in 𝒫 and rejects every tree in 𝒩. The output automaton typically represents a nontrivial generalisation of the examples given in 𝒫 and 𝒩. To obtain compact output automata, we use a heuristic similar to bisimulation minimisation. The algorithm has time complexity O(n𝒩 · n𝒫²), where n𝒩 and n𝒫 are the sizes of 𝒩 and 𝒫, respectively. Experiments are conducted on a prototype implementation, and the empirical results appear to support the theoretical results.
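As a side illustration of the consistency requirement stated in this abstract (accept every tree in 𝒫, reject every tree in 𝒩), here is a minimal bottom-up acceptance check for a nondeterministic finite tree automaton. It is not the paper's randomised algorithm; the tree encoding and the toy automaton are assumptions made for the sketch.

```python
from itertools import product

# A tree is (symbol, [children]); a nondeterministic bottom-up tree automaton maps
# (symbol, tuple_of_child_states) to the set of states it may assign to that node.
def reachable_states(tree, delta):
    symbol, children = tree
    if not children:
        return set(delta.get((symbol, ()), set()))
    child_sets = [reachable_states(c, delta) for c in children]
    states = set()
    for combo in product(*child_sets):          # every choice of child states
        states |= delta.get((symbol, combo), set())
    return states

def accepts(tree, delta, final_states):
    return bool(reachable_states(tree, delta) & final_states)

def consistent(delta, final_states, positives, negatives):
    return (all(accepts(t, delta, final_states) for t in positives)
            and not any(accepts(t, delta, final_states) for t in negatives))

# Toy automaton accepting exactly the tree f(a, a)
delta = {("a", ()): {"qa"}, ("f", ("qa", "qa")): {"qok"}}
P = [("f", [("a", []), ("a", [])])]
N = [("a", [])]
print(consistent(delta, {"qok"}, P, N))   # True
```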
3

Wieczorek, Wojciech, Tomasz Jastrzab e Olgierd Unold. "Answer Set Programming for Regular Inference". Applied Sciences 10, n. 21 (30 ottobre 2020): 7700. http://dx.doi.org/10.3390/app10217700.

Full text
Abstract:
We propose an approach to non-deterministic finite automaton (NFA) inductive synthesis that is based on answer set programming (ASP) solvers. To that end, we explain how an NFA and its response to input samples can be encoded as rules in a logic program. We then ask an ASP solver to find an answer set for the program, which we use to extract the automaton of the required size. We conduct a series of experiments on some benchmark sets, using the implementation of our approach. The results show that our method outperforms, in terms of CPU time, a SAT approach and other exact algorithms on all benchmarks.
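The constraint that such an ASP (or SAT) encoding must enforce, an NFA of the required size that accepts all positive samples and rejects all negative ones, can be stated compactly. The sketch below only checks a candidate NFA against the samples; it is not the paper's logic-program encoding, and the toy automaton is hypothetical.

```python
def nfa_accepts(delta, initial, finals, word):
    """Subset simulation of an NFA given as a dict {(state, symbol): set_of_states}."""
    current = {initial}
    for symbol in word:
        current = set().union(*(delta.get((q, symbol), set()) for q in current))
        if not current:
            return False
    return bool(current & finals)

def consistent(delta, initial, finals, positives, negatives):
    """What an encoding of NFA induction must guarantee for the extracted automaton:
    accept every positive sample and reject every negative one."""
    return (all(nfa_accepts(delta, initial, finals, w) for w in positives)
            and not any(nfa_accepts(delta, initial, finals, w) for w in negatives))

# Toy 2-state NFA for words over {a, b} that end in 'a'
delta = {(0, "a"): {0, 1}, (0, "b"): {0}}
print(consistent(delta, 0, {1},
                 positives=["a", "ba", "aba"],
                 negatives=["", "b", "ab"]))   # True
```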
4

Grachev, Petr, Sergey Muravyov, Andrey Filchenkov e Anatoly Shalyto. "Automata generation based on recurrent neural networks and automated cauterization selection". Information and Control Systems, n. 1 (19 febbraio 2020): 34–43. http://dx.doi.org/10.31799/1684-8853-2020-1-34-43.

Full text
Abstract:
Introduction: The regular inference problem is to synthesize deterministic finite-state automata from a list of words which are examples and counterexamples of some unknown regular language. This problem is one of the main problems in the theory of formal languages and related fields. One of the most successful solutions to this problem is training a recurrent neural network on word classification and clustering the vectors in the space of RNN inner weights. However, it is not guaranteed that a consistent automaton can be constructed based on the clustering results, and more complex models require more memory, training time and training samples. Purpose: Creating a brand new grammar inference algorithm which would use modern machine learning methods. Methods: A recurrent neural network with an error function proposed by the authors was used for classification. For clustering, the method of joint selection and tuning of hyperparameters was used. Results: Ten different datasets were used for testing the models, corresponding to ten different regular grammars and ten automata. According to the test results, the developed model successfully synthesizes automata with no more than five input characters and states. For four grammars, out of the seven successfully inferred ones, the constructed automaton was minimal. For three datasets, an automaton could not be built, either because of an insufficient number of clusters in the proposed partition, or because of the inability to build a consistent automaton for this partition. Discussion: Applying an algorithm that searches for the maximum-likelihood correspondence between clusters of vectors and the corresponding automaton states, in order to resolve structural conflicts, may expand the scope of the model.
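The reconstruction step described here (turning clustered RNN hidden states into automaton transitions) can be sketched independently of any particular network. The snippet below assumes the cluster labels along each word have already been computed (for example by k-means over hidden vectors); the data layout and names are illustrative, not the authors' implementation.

```python
from collections import defaultdict

def build_automaton(traces):
    """traces: list of (cluster_labels, word) pairs, where cluster_labels[i] is the
    cluster of the RNN hidden state after reading word[:i] (labels[0] is the state
    before any symbol). Returns a transition map plus the (state, symbol) pairs on
    which the clustering is inconsistent."""
    observed = defaultdict(set)
    for labels, word in traces:
        for i, symbol in enumerate(word):
            observed[(labels[i], symbol)].add(labels[i + 1])
    delta = {k: next(iter(v)) for k, v in observed.items() if len(v) == 1}
    conflicts = {k for k, v in observed.items() if len(v) > 1}
    return delta, conflicts

# Word "ab" visits clusters 0 -a-> 1 -b-> 2; word "aa" visits 0 -a-> 1 -a-> 1.
traces = [([0, 1, 2], "ab"), ([0, 1, 1], "aa")]
print(build_automaton(traces))   # ({(0, 'a'): 1, (1, 'b'): 2, (1, 'a'): 1}, set())
```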
5

Topper, Noah, George Atia, Ashutosh Trivedi e Alvaro Velasquez. "Active Grammatical Inference for Non-Markovian Planning". Proceedings of the International Conference on Automated Planning and Scheduling 32 (13 giugno 2022): 647–51. http://dx.doi.org/10.1609/icaps.v32i1.19853.

Full text
Abstract:
Planning in finite stochastic environments is canonically posed as a Markov decision process where the transition and reward structures are explicitly known. Reinforcement learning (RL) lifts the explicitness assumption by working with sampling models instead. Further, with the advent of reward machines, we can relax the Markovian assumption on the reward. Angluin's active grammatical inference algorithm L* has found novel application in explicating reward machines for non-Markovian RL. We propose maintaining the assumption of explicit transition dynamics, but with an implicit non-Markovian reward signal, which must be inferred from experiments. We call this setting non-Markovian planning, as opposed to non-Markovian RL. The proposed approach leverages L* to explicate an automaton structure for the underlying planning objective. We exploit the environment model to learn an automaton faster and integrate it with value iteration to accelerate the planning. We compare against recent non-Markovian RL solutions which leverage grammatical inference, and establish complexity results that illustrate the difference in runtime between grammatical inference in planning and RL settings.
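The integration of a learned reward automaton with value iteration mentioned in this abstract amounts to planning on the product of the explicit MDP and the automaton. A minimal sketch under simplifying assumptions (deterministic reward automaton, rewards attached to automaton transitions, hypothetical data structures) is given below; it is not the authors' algorithm.

```python
def product_value_iteration(P, R_aut, delta_aut, labels, states, actions, q_states,
                            gamma=0.95, iters=200):
    """Value iteration on the product of an explicit MDP and a reward automaton.
    P[(s, a)] is a list of (prob, s_next); labels[s_next] is the event fed to the
    automaton; delta_aut[(q, event)] is the next automaton location and
    R_aut[(q, event)] the reward emitted on that automaton transition."""
    V = {(s, q): 0.0 for s in states for q in q_states}
    for _ in range(iters):
        new_V = {}
        for s in states:
            for q in q_states:
                best = float("-inf")
                for a in actions:
                    val = 0.0
                    for prob, s2 in P[(s, a)]:
                        event = labels[s2]
                        q2 = delta_aut[(q, event)]
                        val += prob * (R_aut[(q, event)] + gamma * V[(s2, q2)])
                    best = max(best, val)
                new_V[(s, q)] = best
        V = new_V
    return V

# Toy: a 2-state chain; entering state 1 ("goal") while the automaton is in q0 pays 1.
states, actions, q_states = [0, 1], ["go"], ["q0", "qf"]
P = {(0, "go"): [(1.0, 1)], (1, "go"): [(1.0, 1)]}
labels = {0: "start", 1: "goal"}
delta_aut = {("q0", "goal"): "qf", ("qf", "goal"): "qf"}
R_aut = {("q0", "goal"): 1.0, ("qf", "goal"): 0.0}
V = product_value_iteration(P, R_aut, delta_aut, labels, states, actions, q_states)
print(round(V[(0, "q0")], 2))   # 1.0
```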
6

Di, Chong, Fangqi Li, Shenghong Li e Jianwei Tian. "Bayesian inference based learning automaton scheme in Q-model environments". Applied Intelligence 51, n. 10 (10 marzo 2021): 7453–68. http://dx.doi.org/10.1007/s10489-021-02230-8.

Full text
7

CHTOUROU, MOHAMED, MAHER BEN JEMAA e RAOUF KETATA. "A learning-automaton-based method for fuzzy inference system identification". International Journal of Systems Science 28, n. 9 (luglio 1997): 889–96. http://dx.doi.org/10.1080/00207729708929451.

Full text
8

Senthil Kumar, K., e D. Malathi. "Context Free Grammar Identification from Positive Samples". International Journal of Engineering & Technology 7, n. 3.12 (20 luglio 2018): 1096. http://dx.doi.org/10.14419/ijet.v7i3.12.17768.

Full text
Abstract:
In grammatical inference one aims to find an underlying grammar or automaton which explains the target language in some way. Context-free grammar, which represents type 2 grammar in the Chomsky hierarchy, has many applications in formal language theory, pattern recognition, speech recognition, machine learning, compiler design, genetic engineering, etc. Identification of an unknown context-free grammar of the target language from positive examples is an extensive area in grammatical inference / grammar induction. In this paper we propose a novel method which finds the equivalent Chomsky normal form.
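One step of the Chomsky-normal-form transformation referred to here, binarization of long right-hand sides, can be sketched as follows. This is only the BIN step (terminal replacement, ε-rule and unit-rule elimination are omitted) and is not the paper's identification method; the grammar encoding is an assumption.

```python
def binarize(grammar):
    """BIN step of the Chomsky-normal-form transformation: split every production
    with more than two symbols on the right-hand side into a chain of binary rules.
    grammar maps a nonterminal to a list of right-hand sides (tuples of symbols)."""
    new_grammar = {}
    counter = 0
    def add(head, rhs):
        new_grammar.setdefault(head, []).append(tuple(rhs))
    for head, rhss in grammar.items():
        for rhs in rhss:
            rhs, current = list(rhs), head
            while len(rhs) > 2:
                counter += 1
                fresh = f"_X{counter}"          # fresh helper nonterminal
                add(current, (rhs[0], fresh))
                current, rhs = fresh, rhs[1:]
            add(current, rhs)
    return new_grammar

g = {"S": [("A", "B", "C", "D"), ("A", "B")]}
print(binarize(g))
# {'S': [('A', '_X1'), ('A', 'B')], '_X1': [('B', '_X2')], '_X2': [('C', 'D')]}
```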
9

Kosala, Raymond, Hendrik Blockeel, Maurice Bruynooghe e Jan Van den Bussche. "Information extraction from structured documents using k-testable tree automaton inference". Data & Knowledge Engineering 58, n. 2 (agosto 2006): 129–58. http://dx.doi.org/10.1016/j.datak.2005.05.002.

Full text
10

Tîrnăucă, Cristina. "A Survey of State Merging Strategies for DFA Identification in the Limit". Triangle, n. 8 (29 giugno 2018): 121. http://dx.doi.org/10.17345/triangle8.121-136.

Full text
Abstract:
Identification of deterministic finite automata (DFAs) has an extensive history, both in passive learning and in active learning. Intractability results by Gold [5] and Angluin [1] show that finding the smallest automaton consistent with a set of accepted and rejected strings is NP-complete. Nevertheless, a lot of work has been done on learning DFAs from examples within specific heuristics, starting with Trakhtenbrot and Barzdin's algorithm [15], rediscovered and applied to the discipline of grammatical inference by Gold [5]. Many other algorithms have been developed, the convergence of most of which is based on characteristic sets: RPNI (Regular Positive and Negative Inference) by J. Oncina and P. García [11, 12], Traxbar by K. Lang [8], EDSM (Evidence Driven State Merging), Windowed EDSM and Blue-Fringe EDSM by K. Lang, B. Pearlmutter and R. Price [9], SAGE (Self-Adaptive Greedy Estimate) by H. Juillé [7], etc. This paper provides a comprehensive study of the most important state merging strategies developed so far.
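For readers unfamiliar with the state-merging scheme this survey covers, the following is a compact RPNI-style sketch: build a prefix-tree acceptor from the positive sample, then greedily merge states, keeping a merge only if the folded automaton still rejects every negative sample. The state ordering and data structures are simplifications, not any of the cited implementations.

```python
from itertools import count

def build_pta(positives):
    """Prefix-tree acceptor over the positive sample; state 0 is the root."""
    trans, accepting, fresh = {}, set(), count(1)
    for word in positives:
        q = 0
        for a in word:
            q = trans.setdefault((q, a), next(fresh))
        accepting.add(q)
    return trans, accepting

def merge_and_fold(trans, rep, p, q):
    """Tentatively merge the blocks of p and q, then keep merging successor blocks
    until the quotient automaton is deterministic again (the 'fold')."""
    rep = dict(rep)
    def find(s):
        while rep[s] != s:
            s = rep[s]
        return s
    stack = [(p, q)]
    while stack:
        a, b = stack.pop()
        ra, rb = find(a), find(b)
        if ra == rb:
            continue
        rep[rb] = ra
        successors = {}
        for (s, sym), t in trans.items():
            if find(s) == ra:
                if sym in successors and find(successors[sym]) != find(t):
                    stack.append((successors[sym], t))   # same symbol, two targets
                else:
                    successors.setdefault(sym, t)
    return rep

def quotient(trans, accepting, rep):
    def find(s):
        while rep[s] != s:
            s = rep[s]
        return s
    delta = {(find(s), a): find(t) for (s, a), t in trans.items()}
    return delta, {find(s) for s in accepting}, find(0)

def rejects_all(delta, final, start, negatives):
    for word in negatives:
        q = start
        for a in word:
            q = delta.get((q, a))
            if q is None:
                break
        if q is not None and q in final:
            return False
    return True

def rpni(positives, negatives):
    """Greedy state merging on the prefix-tree acceptor: a merge survives only if the
    folded automaton still rejects every negative sample (simplified state ordering)."""
    trans, accepting = build_pta(positives)
    states = [0] + sorted(set(trans.values()))
    rep = {s: s for s in states}
    def find(s):
        r = s
        while rep[r] != r:
            r = rep[r]
        return r
    for q in states[1:]:
        if find(q) != q:                 # q was already absorbed by an earlier merge
            continue
        for p in states:
            if p >= q:
                break
            if find(p) != p:
                continue
            candidate = merge_and_fold(trans, rep, p, q)
            delta, final, start = quotient(trans, accepting, candidate)
            if rejects_all(delta, final, start, negatives):
                rep = candidate
                break
    return quotient(trans, accepting, rep)

# Target language: words over {a} of even length.
print(rpni(positives=["", "aa", "aaaa"], negatives=["a", "aaa"]))
# ({(0, 'a'): 1, (1, 'a'): 0}, {0}, 0)
```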

Theses on the topic "Automaton inference":

1

Ansin, Rasmus, e Didrik Lundberg. "Automated Inference of Excitable Cell Models as Hybrid Automata". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154065.

Full text
Abstract:
In this paper, we explore from an experimental point of view the possibilities and limitations of the new HYCGE learning algorithm for hybrid automata. As an example of a practical application, we study the algorithm’s performance on learning the behaviour of the action potential in excitable cells, specifically the Hodgkin-Huxley model of a squid giant axon, the Luo-Rudy model of a guinea pig ventricular cell, and the Entcheva model of a neonatal rat ventricular cell. The validity and accuracy of the algorithm is also visualized through graphical means.
2

Rasoamanana, Aina Toky. "Derivation and Analysis of Cryptographic Protocol Implementation". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS005.

Full text
Abstract:
TLS and SSH are two well-known and thoroughly studied security protocols. In this thesis, we focus on a specific class of vulnerabilities affecting both protocols implementations, state machine errors. These vulnerabilities are caused by differences in interpreting the standard and correspond to deviations from the specifications, e.g. accepting invalid messages, or accepting valid messages out of sequence.We develop a generalized and systematic methodology to infer the protocol state machines such as the major TLS and SSH stacks from stimuli and observations, and to study their evolution across revisions. We use the L* algorithm to compute state machines corresponding to different execution scenarios.We reproduce several known vulnerabilities (denial of service, authentication bypasses), and uncover new ones. We also show that state machine inference is efficient and practical enough in many cases for integration within a continuous integration pipeline, to help find new vulnerabilities or deviations introduced during development.With our systematic black-box approach, we study over 600 different versions of server and client implementations in various scenarios (protocol versions, options). Using the resulting state machines, we propose a robust algorithm to fingerprint TLS and SSH stacks. To the best of our knowledge, this is the first application of this approach on such a broad perimeter, in terms of number of TLS and SSH stacks, revisions, or execution scenarios studied
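The kind of black-box inference described here needs little more than a reset-and-step interface to the system under learning plus output queries. The sketch below shows that interface and a crude output-table fingerprint; the class and function names are hypothetical, no real TLS/SSH message handling is included, and a full L* learner (as used in the thesis) is not reproduced.

```python
from itertools import product

class SUL:
    """'System under learning' interface that an L*-style learner drives:
    reset() brings the implementation back to its initial state and
    step(symbol) sends one abstract input, returning the abstract output."""
    def reset(self):
        raise NotImplementedError
    def step(self, symbol):
        raise NotImplementedError

class ToyServer(SUL):
    """Stand-in for a real mapper plus network connection (purely illustrative)."""
    def reset(self):
        self.authenticated = False
    def step(self, symbol):
        if symbol == "LOGIN":
            self.authenticated = True
            return "OK"
        if symbol == "DATA":
            return "OK" if self.authenticated else "ERROR"
        return "UNSUPPORTED"

def output_query(sul, word):
    """Output query: the response sequence produced by one input word."""
    sul.reset()
    return tuple(sul.step(sym) for sym in word)

def fingerprint(sul, alphabet, max_len=2):
    """Crude fingerprint: outputs for all input words up to max_len. A learner such
    as L* would instead build and compare full state machines from such queries."""
    return {w: output_query(sul, w)
            for n in range(1, max_len + 1) for w in product(alphabet, repeat=n)}

print(fingerprint(ToyServer(), ["LOGIN", "DATA"]))
```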
3

Gransden, Thomas Glenn. "Automating proofs with state machine inference". Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40814.

Full text
Abstract:
Interactive theorem provers are tools that help to produce formal proofs in a semiautomatic fashion. Originally designed to verify mathematical statements, they can be potentially useful in an industrial context. Despite being endorsed by leading mathematicians and computer scientists, these tools are not widely used. This is mainly because constructing proofs requires a large amount of human effort and knowledge. Frustratingly, there is limited proof automation available in many theorem proving systems. To address this limitation, a new technique called SEPIA (Search for Proofs Using Inferred Automata) is introduced. There are typically large libraries of completed proofs available. However, identifying useful information from these can be difficult and time-consuming. SEPIA uses state-machine inference techniques to produce descriptive models from corpora of Coq proofs. The resulting models can then be used to automatically generate proofs. Subsequently, SEPIA is also combined with other approaches to form an intelligent suite of methods (called Coq-PR3) to help automatically generate proofs. All of the techniques presented are available as extensions for the ProofGeneral interface. In the experimental work, the new techniques are evaluated on two large Coq datasets. They are shown to prove more theorems automatically than compared to existing proof automation. Additionally, various aspects of the discovered proofs are explored, including a comparison between the automatically generated proofs and manually created ones. Overall, the techniques are demonstrated to be a potentially useful addition to the proof development process because of their ability to automate proofs in Coq.
4

Paige, Timothy Brooks. "Automatic inference for higher-order probabilistic programs". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:d912c4de-4b08-4729-aa19-766413735e2a.

Full text
Abstract:
Probabilistic models used in quantitative sciences have historically co-evolved with methods for performing inference: specific modeling assumptions are made not because they are appropriate to the application domain, but because they are required to leverage existing software packages or inference methods. The intertwined nature of modeling and computational concerns leaves much of the promise of probabilistic modeling out of reach for data scientists, forcing practitioners to turn to off-the-shelf solutions. The emerging field of probabilistic programming aims to reduce the technical and cognitive overhead for writing and designing novel probabilistic models, by introducing a specialized programming language as an abstraction barrier between modeling and inference. The aim of this thesis is to develop inference algorithms that scale well and are applicable to broad model families. We focus particularly on methods that can be applied to models written in general-purpose higher-order probabilistic programming languages, where programs may make use of recursion, arbitrary deterministic simulation, and higher-order functions to create more accurate models of an application domain. In a probabilistic programming system, probabilistic models are defined using a modeling language; a backend implements generic inference methods applicable to any model written in this language. Probabilistic programs - models - can be written without concern for how inference will later be performed. We begin by considering several existing probabilistic programming languages, their design choices, and tradeoffs. We then demonstrate how programs written in higher-order languages can be used to define coherent probability models, describing possible approaches to inference, and providing explicit algorithms for efficient implementations of both classic and novel inference methods based on and extending sequential Monte Carlo. This is followed by an investigation into the use of variational inference methods within higher-order probabilistic programming languages, with application to policy learning, adaptive importance sampling, and amortization of inference.
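The core idea of separating model specification from generic inference can be illustrated with a few lines of likelihood weighting over a tiny generative "program". This is only a sketch of the idea under simple assumptions (Exponential prior, Poisson likelihood, fixed data); it is not the thesis's sequential Monte Carlo or variational machinery.

```python
import math, random

def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def model():
    """A tiny generative 'program': draw a latent rate, then condition on two
    observed counts. Returns the latent draw and its log-weight."""
    rate = random.expovariate(1.0)                      # prior: Exponential(1)
    data = [3, 4]                                       # observations (assumed)
    log_weight = sum(poisson_logpmf(k, rate) for k in data)
    return rate, log_weight

def posterior_mean(num_samples=20000):
    """Likelihood weighting: run the program forward, weight each run by the
    likelihood of the data, and average."""
    draws = [model() for _ in range(num_samples)]
    m = max(lw for _, lw in draws)                      # stabilise the exponentials
    weights = [math.exp(lw - m) for _, lw in draws]
    return sum(w * x for (x, _), w in zip(draws, weights)) / sum(weights)

print(posterior_mean())   # roughly (1 + 3 + 4) / (1 + 2) = 2.67 for this conjugate setup
```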
5

MERINO, JORGE SALVADOR PAREDES. "AUTOMATIC SYNTHESIS OF FUZZY INFERENCE SYSTEMS FOR CLASSIFICATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27007@1.

Full text
Abstract:
Nowadays, much of the accumulated knowledge is stored as data. In many classification problems the relationship between a set of variables (attributes) and a target variable of interest must be learned. Among the tools capable of modeling real systems, Fuzzy Inference Systems are considered excellent with respect to the knowledge representation in a comprehensible way, as they are based on inference rules. This is relevant in applications where a black box model does not suffice. This model may attain good accuracy, but does not explain how results are obtained. This dissertation presents the development of a Fuzzy Inference System in an automatic manner, where the rule base should favour linguistic interpretability and at the same time provide good accuracy. In this sense, this work proposes the AutoFIS-Class model, an automatic method for generating Fuzzy Inference Systems for classification problems. Its main features are: (i) generation of premises to ensure minimum, quality criteria, (ii) association of each rule premise to the most compatible consequent term; and (iii) aggregation of rules for each class through operator that weigh the relevance of each rule. The proposed model was evaluated for 45 datasets and their results were compared to existing models based on Evolutionary Algorithms. Results show that the proposed Fuzzy Inference System is competitive, presenting good accuracy with a low number of rules.
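The rule structure sketched in this abstract (a premise over fuzzy terms, a consequent class, a rule weight, and aggregation across rules of the same class) can be illustrated as follows. The membership functions, rule list and min/max operators are assumptions for the sketch, not the AutoFIS-Class model itself.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms shared by both attributes (values scaled to [0, 10]).
TERMS = {"low": (-1.0, 1.0, 5.0), "mid": (2.5, 5.0, 7.5), "high": (5.0, 8.0, 11.0)}

# Each rule: (premise as {attribute index: term}, consequent class, rule weight).
RULES = [
    ({0: "low", 1: "low"}, "negative", 1.0),
    ({0: "high"}, "positive", 0.9),
    ({0: "mid", 1: "high"}, "positive", 0.7),
]

def classify(x, rules=RULES, terms=TERMS):
    """Fire every rule (min t-norm over its premise), scale by the rule weight,
    aggregate per class with max, and return the best class."""
    scores = {}
    for premise, label, weight in rules:
        activation = min(triangular(x[i], *terms[t]) for i, t in premise.items())
        scores[label] = max(scores.get(label, 0.0), weight * activation)
    return max(scores, key=scores.get)

print(classify([8.0, 3.0]))   # -> "positive"
```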
6

Rainforth, Thomas William Gamlen. "Automating inference, learning, and design using probabilistic programming". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e276f3b4-ff1d-44bf-9d67-013f68ce81f0.

Full text
Abstract:
Imagine a world where computational simulations can be inverted as easily as running them forwards, where data can be used to refine models automatically, and where the only expertise one needs to carry out powerful statistical analysis is a basic proficiency in scientific coding. Creating such a world is the ambitious long-term aim of probabilistic programming. The bottleneck for improving the probabilistic models, or simulators, used throughout the quantitative sciences, is often not an ability to devise better models conceptually, but a lack of expertise, time, or resources to realize such innovations. Probabilistic programming systems (PPSs) help alleviate this bottleneck by providing an expressive and accessible modeling framework, then automating the required computation to draw inferences from the model, for example finding the model parameters likely to give rise to a certain output. By decoupling model specification and inference, PPSs streamline the process of developing and drawing inferences from new models, while opening up powerful statistical methods to non-experts. Many systems further provide the flexibility to write new and exciting models which would be hard, or even impossible, to convey using conventional statistical frameworks. The central goal of this thesis is to improve and extend PPSs. In particular, we will make advancements to the underlying inference engines and increase the range of problems which can be tackled. For example, we will extend PPSs to a mixed inference-optimization framework, thereby providing automation of tasks such as model learning and engineering design. Meanwhile, we make inroads into constructing systems for automating adaptive sequential design problems, providing potential applications across the sciences. Furthermore, the contributions of the work reach far beyond probabilistic programming, as achieving our goal will require us to make advancements in a number of related fields such as particle Markov chain Monte Carlo methods, Bayesian optimization, and Monte Carlo fundamentals.
7

Dixon, Heidi. "Automating pseudo-Boolean inference within a DPLL framework /". view abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153782.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2004.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 140-146). Also available for download via the World Wide Web; free to University of Oregon users.
8

MacNish, Craig Gordon. "Nonmonotonic inference systems for modelling dynamic processes". Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240195.

Full text
9

Lin, Ye. "Internet data extraction based on automatic regular expression inference". [Ames, Iowa : Iowa State University], 2007.

Search for full text
10

El, Kaliouby Rana Ayman. "Mind-reading machines : automated inference of complex mental states". Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615030.

Full text

Books on the topic "Automaton inference":

1

Lee, Won Don. Probabilistic inference. Urbana, Ill: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1986.

Search for full text
2

Lee, Won Don. Probabilistic inference: Theory and practice. Urbana, Ill: Dept. of Computer Science, University of Illinois at Urbana-Champaign, 1986.

Search for full text
3

Pouly, Marc. Generic Inference: A Unifying Theory for Automated Reasoning. Hoboken, New Jersey: Wiley, 2011.

Search for full text
4

Farreny, Henri. AI and expertise: Heuristic search, inference engines, automatic proving. Chichester: E. Horwood, 1989.

Search for full text
5

Varlamov, Oleg. Fundamentals of creating MIVAR expert systems. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1513119.

Full text
Abstract:
Methodological and applied issues of the basics of creating knowledge bases and expert systems of logical artificial intelligence are considered. The book describes the software package "MIV Expert Systems Designer" (KESMI) "Wi!Mi RAZUMATOR" (version 2.1), which is a convenient tool for the development of intelligent information systems. Examples of creating mivar expert systems and several laboratory works are given. The reader, having studied this tutorial, will be able to independently create expert systems based on KESMI. The textbook in the field of training "Computer Science and Computer Engineering" is intended for students, bachelors, undergraduates, postgraduates studying artificial intelligence methods used in information processing and management systems, as well as for users and specialists who create mivar knowledge models, expert systems, automated control systems and decision support systems. Keywords: cybernetics, artificial intelligence, mivar, mivar networks, databases, data models, expert system, intelligent systems, multidimensional open epistemological active network, MOGAN, MIPRA, KESMI, Wi!Mi, Razumator, knowledge bases, knowledge graphs, knowledge networks, Big knowledge, products, logical inference, decision support systems, decision-making systems, autonomous robots, recommendation systems, universal knowledge tools, expert system designers, logical artificial intelligence.
6

Varlamov, Oleg. Mivar databases and rules. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1508665.

Full text
Abstract:
The multidimensional open epistemological active network MOGAN is the basis for the transition to a qualitatively new level of creating logical artificial intelligence. Mivar databases and rules became the foundation for the creation of MOGAN. The results of the analysis and generalization of data representation structures of various data models are presented: from relational to "Entity — Relationship" (ER-model). On the basis of this generalization, a new model of data and rules is created: the mivar information space "Thing-Property-Relation". The logic-computational processing of data in this new model of data and rules is shown, which has linear computational complexity relative to the number of rules. MOGAN is a development of Rule-Based Systems and allows you to quickly and easily design algorithms and work with logical reasoning in the "If..., Then..." format. An example of creating a mivar expert system for solving problems in the model area "Geometry" is given. Mivar databases and rules can be used to model cause-and-effect relationships in different subject areas and to create knowledge bases of new-generation applied artificial intelligence systems and real-time mivar expert systems with the transition to "Big Knowledge". The textbook in the field of training "Computer Science and Computer Engineering" is intended for students, bachelors, undergraduates, postgraduates studying artificial intelligence methods used in information processing and management systems, as well as for users and specialists who create mivar knowledge models, expert systems, automated control systems and decision support systems. Keywords: cybernetics, artificial intelligence, mivar, mivar networks, databases, data models, expert system, intelligent systems, multidimensional open epistemological active network, MOGAN, MIPRA, KESMI, Wi!Mi, Razumator, knowledge bases, knowledge graphs, knowledge networks, Big knowledge, products, logical inference, decision support systems, decision-making systems, autonomous robots, recommendation systems, universal knowledge tools, expert system designers, logical artificial intelligence.
7

Higuera, Colin De La. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press, 2010.

Search for full text
8

Higuera, Colin de la. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press, 2010.

Search for full text
9

Higuera, Colin de la. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press, 2014.

Search for full text
10

Higuera, Colin de la. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press, 2010.

Search for full text

Book chapters on the topic "Automaton inference":

1

Dupont, Pierre, e Lin Chase. "Using symbol clustering to improve probabilistic automaton inference". In Grammatical Inference, 232–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054079.

Full text
2

Firoiu, Laura, Tim Oates e Paul R. Cohen. "Learning a deterministic finite automaton with a recurrent neural network". In Grammatical Inference, 90–101. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054067.

Full text
3

Xu, Zhe, Bo Wu, Aditya Ojha, Daniel Neider e Ufuk Topcu. "Active Finite Reward Automaton Inference and Reinforcement Learning Using Queries and Counterexamples". In Lecture Notes in Computer Science, 115–35. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-84060-0_8.

Full text
4

Yang, Hui, Yue Ma e Nicole Bidoit. "Hypergraph-Based Inference Rules for Computing EL+-Ontology Justifications". In Automated Reasoning, 310–28. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10769-6_19.

Full text
Abstract:
To give concise explanations for a conclusion obtained by reasoning over ontologies, justifications have been proposed as minimal subsets of an ontology that entail the given conclusion. Even though computing one justification can be done in polynomial time for tractable Description Logics such as EL+, computing all justifications is complicated and often challenging for real-world ontologies. In this paper, based on a graph representation of EL+-ontologies, we propose a new set of inference rules (called H-rules) and take advantage of them for providing a new method of computing all justifications for a given conclusion. The advantage of our setting is that most of the time, it reduces the number of inferences (generated by H-rules) required to derive a given conclusion. This accelerates the enumeration of justifications relying on these inferences. We validate our approach by running real-world ontology experiments. Our graph-based approach outperforms PULi [14], the state-of-the-art algorithm, in most cases.
5

Newborn, Monty. "Inference Procedures". In Automated Theorem Proving, 29–42. New York, NY: Springer New York, 2001. http://dx.doi.org/10.1007/978-1-4613-0089-2_4.

Full text
6

Bhayat, Ahmed, Johannes Schoisswohl e Michael Rawson. "Superposition with Delayed Unification". In Automated Deduction – CADE 29, 23–40. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38499-8_2.

Full text
Abstract:
Classically, in saturation-based proof systems, unification has been considered atomic. However, it is also possible to move unification to the calculus level, turning the steps of the unification algorithm into inferences. For calculi that rely on unification procedures returning large or even infinite sets of unifiers, integrating unification into the calculus is an attractive method of dovetailing unification and inference. This applies, for example, to AC-superposition and higher-order superposition. We show that first-order superposition remains complete when moving unification rules to the calculus level. We discuss some of the benefits this has even for standard first-order superposition and provide an experimental evaluation.
7

de la Higuera, Colin. "Learning stochastic finite automata from experts". In Grammatical Inference, 79–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054066.

Full text
8

Viechnicki, Peter. "A performance evaluation of automatic survey classifiers". In Grammatical Inference, 244–56. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/bfb0054080.

Full text
9

Stickel, Mark E. "PTTP and Linked Inference". In Automated Reasoning Series, 283–95. Dordrecht: Springer Netherlands, 1991. http://dx.doi.org/10.1007/978-94-011-3488-0_14.

Full text
10

Stachniak, Zbigniew. "Nonmonotonic Resolution Inference Systems". In Automated Reasoning Series, 165–78. Dordrecht: Springer Netherlands, 1996. http://dx.doi.org/10.1007/978-94-009-1677-7_8.

Full text

Conference papers on the topic "Automaton inference":

1

Zhaohua, Huang, e Yang Fan. "Information Extraction from Web Documents Based on Unranked Tree Automaton Inference". In 2012 4th International Conference on Multimedia Information Networking and Security (MINES). IEEE, 2012. http://dx.doi.org/10.1109/mines.2012.128.

Full text
2

Grantner, Janos L., Sean T. Fuller e Jozsef Dombi. "Fuzzy automaton model with adaptive inference mechanism for intelligent decision support systems". In 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). IEEE, 2016. http://dx.doi.org/10.1109/fuzz-ieee.2016.7737991.

Full text
3

Saika, Yohei, Shouta Akiyama e Hiroki Sakaematsu. "Bayesian inference in optical measurement due to remote sensing to synthetic aperture radar interferometry". In 2013 13th International Conference on Control, Automation and Systems (ICCAS). IEEE, 2013. http://dx.doi.org/10.1109/iccas.2013.6704157.

Full text
4

Asami, Atsushi, Tatsuki Yamada e Yohei Saika. "Probabilistic inference of environmental factors via time series analysis using mean-field theory of ising model". In 2013 13th International Conference on Control, Automation and Systems (ICCAS). IEEE, 2013. http://dx.doi.org/10.1109/iccas.2013.6704168.

Full text
5

Xu, Zhe, e Ufuk Topcu. "Transfer of Temporal Logic Formulas in Reinforcement Learning". In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/557.

Full text
Abstract:
Transferring high-level knowledge from a source task to a target task is an effective way to expedite reinforcement learning (RL). For example, propositional logic and first-order logic have been used as representations of such knowledge. We study the transfer of knowledge between tasks in which the timing of the events matters. We call such tasks temporal tasks. We concretize similarity between temporal tasks through a notion of logical transferability, and develop a transfer learning approach between different yet similar temporal tasks. We first propose an inference technique to extract metric interval temporal logic (MITL) formulas in sequential disjunctive normal form from labeled trajectories collected in RL of the two tasks. If logical transferability is identified through this inference, we construct a timed automaton for each sequential conjunctive subformula of the inferred MITL formulas from both tasks. We perform RL on the extended state which includes the locations and clock valuations of the timed automata for the source task. We then establish mappings between the corresponding components (clocks, locations, etc.) of the timed automata from the two tasks, and transfer the extended Q-functions based on the established mappings. Finally, we perform RL on the extended state for the target task, starting with the transferred extended Q-functions. Our implementation results show, depending on how similar the source task and the target task are, that the sampling efficiency for the target task can be improved by up to one order of magnitude by performing RL in the extended state space, and further improved by up to another order of magnitude using the transferred extended Q-functions.
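Reinforcement learning on the extended state (environment state plus automaton location) described here can be sketched with tabular Q-learning. Clock valuations of the timed automata are omitted, and the callback names and reward-machine conventions below are assumptions, not the authors' construction.

```python
import random
from collections import defaultdict

def q_learning_extended(env_reset, env_step, aut_delta, aut_init, labels, actions,
                        episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning over the extended state (environment state, automaton
    location). env_reset() -> s0; env_step(s, a) -> (s2, base_reward, done);
    labels.get(s2) is the event fed to the automaton; aut_delta[(loc, event)] ->
    (next_loc, bonus_reward). Untimed for simplicity: clock valuations are omitted."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, loc, done = env_reset(), aut_init, False
        while not done:
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, loc, act)])
            s2, r, done = env_step(s, a)
            loc2, bonus = aut_delta.get((loc, labels.get(s2)), (loc, 0.0))
            best_next = 0.0 if done else max(Q[(s2, loc2, act)] for act in actions)
            target = r + bonus + gamma * best_next
            Q[(s, loc, a)] += alpha * (target - Q[(s, loc, a)])
            s, loc = s2, loc2
    return Q

# Toy corridor: positions 0..3; the automaton pays a bonus when position 3 is
# reached after position 1 has been visited.
def env_reset():
    return 0
def env_step(s, a):
    s2 = min(3, s + 1) if a == "right" else max(0, s - 1)
    return s2, -0.01, s2 == 3
labels = {1: "checkpoint", 3: "goal"}
aut_delta = {("q0", "checkpoint"): ("q1", 0.0), ("q1", "goal"): ("q2", 1.0)}
Q = q_learning_extended(env_reset, env_step, aut_delta, "q0", labels, ["left", "right"])
print(max(Q[(0, "q0", a)] for a in ["left", "right"]) > 0)   # True after training
```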
6

Bhoyar, A., S. Sharma, S. Barve e R. Kumar Rana. "Intelligent Control of Autonomous Vessels: Bayesian Estimation Instead of Statistical Learning?" In International Conference on Marine Engineering and Technology Oman. London: IMarEST, 2019. http://dx.doi.org/10.24868/icmet.oman.2019.008.

Full text
Abstract:
Marine vessels have recently been considered for redesign with a view towards autonomous operation. This brings forth a number of safety concerns regarding malware attacks on intra-vehicle communication systems as well as on sensor-based communication with their environment. Designing suitable hybrid systems or cyber-physical systems such as the above, which are data driven, involves the challenge of difficult abstraction. The current modeling paradigm for cyber-physical systems is based upon the abstract idea of a hybrid automaton, which involves discrete as well as continuous mathematical models for the physical device (marine vessel/s). Incorporating statistical inference techniques to introduce an element of autonomy in this has recently been proposed in the literature. An engineering situation is explored in which a pair of marine vessels is deployed to navigate while avoiding collision, with the help of deterministic control as well as with a particle filtering state estimator. A security intrusion is considered to occur in the communication channels, and the robustness of the system is studied with the state estimation. Such intrusions can indeed be expected to defeat the collision protection design if sufficiently intense. However, better protection is offered by such Bayesian estimation based intelligent control as compared to statistical-learning-based control. Our results suggest that the hybrid automaton modeling paradigm with autonomy incorporated needs to be suitably abstracted in order to better design its defence against cyber-attacks.
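The particle filtering state estimator mentioned in the abstract can be illustrated with a generic bootstrap filter on a one-dimensional constant-speed motion model. The dynamics, noise levels and names are assumptions for the sketch; they are not the paper's vessel model.

```python
import math, random

def particle_filter(observations, n_particles=1000,
                    process_std=0.5, obs_std=1.0, speed=1.0, dt=1.0):
    """Bootstrap particle filter for a vessel moving at roughly constant speed,
    observed through noisy position measurements (illustrative 1-D model)."""
    particles = [random.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: push every particle through the motion model
        particles = [p + speed * dt + random.gauss(0.0, process_std) for p in particles]
        # update: weight each particle by the likelihood of the measurement
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        if total == 0.0:                  # degenerate case: keep the prediction
            weights = [1.0 / n_particles] * n_particles
            total = 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample so that particles concentrate on likely states
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulated track: true position t, measured with unit-variance noise.
measurements = [t + random.gauss(0.0, 1.0) for t in range(1, 11)]
print(round(particle_filter(measurements)[-1], 1))   # close to 10
```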
7

Pastore, Fabrizio, Daniela Micucci e Leonardo Mariani. "Timed k-Tail: Automatic Inference of Timed Automata". In 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST). IEEE, 2017. http://dx.doi.org/10.1109/icst.2017.43.

Full text
8

Byrne, Ruth M. J. "Good Explanations in Explainable Artificial Intelligence (XAI): Evidence from Human Explanatory Reasoning". In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/733.

Full text
Abstract:
Insights from cognitive science about how people understand explanations can be instructive for the development of robust, user-centred explanations in eXplainable Artificial Intelligence (XAI). I survey key tendencies that people exhibit when they construct explanations and make inferences from them, of relevance to the provision of automated explanations for decisions by AI systems. I first review experimental discoveries of some tendencies people exhibit when they construct explanations, including evidence on the illusion of explanatory depth, intuitive versus reflective explanations, and explanatory stances. I then consider discoveries of how people reason about causal explanations, including evidence on inference suppression, causal discounting, and explanation simplicity. I argue that central to the XAI endeavor is the requirement that automated explanations provided by an AI system should make sense to human users.
9

Deb, Sankha, e Kalyan Ghosh. "Artificial Intelligence Based Inference Techniques for Automated Process Planning for Machined Parts". In ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2002. http://dx.doi.org/10.1115/detc2002/cie-34507.

Full text
Abstract:
Many areas of research in manufacturing are increasingly turning to applications of Artificial Intelligence (AI). The problem of developing inference strategies for automated process planning in machining is one such area of successful application of AI based approaches. Given the high complexity of the process planning expertise, development of inference techniques for automated process planning is a big challenge to researchers. The traditional inference methods based on variant and generative approaches using decision trees and decision tables suffer from a number of shortcomings, which have prompted researchers to seek alternative approaches and turn to AI for developing intelligent inference techniques. In this article, we have reviewed, categorized and summarized the research on applications of AI for developing inference methods for automated process planning systems. We have described our ongoing research work on developing an intelligent inference strategy based on artificial neural networks for implementing machining process selection for rotationally symmetric parts.
10

Eichhoff, Julian R., Felix Baumann e Dieter Roller. "Two Approaches to the Induction of Graph-Rewriting Rules for Function-Based Design Synthesis". In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59915.

Full text
Abstract:
In this paper we demonstrate and compare two complementary approaches to the automatic generation of production rules from a set of given graphs representing sample designs. The first approach generates a complete rule set from scratch by means of frequent subgraph discovery, whereas the second approach is intended to learn additional rules that fit an existing, yet incomplete, rule set using genetic programming. Both approaches have been developed and tested in the context of an application for automated conceptual engineering design, more specifically functional decomposition. They can be considered feasible, complementary approaches to the automatic inference of graph rewriting rules for conceptual design applications.

Reports by organizations on the topic "Automaton inference":

1

Baader, Franz, Jan Hladik e Rafael Peñaloza. PSpace Automata with Blocking for Description Logics. Aachen University of Technology, 2006. http://dx.doi.org/10.25368/2022.157.

Full text
Abstract:
In Description Logics (DLs), both tableau-based and automata-based algorithms are frequently used to show decidability and complexity results for basic inference problems such as satisfiability of concepts. Whereas tableau-based algorithms usually yield worst-case optimal algorithms in the case of PSpace-complete logics, it is often very hard to design optimal tableau-based algorithms for ExpTime-complete DLs. In contrast, the automata-based approach is usually well-suited to prove ExpTime upper-bounds, but its direct application will usually also yield an ExpTime-algorithm for a PSpace-complete logic since the (tree) automaton constructed for a given concept is usually exponentially large. In the present paper, we formulate conditions under which an on-the-fly construction of such an exponentially large automaton can be used to obtain a PSpace-algorithm. We illustrate the usefulness of this approach by proving a new PSpace upper-bound for satisfiability of concepts w.r.t. acyclic terminologies in the DL SI, which extends the basic DL ALC with transitive and inverse roles.
2

Baader, Franz, e Benjamin Zarrieß. Verification of Golog Programs over Description Logic Actions. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.198.

Full text
Abstract:
High-level action programming languages such as Golog have successfully been used to model the behavior of autonomous agents. In addition to a logic-based action formalism for describing the environment and the effects of basic actions, they enable the construction of complex actions using typical programming language constructs. To ensure that the execution of such complex actions leads to the desired behavior of the agent, one needs to specify the required properties in a formal way, and then verify that these requirements are met by any execution of the program. Due to the expressiveness of the action formalism underlying Golog (situation calculus), the verification problem for Golog programs is in general undecidable. Action formalisms based on Description Logic (DL) try to achieve decidability of inference problems such as the projection problem by restricting the expressiveness of the underlying base logic. However, until now these formalisms have not been used within Golog programs. In the present paper, we introduce a variant of Golog where basic actions are defined using such a DL-based formalism, and show that the verification problem for such programs is decidable. This improves on our previous work on verifying properties of infinite sequences of DL actions in that it considers (finite and infinite) sequences of DL actions that correspond to (terminating and non-terminating) runs of a Golog program rather than just infinite sequences accepted by a Büchi automaton abstracting the program.
3

Baader, Franz, Oliver Fernández Gil e Maximilian Pensel. Standard and Non-Standard Inferences in the Description Logic FL₀ Using Tree Automata. Technische Universität Dresden, 2018. http://dx.doi.org/10.25368/2022.240.

Full text
Abstract:
Although being quite inexpressive, the description logic (DL) FL₀, which provides only conjunction, value restriction and the top concept as concept constructors, has an intractable subsumption problem in the presence of terminologies (TBoxes): subsumption reasoning w.r.t. acyclic FL₀ TBoxes is coNP-complete, and becomes even ExpTime-complete in case general TBoxes are used. In the present paper, we use automata working on infinite trees to solve both standard and non-standard inferences in FL₀ w.r.t. general TBoxes. First, we give an alternative proof of the ExpTime upper bound for subsumption in FL₀ w.r.t. general TBoxes based on the use of looping tree automata. Second, we employ parity tree automata to tackle non-standard inference problems such as computing the least common subsumer and the difference of FL₀ concepts w.r.t. general TBoxes.
4

Brown, Frank M. Automatic Inference in Quantified Computational Logic. Fort Belvoir, VA: Defense Technical Information Center, ottobre 1988. http://dx.doi.org/10.21236/ada200909.

Full text
5

Videa, Aldo, e Yiyi Wang. Inference of Transit Passenger Counts and Waiting Time Using Wi-Fi Signals. Western Transportation Institute, agosto 2021. http://dx.doi.org/10.15788/1715288737.

Full text
Abstract:
Passenger data such as real-time origin-destination (OD) flows and waiting times are central to planning public transportation services and improving visitor experience. This project explored the use of Internet of Things (IoT) Technology to infer transit ridership and waiting time at bus stops. Specifically, this study explored the use of Raspberry Pi computers, which are small and inexpensive sets of hardware, to scan the Wi-Fi networks of passengers’ smartphones. The process was used to infer passenger counts and obtain information on passenger trajectories based on Global Positioning System (GPS) data. The research was conducted as a case study of the Streamline Bus System in Bozeman, Montana. To evaluate the reliability of the data collected with the Raspberry Pi computers, the study conducted technology-based estimation of ridership, OD flows, wait time, and travel time for a comparison with ground truth data (passenger surveys, manual data counts, and bus travel times). This study introduced the use of a wireless Wi-Fi scanning device for transit data collection, called a Smart Station. It combines an innovative set of hardware and software to create a non-intrusive and passive data collection mechanism. Through the field testing and comparison evaluation with ground truth data, the Smart Station produced accurate estimates of ridership, origin-destination characteristics, wait times, and travel times. Ridership data has traditionally been collected through a combination of manual surveys and Automatic Passenger Counter (APC) systems, which can be time-consuming and expensive, with limited capabilities to produce real-time data. The Smart Station shows promise as an accurate and cost-effective alternative. The advantages of using Smart Station over traditional data collection methods include the following: (1) Wireless, automated data collection and retrieval, (2) Real-time observation of passenger behavior, (3) Negligible maintenance after programming and installing the hardware, (4) Low costs of hardware, software, and installation, and (5) Simple and short programming and installation time. If further validated through additional research and development, the device could help transit systems facilitate data collection for route optimization, trip planning tools, and traveler information systems.
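The probe-request scanning that the Smart Station performs can be approximated with off-the-shelf tools; the sketch below uses scapy to count recently seen (hashed) device addresses. It assumes a wireless adapter in monitor mode, root privileges and a hypothetical interface name, and it is not the project's Smart Station software; MAC randomization on modern phones also limits raw counts.

```python
# pip install scapy; needs root privileges and a Wi-Fi adapter in monitor mode
import hashlib
import time
from scapy.all import sniff, Dot11, Dot11ProbeReq

seen = {}   # hashed device address -> time last seen

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt[Dot11].addr2:
        # store only a truncated hash, never the raw MAC address
        digest = hashlib.sha256(pkt[Dot11].addr2.encode()).hexdigest()[:16]
        seen[digest] = time.time()

def devices_in_window(window_s=120):
    """Rough count of devices whose probe requests were heard in the last window."""
    now = time.time()
    return sum(1 for t in seen.values() if now - t <= window_s)

if __name__ == "__main__":
    # "wlan0mon" is an assumed interface name; adjust for the actual adapter
    sniff(iface="wlan0mon", prn=handle, store=False, timeout=60)
    print("devices heard in the last 2 minutes:", devices_in_window())
```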
6

de Kemp, E. A., H. A. J. Russell, B. Brodaric, D. B. Snyder, M. J. Hillier, M. St-Onge, C. Harrison et al. Initiating transformative geoscience practice at the Geological Survey of Canada: Canada in 3D. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331097.

Full text
Abstract:
Application of 3D technologies to the wide range of Geosciences knowledge domains is well underway. These have been operationalized in workflows of the hydrocarbon sector for a half-century, and now in mining for over two decades. In Geosciences, algorithms, structured workflows and data integration strategies can support compelling Earth models; however, challenges remain to meet the standards of geological plausibility required for most geoscientific studies. There are also missing links in the institutional information infrastructure supporting operational multi-scale 3D data and model development. Canada in 3D (C3D) is a vision and road map for transforming the Geological Survey of Canada's (GSC) work practice by leveraging emerging 3D technologies. Primarily, this means the transformation from 2D geological mapping to a well-structured 3D modelling practice that is both data-driven and knowledge-driven. It is tempting to imagine that advanced 3D computational methods, coupled with Artificial Intelligence and Big Data tools, will automate the bulk of this process. To effectively apply these methods there is a need, however, for data to be in a well-organized, classified, georeferenced (3D) format embedded with key information, such as spatial-temporal relations, and earth process knowledge. Another key challenge for C3D is the relative infancy of 3D geoscience technologies for geological inference and 3D modelling using sparse and heterogeneous regional geoscience information, while preserving the insights and expertise of geoscientists and maintaining the scientific integrity of digital products. In most geological surveys, there remain considerable educational and operational challenges to achieve this balance of digital automation and expert knowledge. Emerging from the last two decades of research are more efficient workflows, transitioning from cumbersome, explicit (manual) to reproducible implicit semi-automated methods. They are characterized by integrated and iterative, forward and reverse geophysical modelling, coupled with stratigraphic and structural approaches. The full impact of research and development with these 3D tools, geophysical-geological integration and simulation approaches is perhaps unpredictable, but the expectation is that they will produce predictive, instructive models of Canada's geology that will be used to educate, prioritize and influence sustainable policy for stewarding our natural resources. On the horizon are 3D geological modelling methods spanning the gulf between local and frontier or green-fields, as well as deep crustal characterization. These are key components of mineral systems understanding, integrated and coupled hydrological modelling and energy transition applications, e.g. carbon sequestration, in-situ hydrogen mining, and geothermal exploration. Presented are some case study examples at a range of scales from our efforts in C3D.
7

de Kemp, E. A., H. A. J. Russell, B. Brodaric, D. B. Snyder, M. J. Hillier, M. St-Onge, C. Harrison et al. Initiating transformative geoscience practice at the Geological Survey of Canada: Canada in 3D. Natural Resources Canada/CMSS/Information Management, 2023. http://dx.doi.org/10.4095/331871.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Application of 3D technologies to the wide range of Geosciences knowledge domains is well underway. These have been operationalized in workflows of the hydrocarbon sector for half a century, and in mining for over two decades. In the Geosciences, algorithms, structured workflows and data integration strategies can support compelling Earth models; however, challenges remain in meeting the standards of geological plausibility required for most geoscientific studies. There are also missing links in the institutional information infrastructure supporting operational multi-scale 3D data and model development. Canada in 3D (C3D) is a vision and road map for transforming the Geological Survey of Canada's (GSC) work practice by leveraging emerging 3D technologies, primarily the transformation from 2D geological mapping to a well-structured 3D modelling practice that is both data-driven and knowledge-driven. It is tempting to imagine that advanced 3D computational methods, coupled with Artificial Intelligence and Big Data tools, will automate the bulk of this process. To apply these methods effectively, however, data need to be in a well-organized, classified, georeferenced (3D) format embedded with key information such as spatial-temporal relations and earth-process knowledge. Another key challenge for C3D is the relative infancy of 3D geoscience technologies for geological inference and 3D modelling using sparse and heterogeneous regional geoscience information, while preserving the insights and expertise of geoscientists and maintaining the scientific integrity of digital products. In most geological surveys, there remain considerable educational and operational challenges to achieving this balance of digital automation and expert knowledge. Emerging from the last two decades of research are more efficient workflows, transitioning from cumbersome, explicit (manual) methods to reproducible, implicit, semi-automated ones. They are characterized by integrated and iterative, forward and reverse geophysical modelling, coupled with stratigraphic and structural approaches. The full impact of research and development with these 3D tools and geophysical-geological integration and simulation approaches is perhaps unpredictable, but the expectation is that they will produce predictive, instructive models of Canada's geology that will be used to educate, prioritize and influence sustainable policy for stewarding our natural resources. On the horizon are 3D geological modelling methods spanning the gulf between local and frontier or green-fields settings, as well as deep crustal characterization. These are key components of mineral-systems understanding, integrated and coupled hydrological modelling, and energy-transition applications, e.g. carbon sequestration, in-situ hydrogen mining, and geothermal exploration. Presented are some case study examples at a range of scales from our efforts in C3D.
8

Burstein, Jill, Geoffrey LaFlair, Antony Kunnan e Alina von Davier. A Theoretical Assessment Ecosystem for a Digital-First Assessment - The Duolingo English Test. Duolingo, marzo 2022. http://dx.doi.org/10.46999/kiqf4328.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
The Duolingo English Test is a groundbreaking, digital-first, computer-adaptive measure of English language proficiency for communication and use in English-medium settings. The test measures four key English language proficiency constructs: Speaking, Writing, Reading, and Listening (SWRL), and is aligned with the Common European Framework of Reference for Languages (CEFR) proficiency levels and descriptors. As a digital-first assessment, the test uses “human-in-the-loop AI” from end to end for test security, automated item generation, and scoring of test-taker responses. This paper presents a novel theoretical assessment ecosystem for the Duolingo English Test. It is a theoretical representation of language assessment design, measurement, and test security processes, as well as the test-taker experience factors that contribute to the test validity argument and test impact. The test validity argument is constructed with a digitally informed chain of inferences that addresses the digital affordances applied to the test. The ecosystem is composed of an integrated set of complex frameworks: (1) the Language Assessment Design Framework, (2) the Expanded Evidence-Centered Design Framework, (3) the Computational Psychometrics Framework, and (4) the Test Security Framework. Test-taker experience (TTX) is a priority throughout the test-taking pipeline, reflected in features such as low cost, anytime/anywhere availability, and shorter testing time. The test’s expected impact is aligned with Duolingo’s social mission to lower barriers to education access and to offer a secure and delightful test experience, while providing a valid, fair, and reliable test score. The ecosystem leverages principles from assessment theory, computational psychometrics, design, data science, language assessment theory, NLP/AI, and test security.
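For readers unfamiliar with the computer-adaptive machinery mentioned above, the Python sketch below shows a generic adaptive-testing step under a Rasch (1PL) model: re-estimate ability after each response and select the next item whose difficulty carries the most information. It is a simplified illustration, not the Duolingo English Test's actual psychometric pipeline; the item bank and responses are invented.

# Minimal sketch of a generic computer-adaptive testing step (assumptions noted above).
import math

def prob_correct(theta, b):
    # Rasch (1PL) probability of a correct response to an item of difficulty b.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def update_theta(theta, answered, n_steps=10):
    # Newton-Raphson on the Rasch log-likelihood given (difficulty, correct) pairs.
    for _ in range(n_steps):
        grad = sum((1.0 if correct else 0.0) - prob_correct(theta, b)
                   for b, correct in answered)
        hess = -sum(prob_correct(theta, b) * (1.0 - prob_correct(theta, b))
                    for b, correct in answered)
        if abs(hess) < 1e-9:
            break
        theta -= grad / hess
    return theta

def next_item(theta, bank, used_difficulties):
    # Fisher information p(1-p) peaks when item difficulty is closest to theta,
    # so pick the unused item whose difficulty is nearest the current estimate.
    remaining = [b for b in bank if b not in used_difficulties]
    return min(remaining, key=lambda b: abs(b - theta))

bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]          # hypothetical item difficulties
answered = [(-1.0, True), (0.0, True), (1.0, False)]    # (difficulty, was_correct)
theta = update_theta(0.0, answered)
print("ability estimate:", round(theta, 2),
      "| next item difficulty:", next_item(theta, bank, [d for d, _ in answered]))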
9

Paule, Bernard, Flourentzos Flourentzou, Tristan de KERCHOVE d’EXAERDE, Julien BOUTILLIER e Nicolo Ferrari. PRELUDE Roadmap for Building Renovation: set of rules for renovation actions to optimize building energy performance. Department of the Built Environment, 2023. http://dx.doi.org/10.54337/aau541614638.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
In the context of climate change and the environmental and energy constraints we face, it is essential to develop methods to encourage the implementation of efficient solutions for building renovation. One of the objectives of the European PRELUDE project [1] is to develop a “Building Renovation Roadmap” (BRR) aimed at facilitating decision-making to foster the most efficient refurbishment actions, the implementation of innovative solutions and the promotion of renewable energy sources in the renovation process of existing buildings. In this context, Estia is working on the development of inference rules that will make this possible. On the basis of a diagnosis such as the Energy Performance Certificate, the system will help establish a list of priority actions. The approach driving this project reduces the subjectivity of a human decision-making scheme. While simulation generates digital technical data, interpretation requires the translation of these data into natural language. The purpose is to automate the translation of the results to provide advice and facilitate decision-making. In medicine, the diagnostic phase is a process by which a disease is identified from its symptoms. Similarly, the idea of the process is to target the faulty elements potentially responsible for poor performance and to propose remedial solutions. The system is based on the development of fuzzy-logic rules [2], [3]. This choice was made in order to manipulate notions of membership with truth levels between 0 and 1, and to deliver messages in a linguistic form understandable by non-specialist users. For example, if performance is low and parameter x is unfavourable, the algorithm can give an incentive to improve that parameter, such as: “you COULD, SHOULD or MUST change parameter x”. Regarding energy performance analysis, the following domains are addressed: heating, domestic hot water, cooling, and lighting. Regarding the parameters, the analysis covers the characteristics of the building envelope and of the technical installations (heat production and distribution, ventilation system, electric lighting, etc.). This paper describes the methodology used, lists the fields studied and outlines the expected outcomes of the project.
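The quoted rule style can be illustrated with a few lines of Python: a membership function turns a numeric parameter into a truth degree between 0 and 1, a minimum t-norm combines it with the "performance is low" condition, and the resulting degree is mapped to the COULD/SHOULD/MUST wording. This is a sketch of the general fuzzy-logic pattern, not the PRELUDE/Estia implementation; the U-value anchors and the thresholds are assumptions.

# Minimal sketch of a fuzzy rule of the kind described above (assumptions noted above).
def unfavourable(u_value, good=0.8, bad=2.0):
    # Degree (0 to 1) to which a wall U-value in W/m2K is "unfavourable":
    # a linear ramp between an assumed good anchor and an assumed bad anchor.
    if u_value <= good:
        return 0.0
    if u_value >= bad:
        return 1.0
    return (u_value - good) / (bad - good)

def fuzzy_and(a, b):
    # Minimum t-norm for "performance is low AND parameter x is unfavourable".
    return min(a, b)

def advice(parameter, truth_degree):
    # Map the truth degree to the linguistic scale quoted in the abstract.
    if truth_degree >= 0.8:
        return f"You MUST change {parameter}."
    if truth_degree >= 0.5:
        return f"You SHOULD change {parameter}."
    if truth_degree >= 0.2:
        return f"You COULD change {parameter}."
    return f"No action needed on {parameter}."

low_performance = 0.9   # assumed membership of "heating performance is low"
degree = fuzzy_and(low_performance, unfavourable(1.6))
print(round(degree, 2), advice("the wall insulation (U-value)", degree))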
10

Deep learning for individual heterogeneity: an automatic inference framework. Cemmap, luglio 2021. http://dx.doi.org/10.47004/wp.cem.2021.2921.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
