Dissertations / Theses on the topic "Automaton inference"

Follow this link to see other types of publications on the topic: Automaton inference.

Consult the top 50 dissertations (bachelor's, master's, or doctoral theses) for your research on the topic "Automaton inference".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf and read its abstract online, whenever one is available in the metadata.

Browse theses from many scientific fields and compile a correct bibliography.

1

Ansin, Rasmus, and Didrik Lundberg. "Automated Inference of Excitable Cell Models as Hybrid Automata". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154065.

Full text
Abstract:
In this paper, we explore from an experimental point of view the possibilities and limitations of the new HYCGE learning algorithm for hybrid automata. As an example of a practical application, we study the algorithm's performance on learning the behaviour of the action potential in excitable cells, specifically the Hodgkin-Huxley model of a squid giant axon, the Luo-Rudy model of a guinea pig ventricular cell, and the Entcheva model of a neonatal rat ventricular cell. The validity and accuracy of the algorithm are also visualized through graphical means.
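
The HYCGE algorithm itself is not reproduced in the abstract, but the model class it learns is easy to sketch. Below is a minimal, invented hybrid automaton with two modes, linear dynamics per mode and threshold guards, simulated with Euler steps; a toy stand-in for the excitable-cell models named above, not the thesis's algorithm or models.

```python
# Toy hybrid automaton: two modes with linear dynamics and threshold guards.
# Illustrative only: a crude stand-in for an action potential, not the
# Hodgkin-Huxley model and not the HYCGE learning algorithm.

MODES = {
    "depolarise": {"rate": 120.0, "guard": lambda v: v >= 40.0,  "next": "repolarise"},
    "repolarise": {"rate": -60.0, "guard": lambda v: v <= -70.0, "next": "depolarise"},
}

def simulate(v0=-70.0, mode="depolarise", dt=1e-3, steps=3000):
    """Euler-integrate dv/dt = rate(mode); switch mode when the guard fires."""
    v, trace = v0, []
    for _ in range(steps):
        v += MODES[mode]["rate"] * dt
        if MODES[mode]["guard"](v):
            mode = MODES[mode]["next"]
        trace.append((mode, round(v, 3)))
    return trace

print(simulate()[-3:])   # last few (mode, voltage) samples of the trace
```
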
2

Rasoamanana, Aina Toky. "Derivation and Analysis of Cryptographic Protocol Implementation". Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAS005.

Full text
Abstract:
TLS and SSH are two well-known and thoroughly studied security protocols. In this thesis, we focus on a specific class of vulnerabilities affecting implementations of both protocols: state machine errors. These vulnerabilities are caused by differences in interpreting the standard and correspond to deviations from the specifications, e.g. accepting invalid messages, or accepting valid messages out of sequence. We develop a generalized and systematic methodology to infer the state machines of protocol implementations such as the major TLS and SSH stacks from stimuli and observations, and to study their evolution across revisions. We use the L* algorithm to compute state machines corresponding to different execution scenarios. We reproduce several known vulnerabilities (denial of service, authentication bypasses), and uncover new ones. We also show that state machine inference is efficient and practical enough in many cases for integration within a continuous integration pipeline, to help find new vulnerabilities or deviations introduced during development. With our systematic black-box approach, we study over 600 different versions of server and client implementations in various scenarios (protocol versions, options). Using the resulting state machines, we propose a robust algorithm to fingerprint TLS and SSH stacks. To the best of our knowledge, this is the first application of this approach on such a broad perimeter, in terms of the number of TLS and SSH stacks, revisions, or execution scenarios studied.
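
To give a concrete picture of the kind of learning the abstract describes, here is a minimal L*-style active learner in the Maler-Pnueli variant, where every suffix of a counterexample becomes a new experiment. The "system under test" is a toy predicate and equivalence queries are approximated by random testing, unlike the exact setting used against real TLS/SSH stacks; an illustrative sketch, not the thesis's tooling.

```python
# Minimal L*-style active automaton learner (Maler-Pnueli variant: every
# suffix of a counterexample becomes a new experiment). Illustrative sketch:
# the "system under test" is a toy predicate, and equivalence queries are
# approximated by random testing.
import random

ALPHABET = "ab"

def member(w):                        # black-box membership oracle (toy target)
    return w.count("a") % 2 == 0      # accepts words with an even number of 'a's

def row(prefix, E):
    return tuple(member(prefix + e) for e in E)

def close(S, E):
    """Extend S until each one-letter extension's row already occurs in S."""
    while True:
        rows = {row(s, E) for s in S}
        extra = [s + a for s in S for a in ALPHABET if row(s + a, E) not in rows]
        if not extra:
            return S
        S.append(extra[0])

def hypothesis(S, E):
    rep = {}
    for s in S:
        rep.setdefault(row(s, E), s)  # one representative prefix per state
    def accepts(w):
        cur = ""
        for a in w:
            cur = rep[row(cur + a, E)]
        return member(cur)            # "" is in E, so this matches the row
    return accepts

def counterexample(accepts, tries=2000, max_len=8):
    for _ in range(tries):
        w = "".join(random.choice(ALPHABET)
                    for _ in range(random.randint(0, max_len)))
        if accepts(w) != member(w):
            return w
    return None

S, E = [""], [""]
for _ in range(50):                   # safety bound for this sketch
    S = close(S, E)
    cex = counterexample(hypothesis(S, E))
    if cex is None:
        break
    for i in range(len(cex)):         # add all suffixes of the counterexample
        if cex[i:] not in E:
            E.append(cex[i:])

print("learned", len({row(s, E) for s in S}), "states")   # -> 2 for this toy
```
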
3

Gransden, Thomas Glenn. "Automating proofs with state machine inference". Thesis, University of Leicester, 2017. http://hdl.handle.net/2381/40814.

Full text
Abstract:
Interactive theorem provers are tools that help to produce formal proofs in a semi-automatic fashion. Originally designed to verify mathematical statements, they can be potentially useful in an industrial context. Despite being endorsed by leading mathematicians and computer scientists, these tools are not widely used. This is mainly because constructing proofs requires a large amount of human effort and knowledge. Frustratingly, there is limited proof automation available in many theorem proving systems. To address this limitation, a new technique called SEPIA (Search for Proofs Using Inferred Automata) is introduced. There are typically large libraries of completed proofs available. However, identifying useful information from these can be difficult and time-consuming. SEPIA uses state-machine inference techniques to produce descriptive models from corpora of Coq proofs. The resulting models can then be used to automatically generate proofs. Subsequently, SEPIA is also combined with other approaches to form an intelligent suite of methods (called Coq-PR3) to help automatically generate proofs. All of the techniques presented are available as extensions for the ProofGeneral interface. In the experimental work, the new techniques are evaluated on two large Coq datasets and shown to prove more theorems automatically than existing proof automation. Additionally, various aspects of the discovered proofs are explored, including a comparison between the automatically generated proofs and manually created ones. Overall, the techniques are demonstrated to be a potentially useful addition to the proof development process because of their ability to automate proofs in Coq.
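
The general idea of deriving an automaton from proof traces can be sketched with a prefix tree acceptor plus a crude k-tails-style merge, in the spirit of the state-machine inference SEPIA applies. The tactic sequences below are invented, and this toy is not SEPIA's actual algorithm.

```python
# Toy inference of an automaton from traces (here, sequences of Coq tactics):
# build a prefix tree acceptor, then merge states with a crude k-tails(1) rule.
from collections import defaultdict

traces = [
    ["intros", "induction", "simpl", "auto"],
    ["intros", "induction", "auto"],
    ["intros", "simpl", "auto"],
]

# 1. Prefix tree acceptor: one state per distinct prefix of a trace.
delta = defaultdict(dict)                  # state -> {tactic: next state}
for trace in traces:
    state = ()
    for tac in trace:
        nxt = state + (tac,)
        delta[state][tac] = nxt
        state = nxt

# 2. Crude k-tails with k = 1: merge states with the same outgoing label set.
all_states = set(delta) | {d for m in delta.values() for d in m.values()}
tail = lambda s: frozenset(delta[s]) if s in delta else frozenset()
classes = defaultdict(list)
for s in all_states:
    classes[tail(s)].append(s)
rep = {s: min(cls) for cls in classes.values() for s in cls}

merged = defaultdict(dict)
for s, moves in delta.items():
    for tac, d in moves.items():
        merged[rep[s]][tac] = rep[d]       # conflicting merges collapse silently

final_states = set(merged) | {d for m in merged.values() for d in m.values()}
print(len(all_states), "PTA states ->", len(final_states), "after merging")
```
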
4

Paige, Timothy Brooks. "Automatic inference for higher-order probabilistic programs". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:d912c4de-4b08-4729-aa19-766413735e2a.

Full text
Abstract:
Probabilistic models used in quantitative sciences have historically co-evolved with methods for performing inference: specific modeling assumptions are made not because they are appropriate to the application domain, but because they are required to leverage existing software packages or inference methods. The intertwined nature of modeling and computational concerns leaves much of the promise of probabilistic modeling out of reach for data scientists, forcing practitioners to turn to off-the-shelf solutions. The emerging field of probabilistic programming aims to reduce the technical and cognitive overhead for writing and designing novel probabilistic models, by introducing a specialized programming language as an abstraction barrier between modeling and inference. The aim of this thesis is to develop inference algorithms that scale well and are applicable to broad model families. We focus particularly on methods that can be applied to models written in general-purpose higher-order probabilistic programming languages, where programs may make use of recursion, arbitrary deterministic simulation, and higher-order functions to create more accurate models of an application domain. In a probabilistic programming system, probabilistic models are defined using a modeling language; a backend implements generic inference methods applicable to any model written in this language. Probabilistic programs - models - can be written without concern for how inference will later be performed. We begin by considering several existing probabilistic programming languages, their design choices, and tradeoffs. We then demonstrate how programs written in higher-order languages can be used to define coherent probability models, describing possible approaches to inference, and providing explicit algorithms for efficient implementations of both classic and novel inference methods based on and extending sequential Monte Carlo. This is followed by an investigation into the use of variational inference methods within higher-order probabilistic programming languages, with application to policy learning, adaptive importance sampling, and amortization of inference.
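
As a small, concrete instance of the inference such systems automate, here is likelihood weighting, the basic importance-sampling scheme that sequential Monte Carlo generalises, applied to a two-line generative model. The model and numbers are invented for illustration.

```python
# Likelihood weighting on a tiny invented model:
#   x ~ Normal(0, 1);   y ~ Normal(x, 0.5)   with y observed at 1.2
import math, random

def normal_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_mean_x(y_obs=1.2, n=100_000):
    total_w = total_wx = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)        # sample the latent from its prior
        w = normal_pdf(y_obs, x, 0.5)     # weight by the likelihood of y_obs
        total_w += w
        total_wx += w * x
    return total_wx / total_w

print(posterior_mean_x())   # analytic posterior mean: 1.2 * 1 / (1 + 0.25) = 0.96
```
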
5

MERINO, JORGE SALVADOR PAREDES. "AUTOMATIC SYNTHESIS OF FUZZY INFERENCE SYSTEMS FOR CLASSIFICATION". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=27007@1.

Full text
Abstract:
Nowadays, much of the accumulated knowledge is stored as data. In many classification problems, the relationship between a set of variables (attributes) and a target variable of interest must be learned. Among the tools capable of modeling real systems, Fuzzy Inference Systems are considered excellent with respect to representing knowledge in a comprehensible way, as they are based on linguistic rules. This is relevant in applications where a black-box model does not suffice: such a model may attain good accuracy, but does not explain how results are obtained. This dissertation presents the development of a Fuzzy Inference System in an automatic manner, where the rule base should favour linguistic interpretability and at the same time provide good accuracy. To this end, this work proposes the AutoFIS-Class model, an automatic method for generating Fuzzy Inference Systems for classification problems. Its main features are: (i) generation of premises that ensure minimum quality criteria; (ii) association of each rule premise with the most compatible consequent term; and (iii) aggregation of the rules of each class through operators that weigh the relevance of each rule. The proposed model was evaluated on 45 benchmark datasets and its results were compared to models from the literature based on Evolutionary Algorithms. The results show that the generated Fuzzy Inference System is competitive, presenting good accuracy with a low number of rules.
6

Rainforth, Thomas William Gamlen. "Automating inference, learning, and design using probabilistic programming". Thesis, University of Oxford, 2017. http://ora.ox.ac.uk/objects/uuid:e276f3b4-ff1d-44bf-9d67-013f68ce81f0.

Full text
Abstract:
Imagine a world where computational simulations can be inverted as easily as running them forwards, where data can be used to refine models automatically, and where the only expertise one needs to carry out powerful statistical analysis is a basic proficiency in scientific coding. Creating such a world is the ambitious long-term aim of probabilistic programming. The bottleneck for improving the probabilistic models, or simulators, used throughout the quantitative sciences, is often not an ability to devise better models conceptually, but a lack of expertise, time, or resources to realize such innovations. Probabilistic programming systems (PPSs) help alleviate this bottleneck by providing an expressive and accessible modeling framework, then automating the required computation to draw inferences from the model, for example finding the model parameters likely to give rise to a certain output. By decoupling model specification and inference, PPSs streamline the process of developing and drawing inferences from new models, while opening up powerful statistical methods to non-experts. Many systems further provide the flexibility to write new and exciting models which would be hard, or even impossible, to convey using conventional statistical frameworks. The central goal of this thesis is to improve and extend PPSs. In particular, we will make advancements to the underlying inference engines and increase the range of problems which can be tackled. For example, we will extend PPSs to a mixed inference-optimization framework, thereby providing automation of tasks such as model learning and engineering design. Meanwhile, we make inroads into constructing systems for automating adaptive sequential design problems, providing potential applications across the sciences. Furthermore, the contributions of the work reach far beyond probabilistic programming, as achieving our goal will require us to make advancements in a number of related fields such as particle Markov chain Monte Carlo methods, Bayesian optimization, and Monte Carlo fundamentals.
7

Dixon, Heidi. "Automating pseudo-Boolean inference within a DPLL framework /". view abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153782.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2004.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 140-146). Also available for download via the World Wide Web; free to University of Oregon users.
8

MacNish, Craig Gordon. "Nonmonotonic inference systems for modelling dynamic processes". Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240195.

Full text
9

Lin, Ye. "Internet data extraction based on automatic regular expression inference". [Ames, Iowa : Iowa State University], 2007.

Search full text
10

El Kaliouby, Rana Ayman. "Mind-reading machines : automated inference of complex mental states". Thesis, University of Cambridge, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615030.

Full text
11

Mugambi, Ernest Muthomi. "Automated inference of comprehensible models for medical data mining". Thesis, University of Sunderland, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425238.

Full text
12

Serrano, Lucas. "Automatic inference of system software transformation rules from examples". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS425.

Full text
Abstract:
The Linux kernel is present today in all kinds of computing environments, from smartphones to supercomputers, including both the latest hardware and "ancient" systems. This multiplicity of environments has come at the expense of a large code size, of approximately ten million lines of code, dedicated to device drivers. However, to add new functionalities, or for performance or security reasons, some internal Application Programming Interfaces (APIs) may be redesigned, triggering the need for changes to the potentially thousands of drivers that use them. This thesis proposes a novel approach, Spinfer, that can automatically perform these API usage updates. This new approach, based on pattern assembly constrained by control-flow relationships, can learn transformation rules from even imperfect examples. The learned rules are suitable for the challenges found in Linux kernel API usage updates.
13

Lipovetzky, Nir. "Structure and inference in classical planning". Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/101416.

Full text
Abstract:
Classical planning is the problem of finding a sequence of actions for achieving a goal from an initial state assuming that actions have deterministic effects. The most effective approach for finding such plans is based on heuristic search guided by heuristics extracted automatically from the problem representation. In this thesis, we introduce alternative approaches for performing inference over the structure of planning problems that do not appeal to heuristic functions, nor to reductions to other formalisms such as SAT or CSP. We show that many of the standard benchmark domains can be solved with almost no search or a polynomially bounded amount of search, once the structure of planning problems is taken into account. In certain cases we can characterize this structure in terms of a novel width parameter for classical planning.
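
The structural inference the abstract alludes to can be illustrated with IW(1), the simplest width-based search procedure: breadth-first search that prunes every state failing to make some atom true for the first time. The corridor problem below is invented; a real planner works on a full STRIPS encoding.

```python
# Minimal IW(1): breadth-first search that discards any state whose atoms have
# all been seen before (novelty > 1). Toy problem: walk right along a corridor
# of cells 0..4, goal is cell 4.
from collections import deque

def atoms(state):                     # toy encoding: a single "at=i" atom
    return {("at", state)}

def successors(state):                # toy actions: move left/right in 0..4
    return [s for s in (state - 1, state + 1) if 0 <= s <= 4]

def iw1(start=0, goal=4):
    seen = set(atoms(start))
    queue = deque([(start, [])])
    while queue:
        state, plan = queue.popleft()
        if state == goal:
            return plan
        for nxt in successors(state):
            novel = atoms(nxt) - seen
            if novel:                 # keep only states with a first-seen atom
                seen |= novel
                queue.append((nxt, plan + [nxt]))
    return None                       # not solvable at width 1

print(iw1())   # -> [1, 2, 3, 4]
```
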
14

Voss, Chelsea (Chelsea S. ). "A tool for automated inference in rule-based biological models". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106447.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 45-46).
Rule-based biological models help researchers investigate systems such as cellular signalling pathways. Although these models are generally programmed by hand, some research efforts aim to program them automatically using biological facts extracted from papers via natural language processing. However, NLP facts cannot always be directly converted into mechanistic reaction rules for a rule-based model. Thus, there is a need for tools that can convert biological facts into mechanistic rules in a logically sound way. We construct such a tool specifically for Kappa, a model programming language, by implementing Iota, a logic language for Kappa models. Our tool can translate biological facts into Iota predicates, check predicates for satisfiability, and find models that satisfy predicates. We test our system against realistic use cases, and show that it can construct rule-based mechanistic models that are sound with respect to the semantics of the biological facts from which they were constructed.
by Chelsea Voss.
M. Eng.
15

Raghavendra, Archana. "(Semi) automatic wrapper generation for production systems by knowledge inference". [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000345.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains viii, 73 p.; also contains graphics. Includes vita. Includes bibliographical references.
16

Bhuiyan, Touhid. "Trust-based automated recommendation making". Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49168/1/Touhid_Bhuiyan_Thesis.pdf.

Full text
Abstract:
Recommender systems are one of the recent inventions to deal with the ever-growing information overload in relation to the selection of goods and services in a global economy. Collaborative Filtering (CF) is one of the most popular techniques in recommender systems. CF recommends items to a target user based on the preferences of a set of similar users, known as the neighbours, generated from a database made up of the preferences of past users. With sufficient background information of item ratings its performance is promising, but research shows that it performs very poorly in a cold-start situation, where there is not enough previous rating data. As an alternative to ratings, trust between the users could be used to choose the neighbours for recommendation making. Better recommendations can be achieved using an inferred trust network which mimics real-world "friend of a friend" recommendations. To extend the boundaries of the neighbourhood, an effective trust inference technique is required. This thesis proposes a trust inference technique called Directed Series Parallel Graph (DSPG), which performs better than other popular trust inference algorithms such as TidalTrust and MoleTrust. Another problem is that reliable explicit trust data is not always available. In real life, people trust "word of mouth" recommendations made by people with similar interests. This is often assumed in recommender systems. By conducting a survey, we confirm that interest similarity has a positive relationship with trust, and this can be used to generate a trust network for recommendation. In this research, we also propose a new method called SimTrust for developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify the interest similarity, we use users' personalised tagging information. However, we are interested in what resources the user chooses to tag, rather than the text of the tag applied. The commonalities of the resources being tagged by the users can be used to form the neighbourhoods used in the automated recommender system. Our experimental results show that our proposed tag-similarity-based method outperforms the traditional collaborative filtering approach, which usually uses rating data.
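
The SimTrust idea of measuring interest similarity by which resources users tag, rather than by the tag text, can be sketched with a plain Jaccard overlap. The users, resources and threshold below are invented.

```python
# Interest similarity from tagging data, in the spirit of SimTrust: users are
# compared by the overlap of the resources they tagged, not by the tag words.
tagged = {
    "alice": {"url1", "url2", "url3"},
    "bob":   {"url2", "url3", "url4"},
    "carol": {"url7"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def neighbours(user, threshold=0.3):
    return [v for v in tagged if v != user
            and jaccard(tagged[user], tagged[v]) >= threshold]

print(neighbours("alice"))   # -> ['bob'] (overlap 2/4 = 0.5; carol scores 0.0)
```
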
17

Rybalka, A. I., A. S. Kutsenko, and S. V. Kovalenko. "Modelling of an automated food quality assessment system based on fuzzy inference". Thesis, Харківський національний університет радіоелектроніки, 2020. http://openarchive.nure.ua/handle/document/14769.

Full text
Abstract:
The purpose of this study is to create a methodology for developing an automated system for assessing the quality of food products based on a comprehensive quality indicator and the use of fuzzy logic theory, namely, fuzzy inference. In our opinion, such an approach to quality assessment can reduce the subjective component that has a significant impact on making a final decision. The system, built on a given algorithm, allows us to assess the quality of food products, taking into account the data of laboratory studies on measurable quality indicators and expert opinions on difficult to measure indicators.
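
As a rough sketch of the kind of fuzzy inference involved, here is a one-input Mamdani-style system with centroid defuzzification. The membership functions and the two rules are invented; the thesis's system aggregates many measurable and expert-assessed quality indicators.

```python
# One-input Mamdani-style fuzzy inference with centroid defuzzification.
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer_quality(freshness, steps=101):
    low = tri(freshness, -1, 0, 5)        # input set "freshness is low"
    high = tri(freshness, 5, 10, 11)      # input set "freshness is high"
    # Rule 1: IF freshness low  THEN quality poor
    # Rule 2: IF freshness high THEN quality good
    num = den = 0.0
    for i in range(steps):                # centroid of the clipped output sets
        q = 10.0 * i / (steps - 1)
        mu = max(min(low, tri(q, -1, 0, 5)),    # clipped "poor"
                 min(high, tri(q, 5, 10, 11)))  # clipped "good"
        num += mu * q
        den += mu
    return num / den if den else None

print(infer_quality(8.0))   # roughly 8: the output leans strongly toward "good"
```
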
18

Marques, Henrique Costa. "An inference model with probabilistic ontologies to support automation in effects-based operations planning". Instituto Tecnológico de Aeronáutica, 2012. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2190.

Full text
Abstract:
In modern-day operations, planning has become an increasingly complex activity. This is especially true in scenarios where there is interaction between civilian and military organizations, involving multiple actors in diverse ways, with intertwining requirements that limit the solution space in non-trivial ways. Under these circumstances, decision support systems are an essential tool that can also become a problem if not properly used. Although this has been widely recognized by the planning and decision support systems communities, there has been little progress in designing a comprehensive methodology for course of action (COA) representation that supports the diverse aspects of the Command and Control planning cycle in Effects-Based Operations (EBO). This work proposes an approach based on probabilistic ontologies capable of supporting the task planning cycle in EBO at the Command and Control tactical planning level. At this level, we need to specify the tasks that will possibly achieve the desired effects defined by the upper echelon, with uncertainty not only in the execution but also in the environment parameters. Current approaches suggest solutions at the operational level, giving greater importance to the targeting process, while approaches at the tactical level do not take into account the uncertainty present in the environment and in the ability of actions to achieve the desired effect. To offer a possible solution to knowledge representation at the tactical level, an inference model was developed to generate the planning problem to be sent to a planning system. The proposed model also describes simulation as a tool to assist the plan's refinement. The main contribution of this work is the development of a process of probabilistic inference against a knowledge base that is capable of dealing with uncertainty at the tactical level, where different tasks can achieve the same effect but with different probabilities of success. The obtained results indicate the feasibility of the proposal, since valid plans are generated in reasonable time from general orders or requests.
19

Gennari, Rosella. "Mapping Inferences: Constraint Propagation and Diamond Satisfaction". Diss., Universiteit van Amsterdam, 2002. http://hdl.handle.net/10919/71553.

Full text
Abstract:
The main theme shared by the two main parts of this thesis is EFFICIENT AUTOMATED REASONING. Part I is focussed on a general theory underpinning a number of efficient approximate algorithms for Constraint Satisfaction Problems (CSPs), the constraint propagation algorithms. In Chapter 3, we propose a Structured Generic Algorithm schema (SGI) for these algorithms. This iterates functions according to a certain strategy, i.e. by searching for a common fixpoint of the functions. A simple theory for SGI is developed by studying properties of functions and of the ways these influence the basic strategy. One of the primary objectives of our theorisation is thus the following: using SGI or some of its variations for DESCRIBING and ANALYSING HOW the "pruning" and "propagation" process is carried through by constraint propagation algorithms. Hence, in Chapter 4, different domains of functions (e.g., domain orderings) are related to different classes of constraint propagation algorithms (e.g., arc consistency algorithms); thus each class of constraint propagation algorithms is associated with a "type" of function domains, and so separated from the others. Then we analyse each such class: we distinguish functions on the same domains by their different ways of performing pruning (point or set based), and consequently differentiate between algorithms of the same class (e.g., AC-1 and AC-3 versus AC-4 or AC-5). Besides, we also show how properties of functions (e.g., commutativity or stationarity) are related to different strategies of propagation in constraint algorithms of the same class (see, for instance, AC-1 versus AC-3). In Chapter 5 we apply the SGI schema to the case of soft CSPs (a generalisation of CSPs with a form of preferences), thereby clarifying some of the similarities and differences between the "classical" and soft constraint-propagation algorithms. Finally, in Chapter 6, we summarise and characterise all the functions used for constraint propagation; in fact, the other goal of our theorisation is abstracting WHICH functions, iterated as in SGI or its variations, perform the task of "pruning" or "propagation" of inconsistencies in constraint propagation algorithms. We focus on relations and relational structures in Part II of the thesis. More specifically, modal languages allow us to talk about various relational structures and their properties. Once the latter are formulated in a modal language, they can be passed to automated theorem provers and tested for satisfiability, with respect to certain modal logics. Our task, in this part, can be described as follows: determining the satisfiability of modal formulas in an efficient manner. In Chapter 8, we focus on one way of doing this: we refine the standard translation as the layered translation, and use existing theorem provers for first-order logic on the output of this refined translation. We provide ample experimental evidence of the improvements in performance that were obtained by means of the refinement. The refinement of the standard translation is based on the tree model property. This property is also used in the basic algorithm schema in Chapter 9 (the original schema is due to [seb97]). The proposed algorithm proceeds layer by layer in the modal formula and in its candidate models, applying constraint propagation and satisfaction algorithms for finite CSPs at each layer.
With Chapter 9, we wish to draw the attention of constraint programmers to modal logics, and of modal logicians to CSPs. Modal logics themselves express interesting problems in terms of relations and unary predicates, like temporal reasoning tasks. On the other hand, constraint algorithms manipulate relations in the form of constraints, and unary predicates in the form of domains or unary constraints, see Chapter 6. Thus the question of how efficiently those algorithms can be applied to modal reasoning problems seems quite natural and challenging.
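
The fixpoint view of constraint propagation described above can be made concrete in a few lines: apply domain-reduction functions until none of them changes the domains (a common fixpoint), in the spirit of SGI. The toy CSP (x < y, y < z over 1..4) is invented for illustration.

```python
# Iterating domain-reduction functions to a common fixpoint (SGI-style).
domains = {"x": {1, 2, 3, 4}, "y": {1, 2, 3, 4}, "z": {1, 2, 3, 4}}

def revise(a, b):
    """Arc-consistency-style reduction function for the constraint a < b."""
    def f(doms):
        doms = dict(doms)
        doms[a] = {v for v in doms[a] if any(v < w for w in doms[b])}
        doms[b] = {w for w in doms[b] if any(v < w for v in doms[a])}
        return doms
    return f

functions = [revise("x", "y"), revise("y", "z")]

changed = True
while changed:                        # stop once every function is at a fixpoint
    changed = False
    for f in functions:
        new = f(domains)
        if new != domains:
            domains, changed = new, True

print(domains)   # -> x: {1, 2}, y: {2, 3}, z: {3, 4}
```
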
20

Siegel, Holger [Verfasser]. "Numeric Inference of Heap Shapes for the Automated Analysis of Heap-Allocating Programs / Holger Siegel". München : Verlag Dr. Hut, 2016. http://d-nb.info/108438521X/34.

Full text
21

Morettin, Paolo. "Learning and Reasoning in Hybrid Structured Spaces". Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/264203.

Full text
Abstract:
Many real world AI applications involve reasoning on both continuous and discrete variables, while requiring some level of symbolic reasoning that can provide guarantees on the system's behaviour. Unfortunately, most of the existing probabilistic models do not efficiently support hard constraints or they are limited to purely discrete or continuous scenarios. Weighted Model Integration (WMI) is a recent and general formalism that enables probabilistic modeling and inference in hybrid structured domains. A difference of WMI-based inference algorithms with respect to most alternatives is that probabilities are computed inside a structured support involving both logical and algebraic relationships between variables. While some progress has been made in the last years and the topic is increasingly gaining interest from the community, research in this area is at an early stage. These aspects motivate the study of hybrid and symbolic probabilistic models and the development of scalable inference procedures and effective learning algorithms in these domains. This PhD Thesis embodies my effort in studying scalable reasoning and learning techniques in the context of WMI.
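
The flavour of Weighted Model Integration can be conveyed with a crude Monte Carlo approximation of a weighted integral over a support that mixes a Boolean choice with an algebraic constraint. The problem and weight function are invented, and actual WMI solvers compute such integrals exactly, region by region.

```python
# Crude Monte Carlo illustration of Weighted Model Integration over a hybrid
# support: a Boolean b combined with a constraint on a continuous x in [0, 1].
import random

def weight(b, x):
    return 2.0 * x if b else 1.0           # weight function over (b, x)

def support(b, x):
    return (x > 0.5) if b else (x <= 0.5)  # logical-algebraic support

def wmi_estimate(n=200_000):
    acc = 0.0
    for _ in range(n):
        b = random.random() < 0.5          # uniform over the Boolean
        x = random.random()                # uniform over [0, 1]
        if support(b, x):
            acc += weight(b, x)
    return 2.0 * acc / n                   # 2 = size of the Boolean domain

print(wmi_estimate())   # exact value: int_{.5}^{1} 2x dx + int_0^{.5} 1 dx = 1.25
```
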
23

TEIXEIRA, TAIRO DOS PRAZERES. "A FUZZY INFERENCE SYSTEM WITH AUTOMATIC RULE EXTRACTION FOR GAS PATH DIAGNOSIS OF AVIATION GAS TURBINES". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28405@1.

Full text
Abstract:
Gas turbines are complex and expensive equipment. In case of a failure, indirect losses are typically much larger than direct ones, since such equipment plays a critical role in the operation of industrial installations, aircraft, and heavy vehicles. Therefore, it is vital that gas turbines be provided with an efficient monitoring and diagnostic system. This is especially relevant in Brazil, where the turbine fleet has grown substantially in recent years, mainly due to the increasing number of thermal power plants and the growth of civil aviation. This work proposes a Fuzzy Inference System (FIS) with automatic rule extraction for gas path diagnosis. The proposed system makes use of a residual approach (gas path measurements are compared to a healthy-engine reference) for preprocessing the raw input data that are forwarded to the detection and isolation modules. These operate in a hierarchical manner and are responsible for fault detection and isolation in components, sensors and actuators. Since gas turbine failure data are difficult to access and expensive to obtain, the methodology is validated using a database of faults simulated by specialist software. The results show that the FIS is able to correctly detect and isolate failures and to provide linguistic interpretability, an important feature in the decision-making process regarding maintenance.
24

Cura, Rémi. "Inverse procedural Street Modelling : from interactive to automatic reconstruction". Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1034/document.

Full text
Abstract:
World urban population is growing fast, and so are cities, inducing an urgent need for city planning and management. Increasing amounts of data are required as cities become larger and "smarter", and as more applications rely on those data (planning, virtual tourism, traffic simulation, etc.). Data related to cities thus grow larger and are integrated into more complex city models. Roads and streets are an essential part of the city, being the interface between public and private space, and between urban usages. Modelling streets (or street reconstruction) is difficult because streets can be very different from each other (in layout, functions, morphology) and contain widely varying urban features (furniture, markings, traffic signs) at different scales. In this thesis, we propose an automatic and semi-automatic framework to model and reconstruct streets using the inverse procedural modelling paradigm. The main guiding principle is to generate a generic procedural model and then to adapt it to reality using observations. In our framework, a "best guess" road model is first generated from very little information (road axis network and associated attributes), which is available in most national databases. This road model is then fitted to observations by combining in-base interactive user editing (using common GIS software as a graphical interface) with semi-automated optimisation. The optimisation approach adapts the road model so that it fits observations of urban features extracted from diverse sensing data. Both street generation (StreetGen) and the interactions happen in a database server, as does the management of large amounts of street Lidar data (the observations), through a Point Cloud Server. We test our methods on the entire city of Paris, whose streets are generated in a few minutes and can be edited interactively (<0.3 s) by several concurrent users. Automatic fitting (a few minutes) shows promising results (average distance to ground truth reduced from 2.0 m to 0.5 m). In the future, this method could be combined with others dedicated to the reconstruction of buildings, vegetation, etc., so that an affordable, precise, and up-to-date city model can be obtained quickly and semi-automatically. This would also allow such models to be used in other application areas. Indeed, the possibility of having common, more generic city models is an important challenge, given the cost and complexity of their construction.
25

El Maadani, Khalid. "Identification de systèmes séquentiels structurés : Application à la validation du test". Toulouse, INSA, 1993. http://www.theses.fr/1993ISAT0003.

Full text
Abstract:
The work presented in this thesis concerns the evaluation and validation of tests for deterministic sequential systems at the behavioural level. The evaluation criterion adopted is identification. Identifying a sequential system is generally an automaton inference problem in the fields of sequential system synthesis, regular inference and sequential learning, and a sequence generation problem in the field of test generation. The evaluation of a test sequence by identification is treated here as an inference problem consisting in determining the set of machines that accept this sequence but are incompatible with the model from which it was derived. The number of distinct machines obtained gives a relative measure of the coverage of the sequence with respect to the model. Two different approaches to the evaluation and validation of tests for sequential systems are proposed. The first, called black box, corresponds to the case where the analysed system is described by a functional model (a finite-state machine). The second, called grey box, addresses systems described by a structural-functional model (a set of interconnected machines); it exploits the structural knowledge of the system and allows a reduction of the overall algorithmic complexity of the process. This approach consists in evaluating the sequence successively with respect to each machine of the system, assumed to be unknown within a known environment. The controllability and observability constraints induced by the environment on the external behaviour of the machine under consideration must then be taken into account. A software prototype named IDA was developed in Prolog on a Sun4 workstation to validate these approaches.
26

Pernestål, Anna. "A Bayesian approach to fault isolation with application to diesel engine diagnosis". Licentiate thesis, KTH, School of Electrical Engineering (EES), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4294.

Full text
Abstract:

Users of heavy trucks, as well as legislation, put increasing demands on the vehicles. They should be more comfortable, reliable and safe. Furthermore, they should consume less fuel and be more environmentally friendly. For example, this means that faults that cause the emissions to increase must be detected early. To meet these requirements on comfort and performance, advanced sensor-based computer control-systems are used. However, the increased complexity makes the vehicles more difficult for the workshop mechanic to maintain and repair. A diagnosis system that detects and localizes faults is thus needed, both as an aid in the repair process and for detecting and isolating (localizing) faults on-board, to guarantee that safety and environmental goals are satisfied.

Reliable fault isolation is often a challenging task. Noise, disturbances and model errors can cause problems. Also, two different faults may lead to the same observed behavior of the system under diagnosis. This means that there are several faults, which could possibly explain the observed behavior of the vehicle.

In this thesis, a Bayesian approach to fault isolation is proposed. The idea is to compute the probabilities, given "all information at hand", that certain faults are present in the system under diagnosis. By "all information at hand" we mean qualitative and quantitative information about how probable different faults are, and possibly also data which is collected during test drives with the vehicle when faults are present. The information may also include knowledge about which observed behavior is to be expected when certain faults are present.

The advantage of the Bayesian approach is the possibility to combine information of different characteristics, and also to facilitate isolation of previously unknown faults as well as faults from which only vague information is available. Furthermore, Bayesian probability theory combined with decision theory provide methods for determining the best action to perform to reduce the effects from faults.

Using the Bayesian approach to fault isolation to diagnose large and complex systems may lead to computational and complexity problems. In this thesis, these problems are solved in three different ways. First, equivalence classes are introduced for different faults with equal probability distributions. Second, by using the structure of the computations, efficient storage methods can be used. Finally, if the previous two simplifications are not sufficient, it is shown how the problem can be approximated by partitioning it into a set of subproblems, each of which can be efficiently solved using the presented methods.

The Bayesian approach to fault isolation is applied to the diagnosis of the gas flow of an automotive diesel engine. Data collected from real driving situations with implemented faults, is used in the evaluation of the methods. Furthermore, the influences of important design parameters are investigated.

The experiments show that the proposed Bayesian approach has promising potentials for vehicle diagnosis, and performs well on this real problem. Compared with more classical methods, e.g. structured residuals, the Bayesian approach used here gives higher probability of detection and isolation of the true underlying fault.
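
The core computation, a posterior over fault hypotheses given observations, fits in a few lines. The priors and likelihoods below are invented; the thesis's contributions (equivalence classes, storage scheme, partitioning) are about making this computation scale.

```python
# Toy Bayesian fault isolation: posterior over fault hypotheses given an
# observed residual pattern.
priors = {"no_fault": 0.95, "sensor_fault": 0.03, "leak": 0.02}

likelihood = {                 # P(observation | fault); observation is a symbol
    ("high_flow_residual", "no_fault"):     0.01,
    ("high_flow_residual", "sensor_fault"): 0.40,
    ("high_flow_residual", "leak"):         0.70,
}

def posterior(observation):
    joint = {f: priors[f] * likelihood[(observation, f)] for f in priors}
    z = sum(joint.values())
    return {f: p / z for f, p in joint.items()}

print(posterior("high_flow_residual"))
# -> no_fault ~0.27, sensor_fault ~0.34, leak ~0.39: leak is the most probable
```
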



27

Surovič, Marek. "Statická detekce malware nad LLVM IR". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255427.

Full text
Abstract:
This thesis deals with methods for behavioural malware detection that use techniques of formal analysis and verification. The core is the inference of tree automata from dependency graphs of system calls, which are obtained by static analysis of LLVM IR. As part of the thesis, a prototype detector is implemented that uses the LLVM compiler infrastructure. For the experimental evaluation of the detector, a C/C++ compiler capable of generating malware mutations by means of obfuscating transformations is used. The results of preliminary experiments and possible future extensions of the detector are discussed at the end of the thesis.
28

Ahnlén, Fredrik. "Automatic Detection of Low Passability Terrain Features in the Scandinavian Mountains". Thesis, KTH, Geodesi och satellitpositionering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254709.

Full text
Abstract:
During recent years, much focus has been put on replacing time-consuming manual mapping and classification tasks with automatic methods having minimal human interaction. It is now possible to quickly classify land cover and terrain features covering large areas to a digital format and with a high accuracy. This can be achieved using nothing but remote sensing techniques, which provide a far more sustainable process and product. Still, some terrain features do not have an established methodology for high-quality automatic mapping. The Scandinavian Mountains contain several terrain features with low passability, such as mires, shrub and stony ground. It would be of interest to anyone passing the land to avoid these areas. However, they are not sufficiently mapped in current map products. The aim of this thesis was to find a methodology to classify and map these terrain features in the Scandinavian Mountains with high accuracy and minimal human interaction, using remote sensing techniques. The study area chosen for the analysis is a large valley and mountain side south-east of the small town Abisko in northern Sweden, which contains clearly visible samples of the targeted terrain features. The methodology was based on training a Fuzzy Logic classifier using labeled training samples and descriptors derived from orthophotos, LiDAR data and current map products, chosen to separate the classes from each other by their characteristics. Firstly, a set of candidate descriptors was chosen, from which the final descriptors were obtained by implementing a Fisher score filter. Secondly, a Fuzzy Inference System was constructed using labeled training data from the descriptors, created by the user. Finally, the entire study area was classified pixel by pixel using the trained classifier, and a majority filter was used to cluster the outputs. The result was validated by visual inspection, by comparison to the current map products and by constructing confusion matrices, both for the training data and validation samples as well as for the clustered and non-clustered results. The results showed that the low-passability terrain features mires, shrub and stony ground can be mapped with high accuracy using this method, and that the results are generally clearly better than current map products. However, the method could be fine-tuned in several respects, for instance by implementing descriptors for soil water movement, using LiDAR with higher spatial resolution, and choosing a more complete and varied set of classes.
De senaste åren har mycket fokus lagts på att ersätta tidskrävande manuella karterings- och klassificeringsmetodermed automatiserade lösningar med minimal mänsklig inverkan. Det är numeramöjligt att digitalt klassificera marktäcket och terrängföremål över stora områden, snabbt och medhög noggrannhet. Detta med hjälp av enbart fjärranalys, vilket medför en betydligt mer hållbarprocess och slutprodukt. Trots det finns det fortfarande terrängföremål som inte har en etableradmetod för noggrann automatisk kartering.Den skandinaviska fjällkedjan består till stor del av svårpasserade terrängföremål som sankmarker,videsnår och stenig mark. Alla som tar sig fram i terrängen obanat skulle ha nytta av attkunna undvika dessa områden men de är i nuläget inte karterade med önskvärd noggrannhet.Målet med denna analys var att utforma en metod för att klassificera och kartera dessa terrängföremåli Skanderna, med hög noggrannhet och minimal mänsklig inverkan med hjälp avfjärranalys. Valet av testområde för analysen är en större dal och bergssida sydost om Abisko inorra Sverige som innehåller tydliga exemplar av alla berörda terrängföremål. Metoden baseradespå att träna en Fuzzy Logic classifier med manuellt utvald träningsdata och deskriptorer,valda för att bäst separera klasserna utifrån deras karaktärsdrag. Inledningsvis valdes en uppsättningav kandidatdeskriptorer som sedan filtrerades till den slutgiltiga uppsättningen med hjälp avett Fisher score filter. Ett Fuzzy Inference System byggdes och tränades med träningsdata fråndeskriptorerna vilket slutligen användes för att klassificera hela testområdet pixelvis. Det klassificeraderesultatet klustrades därefter med hjälp av ett majoritetsfilter. Resultatet validerades genomvisuell inspektion, jämförelse med befintliga kartprodukter och genom confusion matriser, vilkaberäknades både för träningsdata och valideringsdata samt för det klustrade och icke-klustraderesultatet.Resultatet visade att de svårpasserade terrängföremålen sankmark, videsnår och stenig markkan karteras med hög noggrannhet med hjälp denna metod och att resultaten generellt är tydligtbättre än nuvarande kartprodukter. Däremot kan metoden finjusteras på flera plan för att optimeras.Bland annat genom att implementera deskriptorer för markvattenrörelser och användandeav LiDAR med högre spatial upplösning, samt med ett mer fulltäckande och spritt val av klasser.
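As a concrete illustration of the Fisher score filtering step mentioned above, the sketch below implements one simplified (unweighted) variant of the Fisher score on invented toy descriptors; the thesis pipeline operates on orthophoto- and LiDAR-derived descriptors instead.

```python
import numpy as np

def fisher_score(X, y):
    """Simplified Fisher score per descriptor: spread of class means around
    the overall mean over the average within-class variance
    (higher = more discriminative)."""
    scores = []
    classes = np.unique(y)
    for j in range(X.shape[1]):
        col = X[:, j]
        overall = col.mean()
        num = sum((col[y == c].mean() - overall) ** 2 for c in classes)
        den = sum(col[y == c].var() for c in classes) / len(classes)
        scores.append(num / den if den > 0 else 0.0)
    return np.array(scores)

# Invented toy data: 3 descriptors, two terrain classes; descriptor 0 separates them.
X = np.array([[0.1, 5.0, 1.0], [0.2, 4.8, 1.2], [0.9, 5.1, 0.9], [1.0, 5.2, 1.1]])
y = np.array([0, 0, 1, 1])
print(fisher_score(X, y))  # descriptor 0 gets by far the highest score
```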
29

Sun, Wenzhe. "Bus Bunching Prediction and Transit Route Demand Estimation Using Automatic Vehicle Location Data". Kyoto University, 2020. http://hdl.handle.net/2433/253498.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Bossert, Georges. "Exploiting Semantic for the Automatic Reverse Engineering of Communication Protocols". Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0027/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis presents a practical approach for the automatic reverse engineering of undocumented communication protocols. Current work in the field of automated protocol reverse engineering either infers incomplete protocol specifications or requires too much stimulation of the targeted implementation, with the risk of being defeated by counter-inference techniques. We propose to tackle these issues by leveraging the semantics of the protocol to improve the quality, the speed and the stealthiness of the inference process. This work covers the two main aspects of protocol reverse engineering: the inference of the syntactic definition and of the grammatical definition of a protocol. We propose an open-source tool, called Netzob, that implements our work to help security experts in their fight against the latest cyber-threats. We claim Netzob is the most advanced published tool addressing the reverse engineering and simulation of undocumented protocols.
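To give a flavour of the syntax-inference side, here is a toy column-wise consensus pass that splits equal-length messages into static and dynamic fields; the messages are invented, and this is deliberately not the Netzob API, which goes much further (alignment, semantic clues, grammar inference).

```python
# A toy version of one step of protocol format inference: align equal-length
# messages column by column and mark constant bytes as "static" fields and
# varying bytes as "dynamic" fields. Messages below are invented.
msgs = [b"CMD:01:ABCD", b"CMD:02:EFGH", b"CMD:03:IJKL"]

fields, start = [], 0
static = len({m[0] for m in msgs}) == 1
for i in range(1, len(msgs[0])):
    col_static = len({m[i] for m in msgs}) == 1
    if col_static != static:
        fields.append((start, i, "static" if static else "dynamic"))
        start, static = i, col_static
fields.append((start, len(msgs[0]), "static" if static else "dynamic"))

for lo, hi, kind in fields:
    print(f"bytes [{lo}:{hi}] {kind}: {msgs[0][lo:hi]!r}")
```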
31

Zhao, Jinhua 1977. "The planning and analysis implications of automated data collection systems : rail transit OD matrix inference and path choice modeling examples". Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28752.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning; and, (S.M. in Transportation)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2004.
Includes bibliographical references (leaf 124).
Transit agencies in the U.S. are on the brink of a major change in the way they make many critical planning decisions. Until recently transit agencies have lacked the data and the analysis techniques needed to make informed decisions in both long-term planning and day-to-day operations. Now these agencies are entering an era in which a large volume of raw data will be available due to the implementation of ITS technology, including Automated Data Collection (ADC) systems such as Automated Fare Collection (AFC) systems, Automated Vehicle Location (AVL) systems, and Automatic Passenger Counting (APC) systems. Automated Data Collection systems have distinct advantages over traditional data collection methods: large temporal and spatial coverage, continuous data flow and currency, low marginal cost, accuracy, automatic collection and central storage, etc. Thanks to these unique features, there exists a great potential for ADC systems to be used to support decision-making in transit agencies. However, effectively utilizing ADC system data is not straightforward. Several examples are given to illustrate that there is a critical gap between what ADC systems directly offer and what is needed practically in public transit agencies' decision-making practice. Meanwhile, the framework of data processing and analysis is not readily available, and transit agencies generally lack the needed qualified staff. As a consequence, these data sources have not yet been effectively utilized in practice. A strong foundation of ADC data manipulation, analysis methodologies and techniques, with the support of advanced technologies such as DBMS and GIS, is required before the full value of the new data source can be exploited. This research is an initial attempt to lay out such a framework by presenting two case studies, both in the context of the Chicago Transit Authority. One study proposes an enhanced method of inferring the rail trip OD matrix from an origin-only AFC system to replace the routine passenger survey. The proposed algorithm takes advantage of the pattern of a person's consecutive transit trip segments. In particular the study examines the rail-to-bus case (which is ignored by prior studies) by integrating AFC and AVL data and utilizing GIS and DBMS technologies. A software tool is developed to facilitate the implementation of the algorithm. The other study is of rail path choice, which employs the Logit and Mixed Logit models to examine revealed public transit riders' travel behavior based on the inferred OD matrix and the transit network attributes. This study is based on two data sources: the rail trip OD matrix inferred in the first case study and the attributes of alternative paths calculated from a network representation in TransCAD. This study demonstrates that a rigorous traveler behavior analysis can be performed based on the data source from ADC systems. Both cases illustrate the potential as well as the difficulty of utilizing these systems and, more importantly, demonstrate that at relatively low marginal cost, ADC systems can provide transit agencies with a rich information source to support decision making. The impact of a new data collection strategy ...
by Jinhua Zhao.
S.M. in Transportation
M.C.P.
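The origin-only OD inference rests on a trip-chaining heuristic; the sketch below illustrates it under invented data, assuming a rider's next boarding station approximates the previous alighting station and the day's chain wraps back to the first origin. Station names are placeholders.

```python
# A minimal sketch of trip chaining for OD inference from an origin-only
# fare system: the destination of one trip is taken to be the boarding
# station of the rider's next trip; the last trip closes the daily chain.
trips = [  # one rider's tap-ins for a day: (time, boarding_station)
    ("07:30", "Howard"),
    ("09:00", "Clark/Lake"),
    ("17:45", "Monroe"),
]

od_pairs = []
for i, (t, origin) in enumerate(trips):
    nxt = trips[(i + 1) % len(trips)]   # wrap around for the last trip of the day
    od_pairs.append((origin, nxt[1]))   # inferred destination

for o, d in od_pairs:
    print(f"{o} -> {d}")
```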
32

Aho, P. (Pekka). "Automated state model extraction, testing and change detection through graphical user interface". Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526224060.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Testing is an important part of quality assurance, and the use of agile processes, continuous integration and DevOps is increasing the pressure to automate all aspects of testing. Testing through graphical user interfaces (GUIs) is commonly automated by scripts that are captured or manually created with a script editor, automating the execution of test cases. A major challenge with script-based GUI test automation is the manual effort required to maintain the scripts when the GUI changes. Model-based testing (MBT) is an approach that automates the design of test cases as well. Traditionally, models for MBT are designed manually with a modelling tool, and an MBT tool is used for generating abstract test cases from the model. Then, an adapter is implemented to translate the abstract test cases into concrete test cases that can be executed on the system under test (SUT). When the GUI changes, only the model has to be updated and the test cases can be regenerated from the updated model, reducing the maintenance effort. However, designing models and implementing adapters requires effort and specialized expertise. The main research questions of this thesis are 1) how to automatically extract state-based models of software systems with a GUI, and 2) how to use the extracted models to automate testing. Our focus is on using dynamic analysis through the GUI during automated exploration of the system, and we concentrate on desktop applications. Our results show that extracting state models through the GUI is possible and the models can be used to generate regression test cases, but a more promising approach is to compare extracted models of consecutive system versions to automatically detect changes between the versions.
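A minimal sketch of the change-detection idea follows, comparing extracted state models of two hypothetical application versions represented as transition sets; the states and actions are invented.

```python
# Change detection by comparing state models extracted from two versions of
# a GUI application. Models are sets of (state, action, next_state)
# transitions; the diff lists what the new version added and removed.
v1 = {("Main", "open_settings", "Settings"),
      ("Settings", "close", "Main"),
      ("Main", "quit", "Closed")}
v2 = {("Main", "open_settings", "Settings"),
      ("Settings", "apply", "Settings"),   # new behaviour in version 2
      ("Main", "quit", "Closed")}

print("added:  ", sorted(v2 - v1))
print("removed:", sorted(v1 - v2))
```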
33

Durand, William. "Automated test generation for production systems with a model-based testing approach". Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22691/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis tackles the problem of testing (legacy) production systems, such as those of our industrial partner Michelin, one of the three largest tire manufacturers in the world, by means of model-based testing. A production system is defined as a set of production machines controlled by software in a factory. Despite the large body of work within the field of model-based testing, a common issue remains the writing of models describing either the system under test or its specification. It is a tedious task that should be performed regularly in order to keep the models up to date (which is often also true for any documentation in industry). A second point to take into account is that production systems often run continuously and should not be disrupted, which limits the use of most existing classical testing techniques. To address the first issue, we present an approach to infer exact models from traces, i.e. sequences of events observed in a production environment. We leverage the data exchanged among the devices and software, in a black-box perspective, to construct behavioral models using different techniques such as expert systems, model inference, and machine learning. This results in large, yet partial, models gathering the behaviors recorded from a system under analysis. We introduce a context-specific algorithm to reduce such models in order to make them more usable, while preserving trace equivalence between the original inferred models and the reduced ones. These models can serve different purposes, e.g., generating documentation and data mining, but also testing. To address the problem of testing production systems without disturbing them, this thesis introduces an offline passive model-based testing technique, allowing differences between two production systems to be detected. This technique leverages the inferred models and relies on two implementation relations: a slightly modified version of the existing trace preorder relation, and a weaker implementation relation proposed to overcome the partialness of the inferred models. Overall, the thesis presents Autofunk, a modular framework for model inference and testing of production systems, gathering the previous notions. Its Java implementation has been applied to different applications and production systems at Michelin, and this thesis gives results from several case studies. The prototype developed during this thesis should become a standard tool at Michelin.
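The stronger of the two implementation relations is essentially trace inclusion; here is a minimal sketch on invented, prefix-closed trace sets, not the thesis's exact relation.

```python
# Trace-inclusion check: every trace observed on the system under test must
# also be a trace of the reference model. Both models are given here as
# prefix-closed sets of event sequences (invented).
reference = {(), ("load",), ("load", "press"), ("load", "press", "eject")}
observed  = {(), ("load",), ("load", "press")}

def trace_included(observed, reference):
    missing = observed - reference
    return (len(missing) == 0, missing)

ok, missing = trace_included(observed, reference)
print("conforms:", ok, "| unexplained traces:", missing)
```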
34

Gordon, Jason B. (Jason Benjamin). "Intermodal passenger flows on London's public transport network : automated inference of full passenger journeys using fare-transaction and vehicle-location data". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/78242.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Thesis (M.C.P.)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning; and, (S.M. in Transportation)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 147-155).
Urban public transport providers have historically planned and managed their networks and services with limited knowledge of their customers' travel patterns. While ticket gates and bus fareboxes yield counts of passenger activity in specific stations and vehicles, the relationships between these transactions (the origins, interchanges, and destinations of individual passengers) have typically been acquired only through costly, and therefore small and infrequent, rider surveys. Building upon recent work on the utilization of automated fare-collection and vehicle-location systems for passenger-behavior analysis, this thesis presents methods for inferring the full journeys of all riders on a large public transport network. Using complete daily sets of data from London's Oyster farecard and iBus vehicle-location system, boarding and alighting times and locations are inferred for individual bus passengers, interchanges are inferred between passenger trips of various public modes, and full-journey origin-interchange-destination matrices are constructed, which include the estimated flows of non-farecard passengers. The outputs are validated against surveys and traditional origin-destination matrices, and the software implementation demonstrates that the procedure is efficient enough to be performed daily, enabling transport providers to observe travel behavior on all services at all times.
by Jason B. Gordon.
S.M. in Transportation
M.C.P.
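A minimal sketch of the interchange-inference step when linking trips into full journeys, assuming a simple maximum-gap rule; the 30-minute threshold and the trip times are invented placeholders, not the thesis's calibrated criteria.

```python
# Two consecutive trips are merged into one journey when the gap between
# alighting and the next boarding is below a threshold; otherwise a new
# journey starts.
from datetime import datetime, timedelta

MAX_INTERCHANGE = timedelta(minutes=30)

def parse(t):
    return datetime.strptime(t, "%H:%M")

trips = [  # (board_time, alight_time, mode)
    ("08:00", "08:20", "bus"),
    ("08:28", "08:55", "rail"),   # 8-minute gap -> same journey
    ("17:30", "17:50", "bus"),    # long gap -> new journey
]

journeys, current = [], [trips[0]]
for prev, trip in zip(trips, trips[1:]):
    gap = parse(trip[0]) - parse(prev[1])
    if gap <= MAX_INTERCHANGE:
        current.append(trip)
    else:
        journeys.append(current)
        current = [trip]
journeys.append(current)

for j in journeys:
    print(" + ".join(mode for _, _, mode in j))
```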
35

Kazakov, Mikhaïl. "A Methodology of semi-automated software integration : an approach based on logical inference. Application to numerical simulation solutions of Open CASCADE". INSA de Rouen, 2004. http://www.theses.fr/2004ISAM0001.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Application integration is the process of bringing together data or functionality from application programs that were not initially created to work together. Recently, the integration of numerical simulation solvers has gained importance. Integration within this domain is highly complex due to the presence of non-standard application interfaces that exchange complex, diverse and often ambiguous data. Nowadays, the integration is done mostly manually, and the difficulties of the manual process push for a higher level of automation. The author of this dissertation created a methodology, and its software implementation, for semi-automated (i.e. partially automated) application integration. Application interfaces are usually represented by their syntactic definitions, but these miss the high-level semantics of the application domains, that is, the human understanding of what the software does. The author proposes to use formal specifications (ontologies) expressed in Description Logics to specify software interfaces and define their high-level semantics. The author proposes a three-tier informational model for structuring the ontologies and the integration process. This model distinguishes among computation-independent domain knowledge (domain ontology), platform-independent interface specifications (interface ontology) and platform-specific technological integration information (technological ontology). A mediation ontology is defined to fuse the specifications. A reasoning procedure over these ontologies searches for semantic links among the syntactic definitions of application interfaces. Connectors among applications are generated using the information about semantic links, and the integrated applications later communicate via these connectors. The author designed a metamodel-based data manipulation approach that facilitates and supports the software implementation of the integration process.
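As an illustration of the kind of lightweight reasoning involved, the sketch below checks concept subsumption by walking an is-a hierarchy; the concept names and the hierarchy are invented, and real Description Logic reasoning is far richer than this.

```python
# A toy is-a hierarchy over geometry concepts that two application
# interfaces might share through a domain ontology (names invented).
IS_A = {
    "TriangularMesh": "Mesh",
    "Mesh": "Geometry",
    "NurbsSurface": "Geometry",
}

def subsumes(general, specific):
    """True if `specific` is-a (transitively) `general`."""
    while specific is not None:
        if specific == general:
            return True
        specific = IS_A.get(specific)
    return False

# A solver consuming "Geometry" can be connected to a CAD kernel
# producing "TriangularMesh", but not "Mesh" to "NurbsSurface".
print(subsumes("Geometry", "TriangularMesh"))  # True
print(subsumes("Mesh", "NurbsSurface"))        # False
```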
36

Zheng, Ning. "Discovering interpretable topics in free-style text diagnostics, rare topics, and topic supervision /". Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199237529.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Lopes, Victor Dias. "Proposta de integração entre tecnologias adaptativas e algoritmos genéticos". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-01072009-133614/.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This work is an initial study of the integration of two computer engineering areas: adaptive technologies and genetic algorithms. To that end, genetic algorithms were applied to the inference of adaptive automata. Several techniques were studied and proposed along with the algorithm implementation, always seeking more satisfactory results. Both technologies, genetic algorithms and adaptive technology, have strongly adaptive features, yet with very different characteristics in the way they are implemented and executed. The inferences proposed in this work were performed successfully, so the techniques described may be employed in aid tools for designers of such devices, tools that may prove useful given the complexity involved in the development of an adaptive automaton. Through this application of genetic algorithms, by observing how the automata evolved during the experiments, we believe a better understanding was obtained of the structure and operation of adaptive automata and of how these two important technologies can be integrated.
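A minimal sketch of the genetic-algorithm idea, shown on the simpler case of a fixed-size deterministic finite automaton (adaptive automata are substantially more involved); the target language, genome encoding and GA parameters are all invented for illustration.

```python
import random
random.seed(0)

# A toy GA inferring a 2-state DFA over {0,1} that accepts strings with an
# even number of 1s. A genome encodes the transition table and accepting
# states; fitness counts correctly classified samples.
POS = ["", "0", "11", "0110"]          # even number of 1s
NEG = ["1", "10", "011", "111"]

def random_genome():                   # delta[s][sym] bits + 2 accept flags
    return [random.randint(0, 1) for _ in range(6)]

def accepts(g, w):
    s = 0
    for ch in w:
        s = g[2 * s + int(ch)]
    return bool(g[4 + s])

def fitness(g):
    return sum(accepts(g, w) for w in POS) + sum(not accepts(g, w) for w in NEG)

pop = [random_genome() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(POS) + len(NEG):
        break
    parents, children = pop[:10], []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 6)            # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:               # point mutation
            child[random.randrange(6)] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("fitness:", fitness(best), "/", len(POS) + len(NEG), "genome:", best)
```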
38

Galvanin, Edinéia Aparecida dos Santos. "Extração automática de contornos de telhados de edifícios em um modelo digital de elevação, utilizando inferência Bayesiana e campos aleatórios de Markov /". Presidente Prudente : [s.n.], 2007. http://hdl.handle.net/11449/100258.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Advisor: Aluir Porfírio Dal Poz
Committee member: Nilton Nobuhiro Imai
Committee member: Maurício Galo
Committee member: Edson Aparecido Mitishita
Methodologies for automatic building roof extraction are important in the context of spatial information acquisition for geographical information systems (GIS). This work proposes a methodology for the automatic extraction of building roof contours from laser scanning data, based on two stages: 1) extraction of high regions (buildings, trees etc.) from a Digital Elevation Model (DEM) derived from the laser scanning data; 2) extraction of the building roof contours. In the first stage, a recursive splitting technique using the quadtree structure is applied, followed by a Bayesian region-merging technique based on a Markov Random Field (MRF) model. The recursive splitting subdivides the DEM into homogeneous regions; however, due to slight height differences in the DEM, the region fragmentation can be relatively high at this stage. To minimize the fragmentation, the Bayesian merging technique is applied to the previously segmented data: a hierarchical model is proposed in which the mean heights of the regions depend on a general mean plus a random effect incorporating the neighbourhood relations between regions. The prior distribution of the random effects is specified by the Conditional Autoregressive (CAR) model, and the posterior distributions of the parameters of interest are obtained with the Gibbs sampler. In the second stage, the building roof contours are identified among all the high objects extracted in the previous stage: taking into account some roof properties and measurements of several attributes (for example area, rectangularity, and angles between the principal axes of objects), an energy function is constructed from the MRF model.
Doctorate
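A minimal sketch of the recursive quadtree splitting used in the first stage, on an invented toy DEM; the homogeneity tolerance is a placeholder, and the Bayesian CAR/Gibbs merging stage is not shown.

```python
import numpy as np

def quadtree_split(dem, x0=0, y0=0, tol=0.5, min_size=2):
    """Recursively split a square DEM tile into quadrants until the height
    range within a tile falls below `tol`; returns homogeneous regions as
    (row, col, size) blocks."""
    n = dem.shape[0]
    if dem.max() - dem.min() <= tol or n <= min_size:
        return [(y0, x0, n)]
    h = n // 2
    regions = []
    for dy in (0, h):
        for dx in (0, h):
            regions += quadtree_split(dem[dy:dy + h, dx:dx + h],
                                      x0 + dx, y0 + dy, tol, min_size)
    return regions

# Toy 8x8 DEM: flat ground at 2 m with a 10 m "building" block in one corner.
dem = np.full((8, 8), 2.0)
dem[0:4, 0:4] = 10.0
for region in quadtree_split(dem):
    print(region)
```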
39

Furlong, Vitor Badiale. "Automation of a reactor for enzymatic hydrolysis of sugar cane bagasse : Computational intelligencebased adaptive control". Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/7394.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
No funding was received.
The continuous growth in demand for liquid fuels, alongside the decrease of fossil oil reserves, unavoidable in the long term, motivates the search for new energy sources. A possible alternative is the use of bioethanol produced from renewable resources such as sugarcane bagasse. Two thirds of the cultivated sugarcane biomass are bagasse and leaves, which are not fermentable when the current first-generation (1G) process is used. Great interest has been devoted to techniques capable of utilizing the carbohydrates in this material; among them, production of second-generation (2G) ethanol is a possible alternative. 2G ethanol requires two additional operations: a pretreatment and a hydrolysis stage. Regarding the hydrolysis, the dominant technical solution has been based on the use of enzymatic complexes to hydrolyze the lignocellulosic substrate. To ensure the feasibility of the process, a high final concentration of glucose after the enzymatic hydrolysis is desirable, which requires a high solids consistency in the reactor. However, a high load of solids generates a series of operational difficulties within the reactor; this is a crucial bottleneck of the 2G process. A possible solution is a fed-batch process, with feeding profiles of enzymes and substrate that enhance the process yield and productivity. The main objective of this work was to implement and test a system to infer online the concentrations of fermentable carbohydrates in the reactive system, and to optimize the feeding strategy of substrate and/or enzymatic complex according to a model-based control strategy. Batch and fed-batch experiments were conducted in order to test the fit of four simplified kinetic models. The model with the best fit to the experimental data (a modified Michaelis-Menten model with inhibition by the product) was used to train an Artificial Neural Network (ANN) as a soft sensor to predict glucose concentrations; in future work, this ANN may be used in a closed-loop control strategy. A feeding-profile optimizer was implemented, based on the optimal control approach. The ANN was capable of inferring the product concentration from the available data with good accuracy (coefficient of determination of 0.972). The optimization algorithm generated profiles that increased a process performance index while maintaining operational levels within the reactor, reaching glucose concentrations close to those utilized in current first-generation technology (ranging between 156.0 g.L⁻¹ and 168.3 g.L⁻¹). However, rough estimates for scaling the reactor up to industrial dimensions indicate that this conventional reactor design must be replaced by a two-stage reactor, to minimize the volume of liquid to be stirred.
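A minimal sketch of a modified Michaelis-Menten rate law with product inhibition, the model family the thesis found best-fitting; all parameter values are invented placeholders (not fitted constants), and the stoichiometry is simplified to 1:1.

```python
# Batch enzymatic hydrolysis integrated with explicit Euler.
k_cat, E = 1.2, 10.0        # catalytic constant (1/h), enzyme load (g/L), invented
K_m, K_p = 50.0, 20.0       # affinity and product-inhibition constants (g/L), invented

def rate(S, P):
    """dP/dt = k_cat * E * S / (K_m * (1 + P/K_p) + S)"""
    return k_cat * E * S / (K_m * (1.0 + P / K_p) + S)

dt, S, P = 0.01, 200.0, 0.0  # time step (h), substrate and product (g/L)
for step in range(int(48 / dt)):          # simulate 48 h
    v = rate(S, P)
    S, P = max(S - v * dt, 0.0), P + v * dt
print(f"glucose after 48 h: {P:.1f} g/L (substrate left: {S:.1f} g/L)")
```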
40

Sandillon, Rezer Noémie Fleur. "Apprentissage de grammaires catégorielles : transducteurs d’arbres et clustering pour induction de grammaires catégorielles". Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14940/document.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Nowadays, we have become familiar with software interacting with us using natural language (for example in question-answering systems for after-sale services, human-computer interaction or simple discussion bots). These tools have to either react by keyword extraction or, more ambitiously, try to understand the sentence in its context. Though the simplest of these programs only have a set of pre-programmed sentences with which to react to recognized keywords (such systems include Eliza but also more modern systems like Siri), more sophisticated systems make an effort to understand the structure and the meaning of sentences (these include systems like Watson), allowing them to generate consistent answers, both with respect to the meaning of the sentence (semantics) and with respect to its form (syntax). In this thesis, we focus on syntax and on how to model syntax using categorial grammars. Our goal is to generate syntactically accurate sentences (without the semantic aspect) and to verify that a given sentence belongs to a language, here the French language. We note that AB grammars, with the exception of some phenomena like quantification or extraction, are also a good basis for semantic purposes. We cover both grammar extraction from treebanks and parsing using the extracted grammars. For this purpose, we present two extraction methods and test the resulting grammars using standard parsing algorithms. The first method focuses on creating a generalized tree transducer, which transforms syntactic trees into derivation trees corresponding to an AB grammar. Applied to the various French treebanks, the transducer's output gives us a wide-coverage lexicon and a grammar suitable for parsing. The transducer, even if it differs only slightly from the usual definition of a top-down transducer, offers several new, compact ways to express transduction rules. We currently transduce 92.5% of all sentences in the treebanks into derivation trees. For our second method, we use a unification algorithm, guiding it with a preliminary clustering step which gathers words according to their context in the sentence. The comparison between the transduced trees and this method gives the promising result of 91.3% similarity. Finally, we have tested our grammars on sentence analysis with a probabilistic CYK algorithm and a formula assignment step done with a supertagger. The obtained coverage lies between 84.6% and 92.6%, depending on the input corpus. The probabilities, estimated for the types of words and for the rules, enable us to select only the "best" derivation tree. All our software is available for download under the GNU GPL licence.
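A minimal sketch of the probabilistic CYK step, on an invented two-rule grammar in Chomsky normal form; real grammars extracted from treebanks are of course far larger.

```python
from collections import defaultdict

# chart[i, j, A] holds the best probability of deriving words i..j from A.
# The tiny grammar and its probabilities are invented.
lexical = {("N", "time"): 0.6, ("V", "flies"): 0.7, ("N", "flies"): 0.4}
binary  = {("S", "N", "V"): 1.0}     # S -> N V

def pcyk(words):
    n = len(words)
    chart = defaultdict(float)
    for i, w in enumerate(words):                      # fill the diagonal
        for (A, word), p in lexical.items():
            if word == w:
                chart[i, i + 1, A] = max(chart[i, i + 1, A], p)
    for span in range(2, n + 1):                       # grow spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    q = chart[i, k, B] * chart[k, j, C] * p
                    chart[i, j, A] = max(chart[i, j, A], q)
    return chart[0, n, "S"]

print(pcyk(["time", "flies"]))  # 0.42 = 0.6 * 0.7 * 1.0
```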
41

Galvanin, Edinéia Aparecida dos Santos [UNESP]. "Extração automática de contornos de telhados de edifícios em um modelo digital de elevação, utilizando inferência Bayesiana e campos aleatórios de Markov". Universidade Estadual Paulista (UNESP), 2007. http://hdl.handle.net/11449/100258.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Methodologies for automatic building roof extraction are important in the context of spatial information acquisition for geographical information systems (GIS). This work proposes a methodology for the automatic extraction of building roof contours from laser scanning data, based on two stages: 1) extraction of high regions (buildings, trees etc.) from a Digital Elevation Model (DEM) derived from the laser scanning data; 2) extraction of the building roof contours. In the first stage, a recursive splitting technique using the quadtree structure is applied, followed by a Bayesian region-merging technique based on a Markov Random Field (MRF) model. The recursive splitting subdivides the DEM into homogeneous regions; however, due to slight height differences in the DEM, the region fragmentation can be relatively high at this stage. To minimize the fragmentation, the Bayesian merging technique is applied to the previously segmented data: a hierarchical model is proposed in which the mean heights of the regions depend on a general mean plus a random effect incorporating the neighbourhood relations between regions. The prior distribution of the random effects is specified by the Conditional Autoregressive (CAR) model, and the posterior distributions of the parameters of interest are obtained with the Gibbs sampler. In the second stage, the building roof contours are identified among all the high objects extracted in the previous stage: taking into account some roof properties and measurements of several attributes (for example area, rectangularity, and angles between the principal axes of objects), an energy function is constructed from the MRF model.
42

Vitorino, dos Santos Filho Jairson. "CHROME: a model-driven component-based rule engine". Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/1638.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Vitorino dos Santos Filho, Jairson; Pierre Louis Robin, Jacques. CHROME: a model-driven component-based rule engine. 2009. Tese (Doutorado). Programa de Pós-Graduação em Ciência da Computação, Universidade Federal de Pernambuco, Recife, 2009.
43

Chatalic, Philippe. "Raisonnement deductif en presence de connaissances imprecises et incertaines : un systeme base sur la theorie de dempster-shafer". Toulouse 3, 1986. http://www.theses.fr/1986TOU30189.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This thesis is concerned with quantitative approaches to modelling the notions of uncertainty and imprecision in automated reasoning methods. The first part gives an overview of current tools for representing and manipulating imprecise or uncertain knowledge. The second part is set in the general framework of Shafer's belief functions, which has the advantage of subsuming the probabilistic and possibilistic frameworks as special cases.
44

Maddali, Hanuma Teja. "Inferring social structure and dominance relationships between rhesus macaques using RFID tracking data". Thesis, Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51866.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
This research addresses the problem of inferring, from Radio-Frequency Identification (RFID) tracking data, the graph structures underlying social interactions in a group of rhesus macaques (a species of monkey). These social interactions are treated as independent affiliative and dominance components and are characterized by a variety of visual and auditory displays and gestures. Social structure in a group is an important indicator of its members' relative level of access to resources and has interesting implications for an individual's health. Automatic inference of the social structure in an animal group enables a number of important capabilities, including: 1. a verifiable measure of how the social structure is affected by an intervention such as a change in the environment or the introduction of another animal, and 2. a potentially significant reduction in the person-hours normally used for assessing these changes. The behaviors of interest in the context of this research are those definable using the macaques' spatial (x, y, z) position and motion inside an enclosure. Periods of time spent in close proximity to other group members are considered events of passive interaction and are used in the calculation of an affiliation matrix, which represents the strength of undirected interaction, or tie strength, between individual animals. Dominance is a directed relation that is quantified using a heuristic for the detection of withdrawal and displacement behaviors. The results of an analysis based on these approaches for a group of 6 male monkeys tracked over a period of 60 days at the Yerkes Primate Research Center are presented in this thesis.
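A minimal sketch of deriving an affiliation matrix from proximity events; the positions, distance threshold and animal IDs are invented stand-ins for the RFID readings.

```python
from itertools import combinations

import numpy as np

# Each time step, every pair of animals closer than a threshold gets one
# co-presence count; the symmetric counts form the affiliation matrix.
THRESHOLD = 1.5
positions = {  # invented (x, y) readings for 3 animals over 4 time steps
    "A": [(0, 0), (0, 1), (5, 5), (0, 0)],
    "B": [(1, 0), (0, 2), (5, 6), (4, 4)],
    "C": [(9, 9), (9, 8), (5, 5.5), (4, 5)],
}

ids = sorted(positions)
aff = np.zeros((len(ids), len(ids)))
for t in range(4):
    for (i, a), (j, b) in combinations(enumerate(ids), 2):
        pa, pb = positions[a][t], positions[b][t]
        if np.hypot(pa[0] - pb[0], pa[1] - pb[1]) <= THRESHOLD:
            aff[i, j] += 1
            aff[j, i] += 1

print(ids)
print(aff)  # pairwise co-presence counts = undirected tie strength
```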
45

Rusinowitch, Michaël. "Démonstration automatique par des techniques de réécritures". Nancy 1, 1987. http://www.theses.fr/1987NAN10358.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Introduction to first-order logic and rewriting systems. A study of some simplification orderings. Transfinite semantic trees. Paramodulation strategies. Completeness in the presence of reduction rules. Superposition strategies. Complete sets of inference rules for regularity axioms.
46

Singh, Vidisha. "Integrative analysis and modeling of molecular pathways dysregulated in rheumatoid arthritis Computational systems biology approach for the study of rheumatoid arthritis: from a molecular map to a dynamical model RA-map: building a state-of-the-art interactive knowledge base for rheumatoid arthritis Automated inference of Boolean models from molecular interaction maps using CaSQ". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASL039.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Rheumatoid arthritis (RA) is a complex autoimmune disease that results in synovial inflammation and hyperplasia leading to bone erosion and cartilage destruction in the joints. The aetiology of RA remains partially unknown, yet it involves a variety of intertwined signalling cascades and the expression of pro-inflammatory mediators. In the first part of my PhD project, we present a systematic effort to construct a fully annotated, expert-validated, state-of-the-art knowledge base for RA. The RA map illustrates significant molecular and signalling pathways implicated in the disease. Signal transduction is depicted from receptors to the nucleus systematically, using the Systems Biology Graphical Notation (SBGN) standard representation. Manual curation based on strict criteria and restricted to human-specific studies limits the occurrence of false positives in the map. The RA map can serve as an interactive knowledge base for the disease, but also as a template for omics data visualization and as an excellent basis for the development of a computational model. The static nature of the RA map provides a relatively limited understanding of the emerging behavior of the system under different conditions; computational modeling can reveal dynamic network properties through in silico perturbations and can be used to test and predict assumptions. In the second part of the project, we present a pipeline allowing the automated construction of a large Boolean model, starting from a molecular interaction map. For this purpose, we developed the tool CaSQ (CellDesigner as SBML-qual), which automates the conversion of molecular maps to executable Boolean models based on topology and map semantics. The resulting Boolean model can be used for in silico simulations to reproduce the known biological behavior of the system and to predict novel therapeutic targets. For benchmarking, we used different disease maps and models, with a focus on the large molecular map for RA. In the third part of the project, we present our efforts to create a large-scale dynamical (Boolean) model for rheumatoid arthritis fibroblast-like synoviocytes (RA-FLS). Among the many cells of the joint and of the immune system involved in the pathogenesis of RA, RA-FLS play a significant role in the initiation and perpetuation of destructive joint inflammation. RA-FLS are shown to express immuno-modulating cytokines, adhesion molecules, and matrix-modelling enzymes. Moreover, RA-FLS display high proliferative rates and an apoptosis-resistant phenotype. RA-FLS can also behave as primary drivers of inflammation, and RA-FLS-directed therapies could become a complementary approach to immune-directed therapies. The challenge is to predict the optimal conditions that would favour RA-FLS apoptosis, limit inflammation, slow down the proliferation rate and minimize bone erosion and cartilage destruction.
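A minimal sketch of the synchronous Boolean simulation that a CaSQ-style model enables; the four components and their update rules are invented toys loosely mimicking a pro-inflammatory loop, not the actual RA model.

```python
# Synchronous Boolean network: every node is updated simultaneously from the
# previous state until the trajectory settles into an attractor.
rules = {
    "TNF":       lambda s: s["TNF"],                      # external input, held on
    "NFkB":      lambda s: s["TNF"] and not s["Apoptosis"],
    "IL6":       lambda s: s["NFkB"],
    "Apoptosis": lambda s: not s["NFkB"] and not s["TNF"],
}

state = {"TNF": True, "NFkB": False, "IL6": False, "Apoptosis": False}
for step in range(5):
    state = {node: bool(f(state)) for node, f in rules.items()}
    print(step, state)
# With TNF on, the toy system settles into the inflammatory attractor
# (NFkB and IL6 active, apoptosis off).
```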
47

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting". Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data whose statistics differ from those of the training dataset, the results indicate that augmenting the input data during classifier creation helps performance, but in the current case this would likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development, I suggest updated architectures, automated hyperparameter search, and leveraging the abundant unlabeled data potentially available from production lines.
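As a hedged sketch of the distillation technique the abstract refers to (following Hinton et al.'s soft-target formulation, not the thesis code; the array shapes and temperature value are assumptions), an ensemble can be consolidated by training a student against its averaged, temperature-softened logits:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Cross-entropy between softened teacher and student distributions.

    Scaled by T^2, as in Hinton et al., so gradient magnitudes stay
    comparable across temperature settings.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature) + 1e-12)
    return -(temperature ** 2) * np.mean(
        np.sum(teacher_probs * student_log_probs, axis=-1)
    )

# Ensemble consolidation: average the member logits to form the teacher signal.
ensemble_logits = np.stack([np.random.randn(8, 10) for _ in range(3)]).mean(axis=0)
student_logits = np.random.randn(8, 10)
print(distillation_loss(student_logits, ensemble_logits))
```

In practice this soft-target term is usually combined with the ordinary hard-label loss; the temperature trades off how much of the teacher's inter-class structure the student sees.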
48

Krtek, Lukáš. "Učení jazykových obrázků pomocí restartovacích automatů". Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-335550.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
There are many existing models of automata working on two-dimensional inputs (pictures), though very little work has been done on learning these automata. In this thesis, we introduce a new model called the two-dimensional limited context restarting automaton. Our model works similarly to the two-dimensional restarting tiling automaton, yet we show that it is as powerful as the two-dimensional sgraffito automaton. We propose an algorithm for learning such automata from positive and negative samples of pictures. The algorithm is implemented and subsequently tested on several basic picture languages.
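As a rough illustration of learning picture languages from positive and negative samples, the sketch below trains a far weaker local-window classifier than the restarting automata studied in the thesis: it records every 2x2 window occurring in the positive pictures and accepts a picture only if all of its windows were seen. The toy language and samples are invented.

```python
# Sketch of learning a picture language from samples via local windows.
# A much weaker learner than the thesis's restarting automata; it only
# illustrates the positive/negative-sample setting.

def windows(picture, h=2, w=2):
    """All h-by-w sub-blocks of a picture given as a tuple of equal-length rows."""
    rows, cols = len(picture), len(picture[0])
    return {
        tuple(picture[r + i][c : c + w] for i in range(h))
        for r in range(rows - h + 1)
        for c in range(cols - w + 1)
    }

def learn(positives):
    """Collect every 2x2 window seen in the positive samples."""
    allowed = set()
    for p in positives:
        allowed |= windows(p)
    return allowed

def accepts(allowed, picture):
    """Accept iff every 2x2 window of the picture was seen in training."""
    return windows(picture) <= allowed

# Invented toy language: pictures whose rows are all 'a' or all 'b'.
positives = [("aa", "bb"), ("aa", "aa", "bb")]
negatives = [("ab", "ba")]
model = learn(positives)
print([accepts(model, p) for p in positives])  # [True, True]
print([accepts(model, n) for n in negatives])  # [False]
```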
49

Kovářová, Lenka. "Testování učení restartovacích automatů genetickými algoritmy". Master's thesis, 2012. http://www.nusl.cz/ntk/nusl-313874.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
Title: Testing the Learning of Restarting Automata using Genetic Algorithm Author: Bc. Lenka Kovářová Department: Department of Software and Computer Science Education Supervisor: RNDr. František Mráz, CSc. Abstract: A restarting automaton is a theoretical model of a device recognizing a formal language. Constructing the various versions of restarting automata by hand can be hard work, and many different methods of learning such automata have been developed to date, among them methods for learning a target restarting automaton from a finite set of positive and negative samples using genetic algorithms. In this work, we propose a method for improving the learning of restarting automata by means of evolutionary algorithms. The improvement consists in inserting new rules of a special form, enabling adaptation of the learning algorithm to the particular language. Furthermore, we propose a system for testing learning algorithms for restarting automata, with particular support for learning by evolutionary algorithms. Part of the work is a program that learns restarting automata using the proposed method, then tests the discovered automata and presents their evaluation mainly in graphical form. Keywords: machine learning, grammatical inference, restarting automata, genetic algorithms
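To illustrate the setting, the sketch below runs a genetic algorithm over a deliberately simplified stand-in for restarting automata: an individual is a set of "delete rules" (substrings the automaton may erase in a cycle), and fitness is classification accuracy on positive and negative samples. The encoding, alphabet, and GA parameters are invented for illustration and do not reflect the thesis's method or its special rule forms.

```python
import random

ALPHABET = "ab"

def accepts(rules, word, limit=50):
    """Accept if repeated deletions reduce the word to the empty word."""
    for _ in range(limit):
        if word == "":
            return True
        for r in rules:
            if r in word:
                word = word.replace(r, "", 1)  # one deletion per cycle
                break
        else:
            return False  # no rule applies: reject
    return False

def fitness(rules, positives, negatives):
    ok = sum(accepts(rules, w) for w in positives)
    ok += sum(not accepts(rules, w) for w in negatives)
    return ok / (len(positives) + len(negatives))

def random_rule():
    return "".join(random.choice(ALPHABET) for _ in range(random.randint(1, 3)))

def mutate(rules):
    """Randomly drop one rule or add a fresh one."""
    rules = set(rules)
    if rules and random.random() < 0.5:
        rules.discard(random.choice(sorted(rules)))
    else:
        rules.add(random_rule())
    return frozenset(rules)

def evolve(positives, negatives, pop_size=30, generations=100):
    population = [frozenset({random_rule()}) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda r: -fitness(r, positives, negatives))
        if fitness(population[0], positives, negatives) == 1.0:
            break  # all samples classified correctly
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return population[0]

# Toy target language: words of the form (ab)^n.
best = evolve(positives=["", "ab", "abab"], negatives=["a", "b", "ba", "aab"])
print(sorted(best), fitness(best, ["", "ab", "abab"], ["a", "b", "ba", "aab"]))
```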
50

McAllester, David. "Automatic Recognition of Tractability in Inference Relations". 1990. http://hdl.handle.net/1721.1/6528.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
Abstract (sommario):
A procedure is given for recognizing sets of inference rules that generate polynomial time decidable inference relations. The procedure can automatically recognize the tractability of the inference rules underlying congruence closure. The recognition of tractability for that particular rule set constitutes mechanical verification of a theorem originally proved independently by Kozen and Shostak. The procedure is algorithmic, rather than heuristic, and the class of automatically recognizable tractable rule sets can be precisely characterized. A series of examples of rule sets whose tractability is non-trivial, yet machine recognizable, is also given. The technical framework developed here is viewed as a first step toward a general theory of tractable inference relations.
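As a hedged illustration of the saturation idea behind tractable inference relations, the sketch below computes the closure of a fact set by forward chaining; such a procedure runs in polynomial time whenever the derivable facts come from a polynomially bounded space. The encoding of rules as binary fact combinators and the transitivity example (a fragment of congruence closure) are assumptions for illustration, not McAllester's recognition procedure itself.

```python
# Sketch of bottom-up saturation: forward chaining derives new facts
# until a fixed point. Tractability hinges on the derivable fact space
# being polynomially bounded, so this loop terminates in polynomial time.

from itertools import product

def closure(facts, rules):
    """Saturate a fact set under binary inference rules until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule, (f1, f2) in product(rules, list(product(facts, repeat=2))):
            derived = rule(f1, f2)
            if derived is not None and derived not in facts:
                facts.add(derived)
                changed = True
    return facts

# Example rule: transitivity of equality, a fragment of congruence closure.
def transitivity(f1, f2):
    (rel1, a, b), (rel2, c, d) = f1, f2
    if rel1 == rel2 == "eq" and b == c:
        return ("eq", a, d)
    return None

print(closure({("eq", "x", "y"), ("eq", "y", "z")}, [transitivity]))
# derives ("eq", "x", "z") in addition to the two input facts
```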
